# The Main Evolutionary Pathways of Massive Hierarchical Triple Stars

F. Kummer, S. Toonen, A. de Koter

2023-06-15 | [arXiv:2306.09400v1](http://arxiv.org/abs/2306.09400v1)
###### Abstract
Context:So far, stellar population studies have mainly focused on the evolution of single and binary stars. Recent observations show that triple and higher order multiple star systems are ubiquitous in the local population, especially among massive stars. Introducing three-body dynamical effects can influence the evolution of an individual stellar system and hence affect the predicted rates of astrophysical sources that are a product of stellar evolution. Therefore, predictions of triple star evolution are necessary for a more complete understanding of the evolutionary behavior of stellar populations and their end products.
Aims:We aim to constrain the main evolutionary pathways of massive hierarchical triple star systems and to quantify the effect of the third star on the evolution of the system.
Methods:We model the massive triple star population by performing simulations of triple star evolution with the TRES code, which combines stellar evolution with secular evolution of triple systems, and explore how robust the predictions of these simulations are under variations of uncertain initial conditions. We focus on coeval, hierarchical stellar triples in pre-mass-transfer phases.
Results:Interactions are common among massive triple stars. The majority of systems (65-77%) experience a phase of mass transfer in the inner binary, often with an unevolved donor star. This differs significantly from isolated binary evolution, where mass transfer is less frequent (52.3% instead of 67% for our fiducial model) and the donors are typically post-main sequence stars. Initial constraints for dynamical stability as well as eccentricity oscillations driven by the third body facilitate the occurrence of interactions, for instance mass transfer. The requirement of dynamical stability at formation places quite stringent constraints on allowed orbital properties, reducing uncertainties in triple evolution that stem from these initial conditions. Ignoring three-body dynamics during evolution of non-interacting triples leads to triple compact-object systems with stronger eccentricity oscillations and thereby likely over-predicts the merger rate of compact objects in such systems.
Conclusions:
## 1 Introduction
Massive stars (\(M\gtrsim 8\)M\({}_{\odot}\)) play an important role in the dynamical and chemical evolution of galaxies. Stellar feedback in the form of stellar winds, radiation pressure, photo-ionization and supernova (SN) explosions controls star formation (e.g. Hayward & Hopkins 2017) and enriches the interstellar medium with metals (e.g. Burbidge et al. 1957; Larson 1974; Larson & Dinerstein 1975). Most massive stars are observed to have at least one stellar companion (e.g. Evans et al. 2006; Sana et al. 2014; Kobulnicky et al. 2014; Moe & Di Stefano 2017) and are expected to interact at some stage of their evolution (Sana et al. 2012). These interacting systems can be the progenitors of a great variety of observed astrophysical sources, such as gravitational wave (GW) mergers (e.g. Abbott et al. 2016), X-ray binaries (e.g. Verbunt 1993; Langer 2012), massive equivalents of Algol systems (e.g. Budding 1989), blue stragglers (e.g. Perets & Fabrycky 2009), and ultra-stripped supernovae (e.g. Tauris et al. 2013). Mass transfer, specifically, can significantly impact the subsequent evolution of both the donor and accretor stars (e.g. Eldridge et al. 2008; Langer 2012; Laplace et al. 2021).
In the past, the vast majority of theoretical work in stellar evolution has focused on single- and binary stellar models. However, more recent advances in observational astronomy reveal that higher order multiples are prevalent, especially among the massive star population. Roughly 10% of the solar-type star systems are triples (Tokovinin 2014; Moe & Di Stefano 2017), while for early B-type and O-type stars this fraction increases to over 30% (Evans et al. 2006; Moe & Di Stefano 2017). Unsurprisingly, stellar products have been observed whose formation poses a challenge from the perspective of single- and binary evolution, but can more easily be explained with the inclusion of a third companion, e.g., low mass X-ray binaries containing a black hole (Podsiadlowski et al. 2003; Naoz et al. 2016), barium stars (Gao et al. 2023), cataclysmic variables with high accretion rates (Knigge et al. 2022) and binary stars with one magnetic component (Alecian et al. 2014; Schneider et al. 2016, 2019; Shultz et al. 2019). Therefore, complementary population synthesis studies of triples are essential toward creating a more comprehensive picture of stellar evolution and interaction.
Introducing a third companion to a binary adds complexity to the system. On top of stellar evolution and interactions, three-body dynamics has to be taken into account to accurately describe the evolution of a triple star system over time. The lowest order manifestation of these three-body interactions in a stable triple system is the von Zeipel-Lidov-Kozai (ZLK) mechanism (von Zeipel 1910; Lidov 1962; Kozai 1962), which drives periodic variations of the eccentricity of the orbit of the inner two stars and of the inclination of the third star with respect to the orbital
plane of the inner orbit. For an elaborate review on the effects of three-body dynamics, see Naoz (2016). Recent population synthesis studies of triple stars have shown that increased eccentricities induced by three-body dynamical effects such as ZLK oscillations can lead to an enhanced degree of stellar interaction between the components of the inner binary compared to such interactions in isolated binary stars (Antonini et al., 2017; Hamers & Thompson, 2019; Toonen et al., 2020; Stegmann et al., 2022a,b).
In this paper, we aim to provide an inventory of the most common evolutionary channels within a population of massive triple stars and show how these predictions differ from standard binary evolution. To explore this, we perform simulations of massive hierarchical triple stars with a triple population synthesis code. We stop the computation of an individual system's stellar and orbital evolution when it becomes dynamically unstable or unbound, or at the onset of mass transfer; i.e., for this subset of evolutionary channels we do not continue until the final remnant stages. The main reasons for this are that the physics of mass transfer in a triple configuration is still poorly understood and awaits more detailed studies, and that it allows us to focus on the effects of a third companion on the initial orbital conditions and early evolution.
In Section 2, we give a description of the triple population synthesis code and motivate our choice of initial model conditions. In Section 3, we present the predicted incidence rates of each evolutionary channel and discuss a few channels in more detail. In Section 4, we compare our results with similar studies and discuss some caveats. Finally, we give a summary of our findings in Section 5.
## 2 Method
To investigate the most common evolutionary channels in massive hierarchical triple stars we use the triple population synthesis code TRES\({}^{1}\) (Toonen et al., 2016). The code couples a single and binary stellar evolution code, in this case SeBa (Portegies Zwart & Verbunt, 1996; Toonen et al., 2012), to a module that solves the orbital motion of three-body systems in the secular approximation (Toonen et al., 2016). In the secular approximation, the system is sufficiently hierarchical that the tertiary star effectively orbits the center of mass of the inner two stars, forming the outer orbit. In such configurations, the energies of the inner and outer orbits are separately conserved (Heggie, 1975). To model stellar evolution, SeBa makes use of analytic fitting formulae from Hurley et al. (2000, 2002), based on the single stellar evolution models produced by Pols et al. (1995). These fitting formulae allow for rapid simulation of stellar evolution, essential for population synthesis studies.
Footnote 1: This code is publicly available at [https://github.com/amusecode/TRES](https://github.com/amusecode/TRES)
With TRES, we simulate ten sets of 25000 initially dynamically stable, coeval, massive hierarchical triple stars that are initialized at the zero age main sequence (ZAMS). These systems are then evolved for approximately a Hubble time (13.5 Gyr), unless the systems experience a phase of mass transfer, become dynamically unstable or unbind, after which the simulation is terminated. In a very small number of simulations (\(\lesssim 0.05\%\)) the timescale for dynamical interactions is extremely short compared to the nuclear timescale; we decided to discontinue these models for practical reasons. For a similar number of systems, the secular code could not find an analytical solution for the orbital elements. These too were discarded.
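To make the experimental setup concrete, the simulation campaign can be summarised as a simple driver loop. The sketch below is illustrative Python only; `evolve_triple` is a hypothetical placeholder for a call into TRES and does not reflect its actual interface.

```python
# Minimal sketch of the population experiment (illustrative, not TRES code).
HUBBLE_TIME_MYR = 13.5e3  # ~13.5 Gyr in Myr

def run_population(triples, evolve_triple):
    """Tally the stopping channel of each initially stable ZAMS triple.

    `evolve_triple` is assumed to evolve one system until a stopping
    condition (Sect. 3) is met and to return that channel's label,
    e.g. 'inner mass transfer' or 'non-interacting'.
    """
    counts = {}
    for system in triples:
        channel = evolve_triple(system, t_end_myr=HUBBLE_TIME_MYR)
        counts[channel] = counts.get(channel, 0) + 1
    return counts
```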
We will first discuss the adopted treatment of mass loss in stellar winds and our choices for the initial stellar and orbital properties of the primordial triple star population. Subsequently, we review a few physical implications of multiplicity that can affect the evolution of a system.
### Updated wind prescription
By default, SeBa follows the line-driven wind mass-loss model for massive stars of Vink et al. (2000, 2001) in the metallicity range \(0.03\,Z_{\odot}<Z<3\,Z_{\odot}\) and Nieuwenhuijzen & de Jager (1990) in the range where the Vink models are not applicable. We have modified these prescriptions by reducing the mass-loss rates by a factor 3, following and generalising the findings of Björklund et al. (2021) for the implemented mass range. Additionally, we have assumed luminous blue variable mass-loss rates following Belczynski et al. (2010), introducing a constant enhanced mass loss of \(1.5\times 10^{-4}\) M\({}_{\odot}/\)yr for stars exceeding a luminosity of \(6\times 10^{5}\,\mathrm{L}_{\odot}\).
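As an illustration, the modified mass-loss rate can be written as follows. This is a minimal sketch; in particular, we assume that the LBV rate supersedes the reduced line-driven rate when it is larger, which is an assumption of this sketch rather than a statement about the SeBa internals.

```python
WIND_REDUCTION = 3.0   # reduction factor motivated by Björklund et al. (2021)
L_LBV = 6.0e5          # LBV luminosity threshold [L_sun]
MDOT_LBV = 1.5e-4      # constant LBV mass-loss rate [M_sun / yr]

def modified_wind_rate(mdot_line_driven, luminosity):
    """Reduced line-driven wind rate with an LBV regime on top.

    `mdot_line_driven` is the unmodified Vink et al. (2000, 2001) rate in
    M_sun/yr; computing that rate itself is left to the stellar code.
    """
    mdot = mdot_line_driven / WIND_REDUCTION
    if luminosity > L_LBV:              # LBV regime (Belczynski et al. 2010)
        mdot = max(mdot, MDOT_LBV)      # assumption: the larger rate applies
    return mdot
```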
### Initial parameters
To simplify the description of a hierarchical triple star, the system is split up into two subsystems: (i) The inner binary, that comprises the primary and secondary star with masses \(m_{1}\) and \(m_{2}\). These stars orbit each other with an inner semi-major axis \(a_{\mathrm{in}}\), eccentricity \(e_{\mathrm{in}}\) and argument of pericenter \(g_{\mathrm{in}}\); (ii) The outer binary, that comprises the centre of mass of the inner binary and the tertiary star, with mass \(m_{3}\). The outer binary has an outer semi-major axis \(a_{\mathrm{out}}\), eccentricity \(e_{\mathrm{out}}\) and argument of pericenter \(g_{\mathrm{out}}\). We define a relative inclination angle \(i_{\mathrm{rel}}\) between the orbital planes of the inner and outer orbit.
The incidence rates of transients depend on the ZAMS conditions of a stellar population (e.g. Abate et al., 2015; de Mink & Belczynski, 2015; Stevenson et al., 2022). Therefore, we would ideally like to possess a complete understanding of how the initial stellar and orbital parameters of massive triples are distributed. Unfortunately, the observed sample size of massive stars is limited compared to low-mass stars and our understanding is fragmented, as certain parts of the parameter space (e.g., low mass ratios and wide orbits) are difficult to probe. Moreover, observations suggest there is a degeneracy in constraining certain parameter combinations (Moe & Di Stefano, 2017). To account for these uncertainties, we produce a set of model variations with a diverse set of assumptions on the primordial parameter distributions.
Our fiducial model follows a Kroupa IMF distribution (Kroupa, 2001) for the stellar mass \(m_{1}\) in the range [10,100] M\({}_{\odot}\). This distribution is a power-law function with index \(\alpha=-2.3\). Above a mass of 100 M\({}_{\odot}\), extrapolation of the Hurley stellar evolution fitting formulae becomes less accurate. The stellar masses \(m_{2}\) and \(m_{3}\) are specified through the inner mass ratio \(q_{\mathrm{in}}\equiv m_{2}/m_{1}\) and the outer mass ratio \(q_{\mathrm{out}}\equiv m_{3}/(m_{1}+m_{2})\), respectively. Both the inner and outer mass ratios are sampled from a flat distribution in the range [0.1,1], based on mass-ratio observations of massive binary stars (Sana et al., 2012; Kobulnicky et al., 2014).
The sampling ranges for \(a_{\rm in}\) and \(a_{\rm out}\) both cover [5, 5\(\times\)10\({}^{6}\)] R\({}_{\odot}\), and the orbital distribution functions (see below) are sampled uniformly in logarithmic space, following Öpik (1924) and Kobulnicky & Fryer (2007). Below separations of 5 R\({}_{\odot}\), the stars of the inner binary are likely to touch at the ZAMS. Above \(5\times 10^{6}\) R\({}_{\odot}\), the system is only very weakly bound by gravity and can easily be disrupted by small perturbations, such as stellar flybys (Kouwenhoven et al., 2007, 2010). When sampling from our orbital semi-major axis distribution function yields \(a_{\rm in}>a_{\rm out}\), we swap the two values, as, by definition, \(a_{\rm in}<a_{\rm out}\). This is a different approach from, e.g., one of the models in Toonen et al. (2020), who fixed the primordial distribution of \(a_{\rm in}\) and accepted or rejected \(a_{\rm out}\) in accordance with the stability properties of the system. The resulting ZAMS distributions of \(a_{\rm in}\) and \(a_{\rm out}\) may differ between the two methods. The eccentricities \(e_{\rm in}\) and \(e_{\rm out}\) are sampled from a thermal distribution in the range [0,0.9] (Ambartsumian, 1937; Heggie, 1975; Moe & Di Stefano, 2017; Hwang, 2023). The arguments of pericenter \(g_{\rm in}\) and \(g_{\rm out}\) are sampled from a uniform distribution in the range [\(-\pi\),\(\pi\)]. Lastly, the relative inclination \(i_{\rm rel}\) is sampled from a uniform distribution in cosine in the range [0,\(\pi\)].
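The fiducial sampling amounts to standard inverse-CDF draws from the distributions just described. The following standalone sketch (not TRES code) produces the primordial population before the stability and Roche-lobe rejection steps of Sect. 2.3:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_fiducial(n):
    """Draw n primordial triples from the fiducial distributions (Table 1)."""
    u = rng.random((8, n))
    # Kroupa IMF above 10 Msun is a single power law dN/dm ~ m^(-2.3);
    # inverse-CDF sampling on [10, 100] Msun:
    k = 1.0 - 2.3
    m1 = (10.0**k + u[0] * (100.0**k - 10.0**k))**(1.0 / k)
    q_in = 0.1 + 0.9 * u[1]                 # flat mass ratios on [0.1, 1]
    q_out = 0.1 + 0.9 * u[2]
    m2 = q_in * m1
    m3 = q_out * (m1 + m2)
    # log-uniform semi-major axes on [5, 5e6] Rsun; swap so that a_in < a_out:
    a1 = 10.0**(np.log10(5.0) + 6.0 * u[3])
    a2 = 10.0**(np.log10(5.0) + 6.0 * u[4])
    a_in, a_out = np.minimum(a1, a2), np.maximum(a1, a2)
    # thermal eccentricities f(e) ~ e, truncated at 0.9:
    e_in, e_out = 0.9 * np.sqrt(u[5]), 0.9 * np.sqrt(u[6])
    # isotropic orientations: uniform in cos(i_rel) on [0, pi]:
    i_rel = np.arccos(1.0 - 2.0 * u[7])
    return m1, m2, m3, a_in, a_out, e_in, e_out, i_rel
```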
In each population model we vary only one (two in a single case) of the aforementioned distributions, in an effort to isolate the impact each parameter has on the evolution of the population. Five model variations assume the distributions for the semi-major axes and eccentricities as described by Sana et al. (2012), based on an observed population of O-type multiple stars in nearby galactic open clusters. The orbital periods are sampled from a power law in the range \(0.15<\log_{\rm 10}\)P \(<8.5\), with index \(\pi=-0.55\) and are then converted to semi-major axes using Kepler's third law. The eccentricities are sampled from a power law with index \(\eta=-0.45\). In two model variations we assume a flat distribution of \(e_{\rm in}\) and \(e_{\rm out}\)(Kobulnicky et al., 2014). In one variation, we assume a power law with index \(\gamma=-1.5\) for the outer mass ratio, \(q_{\rm out}\), which seems more appropriate for early B- and O-type primaries with wide companions (Moe & Di Stefano, 2017). For the final variation, we assume \(i_{\rm rel}=0\). This model is motivated by Tokovinin (2017), who showed that for outer orbital separations smaller than about \(10^{4}\) R\({}_{\odot}\) the inner and outer orbits appear to be more aligned. Table 1 provides a complete overview of the parameter distributions, sampling ranges and model variations.
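The Sana et al. (2012) period variation, for example, reduces to an inverse-CDF draw from a power law in \(\log_{10}P\); a short sketch under the ranges quoted above:

```python
import numpy as np

def sample_sana_period(n, rng, lo=0.15, hi=8.5, pi_exp=-0.55):
    """Draw periods [days] from f(logP) ~ (logP)^pi on 0.15 < log10(P) < 8.5."""
    k = pi_exp + 1.0                                  # = 0.45
    u = rng.random(n)
    log_p = (lo**k + u * (hi**k - lo**k))**(1.0 / k)  # inverse CDF
    return 10.0**log_p
```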
### Dynamical stability
To ensure long-term stability of a three-body system, a hierarchical configuration is generally required, i.e., the outer binary has a considerably larger orbit than the inner binary. In that case, the timescale on which the tertiary perturbs the system is long compared to the dynamical timescale of the inner binary. When the tertiary component orbits the inner binary too closely, the two timescales become comparable, breaking down the hierarchical structure and rendering the secular approximation invalid. As a criterion for hierarchical stability, we adopt the critical semi-major axis ratio of Mardling & Aarseth (1999, 2001):
\[\left.\frac{a_{\rm out}}{a_{\rm in}}\right|_{\rm crit}=\frac{2.8}{1-e_{\rm out}}\left(1-\frac{0.3\,i_{\rm rel}}{\pi}\right)\left(\frac{(1+q_{\rm out})(1+e_{\rm out})}{\sqrt{1-e_{\rm out}}}\right)^{2/5}. \tag{1}\]
A system is dynamically stable if \(a_{\rm out}/a_{\rm in}>(a_{\rm out}/a_{\rm in})_{\rm crit}\). For instance, for a circular orbit, equal mass system (\(q_{\rm out}\)=0.5), with the inner and outer orbit in the same plane, \((a_{\rm out}/a_{\rm in})_{\rm crit}=3.3\).
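For concreteness, Eq. 1 and the resulting stability check translate directly into code; the standalone sketch below (not the TRES implementation) reproduces the quoted example value of 3.3:

```python
import numpy as np

def critical_ratio(e_out, q_out, i_rel):
    """Mardling & Aarseth (1999, 2001) critical a_out/a_in ratio, Eq. 1."""
    return (2.8 / (1.0 - e_out)
            * (1.0 - 0.3 * i_rel / np.pi)
            * ((1.0 + q_out) * (1.0 + e_out) / np.sqrt(1.0 - e_out))**0.4)

def is_hierarchically_stable(a_in, a_out, e_out, q_out, i_rel):
    return a_out / a_in > critical_ratio(e_out, q_out, i_rel)

# Worked example from the text: circular orbits, coplanar, q_out = 0.5
print(round(critical_ratio(0.0, 0.5, 0.0), 1))   # -> 3.3
```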
Applying this stability criterion to our primordial population of triple stars affects the shape of the initial stellar and orbital parameter distributions. Fig. 1 shows how the sampled distributions of \(a_{\rm in}\) and \(a_{\rm out}\) are altered by rejecting systems that are dynamically unstable or Roche-lobe filling at birth. While the fiducial model follows a flat sampling distribution in the logarithm of both \(a_{\rm in}\) and \(a_{\rm out}\), the population of hierarchical triple stars that satisfies the stability criterion (black line) is clearly more biased against large (small) inner (outer) semi-major axes. The most compact inner binaries (\(a_{\rm in}<20\) R\({}_{\odot}\)) are often in contact at initialisation and are discarded. However, the number of systems discarded due to contact is much smaller for models with lower initial inner eccentricities. Importantly, the semi-major axis distributions of the ZAMS population of models that vary \(a_{\rm in}\) and/or \(a_{\rm out}\) are quite similar to those of the fiducial model, even though their sampling distributions differ vastly.
### Three-body dynamics
The perturbations induced by the tertiary star onto the inner binary can be described through a mutual torque acting between the inner and outer orbit. This torque allows angular momentum to be transported from one orbit to the other and vice versa, leading to an oscillatory behavior of the inner eccentricity and the relative inclination. The lowest order manifestation of this mechanism, the quadrupole regime or ZLK effect, acts on a timescale that is mostly dependent on the ratio of orbital periods, \(P_{\rm out}^{2}/P_{\rm in}\), and \(e_{\rm out}\). An approximate expression for this timescale is (Kinoshita & Nakai, 1999; Antognini, 2015):
\[\tau_{\rm ZLK}\approx\frac{8}{15\pi}\left(1+\frac{m_{1}}{m_{3}}\right)\left(\frac{P_{\rm out}^{2}}{P_{\rm in}}\right)\left(1-e_{\rm out}^{2}\right)^{3/2}. \tag{2}\]
In the test-particle approximation, large oscillations of the inner eccentricity occur only at initial relative inclinations \(i_{\rm init}\) between \(39.2^{\circ}-140.8^{\circ}\) for initially non-eccentric inner orbits (Ford et al., 2000; Naoz et al., 2013; Naoz, 2016). The maximum inner eccentricity reached through ZLK oscillations in the test-particle regime can be described analytically for circular orbits (Innanen et al., 1997; Grishin et al., 2017):
\[e_{\rm max}=\sqrt{1-\frac{5}{3}\cos^{2}(i_{\rm init})}. \tag{3}\]
Close to dynamical destabilisation and at non-zero outer eccentricities, higher order terms of the secular approximation treatment become important, such as the octupole term (Ford et al., 2000). Besides reaching extreme eccentricity amplitudes, in this regime three-body dynamics can provoke a flip in relative inclination (from prograde to retrograde or vice versa) (Naoz et al., 2011; Li et al., 2014) and inner eccentricity variations can occur at a wider range of relative inclinations than \(39.2^{\circ}-140.8^{\circ}\)(Ford et al., 2000). The octupole term is relevant when the octupole parameter,
\[\epsilon_{\rm oct}=\frac{m_{1}-m_{2}}{m_{1}+m_{2}}\frac{a_{\rm in}}{a_{\rm out }}\frac{e_{\rm out}}{1-e_{\rm out}^{2}}, \tag{4}\]
is of the order \(|\epsilon_{\rm oct}|\gtrsim 0.001\)-\(0.01\)(Lithwick & Naoz, 2011; Shappee & Thompson, 2013).
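Equations 2 to 4 are straightforward to evaluate; the sketch below collects them (masses in a common unit, periods in a common time unit). The printed value reproduces the octupole parameter of the example system discussed in Sect. 2.4.1.

```python
import numpy as np

def tau_zlk(m1, m3, p_in, p_out, e_out):
    """Quadrupole (ZLK) timescale of Eq. 2, in the unit of the periods."""
    return (8.0 / (15.0 * np.pi) * (1.0 + m1 / m3)
            * (p_out**2 / p_in) * (1.0 - e_out**2)**1.5)

def e_max(i_init):
    """Eq. 3: maximum inner eccentricity in the test-particle, initially
    circular regime; only real inside the ZLK window (39.2-140.8 deg)."""
    return np.sqrt(1.0 - 5.0 / 3.0 * np.cos(i_init)**2)

def eps_oct(m1, m2, a_in, a_out, e_out):
    """Eq. 4: octupole parameter."""
    return (m1 - m2) / (m1 + m2) * (a_in / a_out) * e_out / (1.0 - e_out**2)

# Example system of Sect. 2.4.1: m1 = 20, m2 = 15 Msun,
# a_in = 2e4 Rsun, a_out = 1.5e5 Rsun, e_out = 0.4:
print(round(eps_oct(20.0, 15.0, 2.0e4, 1.5e5, 0.4), 3))   # -> 0.009
```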
#### 2.4.1 Eccentricity oscillation tracking
We include a new function in the code that tracks the amplitude of the inner eccentricity oscillations during the entire evolution of a system. Per time step of the evolutionary code, the largest eccentricity amplitude is stored. This information can be used as a tool to quantify the impact of a third stellar component on the dynamical evolution of the system. In Fig. 2 we show the maximum change in inner eccentricity during each time step of the evolutionary code for an example system. The system has initial stellar masses \(m_{1}=20\) M\({}_{\odot}\), \(m_{2}=15\) M\({}_{\odot}\) and \(m_{3}=30\) M\({}_{\odot}\), initial semi-major axes \(a_{\rm in}=2\times 10^{4}\) R\({}_{\odot}\) and \(a_{\rm out}=1.5\times 10^{5}\) R\({}_{\odot}\), initial eccentricities \(e_{\rm in}=0.2\) and \(e_{\rm out}=0.4\), and an initial relative inclination \(i_{\rm rel}=\pi/2\). Until about 4.5 Myr, three-body dynamics persistently induce variations of the inner eccentricity up to 0.8. As the octupole parameter of this system is 0.009, the octupole term is important, which is manifested as the sinusoidal variations on timescales of about 0.8 Myr. With the analytical expression for the octupole timescale from Antognini (2015), we derive a comparable approximate oscillation time of 1.25 Myr.
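The bookkeeping behind this diagnostic is simple; the following is a minimal sketch of the idea (the actual TRES implementation differs in detail):

```python
def amplitude_per_step(e_in_track):
    """Largest inner-eccentricity excursion within one evolutionary time
    step; `e_in_track` holds the secular solution for e_in sampled on the
    (much finer) internal grid of that step."""
    return max(e_in_track) - min(e_in_track)

def max_amplitude(step_tracks):
    """Largest amplitude over the whole evolution, i.e. the quantity that
    is colour-coded in Fig. 9."""
    return max(amplitude_per_step(track) for track in step_tracks)
```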
### Tidal interaction
Stars in a binary system are subjected to tidal forces that act to synchronize the stellar rotation with the orbit and circularize the orbit by the dissipation of energy. Orbital properties, such as the semi-major axis, the eccentricity and the angular velocity, are thereby affected, altering the further evolution and possibly the final fate of the system. The tidal forces are strongly dependent on the ratio between the stellar radii and the semi-major axis, and hence only significantly impact compact binary systems. We adopt the equilibrium tidal model described by Hut (1980) for stars with convective envelopes. For stars with a radiative envelope, we adopt the dynamical tidal model described by Zahn (1977). Both the equilibrium and dynamical tidal models are parameterized by Hurley et al. (2002) and implemented in TRES.
In orbital configurations where both the tidal dissipation and secular timescales are short compared to the evolutionary timescale of the stars, interplay between tides and three-body dynamics can efficiently reduce the semi-major axis of the inner binary (Mazeh & Shaham, 1979; Kiseleva et al., 1998); ZLK oscillations increase the inner eccentricity, allowing tides to efficiently dissipate energy during pericenter passage (e.g. Fabrycky & Tremaine, 2007). After complete circularization, the semi-major axis has decreased by a factor \((1-e_{\rm in}^{2})\).
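Since circularization at (approximately) constant orbital angular momentum maps \(a\rightarrow a(1-e^{2})\), the shrinkage follows directly from the pre-circularization elements; a one-function sketch:

```python
def circularized_a(a_in, e_in):
    """Inner semi-major axis after full tidal circularization at constant
    orbital angular momentum: a -> a * (1 - e^2)."""
    return a_in * (1.0 - e_in**2)

# An inner orbit driven to e_in = 0.9 by ZLK cycles ends up ~5x more compact:
print(circularized_a(1.0, 0.9))   # -> 0.19 (in units of the initial a_in)
```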
## 3 Results
We briefly describe the conditions under which we stop individual stellar evolution simulations. We differentiate between the following channels:
* Mass transfer in the inner binary when either the primary or secondary star fills its Roche lobe.
* Mass transfer from the tertiary star onto the inner binary, where the material may be accreted by one or both components.
* Dynamical destabilisation due to stellar evolution (e.g., orbital widening due to mass loss) violating the stability criterion of Eq. 1.
* Dynamical unbinding of the inner orbit when either the primary or secondary star experiences its core-collapse supernova.
* Dynamical unbinding of the outer orbit as a result of core-collapse of one of the three system components.
* None of the above interactions occur within a Hubble time. At this point all three components are compact remnants. We refer to this channel as non-interacting systems.
**Fiducial model**

| Sampling property | Value/range | Distribution |
|---|---|---|
| Initial primary mass \(m_{1}\) | [10, 100] M\({}_{\odot}\) | Kroupa initial mass function |
| Initial inner mass ratio \(q_{\rm in}\) | [0.1, 1] | uniform distribution |
| Initial outer mass ratio \(q_{\rm out}\) | [0.1, 1] | uniform distribution |
| Initial inner semi-major axis \(a_{\rm in}\) | [5, 5\(\times 10^{6}\)] R\({}_{\odot}\) | uniform distribution in log-space |
| Initial outer semi-major axis \(a_{\rm out}\) | [5, 5\(\times 10^{6}\)] R\({}_{\odot}\) | uniform distribution in log-space |
| Initial inner eccentricity \(e_{\rm in}\) | [0, 0.9] | thermal distribution |
| Initial outer eccentricity \(e_{\rm out}\) | [0, 0.9] | thermal distribution |
| Initial relative inclination \(i_{\rm rel}\) | [0, \(\pi\)] | uniform distribution in cosine |
| Initial inner argument of pericenter \(g_{\rm in}\) | [\(-\pi\), \(\pi\)] | uniform distribution |
| Initial outer argument of pericenter \(g_{\rm out}\) | [\(-\pi\), \(\pi\)] | uniform distribution |
| Initial metallicity \(Z\) | Z\({}_{\odot}\) (0.014) | |

**Model variations**

| Name | Varied parameter(s) | Distribution |
|---|---|---|
| \(a_{\rm in}\)-Sana | \(a_{\rm in}\) | power law \((\log_{10}P_{\rm in})^{\pi}\), with \(\pi=-0.55\); based on Sana et al. (2012) |
| \(a_{\rm out}\)-Sana | \(a_{\rm out}\) | power law \((\log_{10}P_{\rm out})^{\pi}\), with \(\pi=-0.55\); based on Sana et al. (2012) |
| \(a_{\rm in}\)&\(a_{\rm out}\)-Sana | \(a_{\rm in}\) & \(a_{\rm out}\) | as above, applied to both |
| \(e_{\rm in}\)-Sana | \(e_{\rm in}\) | power law \(e_{\rm in}^{\eta}\), with \(\eta=-0.45\); based on Sana et al. (2012) |
| \(e_{\rm out}\)-Sana | \(e_{\rm out}\) | power law \(e_{\rm out}^{\eta}\), with \(\eta=-0.45\); based on Sana et al. (2012) |
| \(e_{\rm in}\)-flat | \(e_{\rm in}\) | uniform distribution |
| \(e_{\rm out}\)-flat | \(e_{\rm out}\) | uniform distribution |
| \(q_{\rm out}\)-Moe | \(q_{\rm out}\) | power law \(q_{\rm out}^{\gamma}\), with \(\gamma=-1.5\); based on Moe & Di Stefano (2017) |
| \(i_{\rm rel}\)-const | \(i_{\rm rel}\) | constant value of 0 |

Table 1: The initial sampling ranges and distributions of the triple star properties for the fiducial model and all model variations.
We do not explicitly identify the merging of two stars as a stopping condition as such events are preceded by mass transfer. Note that we do not include tidal effects and three-body dynamical effects such as ZLK oscillations as stopping conditions, while technically these could be described as orbital interactions. An overview of the evolutionary channels leading up to a model discontinuation is presented in Table 2.
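For bookkeeping, the stopping conditions listed above and their grouping in Table 2 can be expressed as a small enumeration. This is an illustrative sketch, not the internal TRES representation:

```python
from enum import Enum

class Channel(Enum):
    """Stopping conditions of Sect. 3."""
    INNER_MASS_TRANSFER = "inner mass transfer"
    TERTIARY_MASS_TRANSFER = "tertiary mass transfer"
    DESTABILISATION = "dynamical destabilisation"   # Eq. 1 violated
    INNER_ORBIT_UNBOUND = "inner orbit unbound"     # SN kick, inner orbit
    OUTER_ORBIT_UNBOUND = "outer orbit unbound"     # SN kick, outer orbit
    NON_INTERACTING = "non-interacting"             # nothing in a Hubble time

# Grouping of Table 2:
INTERACTION_TYPE = {
    Channel.INNER_MASS_TRANSFER: "stellar",
    Channel.TERTIARY_MASS_TRANSFER: "stellar",
    Channel.DESTABILISATION: "orbital",
    Channel.INNER_ORBIT_UNBOUND: "orbital",
    Channel.OUTER_ORBIT_UNBOUND: "orbital",
    Channel.NON_INTERACTING: "other",
}
```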
The remainder of this section is structured as follows: in Section 3.1 we give an overview of the predictions of the evolutionary channels for our simulated populations. Then we discuss two channels in more detail: the systems that experience a phase of mass transfer initiated by the primary star in Section 3.2, and the systems that do not undergo any interaction during their entire evolution in Section 3.3. Next, we dive into a few of the model variations in more detail in Section 3.4. Finally, we compare the predictions for massive triple star evolution with simulations of isolated binary stars in Section 3.5.
### Overview evolutionary channels
In Fig. 3 we show the predicted contribution of each evolutionary channel to the total population of massive triple stars for our fiducial model. Similar to massive binary star systems (e.g. Sana et al. 2012), the evolution of massive triple star systems is dominated by both stellar and orbital interactions. The vast majority of systems (\(67.3\pm 0.9\%\)) experiences an episode of mass transfer in the inner binary. This includes mass transfer initiated by both the primary and the secondary star, although the contribution of the latter is only very minor (\(\lesssim\)0.1%). The error bars are the \(3\sigma\) statistical sampling uncertainties obtained by bootstrapping the data (i.e., re-sampling the data with replacement); these uncertainties follow a Poisson distribution to good approximation.
| Interaction type | Evolutionary channel |
|---|---|
| Stellar interaction | Inner mass transfer |
| | Tertiary mass transfer |
| Orbital interaction | Dynamical destabilisation |
| | Inner orbit unbound |
| | Outer orbit unbound |
| Other | Non-interacting |

Table 2: Overview and classification of the different evolutionary channels considered in this study.
Figure 1: Initial distributions of the inner (blue) and outer (red) semi-major axis for our fiducial model and three model variations. The solid black lines represent the ZAMS population, consisting exclusively of initially stable, non-interacting hierarchical triple stars. The lighter colours correspond to the population before excluding dynamically unstable systems and systems that are Roche-lobe filling at birth. To enhance the features in the plot, we have cut off the top part of the distribution for model \(a_{\rm in}\)&\(a_{\rm out}\)-Sana. The bottom panel shows the initial sampling distributions of the fiducial model and the Sana distribution.
Figure 2: Example of a system with large inner eccentricity oscillations induced by three-body dynamical effects. _Top:_ for each evolutionary time step, the maximum increase in the inner eccentricity is shown. During the first few time steps the ZLK amplitudes are small, because at the start of the simulation the evolutionary time steps are very short. _Bottom:_ the evolution of the eccentricity of the inner orbit during the first 0.5 Myr.
A fairly large fraction of systems loses a companion as a result of a SN kick, unbinding either the inner or outer orbit (\(10.0\pm 0.6\%\) and \(18.9\pm 0.7\%\), respectively)\({}^{3}\). In case the velocity kick is not strong enough to unbind the orbit, it can still affect the system by altering the semi-major axis and the eccentricity (Pijloo et al., 2012; Lu & Naoz, 2019). Only a few percent of the systems experience mass transfer initiated by the tertiary star (\(1.8\pm 0.2\%\)) or reach dynamical destabilisation (\(1.7\pm 0.2\%\)). By far the smallest fraction of systems (\(0.2\pm 0.1\%\)) has not engaged in any of the aforementioned interactions within a Hubble time. A detailed overview of the rates of all population models can be found in Appendix A.
Footnote 3: We followed the SN kick model of Verbunt et al. (2017) in combination with the fallback prescription of Fryer et al. (2012) to scale down the kick velocities for black holes.
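The quoted \(3\sigma\) sampling uncertainties can be reproduced with a standard bootstrap over the per-system channel labels; a short standalone sketch:

```python
import numpy as np

def bootstrap_3sigma(channels, target, n_boot=10_000, seed=0):
    """3-sigma bootstrap uncertainty on the fraction of systems in `target`.

    `channels` holds one stopping-channel label per simulated system; each
    bootstrap realisation re-samples the systems with replacement.
    """
    rng = np.random.default_rng(seed)
    channels = np.asarray(channels)
    n = len(channels)
    fractions = np.empty(n_boot)
    for b in range(n_boot):
        resample = channels[rng.integers(0, n, size=n)]
        fractions[b] = np.mean(resample == target)
    return float(np.mean(fractions)), float(3.0 * np.std(fractions))
```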
#### 3.1.1 Model variations
Apart from statistical uncertainties, the predicted contributions of the evolutionary channels depend on uncertainties related to the formation of the stellar system, i.e., the initial stellar and orbital parameters. In Fig. 4, we show the predicted ranges of the contributing fraction of each evolutionary channel resulting from the model variations discussed in Section 2.2, relative to the fiducial model. Overall, the incidence rate of each evolutionary channel is fairly robust across the model variations; only twice do the rates differ by more than a factor of 2 from the fiducial model. The most significant outliers mainly originate from one model variation, Model \(q_{\rm out}\)-Moe. We discuss some of the outliers in more detail in Section 3.4. Moreover, the predicted rates for the fiducial model are close to the median for most of the evolutionary channels, confirming that the fiducial model is a good representation of an average model. For the more uncommon channels, such as tertiary mass transfer, non-interacting and dynamically destabilised systems, the fiducial model predictions agree less well with the median, although we cannot rule out that this deviation is explained by the enhanced statistical uncertainties for the latter two channels.
In conclusion, the general evolution of a massive triple star population leading up to the first interaction is not seriously affected by the differing assumptions on the initial conditions. This can partially be explained by the initial stability check described in Section 2.3, which reduces the differences in the ZAMS population between models with vastly differing initial parameter distributions. This is fortunate, as the initial parameters are empirically not well constrained. We stress that robust predictions for the incidence rates of evolutionary channels between two model variations do not necessarily imply that the complete evolution of stellar systems in both populations is identical. We terminate the simulation at the onset of the first interaction, but the outcome of certain interactions, such as mass transfer, and thus the subsequent evolution of the system, can be highly dependent on the stellar and orbital properties of the system at the ZAMS and similarly at the onset of the interaction. This is especially important for the formation of more uncommon sources, such as gravitational wave mergers.
#### 3.1.2 Effect of initial conditions
We provide an overview of a subset of the initial conditions that generally lead to a specific evolutionary channel in Fig. 5, based on the triple systems in our fiducial model.
* _Mass transfer:_ Mass transfer initiated by the primary star can occur over a wide range of the inner and outer orbital period (\(1-10^{6.5}\) days for \(\rm P_{in}\) and \(20-10^{8.5}\) days for \(\rm P_{out}\)), as the maximum radius of massive stars can be large. Even at \(\rm P_{in}>10^{4}\) days, extreme inner eccentricities induced by ZLK oscillations can provoke the primary star to fill its Roche lobe during pericenter passage.
Figure 4: The predicted number of systems that evolve through each evolutionary channel for all model variations, relative to the fiducial model (blue diamonds). A factor 0.5 and 2 indicate a factor two increase and decrease, respectively. The predicted rates of all model variations are combined into the median (red stars), and the largest outliers per channel are represented by the error bars. The red part of the error bars corresponds to Model \(q_{\rm out}\)-Moe, the model for which the predictions deviate most significantly from the fiducial model. The statistical uncertainties (\(3\sigma\)) on the predictions for the fiducial model are also included (orange bars). The relative contribution of each evolutionary channel for the fiducial model is given in percentages.
Figure 3: Percentage of systems evolving through each evolutionary channel for our fiducial population of massive hierarchical triple stars. For the outer orbit unbound, tertiary mass transfer and dynamical destabilisation channels, the tertiary star is directly involved.
For a few systems, the mass transfer in the inner binary is instead initiated by the secondary star. This may seem counter-intuitive, as the nuclear timescale of the primary star is shorter than that of the secondary star; hence the primary evolves faster. A possible way to prevent the primary star from transferring mass first is by stripping its envelope via stellar winds before the star expands sufficiently to fill its Roche lobe. This does require the primary star to be very massive and is susceptible to the choice of wind model. Systems in which the secondary star transfers mass as their first interaction are restricted to initial inner orbital periods \(\gtrsim\)10\({}^{4}\) days. As the triple systems are hierarchical, and thus the outer orbit is significantly wider than the inner orbit, the first phase of mass transfer will only be initiated by the tertiary star if it is initially more massive than either component of the inner binary. Therefore, the distribution of \(q_{\rm out}\) is a key parameter in determining the contribution of this channel, and \(P_{\rm in}\) and \(P_{\rm out}\) are restricted to within about 10\({}^{3}\) and 10\({}^{5}-10^{6}\) days, respectively, for this channel.
* _Orbital unbinding:_ The initial conditions for systems whose inner (outer) orbit becomes unbound as a result of a SN kick are restricted to inner (outer) orbital periods larger than a few thousand days. Their subsequent evolution is difficult to predict without proper N-body calculations, as a phase of non-secular and possibly chaotic evolution ensues. However, at least one of the bodies is likely to be ejected eventually, either directly as a result of the SN kick, or through subsequent dynamical interactions (Pijloo et al., 2012).
* _Dynamical destabilisation:_ Most systems that become dynamically unstable do not have very large initial ratios of \(\rm P_{\rm out}/P_{\rm in}\) (generally within a hundred, apart from a few exceptions). The location of instability also depends on other parameters, such as \(e_{\rm out}\), \(q_{\rm out}\) and \(i_{\rm rel}\), resulting in a fairly generous spread of initial period ratios at which systems experience a dynamical destabilisation. We identify several processes responsible for the transition from dynamically stable to unstable, the first being systems born close to the instability limit. Early on in their evolution, when the primary star is still on the main sequence (MS), three-body dynamical interactions can increase \(e_{\rm out}\) or decrease \(i_{\rm rel}\), shifting the critical semi-major axis ratio towards larger values (Toonen et al., 2020). Furthermore, once the primary star has evolved off the MS, either the increased mass-loss rate, resulting in the widening of the inner orbit, or an eventual SN kick can push the system across the instability limit.
* _Non-interacting:_ Systems that do not have any form of interaction after a Hubble time of evolution are restricted to initial inner orbital periods of \(\gtrsim\) 6\(\times\)10\({}^{3}\) days. Smaller periods would most likely result in a phase of mass transfer within the inner binary.
#### 3.1.3 Time-evolution of evolutionary channels
The stellar and orbital properties of a system are affected when interactions between stars or orbits take place. Since interactions occur at different times for each system, the observational properties of a stellar population are expected to change over time. In Fig. 6, we present an overview of the typical stellar ages where each channel dominates. The non-interacting systems have been excluded, simply because their definition comprises the absence of any interaction.
At very early times (\(\lesssim\) 4 Myr), we predict solely dynamical destabilisation of the triple and mass transfer in the inner binary to take place. The extent of this epoch corresponds to roughly the MS lifetime of the most massive stars in our simulations. Both channels show a steep rise initially (\(<\) 1 Myr), comprising systems that have short secular timescales and/or are initialised near the instability limit. Later on, the rate of interactions is low.
At later times (\(\gtrsim\) 4 Myr), due to the fast radial expansion of post-MS stars and an eventual SN kick, the number of interactions increases rapidly within each channel and peaks just after 6.9 Myr. Interestingly, the systems in which the outer orbit becomes unbound typically interact at earlier times than systems in which the inner orbit becomes unbound. This may suggest that the former channel is biased towards high tertiary masses, as those stars evolve into a compact object on a shorter timescale. This is generally not the case.
Figure 5: Initial inner- and outer orbital periods covered by each evolutionary channel. The triangle shape of the population is a direct consequence of the requirement on hierarchical structure and dynamical stability. The black dashed line shows where \(P_{\rm in}=P_{\rm out}\). There are no systems near this line due to the dynamical stability criterion. There is some overlap between colors, as the evolution of a system is determined by more properties than just the orbital periods.
The explanation is that the inner binaries of these systems are generally more compact and have a higher binding energy than those of the systems in which the inner orbit becomes unbound. Therefore, the low kick velocities of the highest-mass stars in our simulation are simply not sufficient to unbind the inner orbits. The rate of systems that undergo orbital unbinding of the outer binary or mass transfer from the tertiary onto the inner binary drops quickly after about 15 Myr. The reason for this is that \(m_{1}\) has a sharp cut-off at 10 M\({}_{\odot}\), while \(m_{3}\) extends to masses as low as 1.2 M\({}_{\odot}\). These low-mass stars do not experience a SN at the end of their lives, and reach different maximum radii compared to their massive counterparts. The exclusion of low-mass primary stars leads to a sharp drop in the number of systems that undergo a phase of mass transfer in the inner binary after 25 Myr.
Overall, we predict that within 3.7 Myr about 10% of systems have had an interaction. This fraction rapidly increases to \(\sim\)50% after 10.2 Myr. After that, the rate of interactions slows down, despite the majority of stars evolving on timescales longer than 10 Myr as a consequence of the initial mass function. This trend follows the declining rate of inner binary mass-transfer systems, as that constitutes the most frequent type of interaction.
Lastly, we ask what the triple population looks like for a realistic stellar population. This can be assessed by convolving the results from Fig. 6 with a star formation history. In Fig. 7, we show what the intrinsic population of massive triples looks like after 30 Myr, which is the typical lifetime of star-forming giant molecular clouds, assuming a constant star-formation rate rather than a single burst of star formation. The majority of systems (62.3%) has already interacted. Most systems (45.1%) have undergone a phase of mass transfer, initiated by any of the three stars. The high incidence of mass transfer is consistent with predictions for massive binary stars (de Mink et al., 2014) and emphasizes the importance of mass transfer in multiple stars. A significantly smaller fraction of the systems (15.9%) has ejected at least one of the stars, while few systems (1.3%) have destabilised as a result of three-body dynamics. de Mink et al. (2014) showed that almost a third of post-mass-transfer binary systems result in a merger. Our findings could thus indicate that a large fraction of systems born as massive triple stars would be reduced to a population of single and binary stars at the moment of observation.
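The convolution with a constant star-formation history amounts to drawing birth times uniformly over the 30 Myr window and asking whether each system's interaction delay time fits into its available lifetime; a minimal Monte Carlo sketch:

```python
import numpy as np

def fraction_interacted(delay_times_myr, t_obs_myr=30.0, seed=1):
    """Fraction of triples that have interacted by t_obs under a constant
    star-formation rate.

    `delay_times_myr` holds one interaction delay time per simulated
    system (np.inf for the non-interacting channel).
    """
    delays = np.asarray(delay_times_myr, dtype=float)
    rng = np.random.default_rng(seed)
    t_birth = rng.uniform(0.0, t_obs_myr, size=delays.size)  # constant SFR
    return float(np.mean(delays < (t_obs_myr - t_birth)))
```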
### Mass transfer initiated by primary
The majority of systems (64.6-76.6%) have a phase of mass transfer in the inner binary. As we stop our simulations at the first moment of interaction, the primary star will be the donor and the secondary star the accretor in most cases. One of the key ingredients in determining the outcome of mass-transfer systems is the stability of mass transfer. When a massive donor star approaches the giant branch, its envelope will transition from radiative to convective, and the mass transfer becomes more prone to instabilities (Klencki et al., 2021). Therefore, the evolutionary phase of the donor is a useful property to explore. We show the evolutionary phases of the donor and accretor stars at the onset of mass transfer for all model variations in Fig. 8. Given in chronological order, the phases at which the donor transfers mass are the Main Sequence (MS), Hertzsprung Gap (HG), First Giant Branch (FGB), Core Helium Burning (CHeB) and Asymptotic Giant Branch (AGB) phase. The CHeB phase is defined as the moment when helium is ignited in the core, which can occur before the star reaches the FGB phase for sufficiently high masses. These systems move directly from the HG phase to the CHeB phase, avoiding the FGB phase. We only discuss systems with a MS accretor, as the fraction of systems in which the secondary is an evolved star is extremely small (\(\lesssim\)0.5%). Since the evolutionary phases after the MS are short-lived compared to the total stellar lifetime, this can only occur when the donor and the accretor have similar evolutionary timescales, and thus similar initial masses. Nonetheless, these systems can lead to interesting events, such as a double core common envelope (Dewi et al., 2006).
In the vast majority of systems, the donor star is a MS star (32.8-51.7%) or a HG star (37.8-50.5%). Only 7.9-12.7% of the donor stars are in the CHeB phase, while the FGB and AGB donors contribute merely a few percent. As discussed before, not all stars evolve through the FGB phase. Over 75% of the primary stars are initially more massive than about 13 M\({}_{\odot}\) and move directly to the CHeB phase. Similarly, stars initially more massive than about 38 M\({}_{\odot}\) do not evolve through the AGB phase for our choice of wind mass-loss prescription and metallicity. Instead, they transition to the Wolf-Rayet phase while burning helium in the core. Additionally, for stars that do evolve through the AGB phase, the fractional increase in radius is generally marginal compared to the HG and CHeB phases.
We discuss two effects that explain the large number of systems that undergo mass transfer during an early evolutionary phase of the donor. First, as shown in Fig. 1, the ZAMS distribution peaks at small inner semi-major axes as a result of satisfying the criterion for dynamical stability, disfavouring large inner semi-major axes. At small separations, the donor star does not need to expand much to fill its Roche lobe, hence reaching the onset of mass transfer early in its evolution. Second, three-body dynamical interactions can expedite the onset of mass transfer by decreasing the pericenter of the inner binary through oscillations of the inner eccentricity.
To quantify the impact of three-body dynamics on the systems that go through a phase of mass transfer initiated by the primary star, we show the maximum amplitude of the eccentricity oscillations for each system of the fiducial model in Fig. 9. In 9.1% of systems the inner eccentricity is increased by between 0.05 and 0.96. The population models affected the most and the least by three-body dynamics are model \(e_{\rm in}\)-Sana, with low initial inner eccentricities, and model \(i_{\rm rel}\)-const, with initial relative inclinations of zero, respectively. For these models, 13.7% and 1.1% of the systems that go through a phase of mass transfer initiated by the primary star have experienced eccentricity oscillations with amplitudes between 0.05 and unity. The affected systems generally have small initial ratios of P\({}_{\rm out}\)/P\({}_{\rm in}\), ensuring relatively short timescales on which the three-body dynamical interaction acts. However, there is a generous spread towards orbital period ratios up to over a hundred, since additional parameters (\(e_{\rm out}\), \(q_{\rm out}\) and \(i_{\rm rel}\)) affect the timescale and magnitude of the oscillations.
### Non-interacting systems
Across all model variations only 0.1-0.4% of the systems form a bound triple compact object (TCO) system that has had no previous interaction during its entire evolution. We have shown that these systems initially have relatively wide inner orbits to prevent a phase of mass transfer, and are therefore also required to have wide outer orbits to ensure a stable hierarchical configuration. At these wide separations, however, the weakly bound tertiary star is prone to becoming unbound by a SN kick, explaining the small contribution of this channel.
Nevertheless, systems devoid of any interaction are an interesting target to study, as they are considered to be a main formation channel for producing gravitational wave mergers in hierarchical triple stars (Antonini et al., 2017; Silsbee & Tremaine
2017; Rodriguez & Antonini 2018; Fragione & Loeb 2019; Martinez et al. 2022). In this scenario, the inner binary forms a double compact object (DCO) generally too wide to merge within a Hubble time. The tertiary star can drive the inner two objects significantly closer through three-body dynamics over long timescales, eventually merging the inner binary. Alternatively, in systems with a very wide outer orbit (\(>1000\) AU), flybys can induce a gravitational wave merger by exciting the eccentricity of the outer orbit (Michaely & Perets 2020).
In view of these possible compact object merger futures, we discuss the properties of the systems from the non-interacting channel. In Fig. 10, we show the distribution of initial parameters that are of importance for driving the eccentricity oscillations. We compare these initial parameter distributions to the properties at TCO formation of the non-interacting systems for the fiducial model. Most of the properties are distributed quite dissimilarly between the two populations. The TCO population evidently favors more moderate inner and outer eccentricities. Highly eccentric systems are more likely to interact and hence fail to form a TCO that has had no interaction during its evolution.
Figure 8: Evolutionary phase of the system at the onset of mass transfer in the inner binary. On the x-axis, the stellar types of the donor (primary) and accretor (secondary) are shown respectively. The symbols have the same meaning as those in Fig. 4.
Figure 6: The rate of interactions as a function of time for the fiducial model. _Left:_ cumulative normalized distribution per evolutionary channel. The black dashed line combines the contribution of all channels. _Right:_ The total number of interactions occurring at each moment in time, summed across all channels.
Figure 7: The incidence of pre- and post-interacting massive triples, assuming that stars formed at a constant star-formation rate. The contribution of binary stars is not included.
Interestingly, the models \(e_{\rm in}\)-Sana and \(e_{\rm out}\)-Sana, which peak at much smaller initial inner and outer eccentricities respectively, have eccentricities at TCO formation that are very similar to those in the fiducial model. For all model variations, near-equal mass ratios between the primary and tertiary compact object (\(m_{3}/m_{1}\)) are strongly favored. The system has to survive three subsequent SN kicks in order to form a bound TCO. For our choice of SN kick-velocity model, each star of the triple system needs to be massive to prevent break-up. Consequently, we predict a dearth of initial primary and tertiary masses below \(35\) M\({}_{\odot}\). At high masses, all stars tend to evolve towards similar final black-hole masses due to high rates of wind mass loss. The orbital period ratios P\({}_{\rm out}\)/P\({}_{\rm in}\) are almost completely limited to ratios smaller than \(10^{4}\) for the TCO systems. The minimum inner orbital period to avoid mass transfer in the inner binary is around \(6\times 10^{3}\) days (see Fig. 5). As the outer orbital periods are confined to \(10^{5.5}\) days, the orbital period ratio never reaches extreme values. The TCO orbits slightly disfavor relative inclinations of \(90^{\circ}\). At these inclinations, the eccentricity oscillations in the inner binary are the largest, complicating the formation of a non-interacting TCO. The discussed properties set the timescale for the ZLK oscillations, which partially determines how effectively GW mergers can be produced due to the presence of a third compact object. The ZLK timescales (Eq. 2) for the TCO systems are significantly longer than for the complete initial population and range from 100 kyr to 1 Gyr. However, ZLK oscillations and possibly higher order three-body dynamical interactions can still affect the system over the span of a Hubble time (\(\sim 13.5\) Gyr).
### Differences in the incidence of evolutionary channels and onset of mass transfer between model variations
The rates at which specific interactions occur vary quite significantly from the fiducial model for a few of the model variations (Fig. 4). In particular, models that favor small inner semi-major axes, such as model \(a_{\rm in}\)-Sana and \(a_{\rm in}\)&\(a_{\rm out}\)-Sana, or low tertiary masses, such as model \(q_{\rm out}\)-Moe, are responsible for the most significant differences.
Model \(a_{\rm in}\)-Sana and \(a_{\rm in}\)&\(a_{\rm out}\)-Sana show an increase in the incidence rate of systems that undergo mass transfer in the inner binary, with a rate of \(73.4\%\) and \(75.1\%\), respectively, compared to the fiducial model's rate of \(67.3\pm 0.9\%\). These models have more compact inner orbits, resulting in smaller Roche radii of the inner binary components and thus increasing the likelihood of mass transfer. Model \(a_{\rm in}\)&\(a_{\rm out}\)-Sana also has more systems where the tertiary star transfers mass onto the inner binary, with a rate of \(3.2\%\), compared to the fiducial model's rate of \(1.8\pm 0.2\%\). In this case, the outer orbits are more compact, making it more likely for the tertiary star to fill its Roche lobe.
Conversely, both models predict a decrease in the incidence rate of systems whose inner orbit becomes unbound, with rates of \(5.9\%\) and \(5.1\%\), respectively. In the fiducial model, the inner orbit was unbound in \(10.0\pm 0.6\%\) of all systems. We identify two processes that contribute to this decrease. Firstly, the inner orbits are typically more compact, resulting in a larger binding energy, which means that the range of supernova kick velocities and directions required to unbind the orbit becomes smaller. Secondly, at smaller semi-major axes, the system is more likely to undergo mass transfer before the supernova occurs that would ultimately unbind the orbit.
Figure 10: Cumulative step function of stellar and orbital properties for the fiducial model that are important in driving three-body dynamical interactions. In blue, we show the initial ZAMS conditions for all systems evolved in the simulation. In orange, the conditions at triple compact object formation are shown for the non-interacting systems.
Figure 9: The circularization radius (the semi-major axis the orbit would have if it were circularized by tides) of the inner and outer orbit of each system at the onset of mass transfer initiated by the primary star. Color-coded are the maximum amplitudes of the inner eccentricity induced by three-body dynamical interactions. Their distribution is also presented as a histogram. A value of \(\Delta e_{\rm in}^{\rm max}=1\) indicates that during a single oscillation the inner eccentricity has varied between 0 and 1.
Similar to models \(a_{\rm in}\)-Sana and \(a_{\rm in}\)&\(a_{\rm out}\)-Sana, our predictions for model \(q_{\rm out}\)-Moe indicate an increase in the incidence rate of systems undergoing mass transfer in the inner binary, with a rate of 76.6%. However, we also predict a significant decrease for systems transferring mass from the tertiary star onto the inner binary, with a rate of 0.45%, compared to the fiducial model's rate of \(1.8\pm 0.2\)%. Additionally, we observe a large decrease in the incidence rate of systems experiencing an unbinding of the outer orbit. The predicted rate is 6.9%, compared to the fiducial model's rate of \(18.9\pm 0.7\)%. These differences arise because the population peaks at small initial outer mass ratios, resulting in lower tertiary masses. When the tertiary star is less massive than either component of the inner binary, its evolutionary timescale is longer, allowing the inner binary to initiate mass transfer or unbind the inner orbit via a supernova kick before the tertiary star has evolved sufficiently to interact. Moreover, a larger fraction of tertiary stars in model \(q_{\rm out}\)-Moe will not experience a SN kick at all, but will instead end their lives as white dwarfs.
For a range of models we predict a significant relative increase in the incidence rate of systems that do not interact, with values ranging from 0.38% to 0.42%, compared to the fiducial model's rate of 0.24%. This includes all model variations with more moderate inner or outer eccentricities (\(e_{\rm in}\)-flat, \(e_{\rm out}\)-flat, \(e_{\rm in}\)-Sana & \(e_{\rm out}\)-Sana) and a low relative inclination (\(t_{\rm rel}\)-const). At low eccentricities, the pericenter distance as well as the specific binding energy between stars is larger, making it less likely for the system to initiate mass transfer or unbind. At low relative inclination, three-body dynamical effects are less important and will not be able to induce eccentricity oscillations that drive the system toward interaction.
In the models \(a_{\rm in}\)-Sana and \(a_{\rm in}\)&\(a_{\rm out}\)-Sana, donors are usually less evolved, and the percentage of main sequence (MS) donors increases to 49.3% and 51.7%, respectively, compared to the fiducial model's 38.1%. However, there are sigmoidately fewer donors in later evolutionary phases, which is expected because a smaller inner semi-major axis results in a smaller Roche lobe for the donor star. For model \(t_{\rm rel}\)-const, we expect fewer MS donor stars and more HG donors. This is due to the fact that three-body dynamics are not effective at low relative inclinations, resulting in negligible inner eccentricity oscillations. Consequently, the pericenter of the inner binary does not decrease, and the onset of mass transfer is expedited.
### Comparison with isolated binaries
In this section we quantitatively investigate how the evolution of the inner binary is affected by a tertiary star. Previously, we have come across two mechanisms that could alter the evolution of a system with a tertiary component: (1) the initial ZAMS properties are affected by constraints of triple star formation and (2) three-body dynamical effects alter the orbital elements. In order to study the implications of both mechanisms on the evolution of the systems, we present two additional simulations of a population of isolated binary stars based on the same initial sampling distributions as the fiducial model. The first simulation represents a population of true isolated binary stars; i.e. the systems are initiated and evolved without the presence of a tertiary star. We call this model Bin. The systems in the second simulation are initiated as a hierarchical triple (identical to the fiducial model), forcing the initial distributions of the inner orbital parameters to become skewed to ensure a stable hierarchical triple configuration. Subsequently, the tertiary star is removed. Therefore, when we model the evolution of this system from the ZAMS onward, there are only two stars in the system. We call this model Bin-Skew. The inclusion of the latter model allows us to disentangle the contribution of tertiary induced dynamical effects from the selection effects introduced at star formation.
We first investigate how significantly the general evolution of an isolated binary population is affected by skewing the orbital parameter distributions due to the presence of a tertiary star. To this end, we compare the predicted incidence rates of evolutionary channels for the models Bin and Bin-Skew (Fig. 11). There is an enormous discrepancy in the predicted number of systems that undergo mass transfer and systems whose orbit becomes unbound between both models. The model Bin-Skew has about one and a half times as many systems that experience an episode of mass transfer, 80.7% compared to 52.3%, and over a factor of two decrease in the number of orbits that become unbound, 18.0% compared to 45.0%. Furthermore, the number of systems that do not have any interaction during their entire evolution decreases by over a factor of two, 1.2% compared to 2.7%. The differences between these models are primarily a result of a smaller average pericenter at the ZAMS for the systems from model Bin-Skew. Interestingly, these differences are much bigger than the differences between the triple star models, as were shown in Fig. 4. In short, dynamical constraints imposed by a third star during the initialisation of a system can alter the interaction history for a large number of systems.
Next, we investigate the effects of three-body dynamics on the evolution of the inner binary. To this end, we compare the occurrence of mass transfer from the primary onto the secondary star at different evolutionary phases of the donor star between the fiducial model and Bin-Skew (Fig. 12). For the fiducial model, we find an increase of 7.6% in systems that initiate mass transfer on the MS compared to model Bin-Skew. Conversely, during later evolutionary stages, we predict a decrease of 23%-46%. Naively, one would have expected dynamical interactions to boost the total number of systems experiencing mass transfer in the inner binary. In only 67.3% of the systems from the fiducial model the inner binary components undergo mass transfer, while for model Bin-Skew this is 80.7%. This discrepancy follows from the fact that there are a few additional types of interactions unique to triple stars (Fig. 11). The systems in the fiducial model that undergo these unique triple-star interactions misinterpretabilities still undergo mass transfer in the inner binary at a later stage of evolution, but their contribution is excluded here as the simulations are terminated at the onset of the first interaction. Therefore, a direct comparison between the predictions of the binary and triple simulations would be undesirable.
However, the variations in the inner eccentricity due to three-body dynamics can give some insigincludegradnics into the difference in incidence rates of mass-transfer systems between the binary and triple simulations. For the triple population, the maximum oscillation amplitude of the inner eccentricity is most pronounced for the MS donors, with a measurable increase of more than 0.05 (0.01) in 19.1% (27.6%) of the systems. The MS lifetime is relatively long, which increases the opportunity for the quadruple terms as well as octupole terms to become relevant. At most other evolutionary phases the contribution of the tertiary star is significantly lower: 2.0% (6.6%) for HG donors, 0.27% (1.9%) for FGB donors, and 1.4% (6.0%) for AGB donors. The exception are CHeB donors, where the eccentricity increase is 15.4% (28.0%). Similar to the MS, the timescale of the CHeB phase is relatively long compared to the timescale of the other evolutionary phases.
## 4 Discussion
### Comparison with other work
Hamers & Thompson (2019) and Stegmann et al. (2022a) both study the intermediate and final evolutionary stages of massive hierarchical triple stars in the field using the respective triple stellar evolution codes TSE and SECULARMULTIPLE (Hamers & Portegies Zwart 2016; Hamers 2018). Toonen et al. (2020) addresses a similar scientific question as in this work, using the same code, but instead focus on the evolution of low mass triple stars (\(1\ \mathrm{M}_{\odot}<\mathrm{m}_{1}<7.5\ \mathrm{M}_{\odot}\)). We will compare the results of these studies with our results and comment on dissimilarities.
Hamers & Thompson (2019) investigate the merger rates of double neutron star and black hole-neutron star mergers in hierarchical triple systems. They sample the initial stellar and orbital properties from similar distributions, but the sampling boundaries differ somewhat from our study. First of all, the initial primary masses are sampled between 8 and 50 \(\mathrm{M}_{\odot}\), resulting in lower average masses. Second, their choice of the maximum initial inner orbital period is significantly longer compared to our study: \(10^{10}\) days opposed to \(10^{8.5}\) days. Stegmann et al. (2022a) investigate the evolution of a massive triple star population from the ZAMS until the formation of a compact object. Similar to Hamers & Thompson (2019), the initial primary masses reach down to 8 \(\mathrm{M}_{\odot}\). However, instead of sampling each initial property from an independent distribution, they use correlated distributions as presented in Moe & Di Stefano (2017). Both studies do not vary the initial properties, but do vary physics, such as the supernova kick model.
#### 4.1.1 Mass transfer initiated by primary
Hamers & Thompson (2019) find that mass transfer initiated by the primary star occurs in 44% of systems when including a non-zero SN kick prescription. That is 20% fewer systems than the lower boundary on our predictions. The most likely explanation for this discrepancy is that their initial orbits extend to longer periods. Naturally, with less strongly bound orbits, we would expect a SN kick to unbind the system more easily. This is supported by an increase of over 20% in the number of system where the inner and outer orbit becomes unbound in Hamers & Thompson (2019). Furthermore, with lower typical masses, eventual SN kicks are stronger on average and are hence more likely to unbind the orbit. It is evident that changes in the initial orbital parameter ranges can significantly alter the interaction history of a triple star population.
Stegmann et al. (2022a) find that at solar metallicity 74% of the systems undergo mass transfer in the inner binary. However, since they do not terminate the simulation at the onset of an interaction, this population includes systems that experience the unbinding of the outer orbit prior to the mass transfer event. If the tertiary star was originally more massive than either component of the inner binary, mass transfer in the inner binary could be achieved even after the SN kick of the tertiary star has unbound the outer orbit. Unsurprisingly, our results indicate that for most models a lower fraction of inner binaries initiate mass transfer, with a median of 67%. Since we do not have predictions for the number of systems that experience mass transfer in the inner binary after the first interaction, we refrain from drawing conclusions about the comparison between the incidence rates of mass-transfer systems.
The frequency of mass transfer episodes that are initiated by the primary star in low-mass hierarchical triple populations
Figure 11: Percentage of systems evolving through each evolutionary channel for the isolated binary models Bin and Bin-Skew, and the fiducial model for reference. The systems in model Bin are initialised and evolved as isolated binary stars. The systems in model Bin-Skew are initialised as hierarchical triple stars, but evolved as isolated binary stars after removing the tertiary star.
Figure 12: The number of systems per evolutionary phase that evolve through the (inner) mass transfer channel for the fiducial model (orange), the binary model with skewed initial parameters (blue) and the true isolated binary model (red). The hatched regions correspond to the number of inner binaries with maximum eccentricity amplitudes larger than 0.05 (black) and 0.01 (gray).
is similar as in our high mass population (Toonen et al., 2020). However, the mass transfer is dominated by more evolved donor stars, namely by red giant branch stars, while in our simulations the mass transfer is mainly initiated by donor stars on the MS or HG. Opposed to massive stars, low mass stars barely expand during the MS phase and avoid filling their Roche lobe. Additionally, low-mass stars leave the MS phase much closer to the Hayashi track, resulting in a shorter Hertzsprung gap phase.
#### 4.1.2 Non-Interacting systems
Previous studies have shown that the formation rate of TCO systems that have not experienced an interaction are highly dependent on the assumed SN kick model (Antonini et al., 2017; Silsbee and Tremaine, 2017; Rodriguez and Antonini, 2018; Fragione and Loeb, 2019). With lower kick velocities, the triple orbits are more likely to remain bound and the probability of producing a TCO becomes higher. With non-zero kick velocities, Stegmann et al. (2022) and Hamers and Thompson (2019) predict a fraction of 0.12% and 0.4% non-interacting systems, respectively, which is within the range of our model predictions. However, when neglecting the SN kick, this fraction becomes substantially larger. Naively, one would expect that the compact-object merger rates would be drastically lower when higher kick velocities are implemented. Interestingly, Antonini et al. (2017) showed that this is not the case for binary black-hole mergers, as bound systems that have received a stronger SN kick have larger eccentricities on average, making three-body dynamics more effective, which results in shorter merger delay times.
#### 4.1.3 GW sources
Studies exploring the effect of three-body dynamics on the merger rate of compact objects in non-interacting triple stars have often neglected three-body dynamics prior to TCO formation. We explore the consequences of neglecting three-body dynamics during the stellar evolution phase on the population of non-interacting triple stars at TCO formation by excluding the quadrupole and octupole terms. Consistent with other studies, we still include the stability criterion. Our results indicate that neglecting three-body dynamics during the stellar evolution phase has an impact on the properties of the TCO population. For example, the fraction of systems with relative inclinations between 40\({}^{\circ}\) and 140\({}^{\circ}\) (60\({}^{\circ}\) and 120\({}^{\circ}\)) is 80% (51%) for the model variation compared to 70% (34%) for the fiducial model. The preference for aligned orbits is consistent across the other models. In addition, the fraction of systems in the model variation with outer eccentricities larger than 0.5 (0.6) is 59% (46%) compared to 45% (27%) for the fiducial population. However, not all other models show a clear preference towards lower outer eccentricities. The impact on the ZLK timescales is negligible, but the maximum amplitudes of the inner eccentricity due to three-body dynamical interactions are smaller for the fiducial population (Eq. 3). Specifically, 62% of systems in the fiducial population have a maximum ZLK amplitude above 0.5, while this is 78% for the model variation. This suggests that previous studies ignoring three-body dynamics during stellar evolution have overestimated the rate of gravitational wave mergers resulting from non-interacting triple stars. These results are not surprising, as stellar systems with strong dynamics are expected to interact before TCO formation. Furthermore, we predict no discernible difference in the final masses, mass ratios or orbital periods of the inner binary.
### Main caveats
Simulations with binary population synthesis codes rely on many assumptions that unavoidably introduce uncertainties to the predicted population. Adding a third star into the equation complicates matters further. In particular, the uncertainties in the initial orbital properties of the triples are prominent. Fortunately, these uncertainties are less important then one may expect (see Section 2.3). Besides the initial properties, we address the other main sources of uncertainty in the following section.
#### 4.2.1 Massive star evolution
As mentioned in Section 2, the approximate nature of the single stellar evolution fitting formulae can lead to imprecise inferred final properties of the stars, especially for stars above 50M\({}_{\odot}\), as this is the upper mass limit of the Hurley stellar tracks (Hurley et al., 2000) and one needs to extrapolate at higher initial masses.
Furthermore, many aspects of stellar physics are still poorly understood. We highlight the two physical assumption most relevant for this study: mass loss throughout evolution and natal kick properties of compact objects. Uncertainties and intricacies in mass-loss from hot stars have recently been summarized by Vink (2022). Scatter in empirical rates as function of luminosity are typically at least a factor of three, while discrepancies between theory and observations may exceed that amount in certain parameter ranges (especially at luminosities below \(\sim 10^{5}\) L\({}_{\odot}\); e.g., Brands et al., 2022). For cooler stars, uncertainties are even larger. For luminous stars that evolve toward the Humphrey-Davidson limit and experience a Luminous Blue Variable phase, accurate theoretical and observation-based mass-loss relations are lacking altogether (Smith, 2014), forcing crude assumptions to be made. For Red Supergiants, which possibly suffer from multiple mechanisms causing the loss of gas (e.g., Cannon et al., 2021; Montarges et al., 2021), empirical prescriptions of mass loss deviate by up to an order of magnitude or more (e.g., Mauron and Josselin, 2011; Beasor et al., 2020). These uncertainties are particularly concerning as stellar mass loss plays a crucial role in governing supernova progenitor properties, such as the core mass, and can affect the outcome of binary evolution. Higher wind mass-loss rates lead to more pronounced orbital widening and can prevent systems from engaging in mass transfer.
Of key relevance as well are the natal kick properties imparted on the compact object formed during a supernova event. We have assumed the kick velocity model from Verbunt et al. (2017), which is based on the motions of young galactic pulsars, and scaled down for black holes according to the fallback prescription of Fryer et al. (2012) and the momentum of the object. However, the proper shape of the velocity distribution is still under debate. Some studies favour a single peaked Maxwellian distribution (Lyne and Lorimer, 1994; Hansen and Phinney, 1997; Hobbs et al., 2005), while others one that is double peaked (Fryer et al., 1998; Arzoumanian et al., 2002; Faucher-Giguere and Kaspi, 2006; Verbunt et al., 2017). For black holes, the lack of observed natal kicks complicates discrimination between different models even further (Fragos et al., 2009; Repetto et al., 2012, 2017; Mandel, 2016). The uncertainty in SN kicks on the occurrence rates of certain evolutionary channels can be significant (see Section 4.1.2).
#### 4.2.2 Early interactions
Roughly 4% of the systems interact within 0.1 Myr from the ZAMS. These are mainly systems that engage in mass transfer and a small population of systems that become dynamically unstable. The vast majority of these systems has experienced an appreciable increase in the inner eccentricity, indicating three-body dynamical interactions have been at play. Only a handful of systems maintain a nearly constant inner eccentricity, as the systems were initialized extremely close to Roche-lobe filling or dynamical destabilisation. The number of systems that interact early on is on the order of 3-6% across all model variations, apart from the model \(i_{\rm rel}\)-const. At initial relative inclinations of zero (coplanar orbits), we predict only 0.8% of systems to interact within 0.1 Myr, since at low inclinations three-body dynamics is not effective.
We address a few points that could complicate the physical significance of these fast interactions. First, the initial inner semi-major axis of the systems experiencing mass transfer are typically not very large (few tens to few hundreds of solar radii). Before thermal equilibrium sets in on the ZAMS, the stellar radii are larger and the possibility can not be excluded that the components of the inner binary would interact on the pre-main sequence, likely resulting in a stellar merger (Tokovinin & Moe, 2020).
Second, observed low velocity dispersion distributions in young stellar clusters suggest that the minimum separation of inner binary orbits at formation might be larger than assumed in this study (Sana et al., 2017; Ramirez-Tannus et al., 2021), and that the Sana distribution is only established \(\sim 1\) Myr after formation. With a larger minimum separation, we expect a decrease in the number of systems that interact early on, as systems with primary stars initially close to Roche-lobe filling are avoided and three-body dynamical effects need to be strong in order to drive wider systems to transfer mass.
Third, observations of solar mass triple stars suggest that tight inner orbits, \(a_{\rm in}<2000\)\({\rm R}_{\odot}\), are associated with relative inclinations smaller than 40\({}^{\circ}\)(Borkovits et al., 2016; Tokovinin, 2017). At low inclinations, three-body dynamical interactions are strongly reduced and likely diminish the contribution of these fast interactions.
#### 4.2.3 Eccentric mass transfer
An important challenge in population synthesis codes is the lack of understanding of how mass transfer progresses in eccentric orbits. Usually, this problem is circumvented by assuming the system circularises by the onset of mass transfer or initialising the orbits as circular. However, recent studies find that the tidal forces are generally inefficient in circularising the orbit of binary systems before mass transfer is initiated (Eldridge, 2009; Vigna-Gomez et al., 2020; Vick et al., 2021). Moreover, in triple star systems, if the three-body dynamical interactions are strong enough, complete circularisation due to tidal forces can be prevented (e.g. Toonen et al., 2020). In Fig. 13, we show the degree of circularisation of the inner orbit for all systems in which the primary star fills its Roche lobe for our fiducial model. While many of the most compact inner orbits are completely circularised, the majority of systems retain a nearly constant eccentricity. A non-negligible fraction of systems, located at small \(a_{\rm out}/a_{\rm in}\) even experiences a significant boost in eccentricity as three-body dynamical effects become more important. We stress that Fig. 13 is based on a singular tidal model, where (1) the tidal timescales are uncertain and (2) tidal processes, such as efficient dissipation in highly eccentric orbits (Moe & Kratter, 2018; Generozov et al., 2018), are ignored. As a result, we are likely to underestimate the degree and timescale of circularisation in the inner binary. This is especially valid for inner orbits with high eccentricities, but also for small semi-major axes, where tides could be more efficient during the pre-main sequence phase.
#### 4.2.4 Tertiary tides
Besides tidal forces between the primary and secondary star, the tertiary star can also dissipate energy from the inner binary through tidal interactions (Fuller et al., 2013). To investigate the importance of this effect, we apply the tertiary tidal model of Gao et al. (2020) on our fiducial population. As a result of tertiary tides, the semi-major axis of the inner binary shrinks as:
\[\begin{split}\frac{1}{a_{\rm in}}\frac{da_{\rm in}}{dt}& =2.22\times 10^{-8}{\rm yr}^{-1}\,\frac{4q_{\rm in}}{(1+q_{\rm in})^{2} }\left(\frac{{\rm R}_{3}}{100\;{\rm R}_{\odot}}\right)^{5.2}\\ &\times\left(\frac{a_{\rm in}}{0.2\;{\rm AU}}\right)^{4.8}\left( \frac{a_{\rm out}}{2\;{\rm AU}}\right)^{-10.2}\left(\frac{\tau}{0.534\;{\rm yr }}\right)^{-1.0},\end{split} \tag{5}\]
with \(R_{3}\) the radius and \(\tau\) the viscoelastic relaxation time of the tertiary star. We set \(\tau\) to \(10^{-4}\) yr in accordance with Gao et al. (2020). We did not find a noticeable impact of the tertiary tides on the evolution of our fiducial population for the chosen value of \(\tau\).
## 5 Conclusions
In this work, we have studied the main evolutionary pathways of massive hierarchical triple stars up to the first moment of interaction. Additionally, we have investigated what impact a tertiary companion has on the evolution of the system. We have done this
Figure 13: The circularization radius of the inner and outer orbit of each system at the onset of mass transfer initiated by the primary star. Color-coded is the ratio between the final and initial eccentricity of the inner orbit. Their distribution is also presented as a histogram in the smaller plot. The light-colored systems (\(e_{\rm in}^{\rm in}/e_{\rm in}^{\rm in}<1\)) at small circularization radii of the inner orbit have been completely circularized due to tidal dissipation. The dark-colored systems (\(e_{\rm in}^{\rm in}/e_{\rm in}^{\rm in}>1\)) at comparable inner- and outer circularization radii have been subjected to three-body dynamical effects. The evolution of the other systems has hardly been affected by either tidal effects or three-body dynamics.
by performing large-scale simulations of massive triple star evolution, beginning at the ZAMS up to the first point of stellar or orbital interaction. To account for uncertain formation properties in the simulated population, we included predictions for several model variations of the initial parameter distributions. Our key findings can be summarized as follows:
* The initial parameters of stable hierarchical triple stars are highly constrained by the combined requirement of dynamical stability and avoidance of contact of the inner binary, which restrict the period distribution. Triple initiation results in less wide inner orbits, leading to a high incidence of interactions such as mass transfer. The inner orbit period distribution strongly differs from a flat distribution and deviates substantially different from a Sana distribution.
* The vast majority of systems (65%-77%) have a phase of mass transfer initiated by the primary star as their first interaction. This occurs mainly at early evolutionary stages of the donor. In 32-50% of the cases the donor is still a main-sequence star, and it has evolved to the Hertzsprung gap in 38-50% of the systems. We have identified two main processes responsible for the large number of systems that initiate mass transfer at an early evolutionary phase. First, as a result of the dynamical stability criterion at formation, the inner orbits are typically more compact initially, and therefore the stars need to expand less in order to fill their Roche lobe, thus favoring mass transfer. Second, three-body dynamics can drive up the maximum eccentricity of the inner orbit, shrinking the eccentric Roche lobe. In the fiducial model, 10% of all systems that undergo mass transfer initiated by the primary star experience an increase of the inner eccentricity by at least 0.05.
* Across all triple models, we predict that fewer than 0.5% of the systems do not engage in any interaction after being evolved for a Hubble time, such that all three stars effectively evolve as single stars. These systems are known targets as progenitors for gravitational wave mergers. We have shown that systems with strong three-body dynamics tend to interact before a triple compact object (TCO) is formed and hence reduce the likelihood of producing compact objects that merge with the help of von Zeipel-Lidov-Kozai (ZLK) oscillations. However, the typical ZLK timescales are still a few orders of magnitude shorter than a Hubble time and can accelerate the inspiral of the compact objects. We have also shown that ignoring three-body dynamics before compact object formation results in TCOs with stronger eccentricity oscillations and thereby likely over-predicts the merger rate of compact objects in such systems.
* The predicted incidence rates of systems evolving through each evolutionary channel is most sensitive to variations in the inner/outer semi-major axis and the outer mass ratio distribution. The incidence rate of inner mass-transfer systems is dominant compared to other evolutionary channels leading to interaction. Though, the absolute differences between population models for these less common channels are not important, the relative differences can be relevant. For only two evolutionary channels, mass transfer from the tertiary star onto the inner binary and the unbinding of the outer orbit, the incidence rate differs by over a factor of two from the fiducial model's rate (see Fig. 4).
* The evolution of an isolated massive binary star population up to the first interaction differs significantly from that of massive hierarchical triple stars. The criterion of dynamical stability and eccentricity oscillations due to dynamical interaction ensure that at least an additional 15% of the population initiates mass transfer in the inner binary. Moreover, the donor stars are generally at an earlier evolutionary phase at the onset of mass transfer.
###### Acknowledgements.
The authors acknowledge support from the Netherlands Research Council NWO (VENI 639.041.645 and VIDI 203.061 grants).
|
2305.06263 | DMPP-3: confirmation of short-period S-type planet(s) in a compact
eccentric binary star system, and warnings about long-period RV planet
detections | We present additional HARPS radial velocity observations of the highly
eccentric ($e \sim 0.6$) binary system DMPP-3AB, which comprises a K0V primary
and a low-mass companion at the hydrogen burning limit. The binary has a $507$
d orbital period and a $1.2$ au semi-major axis. The primary component harbours
a known $2.2$ M$_{\oplus}$ planet, DMPP-3A b, with a $6.67$ day orbit. New
HARPS measurements constrain periastron passage for the binary orbit and add
further integrity to previously derived solutions for both companion and planet
orbits. Gaia astrometry independently confirms the binary orbit, and
establishes the inclination of the binary is $63.89 \pm 0.78 ^{\circ}$. We
performed dynamical simulations which establish that the previously identified
$\sim800$ d RV signal cannot be attributed to an orbiting body. The additional
observations, a deviation from strict periodicity, and our new analyses of
activity indicators suggest the $\sim800$ d signal is caused by stellar
activity. We conclude that there may be long period planet 'detections' in
other systems which are similar misinterpreted stellar activity artefacts.
Without the unusual eccentric binary companion to the planet-hosting star we
could have accepted the $\sim800$ d signal as a probable planet. Further
monitoring of DMPP-3 will reveal which signatures can be used to most
efficiently identify these imposters. We also report a threshold detection (0.2
per cent FAP) of a $\sim2.26$ d periodicity in the RVs, potentially attributed
to an Earth-mass S-type planet interior to DMPP-3A b. | Adam T. Stevenson, Carole A. Haswell, John R. Barnes, Joanna K. Barstow, Zachary O. B. Ross | 2023-05-10T15:55:40Z | http://arxiv.org/abs/2305.06263v1 | DMPP-3: confirmation of short-period S-type planet(s) in a compact eccentric binary star system, and warnings about long-period RV planet detections
###### Abstract
We present additional HARPS radial velocity observations of the highly eccentric (\(e\sim 0.6\)) binary system DMPP-3AB, which comprises a K0V primary and a low-mass companion at the hydrogen burning limit. The binary has a 507 d orbital period and a 1.2 au semi-major axis. The primary component harbours a known 2.2 \(\mathrm{M_{\oplus}}\) planet, DMPP-3A b, with a 6.67 day orbit. New HARPS measurements constrain periastron passage for the binary orbit and add further integrity to previously derived solutions for both companion and planet orbits. _Gaia_ astrometry independently confirms the binary orbit, and establishes the inclination of the binary is \(63.89\pm 0.78^{\circ}\). We performed dynamical simulations which establish that the previously identified \(\sim\)800 d RV signal cannot be attributed to an orbiting body. The additional observations, a deviation from strict periodicity, and our new analyses of activity indicators suggest the \(\sim\)800 d signal is caused by stellar activity. We conclude that there may be long period planet 'detections' in other systems which are similar misinterpreted stellar activity artefacts. Without the unusual eccentric binary companion to the planet-hosting star we could have accepted the \(\sim\)800 d signal as a probable planet. Further monitoring of DMPP-3 will reveal which signatures can be used to most efficiently identify these imposters. We also report a threshold detection (0.2 per cent FAP) of a \(\sim\)2.26 d periodicity in the RVs, potentially attributed to an Earth-mass S-type planet interior to DMPP-3A b.
keywords: planetary systems - binaries: close - techniques: radial velocities - binaries: spectroscopic - stars: low-mass
## 1 Introduction
DMPP-3 is a unique eccentric binary star system, where a hot super-Earth planet was found to orbit one of the stars in a close binary pair (Barnes et al., 2020, hereafter B20). The circumprimary (S-type) planet orbits the primary star DMPP-3A (HD 42936), a slowly rotating K0V star. The highly eccentric (\(e=0.6\)) very low mass stellar companion DMPP-3B is just above the mass required to sustain hydrogen burning (\(M_{\mathrm{B}}=82.5\)\(\mathrm{M_{\mathrm{\mathrm{J}up}}}\)). The DMPP-3AB orbit has a semi-major axis of \(\alpha_{\mathrm{AB}}=1.23\) au. Without the context of hosting an S-type planet, DMPP-3AB is not a particularly close binary, but it is the most compact binary system to harbour an S-type planet observed thus far. It is also one of the few systems containing a radial velocity (RV) detected S-type super-Earth around an FGK star (B20; Unger et al., 2021; Barros et al., 2022). Su et al. (2021) compare the known S-type RV discoveries, and show that DMPP-3A b is an extreme outlier in their sample. In Figure 1, we have similarly plotted the demographics of all known S-type planets, i.e. we have included those discovered through their transits. DMPP-3A b lies in the bottom left hand corner, with lowest separation and third lowest projected planetary mass in the sample. Only two other planets in this sample have a binary separation less than 10 au, but with minimum masses more than an order of magnitude higher. DMPP-3 provides an excellent opportunity to challenge the current planetary system formation and evolution models, through studying an extreme and previously unseen system configuration.
The scarcity of DMPP-3A b analogues found in the general demographics reported by Su et al. (2021) can arise from a few factors. In their work, the authors discuss the observational biases involved in RV surveys, which often disregard binary star systems. The stars in close binaries are typically unresolved, and the recorded spectrum is a blend of light from both objects. If the mass ratio between the two bodies is relatively equal, they will contribute a similar amount of light, and confuse the derived RVs (Su et al., 2021). For these so-called 'double-lined' spectroscopic binaries, the spectra consist of two sets of superimposed lines, which require novel techniques and extensive analyses to extract the signal of each component (Konacki et al., 2009, 2010). The results can still suffer from residual scatter due to the blending of the spectra.
Where the two stars in a stellar binary have very different luminosities, as is the case for DMPP-3AB, the light comes predominantly from the primary component and the spectrum effectively reveals
only a single set of stellar lines. In this case there is no need for complex deconvolution to recover individual spectra. It is possible to obtain very precise radial velocities on such single-lined binaries, as demonstrated by Standing et al. (2022) and Triaud et al. (2022). These studies focus on binary signal subtraction to detect circumbinary (P-type) planets orbiting outside the inner stellar pair, and are able to reach a residual root mean squared scatter of 3 m s\({}^{-1}\)(Standing et al., 2022). This is a different system architecture to the circumprimary (S-type) planet hosts shown in Fig. 1 and under discussion in the present paper.
The double-lined nature of many binary observations therefore hinders the detection of low mass planets, almost certainly causing them to be under-represented in the samples. However, some of the scarcity of known DMPP-3 analogues must be due to the influence that multiplicity has on the formation and retention of these planets (Holman and Wiegert, 1999; Jang-Condell, 2015; Kraus et al., 2016; Marzari and Thehault, 2019). A highly eccentric, close in companion dramatically affects the dynamical stability of any other orbiting bodies, and would truncate the protoplanetary disc through dynamical perturbations, limiting the available mass for the creation of planets.
Planetary stability in binary systems has been studied over the years in an attempt to solve the three-body problem and find regions where planets can reside such systems (Dvorak, 1986; Mardling and Aarseth, 1999). Holman and Wiegert (1999) performed numerical simulations to investigate the possible stable S-type orbits for companions with a variety of mass ratios, eccentricities, and semi-major axes. They developed a semi-empirical formula to determine the critical semi-major axis: a threshold value exterior to which a planet cannot orbit. DMPP-3A b is within their'safe' zone (which extends out to 0.16 au from the primary star in this system), a finding confirmed through simulations in B20.
Whilst DMPP-3A b's present orbit is stable, we must also consider challenges to the formation of this system. The protoplanetary disc would be truncated by the presence of another massive body, an effect strongest for large eccentricity and small semi-major axis of companion orbit: two features this system exemplifies (Jang-Condell, 2015). If S-type planets do begin to form, such systems can be short-lived, with secular orbital evolution resulting in ejection on timescales of millions to billions of years (Kraus et al., 2016). DMPP-3A is 9.6 Gyr old (Table 1). It seems likely the architecture of the DMPP-3 system has evolved to the current configuration through dynamical interactions.
This system was selected for intensive high precision, high cadence RV observations due to anomalously low chromospheric emission, characterised by the \(\log R^{\prime}_{\rm HK}\) metric, and attributed to the presence of circumstellar material around the primary star. This is hypothesized to be supplied by the loss of material from close-in planets (Haswell et al., 2020), and is therefore indicative of short period planets. The most dramatic examples of mass-losing short period planets are the so-called catastrophically disintegrating exoplanets (CDEs). These are rocky planets with very short orbital periods. They are heated so intensely that the rocky surface is vaporised and carried off in a thermal wind. The DMPP-3 system seems likely to be in a short-lived planetary mass-losing phase, perhaps following a dynamical reconfiguration. Employing an approximate relation for planetary temperature to orbital distance (Rappaport et al., 2012), a CDE surface temperature of approximately 2000 K (Jones et al., 2020) would correspond to a \(\sim\)1.6 d orbit around DMPP-3A. Small planets on 1-2 day orbits are therefore likely to lose mass.
Despite the often disastrous influence close-in eccentric binary companions are understood to have on planet formation (exemplified by the dearth of planets found in binaries with separation \(\lesssim 10\) au seen in Fig. 1), the occurrence rate for hot Jupiter planet hosts having a stellar companion is twice as high as the binarity ratio of field stars with projected separations 20 - 10000 au (Cadman et al., 2022). A secondary star can modify the projected spin-orbit angle between primary star and planet (observed through measuring the Rossiter-McLaughlin effect), exciting secular evolution and interactions that cause a departure from coplanarity for all orbiting bodies (Martin and Lubow, 2018; Franchini et al., 2020; Moe and Kratter, 2021; Best and Petrovich, 2022). Recent evidence indeed suggests that orbital misalignment between a planet and a secondary stellar companion is relatively common for close-in giants (\(P_{\rm P}<10\) d, \(R_{\rm P}>4~{}R_{\rm\earth}\)), with systems tending to favour a polar orbit. In comparison, smaller (\(a_{\rm P}<1\) au, \(R_{\rm P}<4~{}R_{\rm\earth}\)) planets are mostly found in less inclined orientations relative to a companion star, with inclination angles between the planetary and companion orbits of \(\sim\)10-50 degrees (Behmard et al., 2022).
The architecture we see in DMPP-3 could be created through the eccentric Kozai-Lidov mechanism (EKL: Lidov, 1962; Kozai, 1962). In hierarchical triple systems where the inclinations are not co-planar, a perturbing companion can stir up oscillations in the inclination and eccentricity, caused by the overall requirement for angular momentum to be conserved (Naoz, 2016). This is thought to be the cause of non-zero eccentricities of S-type planets in binaries. Although close-in planets have short tidal circularisation timescales, gravitational interactions with a binary companion can excite eccentricity on an even shorter dynamical timescale.
The EKL may have driven the migration for some hot Jupiter planets (Rubie et al., 2015; Morbidelli et al., 2015; Angelo et al., 2022). A companion can perturb the planet orbit into gradually more eccentric configurations over secular timescales (far longer than the orbital periods involved in the system), and with increased eccentricity the periastron will gradually move closer to the primary star. Tidal forces on the planet and star then tend to shrink the orbit and circularise it, creating a close-in planet (Fabrycky and Tremaine, 2007; Naoz, 2016).
Dynamical interactions can significantly alter orbital parameters
Figure 1: Planet mass as a function of binary separation for all S-type planets. Adapted from Figure 2 in Su et al. (2021), we have extended the sample of stars (to include other detection methods, see [http://exoplanet.eu](http://exoplanet.eu)) in order to highlight the outlying position of DMPP-3A b. The green diamonds correspond to single-planet systems, the blue circles denote multiple planets in a binary system, and the orange star shape identifies DMPP-3A b. Kepler-693A b resides in the next closest binary, with separation \(a_{\rm AB}=2.90\) au.
of the system, but can also change the structure altogether. Gong & Ji (2018) highlight that whilst formation of S-type planets in a close binary is challenging, the formation of circumbinary (P-type) planets would be comparatively easier to achieve - and should be relatively common throughout the Universe. So far, a small number of transiting P-type planets have been discovered, by the _Kepler_(Borucki et al., 2010) and _TESS_(Ricker et al., 2015) missions (eg. TIC 172900988 b; Kostov et al., 2021). The first P-type planet discovered purely through RVs, TOI-1338/BEBOP-1 c, was also recently announced by Standing et al. (2023). Despite these discoveries, no planets have yet been found with orbits of the scale of \(\sim 3\) au around a \(\sim 1\) au binary.
To study the evolution of such systems, Gong & Ji (2018) explore planet-planet scattering, occurring for multiple planets when the eccentricities are stirred up (perhaps by the close binary pair developing eccentricity too). Tidal interactions could cause the P-type planets to be captured by the primary star in the close binary, becoming S-type. The capture probability is low (and related to mass ratio and eccentricity of the binary), but scattering through multiple planet interactions would increase this probability by reducing the energy of the planet (Gong & Ji, 2018). The capture scenario should be possible for close binaries with semi-major axes of \(0.5-3\) au, such as the DMPP-3 system. DMPP-3 is therefore useful for investigating planetary system evolution. Several physical processes with important inferred roles in sculpting the demographics of the Galaxy's short period planets could be strongly at play in creating and maintaining the exotic architecture we observe.
This paper presents additional RV data for DMPP-3. The new observations are described in Section 2; updated stellar parameters of DMPP-3A are discussed in Section 3; the RV analysis, refined system parameters, and further periodicities identified are described in Section 4. We discuss mutually inclined orbital simulations in Section 5, reporting the resulting dynamical timescales for system disruption. In Section 6 we consider the issue of stellar activity and how that impacts the periodic variability of the spectra. Our results are then discussed in Section 7, and we summarise the findings and conclude in Section 8.
## 2 Observations
Previous RV observations with the High Accuracy Radial Velocity Planet Searcher (HARPS, Mayor et al., 2003) enabled the binary orbit of DMPP-3AB to be identified and characterised for the first time (B20). The best solution indicated a highly eccentric, low mass ratio binary, with large RV excursion around periastron. Since the HARPS observations did not sample orbital phases close to periastron, the RVs were supplemented with two further measurements from the CORALIE spectrograph at the 1.2-metre Leonhard Euler Telescope (see Fig. 2, and Figure 1 in B20). The CORALIE observations confirmed the RV trend predicted by HARPS, but are an order of magnitude less precise, with RV uncertainties of 9 m s\({}^{-1}\). Moreover, the use of a different spectrograph meant that the observations had to be treated as a separate dataset. This necessitated an additional RV offset parameter meaning that the two CORALIE observations do not place a strong constraint on the orbital solution. We thus secured five new observations in service mode with HARPS around the September 2021 periastron, see Table 2.
We use a total of 106 spectra. The five new observations we refer to as S21. The remaining observations comprise the data used by B20 and are as follows: eight HARPS observations made between 2008 and 2013 by the Calan-Hertfordshire Extrasolar Planet Search (CHEPS, Jenkins et al., 2009); the main body of ninety-one HARPS observations made between 2015 and 2018 (DMPP); and two CORALIE observations made in 2017.
Data from the HARPS spectrograph were reduced using the harps-terra software (Anglada-Escude & Butler, 2012) to determine the radial velocities from the wavelength-calibrated spectra through a template matching process. Ancillary measurements of activity from the cross-correlation functions (CCF) are obtained through the standard HARPS data reduction software (bns1) pipeline. For more information on the reduction process used, see B20.
Footnote 1: [http://www.eso.org/sci/facilities/lasilla/instruments/harps/doc/DRS.pdf](http://www.eso.org/sci/facilities/lasilla/instruments/harps/doc/DRS.pdf)
## 3 Stellar Parameters of DMPP-3A
Properties of DMPP-3A were derived from the stellar spectrum using the ten highest S/N spectra from the DMPP dataset with the latest version of the species code (Soto & Jenkins, 2018). Along with values from the literature, these are given in Table 1. The stellar parameters which are consistent with those reported in B20 comprise: the \(V\) and \(B-V\) magnitudes; the chromospheric activity index \(\log R^{\prime}_{\rm HK}\) (derived from core emission in the Calcium \(\rm n\) H&K lines); effective temperature \(T_{\rm eff}\); metalicity [Fe/H]; surface gravity \(\log g\); macroturbulence velocity \(v_{\rm mac}\); stellar mass \(M_{*}\); and age. The luminosity \(L_{*}\) was not included.
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter & Value & Reference \\ \hline Spectral type & KOV & Houk \& Cowley (1975) \\ Parallax (mas) & \(21.25\pm 0.11\) & Gaia Collaboration et al. (2022) \\ Distance (pc) & \(47.06\pm 0.25\) & Gaia Collaboration et al. (2022) \\ \(V\) & 9.09 & SIMBAD \\ \(B-V\) & 0.91 & SIMBAD \\ \(\log R^{\prime}_{\rm HK}\) & \(-5.14\pm 0.05\) & Jenkins et al. (2011) \\ \(T_{\rm eff}\) (K) & \(5201\pm 20\) & \\ \([\rm Fe/H]\) & \(0.147\pm 0.013\) & \\ \(\log g\) (cm s\({}^{-2}\)) & \(4.266\pm 0.045\) & \\ \(v\) sin (km s\({}^{-1}\)) & \(3.17\pm 0.1\) & \\ \(v_{\rm mac}\) (km s\({}^{-1}\)) & \(1.58\pm 0.10\) & \\ \(R_{\rm A}\) (R\({}_{\sun}\)) & \(0.861\pm 0.005\) & \\ \(M_{*}\) (M\({}_{\sun}\)) & \(0.900\pm 0.009\) & \\ \(L_{*}\) (L\({}_{\sun}\)) & \(0.510\pm 0.003\) & Gaia Collaboration (2018) \\ Age (Gyr) & \(9.6\pm 0.8\) & \\ \hline \hline \end{tabular}
\end{table}
Table 1: HD42936 (DMPP-3A) stellar parameters. Updated from B20, the values are shown with corresponding \(1\sigma\) uncertainties where appropriate, and references are given for each external source. The remaining parameters were re-calculated for this work with the latest version of the srecrus code (Soto & Jenkins, 2018), using the ten highest S/N spectra from the DMPP HD42936 dataset. SIMBAD data are accessed from [http://simbad.u-strasbg.fr](http://simbad.u-strasbg.fr).
\begin{table}
\begin{tabular}{l c c} \hline \hline Date & BJD \(-\)2450000 & Local time (La Silla) \\ \hline
2021 Sept 07 & 9464.875 & 03:16:42 \\
2021 Sept 13 & 9470.885 & 03:53:47 \\
2021 Sept 16 & 9473.888 & 04:09:38 \\
2021 Sept 20 & 9477.861 & 03:46:53 \\
2021 Sept 22 & 9479.820 & 02:56:30 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The DMPP-3 observations taken in 2021. These science observations were performed with HARPS Echelle observing technique, for consistency with the previous data. Each exposure lasted 900 s.
The parameters that are not consistent with values and error bounds from B20 are similar enough to not drastically impact the analysis. The parallax and distance are altered solely due to the availability of another _Gaia_ data release between the analyses, but this has no impact on any numerical solutions.
The two remaining parameters both come from the species analysis using a program version updated from than that used by B20. The projected rotation velocity \(v\sin i\) changes considerably, from 1.97 to 3.17 km s\({}^{-1}\). Despite this increase, DMPP-3A remains an old, slowly rotating star. We will discuss the observed \(v\sin i\) and indicators of the rotation period in Section 6.6. The stellar radius \(R_{\bullet}\) is reduced by 0.05 R\({}_{\sun}\) (with error bounds in B20 being \(\pm\)0.02 R\({}_{\sun}\)).
## 4 RV analysis
For the analysis of RV data we have made use of the exo-striker software. This is a "transit and radial velocity interactive fitting tool for orbital analysis and N-body simulations" (Trifonov, 2019). The tool takes transit and / or RV data as input, and provides quick access tabs on a graphical user interface (GUI) to a suite of useful functions. The tool can be used to fit Keplerian orbits to reflex RV signals present in the data, with periods being identified from the generalised Lomb-Scargle periodogram (GLS: Zechmeister & Kurster, 2009). Whilst fitting RV solutions, the routines can simultaneously determine offsets between different datasets for the same object, which is particularly useful here as the baseline covers instrument fibre upgrades and COVID-19 shutdown. The maximum likelihood parameters for the system can be explored using the built-in emcee MCMC sampling functionality (Forneman-Mackey et al., 2013).
The DMPP-3AB binary RV modulation (hereafter Signal 1) is shown in Fig. 2. The S21 observations with HARPS confirm the previous fit and provide tighter constraints on the solution. The new binary parameters and derived system parameters (using our new estimates of the stellar properties from Table 1) are listed in the Signal 1 column of Table 3. The significance of the binary orbit, quantified by the Bayesian information criterion (BIC), is improved by three orders of magnitude. The uncertainties in all the parameters are reduced, in some case by orders of magnitude, compared to those in B20. The 3\(\sigma\) lower limit on the mass of DMPP-3B is now 80.9 \(M_{\rm Jup}\) due to increased semi-amplitude and our recalculated mass of DMPP-3A. This suggests DMPP-3B is indeed massive enough to sustain hydrogen burning, and is an object located at the very bottom of the main sequence.
Fig. 3 c shows the periodogram of the RV data after subtracting our best-fitting binary orbit. The 6.67 d super-Earth planet DMPP-3A b (hereafter Signal 2), is resoundingly detected. The phase-folded RV curve of DMPP-3A b is shown in the middle right panel of Fig. 2. For the parameters of DMPP-3A b (Table 3 Column 3), most values remain close to those previously reported. Period and semi-major axis are both consistent with previous results. \(P\) uncertainty remains the same, whilst uncertainty on \(a\) is reduced by a factor of 4. The value for \(K\) has reduced from 0.97 to 0.82 m s\({}^{-1}\), with uncertainties an order of magnitude smaller. \(M_{0}\) and \(\omega_{0}\) have changed by \(\sim\) 30 \({}^{\circ}\), with much smaller uncertainties now as they previously covered a wide range. The eccentricity is changed slightly, at \(e=0.17\) compared with \(e=0.14\) as found by B20. We started with the value of 0.14, and used a uniform prior of \(0\leq e\leq 0.8\) during maximum _a posteriori_ fitting and MCMC simulations. The difference in the BIC between circular and eccentric solutions is A BIC \(\sim 5\), indicating the data are sufficient to constrain \(e\). The uncertainties in \(e\) are broadly similar to those of B20, with eccentricity value ranges consistent between both analyses. \(M_{\rm p}\) sin \(i\) is reduced as a result of a smaller fitted \(K\) value, changing from 2.58 to 2.22 M\({}_{\oplus}\), with uncertainties remaining the same order of magnitude at roughly \(\pm\)0.5 M\({}_{\oplus}\)
There is some variation in a few of the statistical parameters. The RV offset (\(\gamma\)) values are different from those reported in B20. The HARPS data were re-reduced with the terra software as a single dataset, leading to different systematic offset values. The sampling of periastron has slightly changed the offset parameters too, but only has a small effect. The stellar jitter parameters are reduced slightly, except for \(\sigma_{\rm CORALE}\) which is now small instead of non-zero as previously reported (B20).
B20 also identified a periodogram power peak at \(\sim\)800 d in the RV residuals after fitting for the signals corresponding to DMPP-3B and DMPP-3A b, at 6 per cent false alarm probability (FAP). They tentatively attributed this to intrinsic stellar activity and active regions on DMPP-3A. If dynamical and co-planar, this RV signal would indicate a body on an orbit crossing that of DMPP-3B. A longer temporal baseline now allows us to investigate the cause of this signal (henceforth Signal 3). The addition of the S21 RV data affirms both Signals 2 and 3 in the residual GLS peaks at 6.67 d and \(\sim\)800 d (Fig. 3 c). The phase fold for Signal 3 is shown in Fig. 2. This strengthens the argument that Signal 3, with maximum _a posteriori_ fitted period of 809 d, is genuinely present and persists throughout the entire timespan of the data. The semi-amplitude of the variation is 3.5 m s\({}^{-1}\). If attributed to a Keplerian orbit, the best-fitting period corresponds to a semi-major axis of approximately 1.64 au.
### Long period RV variation
Despite producing a relatively convincing phase fold (Fig. 2, bottom left panel), it is difficult to attribute Signal 3 to a body orbiting the host star. If Signals 1 and 3 are co-planar, the orbits would cross due to the large eccentricity of the DMPP-3AB binary orbit (as shown in Fig. A1). We have performed initial orbital integrations to confirm that in the coplanar case, a putative body following the \(\sim\)800 d orbit is either ejected or cannot be described with continuous variables. As a further test, we report dynamical simulations relaxing the assumption that the orbits are co-planar in Section 5, performing a more thorough analysis of potential orbital configurations. As a consequence of the simulation results, we suspect that this signal is due to activity, and this is investigated in Section 6. A further point to note is that it is unlikely that this confusing signal is a direct artefact of aliasing, as the window function (Fig. 3 a) has a substantial local minimum at the period of the signal.
Alias periods occur due to convolution between the uneven temporal sampling of observations and other physical periods present in the data. The window function is calculated by taking the discrete Fourier transform (DFT) of the time samples. To accomplish this, we first create a frequency array which depends on the smallest and largest frequencies to be used, as well as the step size between samples. For each frequency, an array of observation phases is calculated. Sums of the sine and cosines for every phase are then combined to determine window function power at that frequency. Peaks of this function denote frequencies that cause the largest interference on physical signals. Alias frequencies (or periods upon inversion) are calculated simply using \(f_{\rm alias}=f\pm f_{\rm window}\), where \(f\) is the peak of the RV periodogram, and \(f_{\rm window}\) the peaks in the window function (Dawson & Fabrycky, 2010). Taking Signal 1 as the dominant period that could be convolved with sampling, and Signal 3 as the potential alias period, we can rearrange the previous equation to determine where a window function peak would be required for Signal 3 to
Figure 2: **Top:** Radial velocities of DMPP-3A. The RVs and residuals are shown along with the model fit for the four-Keplerian solution described in Table 3. **Middle left:** Phase-folded plot of Signal 1, showing the DMPP-3AB binary orbit RVs (with an inset panel showing the S21 data points around periastron passage). **Middle right:** The phase fold created from the Keplerian solution for a MLE fit at a period of 6.67 d. **Bottom left:** The RV phase fold of Signal 3 with a folded period of 809 d from MCMC analysis. **Bottom right:** Phase-folded residuals, showing Signal 4. The best-fitting period is 2.26 d. Inspection of BIC changes showed no statistical justification for fitting eccentricity as a free parameter, so it is fixed here to \(e=0\). **All plots:** RV folds were created for each Signal with all other identified Signals subtracted off, to show the individual solutions. Archival CHEPS measurements are marked with dark blue hexagons, CORALIE data points are red triangles, the main body of DMPP observations are green circles, and the S21 observations are black squares.
indeed be an alias of the binary orbit:
\[\frac{1}{809.38}=\frac{1}{506.89}\pm\frac{1}{P_{\rm window}}\ \ \ [1/{\rm d}]. \tag{1}\]
This gives a required window function peak at 1356 d. There is no peak at this period in the window function (see Fig. 3 a): aliasing seems unlikely to cause Signal 3.
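To make the aliasing check concrete, a minimal sketch is given below (Python with numpy); the epoch file name and frequency grid are illustrative placeholders rather than part of the original analysis. It computes the spectral window from the observation epochs and applies the Dawson & Fabrycky (2010) relation quoted above.

```python
import numpy as np

def window_power(t, freqs):
    """Spectral window: normalised |DFT|^2 of the sampling pattern."""
    power = np.empty_like(freqs)
    for k, f in enumerate(freqs):
        phase = 2.0 * np.pi * f * t
        # Sum unit-amplitude sines and cosines at each epoch, then combine.
        power[k] = (np.cos(phase).sum()**2 + np.sin(phase).sum()**2) / len(t)**2
    return power

t = np.loadtxt("dmpp3_epochs.txt")               # hypothetical epoch file (BJD)
freqs = np.linspace(1.0 / 5000.0, 1.0, 100000)   # cycles per day
W = window_power(t, freqs)

# f_alias = f +/- f_window (Dawson & Fabrycky 2010). For Signal 1 (506.89 d)
# to alias into Signal 3 (809.38 d), a window peak would be required at:
P_window = 1.0 / abs(1.0 / 506.89 - 1.0 / 809.38)
print(f"required window period: {P_window:.0f} d")   # ~1356 d
```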
While there is no window function peak at 1356 d, there is power there. To investigate whether Signal 3 could be an artefact of the temporal sampling of the DMPP-3B orbit, we simulated RVs for Signal 1 _only_ at the observation epochs of our data. The GLS of the simulated data shows a broad period identified at \(\sim\)760 d that could be an alias (Fig. 4 Top panel). By comparison, there is a narrower peak in the observed RV data (Fig. 3 b) at a similar period. We next simulated the reflex RV signals of both the binary and the planet DMPP-3A b. After fitting and subtracting the binary orbit, the residual GLS periodogram only shows a significant peak at 6.67 d (Fig. 4 Bottom panel), rather than at 6.67 and 800 d, which we would see if Signal 3 were due to sampling.
We thus confidently rule out the possibility that Signal 3 is created by the temporal sampling of our observations. It is a genuine signal.
As another test of the nature of an RV signal, we investigated the stacked Bayesian GLS periodogram (s-BGLS). Introduced by Mortier & Collier Cameron (2017), this involves computing the Bayesian GLS periodogram (Mortier et al., 2015) for an initial number of observations, and then incrementally including successive observations in the GLS computation. With this method we can check the coherence of a signal. A planet signal should be coherent and continually increasing in significance, whereas activity signals (which are often quasi-periodic) should be unstable and incoherent.
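A minimal sketch of the stacking procedure is shown below, using astropy's classical Lomb-Scargle as a stand-in for the Bayesian GLS of Mortier et al. (2015); the variable names are illustrative.

```python
import numpy as np
from astropy.timeseries import LombScargle

def stacked_gls(t, rv, rv_err, freqs, n_start=10):
    """Recompute the periodogram while adding observations one at a time;
    each row of the returned array is the periodogram for the first n points."""
    rows = []
    for n in range(n_start, len(t) + 1):
        rows.append(LombScargle(t[:n], rv[:n], rv_err[:n]).power(freqs))
    return np.array(rows)   # shape: (len(t) - n_start + 1, len(freqs))

# A coherent planetary signal should grow monotonically in power down the
# stack; a quasi-periodic activity signal will wander in period and strength.
```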
We see that Signal 2 (Fig. 5 top panel) is coherent and grows monotonically in strength as one would expect for a genuine planet. The s-BGLS for Signal 3 (Fig. 5 bottom panel) also seems to show coherence from observation 20 onwards, after the period of the signal
\begin{table}
\begin{tabular}{l l l l l}
\hline
Parameter & Signal 1 & Signal 2 & Signal 3 & Signal 4 \\
 & DMPP-3B & DMPP-3A b & --- & DMPP-3A c \\
\hline
FAP (GLS) & \(4.5\times 10^{-15}\) & \(8.7\times 10^{-7}\) & \(8.2\times 10^{-6}\) & \(1.9\times 10^{-3}\) \\
\(\Delta\ln\mathcal{L}\) & \(\sim 10^{7}\) & 20.10 & 31.48 & 10.38 \\
\(\chi_{r}^{2}\) & 1.19 & 1.26 & 1.02 & 0.95 \\
r.m.s. (m s\({}^{-1}\)) & 1.58 & 1.34 & 0.98 & 0.97 \\
\(\Delta\)BIC & \(\sim 10^{8}\) & 16.89 & 44.31 & 2.14 \\
\(P\) (d) & \(506.89^{+0.01}_{-0.01}\) & \(6.67^{+0.01}_{-0.01}\) & \(809.38^{+0.20}_{-0.34}\) & \(2.26^{+0.20}_{-0.10}\) \\
\(K\) (m s\({}^{-1}\)) & \(2657.31^{+0.33}_{-0.02}\) & \(0.82^{+0.20}_{-0.07}\) & \(3.52^{+0.20}_{-0.34}\) & \(0.52^{+0.09}_{-0.14}\) \\
\(M_{0}\) (\({}^{\circ}\)) & \(126.08^{+0.03}_{-0.05}\) & \(210.26^{+0.05}_{-0.47}\) & \(196.50^{+0.02}_{-0.40}\) & \(33.50^{+0.49}_{-0.11}\) \\
\(e\) & \(0.596^{+0.01}_{-0.001}\) & \(0.174^{+0.02}_{-0.084}\) & [0, fixed] & [0, fixed] \\
\(\omega_{0}\) (\({}^{\circ}\)) & \(158.88^{+0.03}_{-0.01}\) & \(52.63^{+0.10}_{-0.10}\) & \(286.71^{+0.28}_{-0.11}\) & \(47.40^{+0.08}_{-0.22}\) \\
\(M_{\rm p}\sin i\) & \(82.52^{+0.53}_{-0.53}\) M\({}_{\rm Jup}\) & \(2.22^{+0.50}_{-0.50}\) M\({}_{\oplus}\) & \(0.156^{+0.007}_{-0.007}\) M\({}_{\rm Jup}\) & \(1.065^{+0.173}_{-0.259}\) M\({}_{\oplus}\) \\
\(a_{\rm p}\) (au) & \(1.139^{+0.004}_{-0.004}\) & \(0.0670^{+0.0003}_{-0.002}\) & \(1.641^{+0.006}_{-0.005}\) & \(0.033^{+0.002}_{-0.0001}\) \\
\hline
\(\gamma_{\rm CHEPS}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(-669.16^{+0.37}_{-0.37}\)} \\
\(\gamma_{\rm CORALIE}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(3583.29^{+0.41}_{-0.41}\)} \\
\(\gamma_{\rm DMPP}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(-661.66^{+0.28}_{-0.28}\)} \\
\(\gamma_{\rm S21}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(-680.00^{+0.30}_{-0.30}\)} \\
\(\sigma_{\rm CHEPS}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(0.25^{+0.23}_{-0.11}\)} \\
\(\sigma_{\rm CORALIE}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(0.17^{+0.20}_{-0.09}\)} \\
\(\sigma_{\rm DMPP}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\(0.11^{+0.02}_{-0.10}\)} \\
\(\sigma_{\rm S21}\) (m s\({}^{-1}\)) & \multicolumn{4}{l}{\ldots} \\
\hline
\end{tabular}
\end{table}
Table 3: Detection statistics and maximum a posteriori Keplerian parameters for the four signals identified in the DMPP-3A RVs, together with the per-instrument RV offsets (\(\gamma\)) and jitter terms (\(\sigma\)).
has been sampled. As the observing runs are much shorter than this period, the peak is broad. However, with the addition of the S21 data the significance decreases slightly, and the peak power shifts in period by \(\sim 2.5\) d. As these observations are separated from the initial DMPP dataset by 3 years, any signals caused by activity could evolve in the intervening time, whereas planetary signals are strictly periodic. The change in significance and period after \(\sim 100\) observations suggests that Signal 3 is caused by activity. We discuss this in later sections.
### An additional interior S-type planet?
We fitted a circular Keplerian to the RV residuals after the subtraction of Signals 1 and 2, thus removing the \(\sim 800\) d Signal 3. The resulting residuals were searched for further periodic modulations. The GLS periodogram shows a 2.26 d signal, hereafter Signal 4 (see Fig. 3 e). This is a tentative detection at \(\sim 0.2\) per cent FAP. The inclusion of this signal improves the BIC by just over 2, the threshold for positive evidence (Raftery, 1995). This BIC improvement corresponds to a \(\sim 2\)-sigma detection, hence our current tentative stance on the planetary nature of this signal (see Table 2 in Standing et al. (2022), where \(\Delta\) BIC = 2 is equivalent to a Bayes Factor = 3). The log likelihood improves by \(\sim 10\), meaning this statistic shows significant evidence that this signal is present in the data (with the threshold for strong evidence being \(\Delta\) ln\(\mathcal{L}~{}>7\), Kass & Raftery 1995). Fig. 2 (bottom right panel) shows the residual RVs folded on the best fitting solution. The posterior orbital period and other parameters for Signal 4 are given in Table 3.
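For reference, the correspondence quoted above follows from the standard approximation that the BIC difference estimates twice the log Bayes factor:

\[
\mathrm{BF}\approx\exp\!\left(\tfrac{1}{2}\,\Delta\mathrm{BIC}\right),\qquad \Delta\mathrm{BIC}=2\;\Rightarrow\;\mathrm{BF}\approx e\approx 2.7,
\]

close to the Bayes factor of 3 cited from Standing et al. (2022).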
After fitting and subtracting Signal 4, no further signals remain in the GLS periodogram at FAP \(<10\) per cent. Signal 4 corresponds to
Figure 4: GLS periodograms for the investigation into sampling. **Top:** The periodogram for simulated RVs of DMPP-3B and DMPP-3A b. This plot shows a similar but much broader peak in the 700-800 d region than Fig. 3 b, with maximum power at \(\sim 760\) d (denoted by a vertical dashed line, as is the binary period at \(\sim 507\) d). **Bottom:** After fitting for the binary orbit, the simulated 6.67 d period is by far the strongest peak in the periodogram. There are no long-period peaks; if Signal 3 were caused by sampling, we would expect to see a feature at 800 d (and this plot would mimic an idealised version of Fig. 3 c).
Figure 3: **(a):** Spectral window function computed by taking the discrete Fourier transform, with a vertical line plotted at a period of 1356 d (see Section 4.1). **(b):** The GLS periodogram of the radial velocities, identifying a \(\sim 507\) d period corresponding to Signal 1. **(c):** The 507 d Keplerian has been removed, clearly showing significant peaks at 6.67 d (the planet) and \(\sim 800\) d. The other significant peaks in the 10-100 d range are likely a result of aliasing and are in a very busy region of the window function, so are neglected for signal fitting. **(d):** Upon removal of the 6.67 d planet RVs by fitting Signal 2, the periodicity at \(\sim 800\) d remains. **(e):** After fitting for Signal 3, a periodicity at \(\sim 2.26\) d is found in the residuals, at \(\sim 0.2\) per cent FAP. In all periodogram plots the dashed horizontal lines correspond to 0.1, 1 and 10 per cent FAP levels, where FAP gets smaller with increasing power. The vertical lines overlaid correspond to the periods of signals listed in Table 3, identifying periodogram peaks.
a putative planet orbiting at 0.033 au from DMPP-3A, interior to the planet DMPP-3A b.
The stability of this two-planet configuration has been validated with the exo-striker package, using the symplectic massive body algorithm (SyMBA) functionality (Duncan et al., 1998). This orbital integrator allows the time step to be recursively reduced upon close interactions between bodies in the system, fully simulating the interaction whilst retaining the speed of traditional mixed-variable symplectic (MVS) algorithms, previously the gold standard of orbital simulations (Wisdom and Holman, 1991).
There is no indication that Signal 4 is due to aliasing: a 1-day alias of DMPP-3A b would lie at 1.18 d. The stellar activity and line profile indicators show no periodicities on \(\sim 2\) day timescales. This is as expected because the timescale is too short to be induced by stellar rotation in an old star, but too long to be attributed to stellar p-mode oscillations (Collier Cameron et al., 2019; Costes et al., 2021).
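For completeness, the quoted 1-day alias follows from the same relation as Eq. (1):

\[
\frac{1}{P_{\rm alias}}=\left|\frac{1}{6.67}-\frac{1}{1}\right|=0.850\ \mathrm{d^{-1}}\;\Rightarrow\;P_{\rm alias}\approx 1.18\ \mathrm{d},
\]

well separated from the 2.26 d period of Signal 4.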
### Combined RV and Astrometry
The reflex motion of a star caused by companions (either planetary or stellar) can be assessed with other techniques aside from radial velocities. Astrometry involves measuring the positions of objects very accurately, and with this we can map out the 3D spatial and velocity distribution of stars. If accompanied by another body, the positions of a host star will vary as a result of motion about the common barycentre, indicating a perturbing presence.
The cutting-edge of stellar astrometry measurements comes from the third _Gaia_ release (DR3; Gaia Collaboration et al., 2022). _Gaia_ has been in operation for \(\sim\)34 months, and therefore has amassed sufficient observations to make the search for binary stars with _Gaia_ possible for the first time.
Stars in the _Gaia_ catalogue are considered for multi-star analysis when their motion does not fit the single star astrometric models. Data processing identified these stars through use of the renormalised unit weight error (RUWE) statistic. This is effectively a goodness-of-fit metric, quantifying how well a star can be described as a single body considering the observed movements through space (Almenara et al., 2022). For RUWE \(\gtrsim 1.4\), the motion is not consistent with a single star, with the excess astrometric noise potentially indicating an unseen companion (Lindegren et al., 2021; Almenara et al., 2022). DMPP-3 has RUWE = 6.85, so it is likely that Gaia has detected significant deviation from a single star solution, and should be able to quantify the motion of the binary system via astrometric measurements.
The orbital solutions for non-single stars in DR3 are published online (Gaia Collaboration et al., 2022), and using object identifiers, the parameters can be extracted for particular systems. For stars such as DMPP-3, which have both astrometric and single-lined spectroscopic solutions, the orbital parameters are expressed in the Thiele-Innes elements (\(A,B,F,G,C,H\)). We have used the conversions detailed in the appendices of Halbwachs et al. (2022) to convert these to the traditional Campbell orbital elements (\(a,\omega,\Omega,i\)), along with associated errors.
Using this conversion, we can determine the inclination of the binary system DMPP-3AB with respect to the observer's line of sight. This allows us to eliminate the \(\sin i\) term in the projected mass retrieved from RV analysis. By assuming that the binary and planet(s) orbit in the same plane, we can also form estimates for 'coplanar' masses of any bodies orbiting the central star. This adds credibility to the super-Earth mass of DMPP-3A b, and is further evidence for a rocky composition. These masses are listed in Table 4.
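As an illustration of the conversion, the inclination and semi-major axis can be recovered from the astrometric Thiele-Innes constants alone. The sketch below uses the identities \(AG-BF=a^{2}\cos i\) and \(A^{2}+B^{2}+F^{2}+G^{2}=a^{2}(1+\cos^{2}i)\), consistent with the Halbwachs et al. (2022) conventions, and omits the error propagation used for the values in Table 4.

```python
import numpy as np

def thiele_innes_to_a_i(A, B, F, G):
    """Campbell a (same units as the Thiele-Innes constants) and
    inclination i (degrees) from the astrometric constants A, B, F, G."""
    u = 0.5 * (A**2 + B**2 + F**2 + G**2)   # = a^2 (1 + cos^2 i) / 2
    v = A * G - B * F                       # = a^2 cos i
    a2 = u + np.sqrt(u**2 - v**2)           # = a^2
    return np.sqrt(a2), np.degrees(np.arccos(v / a2))
```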
## 5 Dynamical simulations

To test whether Signal 3 could correspond to a body orbiting within the DMPP-3AB system, we performed N-body integrations with the rebound package, and have used multiple integrators to ensure confidence in our results. The first integrator used was IAS15, a non-symplectic 15th order Gauss-Radau integrator with adaptive time stepping (see Rein & Spiegel, 2015 for further information). The second was WHFast, a symplectic Wisdom-Holman integrator (Wisdom & Holman, 1991; Rein & Tamayo, 2015). A high order kernel was used in WHFast to improve the accuracy of the integrations (Rein et al., 2019).
To investigate the stability of the companion star and a hypothetical body orbiting as described by Signal 3, we neglected the circumprimary planet(s): their small mass and close-in orbits mean they have a negligible effect on the large scale system dynamics.
Duncan et al. (1998) describe the ideal time step needed for an integrator as \(\sim 10^{-3}\) of the smallest orbital period. Accordingly we set the time step for WHFast to 0.5 days. We assumed the orbit of DMPP-3B is edge-on (\(i=90^{\circ}\)), and varied the inclination of Signal 3 between \(90^{\circ}\) and \(270^{\circ}\), to investigate the full range of prograde and retrograde relative orientations. Despite having an estimate of the inclination from _Gaia_ astrometry (see Section 4.3), we retain the edge-on assumption here. If DMPP-3B inclination is included, the mass would be higher by a factor of \(\frac{1}{\sin i}\) and the gravitational interactions we are simulating would be stronger. To investigate potential stability, we have therefore chosen to use the very minimum possible mass for DMPP-3B, to rule out even the most feasible scenario for stable orbits. Relaxing this assumption therefore would only strengthen the conclusions we will draw from the simulations. We performed simulations for each mutual inclination for a set of 100 configurations: a grid of \(10\times 10\) evenly spaced starting points for both DMPP-3B and the potential Signal 3 object. Signal 3's orbit was assumed to be circular, as there is no evidence for non-zero eccentricity.
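A minimal rebound set-up for one grid point is sketched below. The primary mass of 0.9 M\({}_{\sun}\) is an assumption (the Table 1 value is not reproduced in this excerpt), the 82.5 M\({}_{\rm Jup}\) minimum mass for DMPP-3B and the 0.156 M\({}_{\rm Jup}\) Signal 3 mass come from Table 3, and the stopping logic is a simplified stand-in for the procedure described above.

```python
import numpy as np
import rebound

def disruption_time(i3_deg, M_B_deg=0.0, M_3_deg=0.0, e_max=0.98, t_max=100.0):
    """Time (yr) until the putative 800 d orbit reaches e > e_max.
    i3_deg is the inclination of the Signal 3 body; DMPP-3B is held
    edge-on, so the mutual inclination is i3_deg - 90."""
    sim = rebound.Simulation()
    sim.units = ("yr", "AU", "Msun")
    sim.integrator = "ias15"                    # or "whfast" with dt = 0.5 d
    sim.add(m=0.90)                                        # DMPP-3A (assumed)
    sim.add(m=82.5 * 9.55e-4, a=1.139, e=0.596,            # DMPP-3B, minimum mass
            inc=np.radians(90.0), M=np.radians(M_B_deg))
    sim.add(m=0.156 * 9.55e-4, a=1.641, e=0.0,             # putative Signal 3 body
            inc=np.radians(i3_deg), M=np.radians(M_3_deg))
    sim.move_to_com()
    while sim.t < t_max:
        sim.integrate(sim.t + 0.1)
        if sim.particles[2].e > e_max:
            return sim.t
    return t_max
```

Looping `i3_deg` over 90\({}^{\circ}\)-270\({}^{\circ}\) and the two phase angles over a \(10\times 10\) grid reproduces the structure of Fig. 6, with the median taken over the grid.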
We present the results of the dynamical simulations in Fig. 6. Simulations were generally run until the eccentricity of the 800 d orbit exceeded 1. In most cases this happened very quickly (within years), so we did not need to extend the total integration time to very large values. The evolution was generally chaotic. A few initial set-ups led to long \(e<1\) timescales; upon investigation, these all asymptotically approached being unbound. We therefore pragmatically dealt with this by slightly relaxing the criterion for system disruption, adopting \(e=0.98\) as the threshold.
The resulting trend in disruption timescales is readily understood. The stability time rises from the co-planar case (where the orbits are certain to cross) up to a maximum at approximately \(75^{\circ}\)-\(80^{\circ}\) mutual inclination. As the Signal 3 orbit approaches face-on, the putative mass (\(\propto 1/\sin i\)) grows without bound and thus the mutual gravitational interactions strengthen.
Mutual inclinations of \(90^{\circ}\)-\(180^{\circ}\) indicate retrograde orbits. In all cases, the disruption time is shorter in the retrograde case than in the corresponding prograde case because close approaches occur more frequently.
The simulations imply there is no stable situation for Signal 3 to be attributed to an orbiting object. The longest median timescale identified in the simulations is 14-15 yr. This also suggests that it is unlikely the system could exist in this state for a short period and evolve to some other configuration. The observed 800 d modulation appears roughly sinusoidal over the 13 yr baseline (Fig. 2) showing no sign of the chaos and rapidly-induced eccentricity we see in the simulations. We rule out the possibility that Signal 3 is caused by a planet with \(P_{\rm orb}\sim 800\) d, and instead focus on assessing whether this modulation can be attributed to stellar activity.
## 6 Stellar activity
We use the CCF full-width at half-maximum (FWHM), CCF contrast, and bisector inverse slope (BIS) time-series from the drs pipeline, as well as the S-index, H \(\alpha\), and Na D from HARPS spectra to thoroughly investigate our data for signatures of stellar activity. The atomic emission line strengths are extracted from the spectra following the procedure detailed in Barnes et al. (2016), and we direct the reader
\begin{table}
\begin{tabular}{l c} \hline Parameter & Value \\ \hline \(a_{\rm A}\) (au) & \(0.093\pm 0.005\) \\ \(i\) (\({}^{\circ}\)) & \(63.89\pm 0.78\) \\ \(M_{\rm B}\) (M\({}_{\rm Jup}\)) & \(91.90\pm 0.85\) \\ \(M_{\rm A\,b}\) (M\({}_{\oplus}\)) & \(2.47\pm 0.56\) \\ \(M_{\rm Sig.4}\) (M\({}_{\oplus}\)) & \(1.186\pm 0.289\) \\ \hline \end{tabular}
\end{table}
Table 4: Gaia derived parameters, and derived masses constrained by inclination and the assumption that all bodies lie in the same orbital plane.
Figure 6: Median time (over 100 different set-ups) until system disruption, defined as the time at which the eccentricity passes \(e=0.98\), indicating the orbit becomes unbound. This is calculated for different orbital configurations, showing the variation with mutual inclination between the two objects. The inclination of DMPP-3B is set at \(90^{\circ}\), and the inclination of an 800 d period object is varied. **Top:** integrations performed with IAS15. **Bottom:** integrations performed with WHFast. All simulations predict instability on very short timescales. The two integrators give the same result, verifying that different truncation errors in the simulations are not causing the observed timescales.
to that work for further information on how these time-series were produced.
As we explain in Section 4, we suspect Signal 3 is a stellar activity artefact. We also wish to check that the signals we attribute to orbiting bodies are indeed of dynamical origin.
### S-index
The S-index is derived from the Ca ii H & K line core emission, and is calculated as described in Lovis et al. (2011) and Costes et al. (2021). The S/N for this indicator is not high, due to the very low chromospheric emission from DMPP-3 and the poor throughput at the extreme blue end of the HARPS spectral coverage. The average S/N for HARPS spectral order 7 (containing both H & K lines) in our data is \(<15\) (in comparison to S/N of \(\sim 100\) at redder orders), and will be reduced further for the calcium lines, which lie at the edges of the order. The blaze function peaks strongly at the centre of the order, so the flux is diminished towards the edges.
We searched for periodicities in the S-index time-series using the GLS periodogram shown in Fig. 7 a. There is an extremely broad band of power exceeding 0.1 per cent FAP ranging from around 500-1000 d which covers the 800 d period of Signal 3. The S-index and RV residuals after subtraction of DMPP-3B's orbit show a very weak negative correlation. The noise present in the S-index time-series may be obscuring the correlation.
There is no trace of significant S-index or other activity indicator power at either the 6.67 d period of DMPP-3A b, or the 2.26 d period potential planet Signal 4 (Fig. 8 a).
### Full width at half maximum
The FWHM of the CCF used to extract the stellar reflex RVs is another indicator we can use to track stellar activity (Queloz et al., 2009; Barnes et al., in preparation).
To analyse the FWHM values for any long-term periodicities, we must first correct for instrumental effects (see Appendix B). The period analysis for the corrected drs FWHM values gives a GLS periodogram with a peak at \(\sim\)800 d (Fig. 7 b), with a best-fitting sinusoid of 792 d period. These values are consistent with Signal 3 (Fig. 3 c). We searched for additional periodicities in the residual FWHM time-series after subtracting a 792 d period sinusoid. None were found. Through our extension of the baseline and a formal treatment of the CHEPS archival measurements, the \(\sim 800\) d FWHM periodicity identified by B20 has been refined: the peak has become sharper, with a higher level of significance.
We inspected the correlation between the FWHM activity indicator and RV residuals, after subtraction of Signals 1 & 2, shown in Fig. 9. The Pearson's \(r\) is 0.39, with \(F\)-test \(p\)-statistic of \(4.9\times 10^{-5}\). This indicates a stronger detection of correlation than reported in B20, \(r=0.30\) & \(p=3.7\times 10^{-3}\). The connection between stellar activity and the origin of Signal 3 is supported.
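The correlation statistics can be reproduced with a few lines; scipy's two-sided p-value is used here as a stand-in for the F-test statistic quoted above, and the file names are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

fwhm_res = np.loadtxt("fwhm_corrected.txt")   # median-subtracted, corrected FWHM
rv_res = np.loadtxt("rv_resid_s1s2.txt")      # RVs minus Signals 1 and 2

r, p = pearsonr(fwhm_res, rv_res)
print(f"Pearson r = {r:.2f}, p = {p:.1e}")    # text: r = 0.39, p = 4.9e-5
```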
We also searched for periodicities in the FWHM to assess the origin of Signals 2 and 4: as with S-index, there is no significant power at the 6.67 d period of DMPP-3A b, or for the potential planet Signal 4 at 2.26 d (Fig. 8 b).
### Bisector inverse slope
We performed periodogram analysis of the BIS time-series which failed to identify any significant periods, with all the peaks below the 10 per cent FAP level. The bisectors appear to be well behaved, with only a couple of observations showing changes in the line profile shapes, which can be seen in Fig. 10. The most drastically different line shapes correspond to the lowest signal-to-noise (S/N) observations. There is no significant sign of activity in these bisector shapes. This is consistent with a dynamical origin for Signals 2 and 4, but provides no evidence that Signal 3 is an activity artefact. Importantly, the bisectors are also not modulated on the binary orbital period either, further solidifying the dynamical interpretation of the eccentric binary solution.
### CCF contrast and area
Contrast is defined as the amplitude (or depth in comparison to the continuum) of the inverse Gaussian function fitted to the CCF profile (Gunther et al., 2018; Collier Cameron et al., 2019). As the intensity of the spectral lines described by the CCF changes, the depth will also change, tracking stellar activity where fluxes are affected by dark
Figure 7: GLS periodograms for activity indicators. The S-index **(a)**, FWHM **(b)**, CCF Area **(c)** and H \(\alpha\) **(d)** are shown for the periods longer than 100 d, with the most significant peaks generally found around \(\sim 800\) d. The S-index and H \(\alpha\) both show local peaks that are not the most significant globally, whereas CCF FWHM and Area have stronger connections to the period of Signal 3 - illustrated with a dashed vertical line in all plots. The dashed horizontal lines correspond to 0.1, 1 and 10 per cent FAP levels as before, with these peaks in S-index, FWHM, and Area clearly more significant than the 0.1 per cent level.
spots or bright plages. The measurements of FWHM and contrast are expected to be linked, as changes in the spectral lines should alter both the depth and width of the fitted profile. We do measure a weak negative correlation between FWHM and depth: the data give a Pearson's \(r=-0.163\), with a \(p\)-statistic of 0.104.
Combining the FWHM with contrast we obtain the CCF area, which is also often used as an activity diagnostic (Collier Cameron et al., 2019; Costes et al., 2021). The GLS periodogram of this time-series is shown in Fig. 7 c, and shows a \(\sim 800\) d periodicity with similar significance and a sharper peak than that in the FWHM alone. This is further evidence for a connection between Signal 3 and the stellar line profiles.
### Periastron events
Stellar activity can be enhanced by the tidal and magnetic interactions in close binary stars. With a periastron distance of 0.498 au, DMPP-3 is not a particularly close binary. Since magnetic forces between stars generally decline more rapidly with separation than tidal forces (e.g. Bromley and Kenyon, 2022), the tidal effects will dominate in DMPP-3AB. A number of binary systems show signs of stimulated activity around periastron passage, suggesting a connection with tidal effects induced by the companion star (Moreno et al., 2011). Close binary stars generally show higher levels of chromospheric activity than single stars of the same mass (Eker et al., 2008; Qian et al., 2012), and for eccentric orbits this is strongest at periastron (Frasca et al., 2022). Moreno et al. (2011) study the causes of enhanced activity during periastron events. They find that as tidal shear deforms the
Figure 8: GLS periodograms focused on periods corresponding to S-type planet Signals 2 and 4, with vertical dashed lines indicating periods of 2.26 and 6.67 d. **(a)**: S-index, **(b)**: FWHM, **(c)**: Na \(D_{1}\), **(d)**: Na \(D_{2}\), **(e)**: H \(\alpha\). No indicator shows significant periodicities at the relevant period values, with FAP levels denoted by dashed horizontal lines, as before.
Figure 10: Bisectors of DMPP-3A, colour coded by phase for Signal 3 (Table 3). Projected quadrature points of the signal (were this due to an orbiting body) are blue and red, with conjunctions in black. The bisectors are parallel shifted by the barycentric drift-corrected RV output from the HARPS DRS.
Figure 9: Median subtracted FWHM plotted against RV residuals, after subtraction of Signal 1 & 2 in Table 3.
stellar surface, energy is dissipated into the stellar layers as heat. The rapid increase in energy deposited around periastron is a promising mechanism to explain increased stellar activity at these orbital phases.
The manifestation of increased activity is likely to be noticed in emission line changes or enhanced X-ray luminosity during periastron (Moreno et al., 2005; Lavail et al., 2020). When the star's surface is perturbed by a companion, patches of the photosphere move at different speeds, and the departure from uniform effective temperatures and gravities induces line profile variability (Harrington et al., 2016). Inhomogeneity of the external stellar gas layers is evident when observing changes in photospheric lines. The strongest line variation is also seen during periastron, when the influence of the binary companion is strongest.
For the configuration of DMPP-3AB we might expect some sign of enhanced activity around periastron. Fig. 7 shows that there are no well-defined peaks in most activity indicator periodograms at the binary orbital period. This may be because the modulation is non-sinusoidal. We have searched the time series of activity indicators to investigate whether we see any enhanced activity. The only indicator that hints at increased activity around periastron is the sodium emission. Combining the two doublet lines and phase-folding on the binary orbit, we find a slight enhancement around periastron (phase 0; Fig. 11, top panel). The periodogram of this time series, shown in the bottom panel of Fig. 11, identifies a global peak relatively close to the \(\sim 507\) d period of the binary orbit (indicated with a dashed vertical line). However, it is perhaps overzealous to attribute the increased sodium emission to an observable activity increase in the DMPP-3 system: Eker et al. (2008) predict that periods longer than a few hundred days will not be important for increased chromospheric activity.
Future observations will establish whether the elevated Na D emission is reproducible. So far, we only have HARPS activity indicator information during a single periastron passage. The next periastron epoch is forecast to be in 2023 (around February 09), and observing this would provide a far stronger assessment of periastron effects in the DMPP-3 binary system.
### Rotation period searches
Values for the projected rotation velocity (\(v\sin i\)) of DMPP-3A vary from source to source, as shown in Table 5. DMPP-3A is a slow rotator, and reliable measurement of \(v\sin i\) close to or below the instrumental resolution is challenging (Reiners, 2007). HARPS has resolution \(R\) (\(\Delta\lambda/\lambda\)) of \(\sim 115000\), corresponding to \(\sim 2.6\) km s\({}^{-1}\), so we can rule out \(v\sin i\gg 2.6\) km s\({}^{-1}\), but a precise measurement is difficult and dependent on details of the instrumental line spread function. Nevertheless, by using \(P_{\rm rot}=2\pi R_{*}\sin i/\,v\sin i\), we can obtain lower limits for \(P_{\rm rot}\) assuming \(0.5\) km s\({}^{-1}<v\sin i<3\) km s\({}^{-1}\), c.f. Table 5. We find \(15\) d \(\leq P_{\rm rot}/\sin i\leq 90\) d. This is as expected for a star of this spectral type and age (Stauffer & Hartmann, 1986; Suárez Mascareño et al., 2015; Angus et al., 2020).
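The quoted limits follow directly from this relation; the sketch below assumes a stellar radius of \(\sim 0.86\) R\({}_{\sun}\) (the Table 1 value is not reproduced in this section, so this number is an assumption).

```python
import numpy as np

R_star_km = 0.86 * 6.957e5          # assumed stellar radius in km
day_s = 86400.0

for vsini in (0.5, 3.0):            # km/s, bracketing Table 5
    P = 2.0 * np.pi * R_star_km / vsini / day_s
    print(f"v sin i = {vsini:.1f} km/s -> P_rot/sin i = {P:.0f} d")
# -> ~87 d and ~15 d, matching the quoted 15-90 d range
```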
We searched periodograms of the activity identifiers for signs of the rotation period between 15 and 90 d. The most significant periodicities were found in the analyses of H\(\alpha\), Na D1 & D2 and FWHM; as shown in Fig. 12. These indicators seem to share peaks at \(\sim 33-36\) days. This might suggest that the rotation period lies in the range 33-36 d. A period of around 35 days would correspond to a \(v\sin i\) of approximately \(1.25\) km s\({}^{-1}\). However, upon analysis of the spectral window function (Fig. 12 a) for the same period range, we see that these activity periodograms look similar to the window plot. Especially for the FWHM, where there is not much periodic structure over shorter periods, the periodogram appears to be strongly affected by the sampling. When phase-folding the activity series on the identified periods, it becomes clear that sampling effects are at play. The points cluster into two distinct groups (with the majority of points in a single region), and phase coverage is very obviously uneven. These effects are characteristics of time series data folded on alias periods.
It is perhaps unwise to draw conclusions on the rotation period of the star from such activity analysis when the temporal sampling is so influential. Further study is warranted; however, a different approach might be needed: either observations specifically scheduled to reduce aliasing, or a different method to determine the rotation period, given that this is a slowly rotating, low-activity star.
Figure 11: Investigation into enhanced activity around periastron. **Top:** Combined observations of line strengths for the sodium doublet, phase folded to the binary orbital period (which is at phase \(=0\), denoted with the dashed vertical line). We see a slight increase in comparison with the main body of remaining observations. **Bottom:** The GLS periodogram computed for the sodium time series data, identifying a period near the binary orbit of \(\sim 507\) d (again marked with a dashed vertical line).
\begin{table}
\begin{tabular}{l l} \hline \(v\sin i\) (km s\({}^{-1}\)) & Reference \\ \hline \(0.5\pm 1\) & Hojjatpanah et al. (2020) \\ \(1.4\pm 0.1\) & Ivanyuk et al. (2017) \\ \(1.643\pm 0.437\) & Soto \& Jenkins (2018) \\ \(1.97\pm 0.14\) & Barnes et al. (2020) \\ \(2.7\) & Jenkins et al. (2011) \\ \(3.17\pm 0.1\) & species (see Table 1) \\ \hline \end{tabular}
\end{table}
Table 5: Projected rotation velocities of the star DMPP-3A (HD42936). These values were obtained from the SIMBAD database, and are listed with references to the publications they originate from. For information on how each rotation velocity was calculated, see individual sources.
## 7 Discussion
DMPP-3 is an exciting discovery which offers several potentially productive avenues to explore. Most of these are phenomena we anticipated, or could have anticipated, when the Dispersed Matter Planet Project was conceived (Haswell et al., 2020), but the dynamically impossible 800 d RV signal, Signal 3, is a surprise. We begin our discussion by considering how it might arise. We considered the possibility that Signal 3 could be an instrumental artefact, but this is easy to dismiss, given the excellent stability of HARPS, the many other stars which have been monitored over a similar temporal baseline, and our analyses of the various observed line profile properties.
### Activity sources
Stellar activity can affect RVs, even in relatively inactive old stars such as DMPP-3A. The main contribution to activity induced RVs for slowly-rotating, low activity stars is the suppression of convective blueshift (Cretignier et al., 2020; Costes et al., 2021). This is also the dominant effect in the Sun's RV variability (Meunier et al., 2010). Convective blueshift (CB) is caused by up-welling of material in a star's convection granules on the photosphere. The rising of hot (bright) material seen at the centre of the region outweighs the effect from the cooler falling material, resulting in a net blue-shifted effect in the spectral lines. In active regions, magnetic fields hinder the convection and suppress the rising of new material, causing stellar regions such as spots and plages to be red-shifted in comparison to the surrounding disc (Bauer et al., 2018). The mean line profile used to determine the RVs for the star will therefore include shifted lines from the individual regions of suppressed CB, introducing an overall broadening (that will affect the RVs), that will be reflected in an altered FWHM measurement. Cretignier et al. (2020) simulate stellar activity for slowly rotating stars and find that the changes in RVs caused by diminished CB inside faculae likely dominates over the RV changes caused by the reduced flux of dark spots.
Convective blueshifts of stars with HARPS have been calculated recently by Liebing et al. (2021), who find empirical relations for the scale factor between CB on the Sun and other stellar types. For K0 type stars, like DMPP-3A, the relation predicts a value of 0.409 \(\rm{CB}_{\sun}\).
Spectroscopic and photometric variations of stars with periodicities \(<100\) d will primarily be induced by the movement of active regions as the star rotates, and hence show modulation on the timescale of the rotation period. For longer periods (longer than the evolution timescale of individual activity features) the measured activity change would correspond to the activity level changing in a similar way to the solar magnetic cycle modulation on the Sun. The overall convective blueshift is dependent on the global activity level (Meunier et al., 2017). Therefore if the activity level changes over the course of a stellar cycle, the convective blueshift should roughly trace the global activity level. The amplitude of variation from CB over the Sun's magnetic cycle is 11 m s\({}^{-1}\)(Meunier et al., 2010), and by comparison the full amplitude of Signal 3 is \(\sim\)7 m s\({}^{-1}\)(Table 3). This value is broadly consistent with the relation from Liebing et al. (2021) discussed above, where the CB effect will be reduced compared to the effect seen on the Sun. Therefore, whilst the \(\sim\) 800 d Signal 3 cannot arise from rotation, it perhaps may arise from activity modulation in DMPP-3A.
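As a rough consistency check, scaling the solar cycle CB amplitude by the Liebing et al. (2021) factor for a K0 star gives

\[
\Delta v \sim 0.409\times 11\ \mathrm{m\,s^{-1}} \approx 4.5\ \mathrm{m\,s^{-1}},
\]

the same order as the \(2K\approx 7\) m s\(^{-1}\) full amplitude of Signal 3.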
### Stellar activity cycles
A modulation period of \(\sim 800\) d (\(\sim 2.2\) yr) is much shorter than the Sun's magnetic cycle (approximately 11 yr). However, the cycle period changes from star to star, depending on numerous factors such as spectral type and binarity. A recent investigation into stellar cycles discusses stars with identified cycle periods just over 1000 d (Sairam and Triaud, 2022). As our Signal 3 seems too long a period to be rotational, it is likely to be a short stellar cycle. We can compare with Suárez Mascareño, Rebolo and González Hernández (2016), who studied the magnetic cycles of 12 K-type stars and found a mean cycle length of 6.7 years. Two cycles identified in stellar photometry were, however, 1.7 and 2.7 years, similar in length to our 2.2 year Signal 3.
Figure 12: The spectral window function **(a)** and GLS periodograms of activity identifiers, to search for rotation period. **(b) - (e)**: H \(\alpha\), Na \(D_{1}\), Na \(D_{2}\) and CCF FWHM. The plots use a logarithmic scale on the x-axes and focus on the region between 10 and 100 days. We expect the rotation period to lie in this range, based on initial stellar radius (Table 1) and rotation velocity values (Table 5). The shaded band highlights periods in the range 33 – 36 d, where these indicators all share peak features.
Suarez Mascareño et al. (2016) also identify some stars with multiple cycles. For example, GJ 729 has two superimposed cycles, of 7.1 and 2.1 years. The authors suggest that the shorter period might be a "flip-flop" cycle. A flip-flop cycle is the repeated inversion in longitude of spot regions on the stellar photosphere. This effect was first studied on the Sun, where spot activity alternates in longitude on timescales of 1.5 to 3 years, causing cycles of 3.8 and 3.65 yr in the northern and southern hemispheres, respectively (Berdyugina & Usoskin, 2003). These periods are roughly 1/3 of the 11 year sunspot cycle, a ratio that is seen to persist over long timescales. This flip-flop effect can be observed in both spectroscopic and photometric measurements (Suarez Mascareño et al., 2016).
The cycle observed on DMPP-3A with period of 2.21 years has a range of potential origins. The cause could be: a short stellar magnetic activity cycle; a sub-cycle of a longer cycle that we are not sensitive to detect; a harmonic of a longer cycle; or a flip-flop cycle. The cycle length is consistent with a flip-flop cycle 1/3 the length of a longer stellar magnetic activity cycle, given that the average found by Suarez Mascareño et al. (2016) for K-type stars was 6.7 years - although this could be purely coincidental.
### Implications for RV detections
The lack of any well-defined periodicity at 800 d in the S-index and the BIS is consistent with a dynamical interpretation of the 800 d period. The absence of any systematic variation of the line bisector shapes on the 800 d period is also a positive sign for a dynamical interpretation. We did see power in the S-index between 540 d and 1000 d and more localised and higher significance peaks around 800 d in the FWHM and CCF area periodograms. It is possible these latter periodicities would have caused us to doubt the 800 d signal was due to a planet even if it had been found in a single-star system. It is also possible that we would have attributed Signal 3 to a planet after examining the S-index and BIS periodograms and the line bisector shapes. It was the impossibility of the dynamical explanation of Signal 3 which motivated a very detailed examination of all the available activity indicators, along with a consideration of the long-term effects observed in the Sun.
This raises the possibility that there may be published planet 'detections' which have an origin analogous to that of Signal 3. The techniques for discerning the long-period spectroscopic signatures of stellar activity cycles are only now being developed. Signal 3 in DMPP-3AB provides a cautionary example and an opportunity to learn. Further monitoring of DMPP-3 may reveal more clear-cut evidence that Signal 3 is not strictly periodic, as the S21 data already begins to suggest (cf. Fig. 5). If we are to confidently detect long period planets, including analogues of the Solar System planets, we must hope that this is the case. This is perhaps our most important conclusion as it has serious and widely-applicable implications.
### Consequences of a second planet
If the 2.26 d period Signal 4 is a planet, DMPP-3 becomes an even more exciting prospect for study into planetary system formation, evolution and dynamics. The tightest S-type planetary system currently known to contain multiple planets2 (strictly, a gas giant and a brown dwarf) is HD 87646, where the primary star in a binary of separation 19.5 au hosts two substellar objects (Ma et al., 2016). A second planet in DMPP-3 would confront and challenge ideas about planetary system evolution. A major factor in governing evolution in binary star planetary systems is the separation; the nearest multi-planet analogue has a binary separation over an order of magnitude higher than that of DMPP-3.
Footnote 2: Planets in binary systems are recorded in a machine-readable table, accessed from [https://lesia.obspm.fr/perso/philippe-thebault/plan_bin500au.txt](https://lesia.obspm.fr/perso/philippe-thebault/plan_bin500au.txt). The table is complete for all planets on S-type orbits in binaries of separations up to 500 au (last updated 2022 Sept 01, contains 171 systems).
The two close-in S-type planets will interact gravitationally with each other as well as with the very low mass stellar eccentric binary companion DMPP-3B. Their orbits will evolve under dynamic excitation of eccentricity and tidal circularisation. They potentially offer opportunities to derive constraints on the material properties of the objects, including the viscoelasticity and tidal response. This ultimately could offer some empirical clues to the interior structure of the two low mass planets, cf. Makarov et al. (2003). This is particularly interesting as these planets are likely to be losing mass, and may be the remnant cores of previously more massive objects. They lie below the Neptune desert in the orbital period - planet mass plane. Furthermore, transmission spectroscopy may reveal the chemical composition of the vaporising planetary surface, complementing the information deduced from the tidal properties.
Our potential detection of a second S-type planet highlights the power of the DMPP target selection. A mere ten years ago, Roell et al. (2012) drew conclusions on the state of discoveries of exoplanets in multiple stellar systems, two of the main points being that planets had not yet been detected in close binaries with separation \(<10\) au, and that multiple planets had not been detected in systems with separation \(<100\) au. DMPP has identified a system which defies these limitations, thus providing an excellent laboratory to study extra-solar system formation and evolution.
### Planet formation in a tight binary
The evolutionary history of the DMPP-3 system is an interesting topic for debate. Much consideration has been given to the circularisation of eccentric binary star orbits by tidal forces, with a comprehensive recent summary of the theoretical work given in the introduction of Zanazzi (2022). Tidal forces are obviously most influential for short orbital periods, and observational selection effects make systems with short orbital periods more likely to be discovered and further studied. Zanazzi (2022) accordingly do not consider periods as long as that of DMPP-3AB. Cakirli (2022) use a comprehensive sample of eclipsing binaries to conclude that long period binaries may be tidally circularised significantly more efficiently than is usually assumed. DMPP-3AB is more eccentric than any of their sample - cf. fig. 6 of Cakirli (2022) - but offsetting the high eccentricity, DMPP-3B has the lowest mass possible for a star. It is therefore difficult to rule out the stellar binary existing in more or less the current configuration for the entire main sequence lifetime of the primary star. HD 137496 is orbitally similar to DMPP-3, with a dense, hot super-Mercury on a 1.6 d orbit and a cold \(M_{p}\sin i\approx 7.7\) M\({}_{\rm J}\) giant on a 480 d orbit with \(e=0.477\pm 0.004\)(Azevedo Silva et al., 2022). It is possible these two systems share aspects of their evolutionary history.
#### 7.5.1 In situ planet formation
Assuming the system began in essentially the present configuration, we can consider the likelihood of in-situ planet formation. Though
they orbit in the semi-empirical 'safe' zone for S-type planets (Holman and Wiegert, 1999), the available mass for DMPP-3A b (Signal 2) and the putative Signal 4 planet to form out of would have been limited by truncation of the protoplanetary disc. Theoretical models of tidal truncation (for dust discs in the Taurus region) are described by Manara et al. (2019), who provide an analytical function for the truncation radius depending on the input binary parameters. Using their equation C.1 for the DMPP-3 system, we are able to determine truncation radii for differing Reynolds numbers (which inform the analytically derived coefficients used for a \(\mu\sim 0.1\) binary). The Reynolds number is related to the magnitude of viscous stress, which resonant torques need to overcome in order to truncate the disc (Zeng et al., 2022).
The resulting radii are \(<0.4\) au for Reynolds numbers of \(10^{4}\)-\(10^{6}\). Zeng et al. (2022) show in their study of the Gliese 86 system that the truncation radius decreases with increasing Reynolds number, and consider Reynolds numbers in the range \(10^{3}\)-\(10^{14}\). We thus take a lenient upper limit of 0.4 au as the truncation radius. To form the \(\sim 2\) M\({}_{\oplus}\) planet from a disc with radius 0.4 au, we would require a mean dust surface density of \(\Sigma_{\rm d}\sim 100\) g cm\({}^{-2}\). Tazzari et al. (2017) investigate dust densities in protoplanetary discs (in the Lupus star-forming complex) observed with ALMA. Through use of their equations 1 and 2 and the data in their table 3, we can determine mean \(\Sigma_{\rm d}\). For the 22 systems included in Tazzari et al. (2017), we calculate \(\Sigma_{\rm d}\lesssim 2\) g cm\({}^{-2}\). For the discs studied, there seems to be no relationship between disc parameters and stellar parameters (Tazzari et al., 2017). We can conclude that, unless the disc that formed DMPP-3A b was anomalously dense, it is unlikely that the system formed in the current configuration. This conclusion is reinforced by the likelihood that the hot planet(s) may also have been losing mass since their formation.
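The required surface density is simple to verify; the sketch below assumes all of the dust inside the truncation radius is available to the planet (a deliberately generous efficiency).

```python
import numpy as np

M_earth_g = 5.972e27
au_cm = 1.496e13

M_planet = 2.0 * M_earth_g       # ~2 Earth-mass minimum mass of DMPP-3A b
R_disc = 0.4 * au_cm             # lenient truncation radius

sigma_d = M_planet / (np.pi * R_disc**2)
print(f"mean dust surface density: {sigma_d:.0f} g/cm^2")   # ~100 g/cm^2
```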
#### 7.5.2 Mass loss
DMPP target stars were selected through the spectroscopic signature of circumstellar gas attributed to mass-losing hot planets (Haswell et al., 2020). DMPP-3 is \(\sim\)10 Gyr old; the planet(s) would need to have been initially more massive if they have been continually losing mass. Catastrophically disintegrating exoplanets (CDEs) provide the most dramatic examples of mass loss, exemplified by Kepler-1520b. This is a hot, \(\sim 0.1\) M\({}_{\oplus}\), rocky planet heated to \(\sim\) 2100 K, where extreme irradiation vaporises the rocky surface (Rappaport et al., 2012). Dust condenses from the metal-rich vapour and subsequently forms a comet-like tail which causes variable-depth transits. The planet DMPP-3A b has an estimated equilibrium temperature of \(T_{\rm eq}\sim 850\) K (B20). The putative Signal 4 planet would have \(T_{\rm eq}\sim 1800\) K, almost as hot as Kepler-1520 b.
Temperature is, however, not the only factor that dictates the mass-loss rate of a disintegrating planet. Perez-Becker and Chiang (2013) model the mass-loss history of a CDE for different initial masses. The larger the formation mass of the planet, the more likely it is to retain material, owing to the stronger surface gravity. Perez-Becker and Chiang (2013) found that for \(T=2145\) K, rocky planets of initial mass above \(\sim 0.12\) M\({}_{\oplus}\) are able to survive for longer than 10 Gyr (see their figure 9).
Booth et al. (2022) extend the simulations of CDE mass-loss history. For the first time, they include models for the formation of the dust grains, as well as progressing the treatment of dust heating by considering both stellar and re-emitted thermal radiation from the planet itself. Their analysis builds on previous work, and predicts mass-loss rate for a range of planet masses _and_ temperatures (Figure 6; Booth et al., 2022). This then implies how long the planets would survive under gas and dust loss. The shortest lived planets are the hottest and smallest, in agreement with previous works.
The planet(s) in the DMPP-3 system, despite being heated to high temperatures, have minimum masses far above the threshold required for significant mass loss. The putative Signal 4 planet has mass \(M_{\rm P}>0.8\) M\({}_{\oplus}\), compared with the far lower formation mass of \(\sim 0.05\) M\({}_{\oplus}\) required for it not to survive 10 Gyr at 1800 K (cf. Fig. 6 in Booth et al., 2022). The mass loss would be negligible in comparison to the total mass of the planet and the age of the system. Therefore, solely from the viewpoint of evaporation history, it is plausible that the planet(s) could reside in the current orbit(s) whilst retaining the majority of their material.
#### 7.5.3 Circumbinary capture
The protoplanetary disk truncation arguments (Section 7.5.1) imply that DMPP-3 has undergone dynamical reconfiguration. Formation of planets outside a close binary would be much easier to accomplish, and is feasible for distances far enough away from the central stars (Meschiari, 2012). There exists an instability zone outside the binary where the companion stirs up eccentricity of planetesimals causing higher encounter velocities. Faster impacts reduce the likelihood of forming large bodies through accretion (Paardekooper et al., 2012). Holman and Wiegert (1999) simulated the critical semi-major axis around the inner binary where a planet will always be stable, and using their semi-empirical formula we find a P-type critical semi-major axis of 3.95 au for the DMPP-3 system.
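The Holman & Wiegert (1999) fitting formulae can be evaluated directly. In the sketch below the masses are assumptions (a \(\sim 0.9\) M\({}_{\sun}\) primary and the Table 4 companion mass), so the exact critical radii depend on the adopted parameters.

```python
def hw99_s_type(mu, e, a_b):
    """Largest stable S-type (circumprimary) semi-major axis, HW99 eq. 1."""
    return a_b * (0.464 - 0.380*mu - 0.631*e + 0.586*mu*e
                  + 0.150*e**2 - 0.198*mu*e**2)

def hw99_p_type(mu, e, a_b):
    """Smallest stable P-type (circumbinary) semi-major axis, HW99 eq. 3."""
    return a_b * (1.60 + 5.10*e - 2.22*e**2 + 4.12*mu
                  - 4.27*e*mu - 5.09*mu**2 + 4.61*e**2*mu**2)

mu = 0.088 / (0.90 + 0.088)            # assumed mass ratio
print(hw99_s_type(mu, 0.596, 1.139))   # ~0.15 au
print(hw99_p_type(mu, 0.596, 1.139))   # critical ratio ~3.96 a_b
```

The S-type result confirms that DMPP-3A b and the Signal 4 candidate, at 0.067 and 0.033 au, sit comfortably inside the stable circumprimary zone.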
Some circumbinary planets have been found orbiting within this 'critical semi-major axis' in their system, e.g. Kepler-16 b (Meschiari, 2012). The stability limit for a system also depends on mean motion resonances, with regions of stability in-between first-degree (\(N\) : 1) resonances (Quarles et al., 2018; Martin and Fitzmaurice, 2022). The most likely evolutionary scenario would be the formation of a core far enough away from any unstable regions, followed by migration inwards (often quickly passing through unstable resonances) due to gravitational interactions - with either the protoplanetary disk or a second circumbinary planet (Meschiari, 2012; Fitzmaurice, Martin and Fabrycky, 2022).
Through migration towards the centre of the system, there is a possibility that such a planet could then be captured by one of the components of the binary. This was investigated by Gong and Ji (2018), who found that there was a low (but non-zero) probability of such a scenario occurring. The probability is improved with planet-planet scattering before a capture. This process would require an instability mechanism to stir up chaotic motions for any chance of capture - such as destabilising mean motion resonances (e.g. 5:1, 6:1, 7:1) with the inner binary (Martin and Fitzmaurice, 2022), or mutual inward resonant migration of two P-type planets forcing an inner planet too close to the central stars (Fitzmaurice et al., 2022). A P-type to S-type conversion would be very rare, but could be possible provided that the unstable motion does not end with ejection or collision with one of the stars (Sutherland and Fabrycky, 2016).
P-type planet capture seems plausible, albeit unlikely, for a single planet, but not so for two planets in an S-type orbit around the primary. The likelihood for two planets to be captured, and not ejected during any interactions, will then be far smaller. Further RV observations of DMPP-3 to confirm the putative second planet are needed to reveal whether or not this formation channel is viable.
#### 7.5.4 Alternative initial configurations
We can also consider migrations within the system. The current location of the binary and the instability zone it creates (Holman and Wiegert, 1999) would mean that for any planetary migrations from wider orbits, the binary star would also have needed to start life on a wider orbit. Interactions with protoplanetary discs can cause migrations that allow formation of planet pairs in mean motion resonance (MMR) despite the presence of a binary companion. Roisin et al. (2022) simulate the evolution of planet pairs, and find that resonances (such as the 3:1 resonance, close to the orbital period ratio between Signal 2 to Signal 4) can arise during migrations.
If DMPP-3B formed further out and migrated inwards, perhaps caused by interaction with an external object, then it would provide a safer environment for the circumprimary planet(s) to form. Low mass hydrogen burning secondary stars are typically formed through fragmentation of protostellar accretion disks (Kaplan et al., 2012). This fragmentation tends to happen on scales of \(\sim 100\) au, but a lot of these very low mass stars (VLMSs) end up close to the primary through scattering and secular migration (Kaplan et al., 2012).
The final scenario to consider here in the formation and evolution discussion is the possibility that the binary companion was not present during the formation of the planets. It seems plausible for planets to form (and potentially migrate inwards) close enough to the parent star, where they could remain safe during a capture of a more massive body at some point during the system's history. Exchange reactions with another system could potentially swap existing companions in four-body interactions, but would be rather unlikely (Kaplan et al., 2012). Tidal capture could occur between two objects, creating a binary system, which would tend to produce tight binaries (Bodenheimer, 2011). A chaotic capture of a VLMS by a Sun-like star could also be the mechanism that provided the large eccentricity we observe for DMPP-3B.
## 8 Conclusions
We have studied the dynamics of the compact, eccentric S-type binary DMPP-3. New observations allowed us to study the reflex radial velocities and examine the signatures of stellar activity in more depth than previous work. Our main conclusions are:
1. We derive significantly more precise parameters for the DMPP-3AB binary orbit. The 3\(\sigma\) lower limit on the projected mass of DMPP-3B is now \(80.9\,\mathrm{M}_{\mathrm{jup}}\). This establishes DMPP-3B can sustain hydrogen burning, and is a star at the very bottom of the main sequence.
2. The dynamically problematic \(800\,\mathrm{d}\) RV signal identified by B20 is confirmed, though the stacked periodogram suggests the signal may not be strictly periodic.
3. Numerical simulations demonstrate that there is no mutual inclination for which DMPP-3AB can harbour an object producing the 800 d signal via reflex radial velocities. This confirms and strengthens the conclusion of B20 that the \(800\,\mathrm{d}\) signal must arise from stellar activity.
4. Comprehensive investigation of the activity indicators provides evidence (from the S-index, FWHM, and CCF area) that the \(\sim\)\(800\,\mathrm{d}\) Signal 3 is an artefact of stellar activity.
5. We confirm the detection of the \(6.67\,\mathrm{d}\) S-type super-Earth planet DMPP-3A b and refine its parameters, finding \(M_{\mathrm{p}}\sin i=2.224^{+0.502}_{-0.279}\,\mathrm{M}_{\oplus}\).
6. An additional \(2.26\,\mathrm{d}\) Earth mass S-type planet candidate is tentatively detected (Signal 4; 0.2 per cent FAP; 2\(\sigma\) significance). Being both hotter and lower mass than DMPP-3A b this planet candidate would be more likely to produce radiation-driven mass loss, and create a diffuse circumstellar gas shroud. Further high precision RV observations are required to confirm this planet candidate.
7. There is no sign of either DMPP-3A b or Signal 4 being due to stellar activity, and orbital simulations demonstrate stability for a two-planet system.
8. The DMPP-3AB binary is detected astrometrically by _Gaia_. The resulting orbital inclination, \(63.89\pm 0.78^{\circ}\), allows us to constrain the mass of DMPP-3B to \(91.90\pm 0.85\,\mathrm{M}_{\mathrm{Jup}}\). If the planet(s) lie in the same orbital plane, we can estimate 'coplanar' masses of \(M_{\mathrm{A\,b}}=2.47\pm 0.56\,\mathrm{M}_{\oplus}\) and \(M_{\mathrm{Sig.4}}=1.186\pm 0.289\,\mathrm{M}_{\oplus}\).
9. There may be published long period planet 'detections' which have an origin analogous to that of Signal 3. Further RV monitoring of DMPP-3 will reveal signatures which can be used to most efficiently identify these imposters. This is perhaps our most important conclusion as it has serious and widely-applicable implications.
## Acknowledgements
These results were based on observations made with the ESO 3.6 m telescope and HARPS, under ESO programme IDs: 081.C-0148(A); 088.C-0662(A); 091.C-0866(C); 096.C-0876(A); 098.C-0269(A); 098.C0499(A); 098.C0269(B); 099.C-0798(A); 0100.C-0836(A). Specifically, the new observations reported here were obtained under ESO programme 107.22UN. These observations were obtained in service mode by staff at ESO La Silla Observatory.
The authors thank Matthew R. Standing and the anonymous referee for useful comments which greatly improved the quality of this manuscript. ATS and ZOBR are supported by STFC studentships. CAH and JRB are supported by grants ST/T000295/1 and ST/X001164/1 from STFC. JKB is supported by an STFC Ernest Rutherford Fellowship (grant ST/T004479/1). This research has made use of the SIMBAD data base, operated at CDS, Strasbourg, France. Simulations in this paper made use of the rebound code which can be downloaded freely at [http://github.com/hannorein/rebound](http://github.com/hannorein/rebound). Stellar parameter estimation in this work has made use of the species software ([https://github.com/msotov/SPECIES](https://github.com/msotov/SPECIES)). Radial velocity data were analysed with the Exo-striker software ([https://github.com/3fon3fonov/exostriker](https://github.com/3fon3fonov/exostriker)).
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data Availability
The data underlying this article are available in Open Research Data Online (ORDO; [https://ordo.open.ac.uk/](https://ordo.open.ac.uk/)), at [https://doi.org/10.21954/ou.rd.21324549.v1](https://doi.org/10.21954/ou.rd.21324549.v1). The datasets were derived from HARPS spectra in the public domain, accessed from ESO Phase 3 Archive ([http://archive.eso.org/wdb/wdb/adp/phase3_main/form?phase3_collection=HARPS](http://archive.eso.org/wdb/wdb/adp/phase3_main/form?phase3_collection=HARPS)).
|
2307.08437 | Synthesis of single-crystalline LuN films | In the nitrogen-doped lutetium hydride (Lu-H-N) system, the presence of Lu-N
chemical bonds plays a key role in the emergence of possible room-temperature
superconductivity at near ambient pressure. However, due to the synthesis of
single-crystalline LuN being a big challenge, the understanding of LuN is
insufficient thus far. Here, we report on the epitaxial growth of
single-crystalline LuN films. The crystal structures of LuN films were
characterized by high-resolution X-ray diffraction. The measurement of
low-temperature electrical transport indicates the LuN film is semiconducting
from 300 to 2 K, yielding an activation gap of $\sim$ 0.02 eV. Interestingly,
negative magnetoresistances can be observed below 12 K, which can result from
the defects and magnetic impurities in LuN films. Our results uncover the
electronic and magnetic properties of single-crystalline LuN films. | Guanhua Su, Shuling Xiang, Jiachang Bi, Fugang Qi, Peiyi Li, Shunda Zhang, Shaozhu Xiao, Ruyi Zhang, Zhiyang Wei, Yanwei Cao | 2023-07-17T12:31:49Z | http://arxiv.org/abs/2307.08437v1 | # Synthesis of single-crystalline LuN films
###### Abstract
In the nitrogen-doped lutetium hydride (Lu-H-N) system, the presence of Lu-N chemical bonds plays a key role in the emergence of possible room-temperature superconductivity at near ambient pressure. However, due to the synthesis of single-crystalline LuN being a big challenge, the understanding of LuN is insufficient thus far. Here, we report on the epitaxial growth of single-crystalline LuN films. The crystal structures of LuN films were characterized by high-resolution X-ray diffraction. The measurement of low-temperature electrical transport indicates the LuN film is semiconducting from 300 to 2 K, yielding an activation gap of \(\sim\) 0.02 eV. Interestingly, negative magnetoresistances can be observed below 12 K, which can result from the defects and magnetic impurities in LuN films. Our results uncover the electronic and magnetic properties of single-crystalline LuN films.
## I Introduction
Very recently, room-temperature superconductivity-like transitions at near-ambient pressure were observed in a mixture of LuH\({}_{3-\delta}\)N\({}_{e}\) (92.25% purity) and LuN\({}_{1-\delta}\)H\({}_{e}\) (7.29% purity) compounds which have both N-substitution and H-vacancy defects [1]. This remarkable report immediately ignited worldwide interest in studying Lu-H-N compounds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Interestingly, similar high-temperature transitions (with non-zero resistance) have been reproduced independently by other researchers [31]. As highlighted, the presence of Lu-N chemical bonds plays a key role in stabilizing the crystal structure and inducing possible room-temperature superconductivity at near-ambient pressure [1; 9; 30]. However, most experimental studies of the Lu-H-N system at present focus on Lu-H rather than Lu-N compounds.
Generally, bulk LuN with a rock-salt crystal structure can only be synthesized at high temperatures and high pressures (such as 2000 K and 30 GPa) in a diamond anvil cell [36]. As the last and heaviest element of the lanthanide rare-earth family, Lu has the electronic configuration \(4f^{14}5d^{1}6s^{2}\). Therefore, Lu\({}^{3+}\) has a fully filled \(4f\) shell, indicating that there is no net magnetic moment in ideal LuN. More interestingly, in rare-earth (RE) nitrides from 57 (La) to 71 (Lu), the \(f\) electrons become more localized with increasing atomic number. The Coulomb interaction U can be as large as 10.9 eV in LuN [37]. At ambient pressure, the stable phase of bulk LuN has a rock-salt crystal structure with the cubic \(Fm\bar{3}m\) space group [38]. The lattice parameter of bulk LuN is \(a\sim\) 4.76 Å, which does not match the common oxide substrates with lattice parameters \(a\sim\) 4.0 Å used for epitaxial growth [39]. Because the synthesis of single-crystalline LuN remains a big challenge, the understanding of its properties is insufficient thus far [36; 37; 38; 40; 41; 42].
To address the above, we synthesized single-crystalline LuN films by magnetron sputtering. High-resolution X-ray diffraction (XRD) measurements confirm the cubic crystal structure of the LuN films. The characterization of low-temperature electrical transport reveals that the LuN film is semiconducting, consistent with the value obtained from first-principles calculations. Interestingly, negative magnetoresistances are observed below 12 K, which can result from defects or magnetic impurities. Our results uncover the electronic and magnetic properties of single-crystalline LuN films.
## II Experiments and calculations
Single-crystalline LuN films (thickness \(\sim\) 230 nm) were epitaxially grown on (110)-oriented YAlO\({}_{3}\) (YAO) single-crystal substrates (5\(\times\)5\(\times\)0.5 mm\({}^{3}\)) by a home-made radio-frequency (RF) magnetron sputtering system, the setup of which is analogous to our previous reports [43; 44; 45]. The base vacuum pressure of the sputtering system is better than 1 \(\times\) 10\({}^{-7}\) Torr, and the purities of the 2-inch Lu target and the reactive gas are 99.99% and 99.999%, respectively. During the growth process, the substrate was held at 950 \({}^{\circ}\)C. The pressure of the reactive gas, a mixture of Ar and N\({}_{2}\) (9:1 ratio), was kept at 20 mTorr with a flow rate of 5.4 sccm. The crystal structures of LuN films were characterized by high-resolution XRD (Bruker D8 Discovery) with a Cu K\(\alpha\) source (\(\lambda\) = 1.5405 Å). The electrical transport properties of LuN films were measured from 300 to 2 K with a Physical Property Measurement System (PPMS) in a van der Pauw geometry (DynaCool, Quantum Design).
Our first-principles calculations were performed within density-functional theory (DFT) using the VASP package. The Perdew-Burke-Ernzerhof (PBE) functional, a form of the generalized gradient approximation (GGA), was applied to describe exchange and correlation. A primitive cell containing one Lu and one N atom was adopted in the calculations. K-point meshes over the Brillouin zone were 18 \(\times\) 18 \(\times\) 18 and 27 \(\times\) 27 \(\times\) 27 for structure optimization and static self-consistent computation, respectively, ensuring calculation accuracy. The convergence criteria for the total energy and force components were set to 1 \(\times\) 10\({}^{-6}\) eV and 0.01 eV/Å, respectively, and 420 eV was selected as the cut-off energy of the plane-wave basis. For this strongly correlated electron system, we applied a band-gap correction via the PBE+U scheme.
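For concreteness, the settings above can be collected into a VASP input; the dict below is a hypothetical sketch assembled from the quoted values. The authors' actual input files are not published, and the \(+U\) flavour (Dudarev-type, \(J=0\)) is our assumption:

```python
# Hypothetical VASP INCAR tags matching the DFT setup described above.
incar = {
    "ENCUT": 420,          # plane-wave cutoff (eV), as quoted
    "EDIFF": 1e-6,         # total-energy convergence (eV), as quoted
    "EDIFFG": -0.01,       # force convergence (eV/Angstrom), as quoted
    "LDAU": True,          # switch on DFT+U for the correlated 4f shell
    "LDAUTYPE": 2,         # Dudarev scheme (assumption)
    "LDAUL": [3, -1],      # apply U to Lu f orbitals; none on N
    "LDAUU": [5.5, 0.0],   # U_f = 5.5 eV on Lu, as used for the 0.27 eV gap
    "LDAUJ": [0.0, 0.0],
}
```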
## III Results and discussion
First, we investigate the crystal structures of LuN films by high-resolution XRD. As seen in Fig. 1(b), only one film peak, (002) near 37.905\({}^{\circ}\), can be observed in the wide-range 2\(\theta\)-\(\omega\) scan of LuN films on YAO substrates, corresponding to a lattice constant \(c\sim\) 4.74 Å, consistent with the lattice constant of bulk LuN (\(\sim\) 4.76 Å) [38]. It is noteworthy that bulk YAO has an orthorhombic structure with lattice parameters \(a_{o}\) = 5.176 Å, \(b_{o}\) = 5.307 Å, and \(c_{o}\) = 7.355 Å. Therefore, in view of pseudocubic symmetry, the lattice constants of the YAO (110) plane are \(a_{ps}\) = 3.678 Å and \(b_{ps}\) = 3.707 Å. Since the lattice parameter of bulk LuN is near 4.76 Å, epitaxial growth of LuN (001) films is possible with a 45\({}^{\circ}\) in-plane rotation of the crystal structure on YAO (110) substrates. To confirm the symmetry of LuN films, we performed a Psi scan of the LuN (111) crystal plane (see Fig. 1(c)), which reveals the four-fold symmetry of the films and further demonstrates their cubic structure.
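The lattice constants quoted in this paragraph can be cross-checked with a few lines of arithmetic; the sketch below applies Bragg's law to the (002) reflection and the pseudocubic construction we infer for the YAO (110) surface (half the \(c_{o}\) axis and half the face diagonal), which reproduces the quoted 3.678 Å and 3.707 Å:

```python
import math

# Bragg's law, lambda = 2 d sin(theta); for the (002) reflection c = 2 d.
lam = 1.5405                      # Cu K-alpha wavelength (Angstrom)
two_theta = 37.905                # observed LuN (002) peak (degrees)
d002 = lam / (2 * math.sin(math.radians(two_theta / 2)))
print(f"c = {2 * d002:.2f} Angstrom")           # ~4.74, as quoted

# Pseudocubic in-plane constants of the YAO (110) plane.
a_o, b_o, c_o = 5.176, 5.307, 7.355             # orthorhombic cell (Angstrom)
a_ps = c_o / 2
b_ps = math.hypot(a_o, b_o) / 2
print(f"a_ps = {a_ps:.3f}, b_ps = {b_ps:.3f}")  # 3.678, 3.707
```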
Next, we characterize the electronic properties of LuN films by measuring low-temperature electrical transport. As seen in Fig. 2, as the temperature decreases from 300 to 1.8 K, the sheet resistance of the LuN film increases from 0.77 k\(\Omega\)/\(\Box\) at 300 K
Figure 1: (a) Schematic of cubic LuN crystal structure. (b) Wide-range 2\(\theta\)-\(\omega\) scan of the LuN film on the YAO substrate. (c) Psi scan of the LuN film on the YAO substrate.
Figure 3: Calculated density of states of LuN without (a) and with (b) Hubbard interaction U.
Figure 2: Temperature-dependent sheet resistance of the LuN film from 300 to 2 K.
to 1.96 k\(\Omega\)/\(\Box\) at 1.8 K, indicating semiconducting behavior. The activation gap estimated by fitting the conductance (see the inset in Fig. 2) is \(\sim\) 0.02 eV. To further understand the electronic structure of LuN, we performed first-principles calculations. As shown in Fig. 3, the Coulomb interaction U plays a vital role in the electronic structure of LuN. Theoretically, LuN is metallic when U = 0, whereas a gap opens with increasing U. The calculated band gap is 0.27 eV when applying U\({}_{f}\) = 5.5 eV to the Lu-\(f\) orbitals, which is consistent with previously calculated indirect band gaps (such as 0.14 and 0.36 eV) [37; 38]. The deviation between experiment and calculation can be due to the presence of defects and impurities in real materials. In addition, it is noted that the spin-up and spin-down DOS are perfectly symmetric owing to the fully filled 4\(f\) shell of Lu\({}^{3+}\).
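The activation gap quoted here comes from an Arrhenius-type fit to the conductance. A minimal sketch of such a fit is given below; the factor of 2 in \(\sigma\propto\exp[-E_{g}/(2k_{B}T)]\) is a convention choice (gap vs. activation energy) and the fitting range is illustrative, since the paper does not state either:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def activation_gap(T, sheet_resistance, fit_range=(150.0, 300.0)):
    """Fit ln(sigma) vs 1/T assuming sigma ~ exp(-E_g / (2 k_B T))
    and return the gap E_g in eV."""
    T = np.asarray(T, dtype=float)
    sigma = 1.0 / np.asarray(sheet_resistance, dtype=float)
    mask = (T >= fit_range[0]) & (T <= fit_range[1])
    # ln(sigma) = ln(sigma_0) - (E_g / 2 k_B) * (1/T): linear in 1/T.
    slope, _ = np.polyfit(1.0 / T[mask], np.log(sigma[mask]), 1)
    return -2.0 * K_B * slope
```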
At last, we probe the magnetic properties of LuN films by carrying out measurements of temperature-dependent magnetoresistances. Here, the magnetoresistance is defined as \((R(B)-R(B=0))/R(B=0)\), with the magnetic field (perpendicular to the sample surface) applied from -4 T to 4 T. Unexpectedly, as shown in Fig. 4, a distinct negative magnetoresistance can be observed from 1.8 to 12 K, in contrast to the picture of a fully filled 4\(f\) shell of Lu\({}^{3+}\) in ideal LuN (see Fig. 3). Interestingly, with increasing temperature from 1.8 to 12 K, the magnetoresistance signal is strongly suppressed. Near 15 K, the magnetoresistance becomes very weak and its sign changes from negative to positive. There are several origins that can lead to negative magnetoresistance in nonmagnetic materials: (a) chiral anomaly in Weyl semimetals, (b) current jetting effects, (c) the weak localization effect, and (d) ferromagnetic impurities in samples [46; 47; 48; 49; 50]. Here, on the one hand, the negative magnetoresistance signal is very small (\(\sim\) 2%). On the other hand, there are several magnetic impurities (such as Ni and Fe) at the level of 30 ppm in the Lu metal, which can lead to significant spin-glass transitions near 200 K [6]. Therefore, the negative magnetoresistances observed at low temperatures can arise from magnetic impurities and defects in the LuN films.
Figure 4: Temperature-dependence of magnetoresistances of LuN films. (a) 1.8 - 10 K. (b) 12 - 50 K.
## IV Conclusion
In summary, we successfully synthesized single-crystalline LuN films on YAO substrates by magnetron sputtering. The crystal and electronic structures of LuN films were investigated by high-resolution XRD, low-temperature electrical transport, and first-principles calculations. The LuN film is semiconducting with an activation gap of \(\sim\) 0.02 eV. More interestingly, distinct negative magnetoresistances can be observed below 12 K, resulting from defects or magnetic impurities. Our results uncover the electronic and magnetic properties of single-crystalline LuN films.
## V Acknowledgments
We acknowledge insightful discussions with Jiandong Guo, Jiandi Zhang, Er-jia Guo, Bing Shen, Xiong Yao, and Rui Peng. This work was supported by the National Key R&D Program of China (Grant No. 2022YFA1403000), the National Natural Science Foundation of China (Grant Nos. U2032126, 11874058, and U2032207), the Pioneer Hundred Talents Program of the Chinese Academy of Sciences, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LXR22E020001, the Beijing National Laboratory for Condensed Matter Physics, the Ningbo Natural Science Foundation (Grant No. 2022J292), and the Ningbo Science and Technology Bureau (Grant No. 2022Z086).
|
2307.07704 | Bulk Johnson-Lindenstrauss Lemmas | For a set $X$ of $N$ points in $\mathbb{R}^D$, the Johnson-Lindenstrauss
lemma provides random linear maps that approximately preserve all pairwise
distances in $X$ -- up to multiplicative error $(1\pm \epsilon)$ with high
probability -- using a target dimension of $O(\epsilon^{-2}\log(N))$. Certain
known point sets actually require a target dimension this large -- any smaller
dimension forces at least one distance to be stretched or compressed too much.
What happens to the remaining distances? If we only allow a fraction $\eta$ of
the distances to be distorted beyond tolerance $(1\pm \epsilon)$, we show a
target dimension of $O(\epsilon^{-2}\log(4e/\eta)\log(N)/R)$ is sufficient for
the remaining distances. With the stable rank of a matrix $A$ as
$\lVert A\rVert_F^2/\lVert A\rVert^2$, the parameter $R$ is the minimal
stable rank over certain $\log(N)$ sized subsets of $X-X$ or their unit
normalized versions, involving each point of $X$ exactly once. The linear maps
may be taken as random matrices with i.i.d. zero-mean unit-variance
sub-gaussian entries. When the data is sampled i.i.d. as a given random vector
$\xi$, refined statements are provided; the most improvement happens when $\xi$
or the unit normalized $\widehat{\xi-\xi'}$ is isotropic, with $\xi'$ an
independent copy of $\xi$, and includes the case of i.i.d. coordinates. | Michael P. Casey | 2023-07-15T04:32:05Z | http://arxiv.org/abs/2307.07704v1 | # Bulk Johnson-Lindenstrauss Lemmas
###### Abstract
For a set \(X\) of \(N\) points in \(\mathbb{R}^{D}\), the Johnson-Lindenstrauss lemma provides random linear maps that approximately preserve all pairwise distances in \(X\) - up to multiplicative error \((1\pm\epsilon)\) with high probability - using a target dimension of \(O(\epsilon^{-2}\log(N))\). Certain known point sets actually require a target dimension this large - any smaller dimension forces at least one distance to be stretched or compressed too much. What happens to the remaining distances? If we only allow a fraction \(\eta\) of the distances to be distorted beyond tolerance \((1\pm\epsilon)\), we show a target dimension of \(O(\epsilon^{-2}\log(4e/\eta)\log(N)/R)\) is sufficient for the remaining distances. With the stable rank of a matrix \(A\) as \(\left\|A\right\|_{F}^{2}/\left\|A\right\|^{2}\), the parameter \(R\) is the minimal stable rank over certain \(\log(N)\) sized subsets of \(X-X\) or their unit normalized versions, involving each point of \(X\) exactly once. The linear maps may be taken as random matrices with i.i.d. zero-mean unit-variance sub-gaussian entries. When the data is sampled i.i.d. as a given random vector \(\xi\), refined statements are provided; the most improvement happens when \(\xi\) or the unit normalized \(\widehat{\xi-\xi^{\prime}}\) is isotropic, with \(\xi^{\prime}\) an independent copy of \(\xi\), and includes the case of i.i.d. coordinates.
* e-mail: [email protected]
* MSC: primary 68Q87, 68R12, 60B20; secondary 62G30, 68T09
* keywords: dimension reduction, Johnson-Lindenstrauss lemma, Hanson-Wright inequality, stable rank, effective rank, intrinsic dimension, order statistics, Walecki construction, bulk, batch, minibatch, random projection
## 1 Introduction
The Johnson-Lindenstrauss lemma [1] concerns the approximate preservation of distances in a finite point set in Euclidean space. Specifically, for a subset \(X\subset\mathbb{R}^{D}\) of \(N\) points and a tolerance \(\epsilon\in(0,1)\), there exist \(k\times D\) matrices \(Z\) and a constant \(\gamma(\epsilon)\) for which
\[(1-\epsilon)\left\|x-x^{\prime}\right\|_{2}\leq\sqrt{\frac{\gamma(\epsilon)}{ k}}\left\|Z(x-x^{\prime})\right\|_{2}\leq(1+\epsilon)\left\|x-x^{\prime}\right\|_{2}\] (JL)
holds _for all_ pairs of points \(x,x^{\prime}\in X\) simultaneously, with probability at least \(1-\delta\), provided
\[k=O(\epsilon^{-2}\log(N^{2}/\delta))=:D_{JL}(N)=:D_{JL}.\]
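A quick numerical illustration of (JL) with Gaussian \(Z\) (for which \(\gamma(\epsilon)=1\) works, since \(\mathbb{E}\left\|Zx\right\|_{2}^{2}=k\left\|x\right\|_{2}^{2}\)); the constant 8 in the target dimension is a generous illustrative choice, not the sharp constant:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
N, D, eps, delta = 500, 2000, 0.5, 0.01
k = int(np.ceil(8 * eps**-2 * np.log(N**2 / delta)))  # O(eps^-2 log(N^2/delta))

X = rng.normal(size=(N, D))              # the point set (here: random data)
Z = rng.normal(size=(k, D))              # i.i.d. N(0,1) entries

orig = pdist(X)                          # all pairwise distances in R^D
proj = pdist(X @ Z.T) / np.sqrt(k)       # distances after sqrt(1/k) * Z
ratio = proj / orig
print(ratio.min(), ratio.max())          # within [1-eps, 1+eps] w.h.p.
```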
The matrices \(Z\) are drawn randomly, and much work has been done since the original paper to equip \(Z\) with special properties, such as allowing fast matrix multiplication, preserving sparsity, restricting the matrix entries to discrete distributions, and so forth; see [11] for a recent review. The matrix \(Z\) provides a linear method for dimension reduction, which, at the very least, reduces the amount of space needed to store the dataset \(X\) on the computer, provided one can work with approximate versions of the pairwise distances. One would expect that "robust" downstream algorithms that depend on distance data, now working on |
2301.12939 | Data-driven soiling detection in PV modules | Soiling is the accumulation of dirt in solar panels which leads to a
decreasing trend in solar energy yield and may be the cause of vast revenue
losses. The effect of soiling can be reduced by washing the panels, which is,
however, a procedure of non-negligible cost. Moreover, soiling monitoring
systems are often unreliable or very costly. We study the problem of estimating
the soiling ratio in photo-voltaic (PV) modules, i.e., the ratio of the real
power output to the power output that would be produced if solar panels were
clean. A key advantage of our algorithms is that they estimate soiling, without
needing to train on labelled data, i.e., periods of explicitly monitoring the
soiling in each park, and without relying on generic analytical formulas which
do not take into account the peculiarities of each installation. We consider as
input a time series comprising a minimum set of measurements, that are
available to most PV park operators. Our experimental evaluation shows that we
significantly outperform current state-of-the-art methods for estimating
soiling ratio. | Alexandros Kalimeris, Ioannis Psarros, Giorgos Giannopoulos, Manolis Terrovitis, George Papastefanatos, Gregory Kotsis | 2023-01-30T14:35:47Z | http://arxiv.org/abs/2301.12939v1 | # Data-driven soiling detection in PV modules
###### Abstract
Soiling is the accumulation of dirt in solar panels which leads to a decreasing trend in solar energy yield and may be the cause of vast revenue losses. The effect of soiling can be reduced by washing the panels, which is, however, a procedure of non-negligible cost. Moreover, soiling monitoring systems are often unreliable or very costly. We study the problem of estimating the soiling ratio in photo-voltaic (PV) modules, i.e., the ratio of the real power output to the power output that would be produced if solar panels were clean. A key advantage of our algorithms is that they estimate soiling, without needing to train on labelled data, i.e., periods of explicitly monitoring the soiling in each park, and without relying on generic analytical formulas which do not take into account the peculiarities of each installation. We consider as input a time series comprising a minimum set of measurements, that are available to most PV park operators. Our experimental evaluation shows that we significantly outperform current state-of-the-art methods for estimating soiling ratio.
_Keywords:_ Solar energy, solar panels, soiling, performance loss, time series analysis
## 1 Introduction
Soiling is the accumulation of dirt on the surfaces of photo-voltaic (PV) modules, which leads to a loss in the power output. Soiling is typically caused by airborne particles, including for example dust, pollen and soot. Depending on the location, soiling may also be caused by heavier material such as ice, bird droppings, or falling leaves.
One standard way to quantify soiling is by the _soiling ratio_\(SR\)[1], which is defined as the ratio of the real power output to the power output that would be produced if solar panels were clean. _Soiling loss_ is then defined as \(1-SR\), and _soiling rate_ is defined as the (daily) rate of change of the soiling loss. Other metrics have been also proposed, e.g., the insolation-weighted soiling ratio [13], aiming to better capture the loss induced by soiling.
To reduce the effect of soiling, PV modules must be cleaned on strategically chosen dates that balance the cost induced by energy loss against the cleaning costs. Detection of time periods during which soiling severely affects power output is therefore significant for the efficient scheduling of cleanings. What makes the problem challenging is the shortage of labelled data, which is caused by the fact that soiling monitoring systems are often considered unreliable or costly. For example, soiling stations, which are the most common commercially available soiling monitoring solution [1], still require regular cleanings and maintenance, which can be expensive, especially in remote locations, and imperfect cleanings can result in significant measurement uncertainty [14]. Therefore, soiling periods must be deduced from measurements of a number of reliable variables, e.g., power output, irradiance, temperature.
Existing methods that detect soiling follow two alternative strategies: a) they train a model on labelled data, i.e., data where the soiling of the panels has been logged using specialized sensors and cleaning events
have been explicitly recorded (e.g., [12, 13]), or b) they use an analytical formula for optimal energy output based on environmental readings (e.g., [14, 15, 16]). The former strategy is more accurate but requires significant resources to produce the labelled data, which must be generated separately for each installation. The latter strategy does not take into account the peculiarities of each installation and leads to less accurate results (as we demonstrate in Section 4). The main advantage of our method is that it is purely data-driven, in the sense that it does not require a generic analytical formula for the relation between power output and the commonly used environmental readings, but learns this relation in a self-supervised manner (without the need for labelled data). This way we achieve better results than methods that rely on analytical formulas, without the cost of methods that need explicitly labelled data.
We consider as input the monitoring data from the park operation, i.e., a time series with measurements of power output, irradiance, and module temperature for a certain array or string of PV modules, precipitation, and dates on which the solar panels were manually cleaned for maintenance (if such information exists). The soiling ratio over a sequence of timestamps \(t_{1},\ldots,t_{n}\) is defined as \(\mathcal{SR}=\frac{P_{t_{1}}}{P_{t_{1}}^{*}},\ldots,\frac{P_{t_{n}}}{P_{t_{n}}^{*}}\), where each \(P_{t_{i}}\) is the actual power output corresponding to timestamp \(t_{i}\), and \(P_{t_{i}}^{*}\) is the expected power output assuming that the solar panels are clean, corresponding to the same timestamp. Our framework trains a regression model \(\mathcal{M}\) which accurately predicts \(P_{t_{1}}^{*},\ldots,P_{t_{n}}^{*}\) (which are not given as input). This yields an estimate for the soiling ratio as \(\mathcal{SR}_{\mathcal{M}}=\frac{P_{t_{1}}}{\tilde{P}_{t_{1}}},\ldots,\frac{P_{t_{n}}}{\tilde{P}_{t_{n}}}\), where each \(\tilde{P}_{t_{i}}\) is the value predicted by \(\mathcal{M}\) for timestamp \(t_{i}\). We aim for \(\mathcal{M}\) such that \(\mathcal{SR}_{\mathcal{M}}\approx\mathcal{SR}\). Rainy periods (extracted from precipitation measurements) and manual cleanings are used in the "learning" phase of our proposed model. One of our methods can run exclusively on rain information, in case manual cleanings are not performed or logged.
The main advantages of our method are that it does not require measurements of soiling from specialized equipment, which can be costly or inaccurate; it does not rely on the accuracy of an analytical formula for the optimal energy output of the park; and it is agnostic to the type of PV modules employed. As a purely data-driven approach, it depends solely on the availability of data, and in particular on a minimal set of generally available variables. Our approach is robust to misinformation about manual cleanings because it checks each potential cleaning to determine its effect on power output. Moreover, manual cleanings that are not logged have a negligible effect on our approach; their existence can only affect the quality of the training set positively.
In Section 2, we discuss related work, in Section 3.1 we provide necessary background, in Section 3.2 we present a detailed description of our methods, and in Section 4 we present our experimental findings.
## 2 Related work
PVUSA introduced a method for rating PV systems based on a simple regression model [1] which employs the simplifying assumption that array current depends only on irradiance and that array voltage depends only on module temperature. Massi Pavan et al. [12] compare the standard test conditions (STC) (irradiance: \(1000W/m^{2}\), module temperature: \(25^{\circ}C\)) performance of a PV park before and after its cleaning. In order to determine the performance at STC conditions, they use a regression model, suggested in [12], that accepts as input the two main climate features, i.e., the in-plane global irradiance and the photo-voltaic module temperature. However, their work requires as input labelled data, i.e., time series extracted from both clean and soiled PV modules. Massi Pavan et al. [12] also developed four Bayesian Neural Network (BNN) models with the aim of calculating the STC performance of two plants before and after a complete clean-up of their modules. The idea is that differences between the STC power before and after the clean-up represent the losses due to the soiling effect. This approach likewise requires labelled data, i.e., time series extracted from both clean and soiled PV modules.
Closer to our work are methods which estimate soiling losses based on PV system data. The Fixed Rate Precipitation (FRP) method [14] calculates the daily soiling loss. The method requires as input:
the slope of the performance metric/index during the longest dry period, a cleaning threshold for rains, i.e., the minimum amount of daily precipitation required to have a cleaning effect on PV modules, and a number of days after a raining period for which no soiling occurs. The method implicitly assumes that the soiling rate remains the same throughout time. This requirement can be very restrictive, because of the different types of soiling that may occur, depending also on the location or the season. For the same reason, it is unrealistic to assume that there is a certain minimum value classifying rains as effective. More recently, Deceglie, Micheli, and Muller [14] developed a new method for quantifying soiling loss, which compares favourably to FRP. The new method is termed the stochastic rate and recovery (SRR) method. It uses an analytical formula, calculated over values for irradiance and module temperature, to compute the expected power output, which is then used to compute a performance metric. The method first detects soiling intervals in a dataset, and then, based on the observed characteristics of each interval, estimates the total loss. Notice that SRR provides an aggregate estimate of soiling loss, calculated for the whole input period, while our focus lies on determining soiling loss even on shorter periods of time. Skomedal and Deceglie [1] proposed the combined degradation and soiling method for further analyzing a performance metric signal. Finally, Micheli et al. [13] consider non-linear degradation in soiling intervals, and they apply various methods for changepoint detection to obtain a refined soiling profile. All methods studied there are based on finding changepoints on the performance metric curve, as calculated by SRR. On the contrary, our approach detects changepoints as an intermediate step towards computing a performance metric. It is apparent from recent work that improvements in estimating the expected power output translate directly into improvements on various tasks in PV data analysis.
## 3 Methodology
### Preliminaries
#### 3.1.1 Basic assumptions and definitions
Our input consists of a multi-variate time series containing measurements for: i) power output, ii) irradiance, iii) module temperature, iv) precipitation. Our methods can be further enhanced if we are also given as input the dates on which the PV modules were manually cleaned.
Let \(\mathcal{R}\) be the set of all rains, defined as follows: \([t,t^{\prime}]\in\mathcal{R}\) if and only if there is a rain starting at \(t\) and ending at \(t^{\prime}\). Rains are extracted from input as maximal time intervals containing positive precipitation values. Similarly, if manual cleanings are provided let \(\mathcal{C}\) be the set of all such intervals, defined as follows: \([t,t^{\prime}]\in\mathcal{C}\) if and only if we know that the PV modules were being cleaned between timestamps \(t\) and \(t^{\prime}\). We denote by \(\mathcal{W}_{p}\) the set of all potential cleaning events, defined as \(\mathcal{W}_{p}=\mathcal{C}\cup\mathcal{R}\). We assume that precipitation measurements are sufficiently frequent, so that we can accurately detect rains.
#### 3.1.2 Regression models
A basic component of our methods is regression. We fit regression models to represent power output during "dirty" or "clean" periods, and we use prediction errors to detect performance changes. We consider as feature variables the irradiance and the module temperature, and the target outcome corresponds to the power output. We apply _Ridge Regression with polynomial features_, which is parameterized by the degree of the regression polynomial and a regularization strength parameter for the linear least squares function (the loss function), where regularization is given by the \(\ell_{2}\)-norm. The parameters were selected during the initial stages of the algorithm development process, where we experimented with cross-validation and hyper-parameter tuning techniques. The exact values used in our experiments are discussed in Section 4. Our model selection was a consequence of preliminary experiments with various (simple) regression models, such as Ordinary Least Squares, Support Vector Regression, etc., that we executed on a CPU with a maximum processor frequency of 3.7 GHz and 256 GB of available RAM. In the experiment that we conducted, we randomly chose 100 time intervals of maximum duration of one month from the time series provided in [1], which are also discussed in Section 4, and we randomly split them into training and testing subsets containing 80%
and \(20\%\) of the points, respectively. Our choice satisfies a twofold objective: i) good accuracy and ii) fast fitting time. The latter is vital in our method, which fits one model for each potential cleaning. Table 1 contains MAPE values and fitting times for four different models. Polynomial features and the polynomial kernel used in Support Vector Regression (SVR) are of degree \(3\). The highest accuracy is achieved by SVR with linear kernel and polynomial features, being roughly \(0.4\%\) better than Ridge Regression, which is the second best. However, the fitting time of SVR is at least one order of magnitude higher than that of Ridge Regression. Ridge Regression is a simple model that adds only one extra tunable parameter to our learning pipeline, and the regularization it provides acts as a measure to prevent overfitting. We also emphasize that one can easily plug in any regression model in our approach.
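As a concrete illustration, the chosen model can be assembled in a few lines with scikit-learn; this sketch uses the degree-3 polynomial features and \(\alpha=10^{-4}\) quoted in Section 4, plus the \([0,1]\) min-max scaling described in Section 3.2 (the exact implementation used in our experiments is not reproduced here):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
from sklearn.linear_model import Ridge

# Features: (irradiance, module_temperature); target: power output.
model = make_pipeline(
    MinMaxScaler(),                  # scale measurements to [0, 1]
    PolynomialFeatures(degree=3),    # degree-3 polynomial features
    Ridge(alpha=1e-4),               # l2-regularized least squares
)
# model.fit(X_train, y_train); p_hat = model.predict(X)
```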
Several steps in our approach rely on computing measures for the prediction accuracy of our model. Let \(\mathcal{Y}=Y_{t_{1}},\ldots,Y_{t_{n}}\), \(\tilde{\mathcal{Y}}=\tilde{Y}_{t^{\prime}_{1}},\ldots,\tilde{Y}_{t^{\prime}_{n}}\) be two univariate time series, and let \(T=\{t_{1},\ldots,t_{n}\},T^{\prime}=\{t^{\prime}_{1},\ldots,t^{\prime}_{n}\}\). We use a variant of the mean absolute percentage error (MAPE) which is defined over time intervals as follows: for any \([t,t^{\prime}]\subseteq T\cap T^{\prime}\),
\[\text{mape}_{0}(\mathcal{Y},\tilde{\mathcal{Y}},[t,t^{\prime}])=\frac{\text{mean}(\{|Y_{j}-\tilde{Y}_{j}|\mid j\in[t,t^{\prime}]\})}{\text{mean}(\{|Y_{j}|\mid j\in[t,t^{\prime}]\})}.\]
Note that \(\text{mape}_{0}\) is robust to zero true values (as long as not all of them are zeroes) since it uses as denominator the mean of the values, as opposed to standard MAPE where all actual values appear as denominators leading to singularities even if there is only one zero true value. When \(\mathcal{Y}\) and \(\tilde{\mathcal{Y}}\) are clear from the context, we omit them from our notation and we simply write \(\text{mape}_{0}([t,t^{\prime}])\). We also use the median multiplicative error defined as \(\text{mede}(\mathcal{Y},\tilde{\mathcal{Y}})=\text{median}\left(\left\{ \frac{Y_{i}}{\tilde{Y}_{j}}\mid i\in T,j\in T^{\prime}\right\}\right)\).
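The two error measures translate directly into code; a minimal NumPy sketch following the definitions above literally (in particular, \(\text{mede}\) is taken over all pairs \((i,j)\), as written):

```python
import numpy as np

def mape0(y_true, y_pred):
    """mape_0 over an interval; robust to individual zero true values,
    since the denominator is the mean of |Y_j| rather than each |Y_j|."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred)) / np.mean(np.abs(y_true))

def mede(y, y_tilde):
    """Median multiplicative error: median of Y_i / Y~_j over all pairs."""
    y = np.asarray(y, dtype=float)
    y_tilde = np.asarray(y_tilde, dtype=float)
    return np.median((y[:, None] / y_tilde[None, :]).ravel())
```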
### Soiling detection
In this section, we formally describe our methods, which are composed of two main steps. The first step is that of detecting cleaning events. Then, using these cleaning events we define training periods for regression models aiming to capture the optimal performance of the PV modules. In all our methods, we fit regression models which capture the dependence of power output on the values of irradiance and module temperature, i.e., power output is the dependent variable, while irradiance and module temperature are the feature variables. Measurements are scaled to \([0,1]\) by subtracting the minimum value and dividing by the range of values. Figure 1 summarizes the main steps of our methods.
#### 3.2.1 Baseline soiling estimator
We first present our baseline approach for estimating the soiling ratio. Our baseline algorithm is based on the following assumption: manual cleanings alone define points in time where the PV modules are clean. While these points are not sufficiently many to define a training set, we can extend them to short intervals of a user-defined length \(w_{train}\). This is the amount of time during which we can safely assume that the panels remain clean.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Model** & **MAPE** & **Fitting time (s)** \\ \hline
Linear Regression & 0.0812 & 0.0015 \\ \hline
Ridge Regression with polynomial features & 0.0807 & 0.0012 \\ \hline
Support Vector Regression with polynomial kernel & 1.0648 & 0.0177 \\ \hline
Support Vector Regression with linear kernel and polynomial features & 0.0770 & 0.0666 \\ \hline
\end{tabular}
\end{table}
Table 1: Evaluation of regression models.
We fit a regression model that aims to capture the power output when the PV modules are clean. To this purpose, we fit a regression model \(\mathcal{M}\) on the set of input points with timestamps from \(\bigcup_{[t,t^{\prime}]\in\mathcal{C}}[t^{\prime},t^{\prime}+w_{train}]\). We define \(\mathcal{SR}_{\mathcal{M}}=\frac{P_{t_{1}}}{\tilde{P}_{t_{1}}},\ldots,\frac{P_{t_{n}}}{\tilde{P}_{t_{n}}}\) as the modelled soiling ratio, where each \(P_{t_{i}}\) is an input power output value and \(\tilde{P}_{t_{i}}\) is the value predicted by \(\mathcal{M}\).
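A compact sketch of the baseline estimator; the column names and the pandas-based interface are illustrative choices, and `model` is any regressor with a scikit-learn-style `fit`/`predict` (e.g., the pipeline of Section 3.1.2):

```python
import pandas as pd

def baseline_soiling_ratio(df, cleanings, model, w_train="30D"):
    """df: time-indexed frame with columns irradiance, module_temp, power.
    cleanings: list of (start, end) timestamps of manual cleanings.
    Train on the w_train window after each cleaning (panels assumed
    clean there), then return SR_M = P / P_hat over the whole series."""
    mask = pd.Series(False, index=df.index)
    for _, t_end in cleanings:
        mask |= (df.index > t_end) & (df.index <= t_end + pd.Timedelta(w_train))
    train = df[mask]
    model.fit(train[["irradiance", "module_temp"]], train["power"])
    p_hat = model.predict(df[["irradiance", "module_temp"]])
    return df["power"] / p_hat
```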
#### 3.2.2 Forward checking soiling estimator (FCSE)
Our first method examines each potential cleaning event independently and assigns scores which represent the significance of the detected change of behavior. Five input parameters are required: the length of the training period \(w_{1}\), the length of the validation period \(w_{2}\), the length of the test period \(w_{3}\), a parameter \(q\) defining the quantile of the scores which classifies events as cleanings, and the length \(w_{train}\) defining the training set for the final regression model used to estimate soiling. For each interval \([t,t^{\prime}]\in\mathcal{W}_{p}\), we fit a regression model on the time interval \([t-w_{1}-w_{2},t-w_{2})\), we validate it on the time interval \([t-w_{2},t)\), and we test it on the time interval \((t^{\prime},t^{\prime}+w_{3}]\). We compute the function \(\text{mape}_{0}\) on the validation interval, and if the returned value is greater than 5% then we consider this event invalid and discard it from further consideration. This threshold aims to discard events that we are unable to classify with certainty. The reasons behind choosing 5% as our threshold are the following. First, due to the nature of our task, the regression model must make very accurate predictions and detect power deviations at a very small scale, which calls for a tight threshold. On the other hand, the threshold must be pragmatic: an extremely small value would lead to unrealistic outputs where no cleaning events are detected and, consequently, no soiling estimation can be derived. We experimentally validate our choice of 5% in Section 4.2.2.
The intuition is that if the PV modules under-perform due to soiling, for a time period preceding \(t\), then the regression model captures this under-performing behaviour and if \([t,t^{\prime}]\) is a cleaning event then the model should underestimate the power output in \((t^{\prime},t^{\prime}+w_{3}]\). To compute the score of the potential cleaning event \([t,t^{\prime}]\), we first compute \(\mathcal{PI}_{val}\) as the sequence of actual power output values divided by the predicted power output values for the time interval \([t-w_{2},t)\), and \(\mathcal{PI}_{test}\) as the sequence of actual power output values divided by the predicted power output values for the time interval \((t^{\prime},t^{\prime}+w_{3}]\). Then, the score assigned to \([t,t^{\prime}]\) is \(\text{mede}(\mathcal{PI}_{val},\mathcal{PI}_{test})\). We define as cleaning events all intervals \([t,t^{\prime}]\in\mathcal{W}_{p}\) with score above the \(q\)th-quantile of all scores. Let \(\mathcal{W}_{1}\) be the set of detected cleaning events. We fit a regression model \(\mathcal{M}\) on the input points with timestamps from \(\bigcup_{[t,t^{\prime}]\in\mathcal{W}_{1}}[t^{\prime},t^{\prime}+w_{train}]\). The intuition is that cleaning events define points in time where the PV modules are clean. Obviously, these points are not sufficiently many to define a proper training set. By extending these points to (short) intervals, of length \(w_{train}\), we increase the size of
Figure 1: Basic steps of our methods. Manual cleanings are optional for FCSE. To detect cleaning events, FCSE fits one regression model before each potential cleaning event, while BCSE fits one regression model using manual cleaning dates and uses it in classifying all cleaning events.
the training set without (significantly) affecting its quality. We define \(\mathcal{SR}_{\mathcal{M}}=\frac{P_{t_{1}}}{\tilde{P}_{t_{1}}},\ldots,\frac{P_{t_{n}}}{\tilde{P}_{t_{n}}}\) as the estimated soiling ratio, where each \(P_{t_{i}}\) is an input power output value and \(\tilde{P}_{t_{i}}\) is the value predicted by the regression model \(\mathcal{M}\).
Notice that FCSE does not require having the cleaning dates \(\mathcal{C}\) as input, and we could simply have \(\mathcal{W}_{p}=\mathcal{R}\).
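The event-scoring loop of FCSE can be sketched as follows, reusing `mape0`/`mede` from Section 3.1.2 and the same illustrative column names as the baseline sketch; the orientation of the mede ratio follows the definitions quoted above, and `make_model` is any factory returning a fresh regressor:

```python
def fcse_scores(df, events, make_model, w1, w2, w3):
    """events: list of (t, t_end) potential cleaning intervals;
    w1, w2, w3: pandas Timedeltas for the train/validation/test windows.
    Returns a dict mapping each surviving event to its mede score."""
    cols = ["irradiance", "module_temp"]
    scores = {}
    for t, t_end in events:
        train = df[(df.index >= t - w1 - w2) & (df.index < t - w2)]
        val = df[(df.index >= t - w2) & (df.index < t)]
        test = df[(df.index > t_end) & (df.index <= t_end + w3)]
        if min(len(train), len(val), len(test)) == 0:
            continue
        model = make_model()
        model.fit(train[cols], train["power"])
        val_pred = model.predict(val[cols])
        if mape0(val["power"], val_pred) > 0.05:
            continue  # model not accurate enough to classify this event
        pi_val = val["power"].to_numpy() / val_pred
        pi_test = test["power"].to_numpy() / model.predict(test[cols])
        scores[(t, t_end)] = mede(pi_val, pi_test)
    return scores
```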
#### 3.2.3 Backward checking soiling estimator (BCSE)
Our second method builds upon the baseline approach. This method requires five input parameters \(w_{1}\), \(w_{2}\), \(w_{3}\), \(q\), \(w_{train}\). Parameters \(w_{1}\) and \(w_{2}\) denote the length of the testing period preceding the potential cleaning event and the length of the validation period following the potential cleaning event, respectively. Parameter \(w_{3}\) denotes the length of the time period following each \([t,t^{\prime}]\in\mathcal{C}\) during which the modules are assumed to remain clean. Parameter \(q\) defines the quantile of the scores which classifies events as cleanings. Parameter \(w_{train}\) is used to define the training set of the final regression model for estimating the soiling ratio. We train one regression model on the set of points defined by timestamps in \(\bigcup_{[t,t^{\prime}]\in\mathcal{C}}[t^{\prime},t^{\prime}+w_{3}]\). This model aims to capture the modules' "clean" performance. For each \([t,t^{\prime}]\in\mathcal{W}_{p}\), we use our model to make predictions on \([t-w_{1},t)\) and \((t^{\prime},t^{\prime}+w_{2}]\). If \(\text{mape}_{0}((t^{\prime},t^{\prime}+w_{2}])\) is greater than 5% then we consider this interval invalid and discard it from further consideration. As in FCSE, this filtering step avoids considering events that our models fail to classify with sufficient certainty.
The intuition is that if \([t,t^{\prime}]\) is a cleaning event, then the PV modules' performance during \([t^{\prime},t^{\prime}+w_{2}]\) must resemble the "clean" performance as predicted by our regression model. Similarly, if the modules under-perform during \([t-w_{1},t)\), then the induced ratio of the actual power output over the predicted power output must be significantly smaller than 1. To compute the score of the potential cleaning event \([t,t^{\prime}]\), we first compute \(\mathcal{PI}_{before}\) as the sequence of actual power output values divided by the predicted power output values for the time interval \([t-w_{1},t)\), and \(\mathcal{PI}_{after}\) as the sequence of actual power output values divided by the predicted power output values for the time interval \((t^{\prime},t^{\prime}+w_{2}]\). Then, the score assigned to \([t,t^{\prime}]\) is \(\text{mede}(\mathcal{PI}_{before},\mathcal{PI}_{after})\). We define as cleaning events all intervals \([t,t^{\prime}]\in\mathcal{W}_{p}\) with score above the threshold \(thrsh\), the \(q\)th-quantile of all scores. Let \(\mathcal{W}_{2}\) be the set of detected cleaning events. We fit a regression model \(\mathcal{M}\) on the input points with timestamps from \(\bigcup_{[t,t^{\prime}]\in\mathcal{W}_{2}}[t^{\prime},t^{\prime}+w_{train}]\). As in FCSE, the intuition is that cleaning events define points in time where the PV modules are clean. Obviously, these points are not sufficiently many to define a training set. By extending these points to (short) intervals of length \(w_{train}\), we increase the size of the training set without (significantly) affecting its quality. We define \(\mathcal{SR}_{\mathcal{M}}=\frac{P_{t_{1}}}{\tilde{P}_{t_{1}}},\ldots,\frac{P_{t_{n}}}{\tilde{P}_{t_{n}}}\) as the estimated soiling ratio, where each \(P_{t_{i}}\) is an input power output value and \(\tilde{P}_{t_{i}}\) is the value predicted by \(\mathcal{M}\).
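Both detectors share the same final step: quantile thresholding followed by retraining on the post-event windows. It composes directly with the earlier sketches (`baseline_soiling_ratio` plays the role of the final model fit; names remain illustrative):

```python
import numpy as np

def detect_and_estimate(df, scores, model, q=0.9, w_train="30D"):
    """scores: dict from fcse_scores (or its BCSE analogue).
    Keep events scoring above the q-th quantile threshold thrsh and
    train the final model on the w_train window after each of them."""
    thrsh = np.quantile(list(scores.values()), q)
    cleanings = [ev for ev, s in scores.items() if s > thrsh]
    return baseline_soiling_ratio(df, cleanings, model, w_train)
```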
## 4 Experiments
### 4.1 Datasets
_State-of-the-art dataset._ To evaluate our methods, we use a dataset provided in [1], which contains a set of current-voltage (I-V) curves and associated meteorological data for PV modules representing all flat-plate PV technologies, for three different locations and climates, over approximately one-year periods. For each location, we are given values for a normalized metric, called the _soiling derate_, which is computed using measurements of short-circuit current and irradiance from two identical PV modules: one that is cleaned during daily maintenance, and one that is not. The soiling derate is the result of dividing daily values of ampere-hours per kilowatt-hours per square meter Plane of Array (POA) irradiance for the not-cleaned PV module by the corresponding values of the cleaned PV module [1]. The soiling derate aims to provide a performance index analogous to the soiling ratio, estimated on real measurements. We emphasize that the soiling derate is only used for the evaluation of our methods and is not utilized as input (nor in SRR). The time granularity is 5 minutes, and measurements are provided for all hours of daylight. The
three locations are Cocoa, Florida, USA; Eugene, Oregon, USA; and Golden, Colorado, USA. PV modules in Cocoa and Eugene were cleaned when necessary to keep soiling loss at a reasonable level. PV modules in Golden were not cleaned, because frequent rains helped maintain a reasonable level of soiling loss. Cocoa has a minimum soiling derate of 0.985, Eugene has a minimum soiling derate of 0.964, and Golden has a minimum soiling derate of 0.977.
In our methods, we use measurements of the maximum power of the PV module in watts, the amount of solar irradiance in watts per square meter received on the PV module surface, the PV module back-surface temperature, and the accumulated daily total precipitation. The dataset also provides dates on which all PV modules were cleaned. We apply our methods to PV modules that were used in estimating the soiling derate, and in particular to those that were not cleaned every day. As discussed in Section 3.1.2, our methods utilize Ridge Regression models. For those models, we use polynomial features of the 3rd degree and a regularization strength parameter \(\alpha=10^{-4}\) during the fitting stages.
_Real-world dataset._ We also consider a real-world scenario, where no ground truth is available. We test our methods on a dataset from a very different location with different climate conditions, comprising measurements from a solar park located in Greece. We are given values for power output, irradiance, module temperature, and precipitation at a time granularity of 15 min for a period of approximately 7 years, together with 15 dates of manual cleanings.
### 4.2 Method evaluation and discussion
#### 4.2.1 Soiling estimation
We evaluate our methods by comparing them to the analogous model used in SRR. To show the robustness of our methods under different parameter settings, we try various lengths for the periods used in changepoint detection. Table 2 lists the respective values (in days) for parameters \(w_{1},w_{2},w_{3}\) in FCSE and \(w_{1},w_{2}\) in BCSE. The rest of the parameters are set as follows: we apply FCSE with parameters \(q=0.9\) and \(w_{train}=30\) days, and BCSE with parameters \(q=0.9\), \(w_{3}=30\) days, and \(w_{train}=30\) days. The baseline soiling estimator is applied with \(w_{train}=30\) days. Since our methods are unsupervised, standard automated tuning techniques cannot be used to optimize the above parameters. Essentially, domain expertise is the main guide for selecting parameters appropriately, also depending on the properties of each location that affect the rate at which soiling progresses. However, as Table 2 indicates, the methods are robust within a range of reasonable values for the parameters. The fixed parameters \(w_{train}\) (and \(w_{3}\) in BCSE) define time periods during which a clean solar panel is likely to remain clean. While smaller values for \(w_{train}\) (resp. \(w_{3}\)) seem to provide safer conclusions, larger values provide a bigger size and diversity of the induced training set. The parameter \(q\) defines a threshold on how important a changepoint should be in order to be considered a cleaning event. Setting \(q=0.9\) implies that the top-scored 10% of potential cleanings will be considered cleaning events. Factors that must be taken into account when setting this parameter include the total number of potential changepoints, the parameters \(w_{3}\) and \(w_{train}\), and the size of the dataset. While larger values of \(q\) tend to lead to safer conclusions about cleaning events, they may decrease the size of the training set, negatively affecting the final regression model.
We juxtapose our estimated soiling ratio with the ground-truth soiling derate and the performance metric used in SRR. We have three different ways of estimating the soiling ratio: our baseline approach, FCSE, and BCSE, which are described in Section 3.2. In our estimates, we map negative values and values greater than one to zero and one, respectively. Then, we apply a rolling median with a window of one day.
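The post-processing just described is a one-liner in pandas; `sr` is assumed to be a time-indexed Series holding the raw modelled soiling ratio:

```python
# Clamp the modelled soiling ratio to [0, 1], then smooth it with a
# one-day rolling median, as described above.
sr_daily = sr.clip(lower=0.0, upper=1.0).rolling("1D").median()
```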
For computing the performance metric as in SRR, we rely again on the publicly available RdTools package [10]. We use as input aggregate daily values calculated on measurements taken between 12:00 and 14:00, with irradiance greater than \(500W/m^{2}\). We first compute the performance metric as the ratio of realized to modelled PV energy yield, where modelled PV energy yield is derived from a standard formula which is implemented in pvlib package [14]. Then, we perform a few processing steps as suggested in RdTools' tutorials1. We first normalize the time series with the expected power, we then apply default filters to remove clipping effects and outliers, and finally, we resample to one-day values.
Footnote 1: [https://rdtools.readthedocs.io/en/stable/examples/degradation_and_soiling_example_pvdaq_4.html](https://rdtools.readthedocs.io/en/stable/examples/degradation_and_soiling_example_pvdaq_4.html)
Let \(\mathcal{SD}\) be the soiling derate time series. We denote by \(\mathcal{PM}\) the performance metric used in SRR. In Figure 2, we plot our estimated soiling ratio, for all three models discussed in Section 3.2, the soiling derate and the performance metric used in SRR, for the site of Eugene. Compared to the other datasets, Eugene has periods of declining performance which are more apparent. PV modules at the Eugene site were cleaned on March 11, July 10, August 14, August 21, and August 26. No significant precipitation is observed during July and August, which leads to a rapid drop in the performance.
We also calculate the root-mean-square error (RMSE) between the soiling derate and each modelled ratio, for all three sites. Since no manual cleanings were performed in Golden, the baseline algorithm and BCSE cannot be executed there. We list these results in Table 2. It becomes evident, both from the RMSE values and from visual inspection of the figure, that our models derive a better estimation of the soiling ratio than the analytical-formula-based model employed by SRR, in a setting where a soiling tendency needs to be detected, nearly in real time, on newly incoming data. Further, BCSE compares favourably to FCSE and improves upon the baseline algorithm on the Eugene dataset. On the other hand, both the baseline algorithm and BCSE cannot be executed on the Golden dataset, due to the lack of manual cleanings. FCSE and BCSE present slightly different behaviors, rendering each potentially preferable in different real-world settings, depending on the exact objective of a solar park operator. Specifically, BCSE provides the most accurate approximation of the soiling ratio, and is thus preferable when small to medium soiling events are tolerable to the operator, as long as "false alarms" are minimised. On the other hand, FCSE, while slightly less accurate, is more sensitive in the detection of smaller (potential) soiling events, making it ideal when even small soiling events need to be handled. Finally, we can see that the formula used in SRR essentially predicts the majority of the considered period as soiling; a behavior of limited practical use in a real-world deployment scenario.
Figure 2: soiling ratio predicted by our models, and the performance metric used in SRR, for the Eugene dataset. FCSE with parameters \(w_{1}=10,w_{2}=5,w_{3}=10\) and BCSE with parameters \(w_{1}=5,w_{2}=10\).
#### 4.2.2 Required accuracy of regression models
We experimentally justify our choice of \(5\%\) as a threshold for the validation MAPE of our models in the methods FCSE (\(w_{1}=10,w_{2}=5,w_{3}=10\)) and BCSE (\(w_{1}=5,w_{2}=10\)), as discussed in Section 3.2. To be able to execute both methods for various thresholds, we employ them on the two datasets that are accompanied by manual cleaning information, i.e., Eugene and Cocoa. For both methods, we calculate the mean (over the two sites) RMSE against \(\mathcal{SD}\). Experiments in Table 3 indicate that the best result is obtained for \(5\%\) (or above) for BCSE.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
\multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**RMSE against \(\mathcal{SD}\)**} \\ \cline{2-4}
 & **Eugene** & **Cocoa** & **Golden** \\ \hline
Baseline & 0.006 & 0.006 & - \\ \hline
FCSE (\(w_{1}=2,w_{2}=1,w_{3}=2\)) & 0.010 & 0.006 & 0.008 \\ \hline
FCSE (\(w_{1}=10,w_{2}=5,w_{3}=10\)) & 0.007 & 0.008 & 0.008 \\ \hline
FCSE (\(w_{1}=30,w_{2}=10,w_{3}=30\)) & 0.009 & 0.007 & 0.008 \\ \hline
BCSE (\(w_{1}=1,w_{2}=2\)) & 0.008 & 0.006 & - \\ \hline
BCSE (\(w_{1}=5,w_{2}=10\)) & 0.005 & 0.007 & - \\ \hline
BCSE (\(w_{1}=10,w_{2}=30\)) & 0.007 & 0.007 & - \\ \hline
\(\mathcal{PM}\) used in SRR & 0.019 & 0.020 & 0.028 \\ \hline
\end{tabular}
\end{table}
Table 2: Evaluation.
Figure 3: Segmentation and estimated soiling ratio obtained by FCSE.
#### 4.2.3 Industrial use-case (absence of ground-truth)
In this section, we test our methods on the real-world dataset described in Section 4.1. First, we apply FCSE for the detection of cleaning events. We filter out rains with a maximum precipitation of at most 0.1 to remove noise. Figure 3 (resp. Figure 4) illustrates the cleaning events detected by FCSE (resp. BCSE) and our modelled soiling ratio. Within each interval defined by two consecutive changepoints, we compute a line using the Theil-Sen method [10, 11] on the estimated soiling ratio (at a 15-min granularity). The Theil-Sen method is a way of fitting a line to a set of points which is robust to outliers: the line is chosen by selecting the median slope over all lines defined by pairs of points. We plot the lines with negative slope as red dotted line segments lying in the corresponding intervals, over the course of 5 years. We also plot a smoothed version of our estimated soiling ratio, where we have applied a rolling median of 5 days.
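SciPy ships a Theil-Sen estimator, so the per-interval trend fitting can be sketched as below; `sr` is the estimated soiling ratio as a time-indexed Series and `changepoints` the ordered detected cleaning timestamps (names are illustrative):

```python
from scipy.stats import theilslopes

def segment_trends(sr, changepoints):
    """Fit a Theil-Sen line to the soiling ratio within each interval
    between consecutive changepoints; negative slopes mark soiling
    periods (the red dotted segments in Figs. 3-4)."""
    trends = []
    for t0, t1 in zip(changepoints[:-1], changepoints[1:]):
        seg = sr[t0:t1]
        if len(seg) < 2:
            continue
        # Elapsed time in seconds as the regressor.
        x = (seg.index - seg.index[0]).total_seconds().to_numpy()
        slope, _, _, _ = theilslopes(seg.to_numpy(), x)
        trends.append((t0, t1, slope))
    return trends
```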
In both figures, in almost all time periods defined by two consecutive changepoints, we observe a decreasing trend in the time series for the detected period, as indicated by the slope of the line fitted by Theil-Sen regression (red dotted line segments). This decreasing trend ends with a rain or a manual cleaning, illustrated by a blue vertical line, which is detected by our method as a cleaning event. This example is an _indication_ of the effectiveness and generalizability of the proposed method. Although the lack of labels prevents us from explicitly verifying the result, the identified trend is consistent with soiling and is corroborated by the effect of washing.
## 5 Conclusion
We have described a method for estimating the soiling ratio, which uses a set of easily accessed measurements from sensors that are commonly deployed in PV parks. Our method is data-driven, in the sense that it models the optimal performance by efficiently learning it from the data, without relying on generic formulas that fail to capture the peculiarities of the site.
Estimating the soiling ratio is useful for PV park administrators, since it allows them to schedule cleaning procedures more effectively by taking into account the rate of soil accumulation and the effectiveness of past cleaning efforts, without the need for frequent visual inspections or for installing specialized equipment that induces extra cost and maintenance effort.
Our method effectively estimates the soiling ratio in historical data. Possible future directions include extending our method to forecast future soiling losses, which would assist in deciding cleaning
Figure 4: Segmentation and estimated soiling ratio obtained by BCSE.
actions at short notice.
## 6 Acknowledgements
The authors were partially supported by the EU's Horizon 2020 Research and Innovation programme, under the grant agreement No. 957345: "MORE".
|
2306.10151 | Calculating the matrix profile from noisy data | The matrix profile (MP) is a data structure computed from a time series which
encodes the data required to locate motifs and discords, corresponding to
recurring patterns and outliers respectively. When the time series contains
noisy data then the conventional approach is to pre-filter it in order to
remove noise but this cannot apply in unsupervised settings where patterns and
outliers are not annotated. The resilience of the algorithm used to generate
the MP when faced with noisy data remains unknown. We measure the similarities
between the MP from original time series data with MPs generated from the same
data with noisy data added under a range of parameter settings including adding
duplicates and adding irrelevant data. We use three real world data sets drawn
from diverse domains for these experiments Based on dissimilarities between the
MPs, our results suggest that MP generation is resilient to a small amount of
noise being introduced into the data but as the amount of noise increases this
resilience disappears. | Colin Hehir, Alan F. Smeaton | 2023-06-16T19:41:07Z | http://arxiv.org/abs/2306.10151v1 | # Calculating the matrix profile from noisy data
## Abstract
The matrix profile (MP) is a data structure computed from a time series which encodes the data required to locate motifs and discords, corresponding to recurring patterns and outliers respectively. When the time series contains noisy data then the conventional approach is to pre-filter it in order to remove noise, but this cannot apply in unsupervised settings where patterns and outliers are not annotated. The resilience of the algorithm used to generate the MP when faced with noisy data remains unknown. We measure the similarities between the MP from original time series data and MPs generated from the same data with noisy data added under a range of parameter settings, including adding duplicates and adding irrelevant data. We use three real world data sets drawn from diverse domains for these experiments. Based on dissimilarities between the MPs, our results suggest that MP generation is resilient to a small amount of noise being introduced into the data, but as the amount of noise increases this resilience disappears.
## 1 Introduction
Two well-studied time series data mining tasks relate to motifs and discords [1]. A time series motif is a pair of previously unknown sequences in a time series, or sub-sequences of a longer time series, which are very similar to each other [2], while a time series discord is a sub-sequence of a long time series which is the most different from all the rest of the time series sub-sequences [3]. The matrix profile (MP) [4] is a data structure computed from a time series which records the distance to, as well as the location of, the nearest neighbour of every sub-sequence in a time series. The MP encodes all details required to provide a solution for the detection of motifs and discords. This makes the MP suitable for detecting both outliers and recurring patterns. While detecting these can be regarded as classical AI problems and can be addressed using other methods including machine learning, those other methods suffer from the curse of dimensionality or are complex and have multiple parameters to be adjusted [5]. Anomalies and patterns are important characteristics often studied in time series analysis. A similarity join [6] is a common technique for detecting such anomalies and patterns in a time series, but it is inefficient, whereas MP algorithms can significantly reduce computing time for these tasks [7].
The MP offers a solution to the detection of outliers and recurring patterns through efficient computation; it is able to consider sub-sequences of any length and has several advantages. It is an exact solution, producing no false positives or false negatives, and it provides exact solutions for motif discovery, discord
discovery, time series joins, etc. In contrast to other algorithms, which require building and tuning access methods, the matrix profile is essentially parameter-free and is space-efficient, with a space requirement that is linear in the time series length with a small constant, allowing the processing of massively large data sets. The MP can also leverage parallel hardware including multicore processors and GPUs [7]. It is domain agnostic and requires only one input parameter, the sub-sequence length \(m\). It has a time complexity of \(O(n^{2}\log{(n)})\) that is constant across sub-sequence lengths [4]. It can be re-computed incrementally as a time series grows and thus it can support real time applications [4]. A variation of the matrix profile algorithm called Motif Discovery with Missing Data (MDMS) has recently been introduced [8] with the ability to handle missing data, in that it can provide answers guaranteed to have no false negatives but which may have false positives, and we shall return to this point later.
The two main components of the matrix profile are a distance profile and a profile index. The distance profile is a vector of minimum Z-normalised Euclidean distances, while the profile index records the position of each sub-sequence's nearest neighbour, i.e. its most similar sub-sequence [4]. In summary, the steps for computing the matrix profile from a time series \(X\) are as follows:
1. Choose a subsequence length \(m\) that is appropriate for the application. This length should be smaller than the length of the time series \(X\) that is being analysed;
2. Compute the matrix profile, which is an array of length \(n-m+1\) where \(n\) is the length of the time series \(X\). The matrix profile contains the distances between each subsequence of length \(m\) in \(X\) and its nearest neighbour subsequence in \(X\).
3. Compute the matrix profile index, which is an array of length \(n-m+1\) that contains indices of the nearest neighbour subsequence for each subsequence in \(X\). This can be used to quickly retrieve nearest neighbour subsequences.
After computing the matrix profile and matrix profile index, they can be used to efficiently perform various time series data mining tasks including exact motif discovery, anomaly detection, and similarity search.
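As an illustration of these steps, the sketch below uses the open-source STUMPY library, one of several MP implementations (not necessarily the one used elsewhere in this paper), on a toy series.

```python
import numpy as np
import stumpy

T = np.random.rand(1000)  # toy time series X
m = 20                    # chosen subsequence length

mp = stumpy.stump(T, m)   # one row per subsequence: len(T) - m + 1 rows
profile = mp[:, 0]        # distances to each subsequence's nearest neighbour
index = mp[:, 1]          # matrix profile index (nearest-neighbour positions)

motif_idx = np.argmin(profile)    # lowest distance: top motif location
discord_idx = np.argmax(profile)  # highest distance: top discord location
```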
Fig 1 shows an example of some original time series data representing the volume of traffic in Dublin City Centre and the MP plot derived from that data. The motif is a repeated pattern in the original time series, with the matching regions showing low MP distance values, while the discord or anomaly is a mismatched region showing high MP distance values. Even where the MP distance value is non-zero, a localised MP minimum may be used to detect a near match, which is an essential characteristic of the MP [4].
There are numerous algorithms that compute the MP. The brute force approach of the Naive algorithm is inefficient and the current best-in-class is SCRIMP++ [9]. STAMP Incremental, or STAMPI, is also useful because it allows the MP to be incrementally maintained [4], which means it can be used in real time applications. Once an MP is generated there are techniques to extract the top-K repeated patterns/anomalies in a given time series using the MP data structure [10] and to perform segmentation analysis to allow navigation through the resultant MP.
Applications of the MP for time series data mining have already generated many insights [11]. One study discovered motifs using the MP in retail product sales time series and used them to analyse temporary sales correlations among products, indicating that customers' product preferences are not stable and change with time [12]. The MP has also been used in anomaly detection on IT operations time series data to address the issue of monitoring IT systems' Key Performance Indicators [13]. The MP has offered market analysis techniques for stock-market financial time
series data [14], while recent research has shown that in predicting COVID-19 cases, a hybrid of the MP and an attention-based long short-term memory (LSTM) model performed best when compared to other models [15].
The MP algorithm has enabled the discovery of motifs from time series of substantial lengths, where previously the memory and processing requirements obstructed exact motif search in time series with more than one hundred million data points [7]. This scalability characteristic of the MP algorithm is attractive, but its most useful and important feature is that it generalises across application domains and is application-agnostic [16].
One of the known drawbacks of the MP is its performance on large-scale sets of noisy data, such as occur in most natural applications [10], and that is the specific issue we focus on in this paper and where we make our contribution. Many time series in real world applications contain noise which can interfere with the generation of an accurate MP. In some forms of time series analysis, such as generating periodograms, algorithms such as the Lomb-Scargle [17] have been developed that are tolerant to unequally sampled data, to data sets with missing values and to data sets with other forms of noise. In such cases it is the tolerance of the algorithm itself that handles the noise in the data, but that is not the case for algorithms which generate the MP.
Earlier we mentioned that a very recent variation of the matrix profile algorithm called MDMS can handle missing data in the original time series by providing answers which are guaranteed to have no false negatives [8]. That paper acknowledges that there is no other algorithm that can find motifs in the way the matrix profile does in the presence of missing data. The paper generates pseudo missing data in the same way as we do here, and their definition of missing data covers random insertions (referred to as "spikes"), noise or corrupted data, and gaps in data capture corresponding to blocks of missing data. The work uses data from two case studies, seismological data and activity data, but the amount of data corruption is quite small, corresponding to the deletion of 50 individual data points and the removal of two blocks of data of length 25 values each from time series of the order of thousands of values. The performance of the MDMS algorithm is evaluated by examining the before and after matrix profiles and comparing the resulting graphs manually rather than in a quantitative way.
Prior to the very recent single example of work on developing an algorithm to handle missing or noisy data in generating the matrix profile mentioned above, the standard approach has been to eliminate the noise from, or to fill the gaps in, the data. A recent example of that approach can be seen in work by Berjab _et al._[18] where the authors concentrated on recovering missing data but did not deal with other forms of noisy data. That work focused on detecting false data injection attacks with missing data appearing with probabilities between 0.001 and 0.002 in two test datasets of 2.5 million and 20.9 million data points respectively. Our work here focuses on real world cases where noisy data can take many other forms and can occur much more frequently.

Fig 1: Sample time series data from Dublin city centre traffic flow with matrix profile illustrating motif and discord regions.
De Paepe _et al._[19] have recently applied noise elimination to real internet traffic time series data and subsequently detected anomalous behaviours by generating a matrix profile. Related work in [20] has demonstrated how the elimination of noise as a pre-process to MP generation can help in anomaly detection from noisy data. This was tested on the Numenta Anomaly Benchmark [21], a well-known collection of data sets which focus on detecting anomalies from time series which contain noisy data. The Numenta Anomaly Benchmark has recently been superseded by the more comprehensive ADBench [22], which has 57 data sets, each with different noise levels, and which benchmarks the effectiveness of 30 different algorithms for anomaly detection on noisy data. The noise filtering in [20] was achieved with the same overall computational complexity as MP generation but was tested on a time series of only 2,000 data values with synthetic noise added. Furthermore there was no investigation into the impact different amounts of noise have on the generation of the MP. It may be that MP generation has a tolerance to a certain amount of noise inherent in the data used to generate it that we do not yet know about, and that is what we investigate and report on here.
In the work in this paper we measure the effect of noise on standard MP generation without the overhead of pre-filtering as reported in [18, 19] and elsewhere. Our motivation is that pre-filtering noise from a time series may also dilute whatever anomalies, discords or motifs exist in the original data. In this paper we generate MPs from three data sets of different sizes and we artificially introduce noise at different levels of intensity to corrupt each data set, using noise creation techniques and parameters from ADBench [22]. The amounts and types of data corruption we introduce into the original datasets are far in excess of those reported in other work which uses either noise elimination or works with missing data [8]. We then re-generate MPs and compare the characteristics of MPs on clean data with the equivalent MPs from data with noise added, where the amounts and types of noise introduced to corrupt the data are more realistic than reported elsewhere, and that is the main contribution of this paper. This addresses the underlying research question of what the actual impact of noise of different types and different magnitudes is on MP outputs.
## 2 Materials and methods
We present details of three case studies where we apply the matrix profile in different domains to time series data from real-world scenarios and then we describe how we add noise to each data set.
### Case study 1: keystroke timings
Lifelogging involves gathering digital records or logs of a person's lifestyle, activities, and encounters during a typical day, in an automatic fashion [23]. Such data is collected by an individual and is not normally shared with others or made public. Lifelogs represent a personal record that may be analysed either directly by the individual gathering the data, or by others on their behalf [24]. This is done in order to observe long-term behavioural patterns and changes in terms of health, well-being or cognitive changes, as well as to facilitate retrieval of information from the individual's past [25].
One form of automatic lifelogging is keystroke dynamics which uses a software application, a keystroke logger, to collect timing information about every key pressed on a keyboard or mobile device when the individual has been typing [26]. Precise timing information on keystrokes, namely the time taken to type two adjacent characters, is
captured to the nearest millisecond. We can then examine inter-keystroke timings to determine differences among individuals or differences within the timings for an individual. The initial application of this analysis was in the area of user authentication based on the feature that every individual has unique keystroke timing patterns [27]. A more recent application is analysing a user's cognitive processes while typing, where we compare timings from an individual's baseline gathered over a period of time with the current dynamic [28] to determine writing fluency. In turn, writing fluency can reveal when the subject was pausing and revising their writing indicating revision rather than creation of new material.
Keystroke timing data was collected in a previous study using the Loggerman logging tool [29]. Timing information was obtained for one user for 2,522,186 characters typed over a 12-month period and the data is available at [30]. For privacy reasons the specific characters typed were anonymised via a random mapping which is consistent across the data set, permitting the extraction of character and bigram timing information. This typing data is already noisy due to occasional missing data: the Loggerman tool stops recording keystrokes when it suspects the user is about to enter a username, password or other confidential information, and it does this in a very conservative way. The keystroke timing information in [30] was processed to compute the time elapsed between all adjacent typed characters, and those bigrams typed more than 1,000ms apart were removed. In this paper we focus on the timing for the most frequently occurring bigram, which occurs 56,545 times when typed in less than 1,000ms during the several months of recording.
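A hedged sketch of this preparation step is shown below; the input names `keys` and `times_ms` are illustrative assumptions and do not reflect the actual Loggerman output schema.

```python
import numpy as np
import pandas as pd

def top_bigram_timings(keys, times_ms, max_gap_ms=1000):
    """Timing series for the most frequent bigram typed in < max_gap_ms."""
    gaps = np.diff(times_ms)  # elapsed time between adjacent keystrokes
    bigrams = [f"{a}|{b}" for a, b in zip(keys[:-1], keys[1:])]
    df = pd.DataFrame({"bigram": bigrams, "gap_ms": gaps})
    df = df[df.gap_ms < max_gap_ms]           # drop pauses of >= 1,000 ms
    top = df.bigram.value_counts().idxmax()   # most frequently occurring bigram
    return df.loc[df.bigram == top, "gap_ms"].to_numpy()
```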
To calculate the MP on the keystroke timing data the only parameter needed, in addition to the time series of 56,545 values, is the window size. Our choice of window size was a sequence of 20 characters, large enough to capture patterns while not being smaller than a potential sub-sequence pattern. Once generated, the matrix profile provides an array of z-normalised Euclidean distances to the nearest neighbour of each sub-sequence (i.e. the MP values), as well as other values such as the MP indices. Fig 2 shows the 56,545 keystroke timing occurrences and the MP for the most frequently occurring bigram.
### Case study 2: movement sensors on new-born calves
Improving efficiency in the area of animal management and livestock welfare has resulted in the emergence and use of precision agriculture technologies [31]. This includes the gathering of continuous data on the activities and behaviour of cattle which has great potential for both effective food production and improved animal welfare [32].
Wearable 3-D accelerometers can be used to monitor animal behaviour [33]. In the case of new-born calves, an accelerometer attached to a collar around the neck was used to measure movements such as walking, trotting and running. Locomotor play, a repetitive and exaggerated movement, is also demonstrable in young calves via behaviour consisting of jumping, bucking and running [33].

Fig 2: Occurrences and MP for the top occurring bigram
Raw data from a neck-worn AX3 accelerometer sensor [34] on new-born calves was used, where the sensor was worn from birth for several weeks by each calf. The data is available at [35]. Attributes from the movement data include the timestamp and the values for x-acceleration, y-acceleration, and z-acceleration. An additional movement attribute, independent of sensor orientation, was derived, known as the acceleration magnitude (\(A_{mag}\)), for use as a single time series for analysis [33], as shown in Eq 1. This eliminates the impact of rotation of the sensor around the neck of the calf as the collar rotates, and produces a measure of how quickly the velocity of the calf changes in any direction.
\[A_{mag}=\sqrt{{accel_{x}}^{2}+{accel_{y}}^{2}+{accel_{z}}^{2}} \tag{1}\]
The sampling frequency of the AX3 accelerometer was 12.5Hz, resulting in more than one hundred million data points from each calf's accelerometer over the logging period, which was approximately 6 weeks. Movement values were re-sampled to one-minute intervals by calculating the mean of the \(A_{mag}\) values contained within non-overlapping one-minute intervals. Similar to the case study on keystroke dynamics, a window size was selected to generate a MP for the acceleration magnitude. The raw movement data, aggregated to one sample every minute for the logging period, yielded 60,480 data points per calf, and a value of 60 was chosen for the window size when generating the MP, representing a span of one hour, in order to capture any recurring motifs and discords. For the purpose of generating the MP and adding noise, we used the movement data from one calf from the herd.
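The derivation of \(A_{mag}\) and the one-minute aggregation can be sketched as follows; the column names `ax`, `ay`, `az` and `timestamp` are illustrative assumptions rather than the actual AX3 export format.

```python
import numpy as np
import pandas as pd

def minute_a_mag(df):
    """Mean acceleration magnitude (Eq 1) over non-overlapping 1-min bins."""
    a_mag = np.sqrt(df["ax"] ** 2 + df["ay"] ** 2 + df["az"] ** 2)  # Eq (1)
    a_mag.index = pd.to_datetime(df["timestamp"])
    return a_mag.resample("1min").mean()  # 12.5 Hz samples -> 1 value/minute
```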
### Case study 3: city centre traffic volumes
The _Sydney Coordinated Adaptive Traffic System_ (SCATS) is used to collect traffic volume data across many cities worldwide, including Dublin, Ireland [36]. By also managing the timing of traffic signals to control traffic flow, SCATS acts as an intelligent transportation system. In operation, it detects vehicle presence in each lane at points on roads typically just before junctions, as shown in Fig 3, as well as counting the number of pedestrians waiting to cross a road at each site. The sensors are installed under the road surface as inductive loops and the data from all the sensors in a city feeds into a control system for the city's traffic management. While it has advantages, the SCATS data is noisy and it cannot provide insights into what is normal traffic behaviour or what are deviations from that normal behaviour. This results in reactive responses by control room staff who monitor data streams manually [37]. Since a labelled training data set is not available, it is not possible to detect traffic anomalies by building a classification model.
SCATS traffic volume data from January to May 2022 was provided for analysis by Dublin City Council and is available at
[https://data.gov.ie/dataset/dcc-scats-detector-volume-jan-jun-2022](https://data.gov.ie/dataset/dcc-scats-detector-volume-jan-jun-2022). A count of the volume of traffic on approaches to road junctions is an indicative representation of overall traffic flow within regions of the city. The accuracy of data from the sensors at each site cannot be guaranteed, due to faulty detectors or to sensor communication issues, thus making this data noisy. In order to indicate patterns in the flow of traffic for the city centre, as well as to combat data collection errors, the sum of total traffic volume for the city centre was computed per hour for the 5-month logging period. A window size of 24 was selected to generate the MP, representing a full day. The total amount of data consisted of 5 months of recording from 132 traffic sensors
with overall traffic volume sampled hourly yielding a total of 475,200 individual data values aggregated into a time series of 3,600 data points.
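A sketch of this hourly aggregation follows; the column names are illustrative assumptions, not the published SCATS schema.

```python
import pandas as pd

def hourly_city_volume(scats):
    """Sum detector counts over all 132 sites within each hour."""
    scats = scats.copy()
    scats["ts"] = pd.to_datetime(scats["timestamp"])
    return scats.set_index("ts")["volume"].resample("1h").sum()
```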
Fig 4 shows the time series of hourly traffic volume for Dublin city, showing a regular daily pattern, while Fig 5 shows the matrix profile for the traffic volume, with some discords and motifs.
### Adding noise to time series data
Since one of the objectives of the matrix profile is to pinpoint instances in a time series that deviate significantly from the time series as a whole, it is of interest to determine the limitations of the MP in terms of stability and robustness under different levels of data noise. As noted in [19] and [20], time series data from real-world applications usually suffer from noise and data corruption to some extent. However, the impact of this on the matrix profile has only been researched in relation to noise elimination as a pre-filter before generating a MP. This paper examines the impact of noise on the MP algorithm without pre-filtering.
Fig 3: Traffic signals and 132 SCATS sensor locations in Dublin city centre marked as red dots.

Fig 4: Traffic volume for Dublin city between January and May 2022.

The ADBench [22] is a comprehensive review of 30 algorithms for anomaly detection on 57 benchmark data sets and covers their performances under different levels of supervision, anomaly types, and noisy and corrupted data. For the noise and data corruption settings, ADBench considers three types of noise, namely duplicated anomalies, the insertion of irrelevant features, and annotation errors. Interestingly, these anomalies all involve additions to the time series data and not the removal of any data, but because that is what ADBench does, we will do likewise in this paper.
In an ideal situation, to evaluate the tolerance of the MP to noisy data, we would generate MPs from data sets with labelled anomalies in a supervised setting, add noise to these, recompute the MPs and compare the characteristics of the before and after MPs. Since the three data sets used in this study are unlabelled, we use workarounds similar to those used with the Numenta benchmark in [21].
ADBench defines duplicate anomalies as values likely to repeat multiple times in data, for reasons such as recording errors. We added duplicated data values to each of our three data sets using the same parameters as ADBench, namely that a randomly selected 5% of data values were denoted as anomalies and then duplicated up to 6 times in multiple runs. That means up to an additional 25% of data is added, as we add duplicates times 2, times 3, times 4, times 5 and times 6.
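The sketch below approximates this duplication procedure for illustration; the actual experiments used the ADBench code linked at the end of this section.

```python
import numpy as np

def add_duplicates(x, times, frac=0.05, seed=0):
    """Repeat a random `frac` of the values so each appears `times` in total."""
    rng = np.random.default_rng(seed)
    chosen = set(rng.choice(len(x), size=int(frac * len(x)),
                            replace=False).tolist())
    out = []
    for i, v in enumerate(x):
        out.append(v)
        if i in chosen:
            out.extend([v] * (times - 1))  # e.g. times=2 adds ~5% more data
    return np.asarray(out)
```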
For the irrelevant features noise type, ADBench indicates these are likely to be caused by measurement noise or inconsistent measuring units meaning that detecting anomalies such as discords and motifs in a time-series would be more difficult as they may be hidden. We added irrelevant features to each of our three real-world data sets using the same parameters as in ADBench. Irrelevant data points were randomly added values to each time series in stages with additions of 1%, 5%, 10%, 25% and up to 50% of the total data points. The added values came from generating features from the uniform distribution
\[\mathrm{Unif}\,(\min(\mathbf{X}),\max(\mathbf{X}))\]
where \(\min()\) and \(\max()\) refer to the minimum and maximum values in the time series \(X\), and where we assume data values in the time series have a normal distribution. These values are included in the original data: irrelevant data points were inserted at random positions without shuffling the order of the original data points. For adding noise to our three data sets we used the code available at
[https://github.com/Minqi824/ADBench/](https://github.com/Minqi824/ADBench/)
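A companion sketch for the irrelevant-features corruption is shown below; again this is an approximation for illustration, not the released ADBench code.

```python
import numpy as np

def add_irrelevant(x, frac, seed=0):
    """Insert frac*len(x) values drawn from Unif(min(X), max(X))."""
    rng = np.random.default_rng(seed)
    n_new = int(frac * len(x))                        # frac in {0.01, ..., 0.50}
    noise = rng.uniform(x.min(), x.max(), size=n_new)
    positions = rng.integers(0, len(x) + 1, size=n_new)
    # np.insert keeps the ordering of the original values intact.
    return np.insert(x, positions, noise)
```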
In order to measure the impact of noise on the data used to generate a MP, we compare the MPs generated from clean data and from data with noise added. In effect, the comparison of two MPs is the comparison of two resulting time series. The metrics we use to describe and compare MPs include the mean, maximum and minimum values of each MP, though we are aware that these descriptive statistics can hide much of the similarities or differences between MPs, as illustrated by Anscombe's Quartet [38].
Since the two MPs being compared will be of different lengths, because of the addition rather than removal of noise data to the original (increasing it by up to 25% of the total MP length in some cases), we use dynamic time warping [39] to account for matching under this constraint. The implementation we use is FastDTW, developed by Salvador and Chan [40], and the distance measure between the two MPs is the absolute difference between matched values. This is a dissimilarity measure: identical MPs yield a value of 0 and, as the value increases, it reflects increasingly dissimilar MPs.

Fig 5: MP values for Dublin city between January and May 2022.
Figure 6 shows a schematic of how MP dissimilarity is calculated for data which has noise from duplicated anomalies x2 added. The original MP has length \(x_{1}\) and the duplicates add another 5%, making the length \(x_{2}\). The MP for the original data is shown in blue, and the MP for the data with noise is shown in red. The lengths of the green lines (one for each data point in the original data) correspond to the absolute differences between corresponding values from the MPs as matched by the dynamic time warping algorithm. Each of the generated MPs has a maximum value, which is \(y_{1}\) in the case of the MP from the original data (in blue) and \(y_{2}\) in the case of the MP from the noisy data. To normalise the dissimilarity between the two MPs, we divide the sum of the absolute differences between corresponding values (the sum of the lengths of the green lines in Fig 6) by the number of values in the original time series and also by the maximum value of the MP from the original data, \(y_{1}\).
In FastDTW the radius parameter is used to approximate the exact DTW. If the radius is equal to the length of the time series being examined, then FastDTW is exact and equal to DTW. If the radius is less than the time series length then FastDTW is not as accurate as DTW but is more computationally efficient, though this has been questioned recently [41]. We selected a radius of 30 for FastDTW for all data, as a fixed-width radius is sufficient so long as it is "wide enough" to allow the insertion of duplicates and irrelevant features without disrupting the similarity computation.
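The normalised dissimilarity of Fig 6 can be sketched with the fastdtw package as follows; this is a minimal illustration consistent with the description above, not the exact evaluation script.

```python
import numpy as np
from fastdtw import fastdtw

def mp_dissimilarity(mp_orig, mp_noisy, radius=30):
    """Sum of absolute differences along the warping path, normalised by
    the length and the maximum value (y1) of the original MP."""
    dist, _path = fastdtw(mp_orig, mp_noisy, radius=radius,
                          dist=lambda a, b: abs(a - b))
    return dist / (len(mp_orig) * float(np.max(mp_orig)))
```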
We now present the results from generating MPs on each of the original time series data sets and on MPs where noise has been added. There is good variety in the lengths of these time series, with sizes of 3,600 (traffic), 56,545 (keystrokes) and 60,480 (calf movements).
Fig 6: Schematic of DTW dissimilarity calculation and normalisation between MPs.
## 3 Results
We added noise to the original data sets consisting of duplicated anomalies up to 6 times and irrelevant features up to 50%, regenerated MPs for the 10 different parameter settings and in Tables 1, 2 and 3 we compare the MPs from noisy data against the MP for the original data, for each data set.
Other work which has examined the impact of noise on the matrix profile has done so by directly comparing the two matrix profile graphs, one before and one after corrupting the time series data [8]. In our work, rather than "eyeball" the two MP graphs, which would be unwieldy because of their sizes (56,545, 60,480 and 3,600 data points respectively), we extract quantitative characteristics of the before and after matrix profile graphs, and this is part of the novelty of this paper. In addition to presenting the mean, maximum and minimum values of the before and after matrix profiles, as well as the sum of the absolute differences between corresponding MP values, in Table 4 we present the normalised dissimilarities for each noise parameter setting and for each data set.
\begin{table}
\begin{tabular}{c|l|c|c|c|c} \hline \hline
**Signal** & **Type** & \(\Sigma\) **abs diffs** & **Mean** & **Max** & **Min** \\ & & **MP values** & & & \\ \hline \multirow{6}{*}{Keystrokes} & Original Matrix Profile & 0 & 1.87 & 3.63 & 0.33 \\ & Duplicated Anomaly \(\times\) 2 & **9,306** & 1.88 & 3.71 & 0.37 \\ & Duplicated Anomaly \(\times\) 3 & 10,669 & 1.88 & 3.58 & 0.36 \\ & Duplicated Anomaly \(\times\) 4 & 11,856 & 1.86 & 3.61 & 0.34 \\ & Duplicated Anomaly \(\times\) 5 & 12,820 & 1.86 & 3.58 & 0.39 \\ & Duplicated Anomaly \(\times\) 6 & 14,053 & 1.86 & 3.56 & 0.39 \\ \hline \multirow{6}{*}{Keystrokes} & Irrelevant Features - 1\% & 9,683 & 1.80 & 3.64 & 0.30 \\ & Irrelevant Features - 5\% & **15,333** & 1.54 & 3.76 & 0.33 \\ \cline{1-1} & Irrelevant Features - 10\% & 19,528 & 1.39 & 3.85 & 0.38 \\ \cline{1-1} & Irrelevant Features - 25\% & 25,787 & 1.42 & 3.86 & 0.42 \\ \cline{1-1} & Irrelevant Features - 50\% & 27,368 & 1.66 & 3.2 & 0.47 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Matrix Profile value changes for keystroke timing data (N=56,545) with noisy data added under different parameter settings.
\begin{table}
\begin{tabular}{c|l|c|c|c|c} \hline \hline
**Signal** & **Type** & \(\Sigma\) **abs diffs** & **Mean** & **Max** & **Min** \\ & & **MP values** & & & \\ \hline \multirow{6}{*}{Calf \(A_{Mag}\)} & Original Matrix Profile & 0 & 5.10 & 8.28 & 0.63 \\ & Duplicated Anomaly \(\times\) 2 & **23,585** & 5.41 & 8.05 & 0.06 \\ & Duplicated Anomaly \(\times\) 3 & 31,467 & 5.71 & 8.15 & 0.09 \\ & Duplicated Anomaly \(\times\) 4 & 38,424 & 5.93 & 8.21 & 2.32 \\ & Duplicated Anomaly \(\times\) 5 & 50,321 & 6.04 & 8.21 & 2.79 \\ & Duplicated Anomaly \(\times\) 6 & 51,830 & 6.19 & 8.27 & 2.45 \\ \hline \multirow{6}{*}{Calf \(A_{Mag}\)} & Irrelevant Features - 1\% & 19,811 & 5.09 & 8.09 & 0.05 \\ & Irrelevant Features - 5\% & **32,825** & 5.46 & 7.90 & 0.06 \\ \cline{1-1} & Irrelevant Features - 10\% & 47,781 & 5.92 & 7.85 & 2.10 \\ \cline{1-1} & Irrelevant Features - 25\% & 97,028 & 6.68 & 7.96 & 3.88 \\ \cline{1-1} & Irrelevant Features - 50\% & 117,820 & 7.01 & 8.13 & 5.12 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Matrix Profile value changes for calf movement data (N=60,480) with noisy data added under different parameter settings.
## 4 Discussion
By introducing noise into a time series, whether or not that noise constitutes anomalies, the generated MP will be disrupted, and the noise itself will be detected as anomalies because the MP cannot distinguish between noise and real data. In particular, duplicates introduced as noise will appear in an MP as a pattern and will strongly disrupt the generated MP.
The results in Tables 1, 2 and 3, as well as in Table 4, present several insights which we discuss in turn. Before examining these we point out that, at the high end of the data corruption, the amount of noise we introduce into the time series is exceptionally large and not likely to occur in any useful real world application. Our reason for going to these extreme noise levels is to discover the resilience of the MP generation algorithm at all noise levels, from minor to very large. Thus in examining the results, the most important are those where we introduce duplicate anomalies x2 and irrelevant features at 1% or 5%.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Data set** & **Keystrokes** & **Calf \(A_{Mag}\)** & **Traffic** \\ N = & 56,545 & 60,480 & 3,600 \\ \hline
**MP Length** & 56,545 & 60,480 & 3,600 \\
**Maximum value** & 3.63 & 8.28 & 2.25 \\ \hline Duplicated Anomaly \(\times\) 2 & 0.045 & 0.047 & 0.214 \\ Duplicated Anomaly \(\times\) 3 & **0.051** & **0.062** & **0.336** \\ Duplicated Anomaly \(\times\) 4 & 0.057 & 0.076 & 0.369 \\ Duplicated Anomaly \(\times\) 5 & 0.062 & 0.100 & 0.477 \\ Duplicated Anomaly \(\times\) 6 & 0.068 & 0.103 & 0.580 \\ \hline Irrelevant Features - 1\% & 0.047 & 0.039 & 0.094 \\ Irrelevant Features - 5\% & **0.074** & **0.065** & **0.260** \\ Irrelevant Features - 10\% & 0.095 & 0.095 & 0.333 \\ Irrelevant Features - 25\% & 0.125 & 0.193 & 0.563 \\ Irrelevant Features - 50\% & 0.133 & 0.235 & 1.434 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Normalised dissimilarities between MPs generated from clean data and from data with various noise parameter settings, for each data set.
\begin{table}
\begin{tabular}{c|l|c|c|c|c} \hline \hline
**Signal** & **Type** & \(\Sigma\)**abs diffs** & **Mean** & **Max** & **Min** \\ & & **MP values** & & & \\ \hline \multirow{6}{*}{Traffic} & Original Matrix Profile & 0 & 0.34 & 2.25 & 0.13 \\ & Duplicated Anomaly \(\times\) 2 & **1,739** & 0.91 & 4.09 & 0.13 \\ & Duplicated Anomaly \(\times\) 3 & 2,724 & 1.39 & 4.23 & 0.20 \\ & Duplicated Anomaly \(\times\) 4 & 2,992 & 1.70 & 4.49 & 0.20 \\ & Duplicated Anomaly \(\times\) 5 & 3,864 & 1.99 & 4.52 & 0.26 \\ & Duplicated Anomaly \(\times\) 6 & 4,701 & 2.17 & 4.66 & 0.48 \\ \hline \multirow{6}{*}{Traffic} & Irrelevant Features - 1\% & 764 & 0.54 & 3.29 & 0.13 \\ & Irrelevant Features - 5\% & **2,109** & 1.08 & 3.96 & 0.16 \\ \cline{1-1} & Irrelevant Features - 10\% & 2,705 & 1.59 & 3.84 & 0.20 \\ \cline{1-1} & Irrelevant Features - 25\% & 4,567 & 2.47 & 4.46 & 0.61 \\ \cline{1-1} & Irrelevant Features - 50\% & 11,617 & 3.17 & 4.46 & 1.34 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Matrix Profile value changes for traffic movement data (N=3,600) with noisy data added under different parameter settings.
The introduction of irrelevant features has a greater impact on generated MPs than the introduction of duplicate values, making the MPs more dissimilar to the MP of the original data. This occurs for each data set and arises because this noise parameter pollutes the original time series extensively, adding up to 50% additional noisy data. We also observe that MPs for noisy traffic data (N=3,600) are more dissimilar to the MP from the original data than those for the keystrokes data (N=56,545), which in turn are less dissimilar than those for the calf data (N=60,480). This means that the longer the time series, the more dissimilar the resulting MPs according to FastDTW. This can be seen when comparing dissimilarity values across columns in Table 4, as well as in the sums of the absolute values of the differences between points in the MPs in Tables 1, 2 and 3, though these are not normalised.
In experiments where we introduce duplicated anomalies x2, these duplicates appear in 5% of the data and thus should be equivalent to the introduction of irrelevant features at 5%. However, the resulting MPs are not equivalently dissimilar to the MP from the original data, with large differences in the results for the keystroke (9,306 vs. 15,333), calf movement (23,585 vs. 32,825) and traffic (1,739 vs. 2,109) data sets (these results are bolded in Tables 1, 2 and 3 and in Table 4 for convenience). The reason for this is that the introduction of a duplicate introduces a pattern, the duplicate itself, which is detected by the MP as a motif, whereas when introducing an irrelevant feature there is no pattern and hence no MP motif. This is shown by the maximum values in the traffic MP being higher as duplicates are added, appearing as discords: the maximum value in the MP goes from 2.25 for the original data to 4.09 when duplicated anomalies x2 are added.
The calf data appears to be quite regular anyway, so the maximum value of 8.28 in the original MP did not change much when duplicates and irrelevant noise features were added. The high minimum value of 0.63 in that MP indicates there were already some patterns in the calf data, as calves have movement habits tied to their 24 h circadian rhythm. When either kind of noise was introduced, the patterns in the original data disappeared in the resulting MPs, with the minimum value dropping from 0.63 for the original data to 0.06 and 0.05 when duplications x2 and irrelevant features at 1% were added respectively. As more noise was added, the noise itself formed a pattern, shown by the minimum values increasing with more and more noise added. For the keystrokes data in Table 1, the minimum values were not affected by the addition of either kind of noise, so the MPs maintained their detection of patterns, and the maximums remained approximately the same.
In an unsupervised setting, where we do not have annotations or ground truth in the data to work with, we cannot pre-filter anomalies, unlike the approaches taken in [19, 20] or in [18]. Thus determining the capacity of the MP generation algorithm itself to work with noisy data is important. Our results have shown that when the amount of noise in the original data is minor, such as duplicates occurring at 5% or irrelevant features at 1%, the generated MPs are dissimilar to the MPs from the original data in approximate proportion to the amount of noise. As more and more duplicates are added, the generated MPs become more and more dissimilar to the original MP, but not to a significant extent; this is true for the keystrokes and calf data but not for the traffic data. An equivalent observation about adding irrelevant features cannot be made from Table 4, because of the volume of noise added, up to 50% at the extreme level, for any of the data sets. For the traffic data, when an additional 50% of irrelevant data is added, the dissimilarity value between the MPs from the original and the noisy data has risen to 1.434. The explanation for this is that the MP from the noisy data is so very different from the original MP that the DTW algorithm, even for a comparatively short time series of 3,600 values, cannot detect any equivalent pairs across the MPs.
## 5 Conclusions
The matrix profile has proved to be a useful tool, making most time series data mining tasks intuitive and requiring less effort compared to other methods, such as using a dimensionality-reduced representation via a brute force approach. The MP produces no false negatives or false positives for motif and discord discovery, time series joins and classification via shapelet discovery, for example, because it provides an exact solution. The ability to use the MP in a simple manner without any tuning, apart from selecting the window size, makes it relatively parameter-free.
This paper has presented three case studies where the matrix profile has enabled the identification of discords and motifs in relation to human typing behaviour, movement of new-born calves and vehicular traffic flow in a city centre. The contribution of the paper is to demonstrate the effect of introducing various forms of data corruption into the time series data and to show that such data corruption leads to the generation of MPs which are similar to the MPs from the original data, provided the amount of noise is small, and only for some data sets. Our results have also shown that once the amount of noise increases, the generated MPs are very different from the original MPs. If the original time series is short (N=3,600 in our case) then introducing even a small amount of noise leads to a more significant change in the generated MP.
The experimental results in this paper provide some direction to encourage further development of algorithms for generating the matrix profile where there are known to be different forms of data corruption, including missing single values and blocks of data, insertion of irrelevant data and transposition of one data value to another. Further work in this area should also examine the impact of noise on generating an incrementally updated matrix profile, the impact of noise on hyper-large time series, and possibly the introduction of other types of noise into the time series data.
## Acknowledgements
CH received no specific funding for this work. The work of AS was part-funded by Science Foundation Ireland under grant number SFI/12/RC/2289_P2, co-funded by the European Regional Development Fund. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
|
2305.09150 | Transmutation operators and complete systems of solutions for the radial
Bicomplex Vekua equation | The construction of a pair of transmutation operators for the radial main
Vekua equation with a Bicomplex-valued coefficient is presented. The pair of
operators transform the Bicomplex analytic functions into the solutions of the
main Vekua equation. The analytical properties of the operators in the space of
classical solutions and the pseudoanalytic Bergman space are established. With
the aid of such operators, a complete system of solutions for the main Vekua
equation, called radial formal powers, is obtained. The completeness of the
radial formal powers is proven with respect to the uniform convergence on
compact subsets and in the Bicomplex pseudoanalytic Bergman space with the
$L_2$-norm. For the pseudoanalytic Bergman space on a disk, we show that the
radial formal powers are an orthogonal basis, and a representation for the
Bergman kernel in terms of such basis is obtained. | Víctor A. Vicente-Benítez | 2023-05-16T03:58:41Z | http://arxiv.org/abs/2305.09150v1 | # Transmutation operators and complete systems of solutions for the radial Bicomplex Vekua equation
###### Abstract
The construction of a pair of transmutation operators for the radial main Vekua equation with a Bicomplex-valued coefficient is presented. The pair of operators transforms the Bicomplex analytic functions into the solutions of the main Vekua equation. The analytical properties of the operators in the space of classical solutions and the pseudoanalytic Bergman space are established. With the aid of such operators, a complete system of solutions for the main Vekua equation, called radial formal powers, is obtained. The completeness of the radial formal powers is proven with respect to the uniform convergence on compact subsets and in the Bicomplex pseudoanalytic Bergman space with the \(L_{2}\)-norm. For the pseudoanalytic Bergman space on a disk, we show that the radial formal powers are an orthogonal basis, and a representation for the Bergman kernel in terms of such basis is obtained.
**Keywords:** Bicomplex main Vekua equation, transmutation operators, pseudoanalytic functions, Bergman spaces, complete systems of solutions.
**MSC Classification:** 30E10; 30G20; 30H20; 35A22; 35A35; 35C10.
## 1 Introduction
In this work, we study the main Vekua equation
\[\overline{\boldsymbol{\partial}}W(z)=\frac{\overline{\boldsymbol{\partial}}f( z)}{f(z)}\overline{W}(z),\quad z\in\Omega, \tag{1}\]
where \(\Omega\subset\mathbb{C}\) is a bounded domain star-shaped with respect to \(z=0\), \(f\) is a complex valued function that does not vanish in the whole domain \(\Omega\) and only depends on the radial component \(r=|z|\), and \(\overline{\boldsymbol{\partial}}:=\frac{1}{2}\left(\frac{\partial}{\partial x }+\mathbf{j}\frac{\partial}{\partial y}\right)\) is the Bicomplex Cauchy-Riemann operator. The conjugation \(\overline{W}\) is respect to the Bicomplex unit \(\mathbf{j}\). Solutions of Eq. (1) are called _pseudoanalytic functions_ (or _generalized analytic functions_) whose theory was introduced in [5, 35] for the complex case. Bicomplex Vekua equations appear, for example, in the study
of the Dirac system with fixed energy and with a scalar potential in the two-dimensional case [10, 12]. Solutions of (1) are closely related to the solutions of the two-dimensional Schrodinger equation. Indeed, if \(W=u+{\bf j}v\) is a Bicomplex solution of (1), where \(u\) and \(v\) are scalar complex valued functions, then \(u\) is a solution of the Schrodinger equation \(\triangle u-qu=0\) with \(q=\frac{\triangle f}{f}\), and \(v\) is a solution of the Darboux transformed equation [23, 24]. In [8], a generalization of the Bers-Kravchenko theory developed in [5, 24] to the case of Bicomplex functions is presented.
Theory of transmutation operators, also called transformation operators, is a widely used tool in studying partial differential equations, in particular, in the study of Schrodinger and the main Vekua equations [3, 10, 11, 25, 26, 28, 30]. In the case of the radial Schrodinger equation, S. Bergman (in [4] for the case of an analytic potential) and R. Gilbert and K. Atkinson (in [21] for \(C^{1}\)-potentials) showed that any classical solution is the image of a harmonic function \(h\) under the action of the integral operator
\[{\bf T}_{f}h(z)=h(z)+\int_{0}^{1}\sigma G^{f}(r,1-\sigma^{2})h(\sigma^{2}z)d\sigma,\]
where \(G^{f}\) is a \(C^{2}\) function. In [30], the properties of the operator \({\bf T}_{f}\) are studied, and it is shown that \({\bf T}_{f}\) is a transmutation operator that relates the operators \(r^{2}(\triangle-q)\) and \(r^{2}\triangle\).
The aim of this paper is to construct a pair of transmutation operators that relates the main Vekua operator \(\overline{\boldsymbol{\partial}}-\frac{\overline{\boldsymbol{\partial}}f}{f}C\) (where \(C\) is the Bicomplex conjugation) with the Bicomplex Cauchy-Riemann operator. Following the ideas presented in [10, 28, 26] (where the Vekua equation with a function \(f\) that only depends on the variable \(x\) is studied), we find a pair of relations between the operator \({\bf T}_{f}\) and its Darboux transformed operator, that is, the operator \({\bf T}_{\frac{1}{f}}\) for the associated Darboux transformed radial Schrodinger equation. With the aid of such relations, we construct a pair of transmutation operators \(\mathcal{T}_{f}\) and \(\mathcal{T}_{\frac{1}{f}}\) whose scalar and vectorial parts are given precisely by \({\bf T}_{f}\) and \({\bf T}_{\frac{1}{f}}\). As a consequence, a complete system of solutions for (1), which we call the _Bicomplex radial formal powers_, is obtained by transmuting the Bicomplex powers \(\widehat{z}^{n}:=(x+{\bf j}y)^{n}\), with \(x,y\in\mathbb{R}\), \(n\in\mathbb{N}_{0}\). The completeness of the formal powers in the topology of uniform convergence on compact subsets of \(\Omega\) is shown. In the same way, the properties of the pseudoanalytic Bergman space of \(L_{2}\)-solutions of (1) are established. We establish the completeness of the pseudoanalytic Bergman space and the existence of a reproducing Bergman kernel. These results generalize those obtained in [17, 9] for the case of the complex Vekua equation. We show that the operators \(\mathcal{T}_{f}\) and \(\mathcal{T}_{\frac{1}{f}}\) are bounded in the Bergman space, and we obtain conditions on the domain \(\Omega\) under which the radial formal powers are a complete system in the \(L_{2}\)-norm. Furthermore, in the particular case when \(\Omega\) is a disk, the radial formal powers are an orthogonal basis.
The paper is structured as follows. Section 2 presents an overview of the main results concerning the algebra of Bicomplex numbers and Bicomplex holomorphic functions. In section 3, the basic properties of the solutions of the main Vekua equation are presented, together with the main properties of the pseudoanalytic Bergman space and the corresponding Bergman reproducing kernel. In section 4 we present the construction of a pair of transmutation operators which transmute the Bicomplex holomorphic functions into solutions of the radial main Vekua equation. The analytical properties of the operators are studied, both in the topology of uniform convergence on compact subsets, and in the pseudoanalytic
Bergman space. Section 5 presents the construction of the radial formal powers. It is shown that this system is complete, both in the topology of the uniform convergence on compact subsets and in the pseudoanalytic Bergman space with the \(L_{2}\)-norm. In the particular case when the domain \(\Omega\) is a disk, it is shown that the formal powers are an orthogonal basis for the pseudoanalytic Bergman space.
## 2 Bicomplex analytic functions
This section summarizes the main properties of Bicomplex numbers and Bicomplex valued analytic functions of one complex variables, which can be found, for example, in [8, 22, 24, 32, 34].
### Basic properties
Consider the set \(\mathbb{C}^{2}\) with the usual addition and multiplication by a scalar. The product of two elements \(W=(w_{1},w_{2}),Z=(z_{1},z_{2})\in\mathbb{C}^{2}\) is defined as follows: \(WZ:=(w_{1}z_{1}-w_{2}z_{2},w_{1}z_{2}+w_{2}z_{1})\). With this product, \(\mathbb{C}^{2}\) is a commutative algebra over \(\mathbb{C}\) with unit element \((1,0)\), which is called the algebra of the _Bicomplex numbers_ and is denoted by \(\mathbb{B}\). Denote \(\mathbf{j}:=(0,1)\). Identifying \(a\in\mathbb{C}\) with \((a,0)\), we have that any \(W\in\mathbb{B}\) can be written in a unique form as \(W=u+\mathbf{j}v\), with \(u,v\in\mathbb{C}\). We call \(u\) and \(v\) the _Scalar_ and _Vectorial_ parts of \(W\), respectively, which are denoted by \(\operatorname{Sc}W=u\) and \(\operatorname{Vec}W=v\). We have that \(\mathbf{j}^{2}=-1\) and \(i\mathbf{j}=\mathbf{j}i\). The bicomplex conjugate of \(W\) is defined by \(\overline{W}:=u-\mathbf{j}v\). Note that \(W\overline{W}=u^{2}+v^{2}\in\mathbb{C}\). The algebra \(\mathbb{B}\) contains zero divisors. Indeed, consider
\[\mathbf{k}:=i\mathbf{j},\qquad\mathbf{p}^{\pm}:=\frac{1}{2}(1\pm\mathbf{k}),\]
hence \(\mathbf{p}^{+}\mathbf{p}^{-}=0\), \((\mathbf{p}^{\pm})^{2}=\mathbf{p}^{\pm}\) and \(\mathbf{p}^{+}+\mathbf{p}^{-}=1\). The set of the zero divisors of \(\mathbb{B}\) is denoted by \(\sigma(\mathbb{B})\). The following proposition summarizes some properties of the Bicomplex numbers shown in [8, 32].
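As a quick verification of these identities (a worked computation of ours, not from the source), note that \(\mathbf{k}^{2}=(i\mathbf{j})^{2}=i^{2}\mathbf{j}^{2}=(-1)(-1)=1\), so

\[\mathbf{p}^{+}\mathbf{p}^{-}=\tfrac{1}{4}(1+\mathbf{k})(1-\mathbf{k})=\tfrac{1}{4}(1-\mathbf{k}^{2})=0,\qquad(\mathbf{p}^{\pm})^{2}=\tfrac{1}{4}(1\pm 2\mathbf{k}+\mathbf{k}^{2})=\tfrac{1}{2}(1\pm\mathbf{k})=\mathbf{p}^{\pm}.\]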
**Proposition 1**: _Let \(W,V\in\mathbb{B}\)._
1. \(W\in\sigma(\mathbb{B})\) _iff_ \(W\overline{W}=0\)_._
2. _If_ \(W\overline{W}\neq 0\)_,_ \(W\) _is invertible and_ \(W^{-1}=\frac{\overline{W}}{W\overline{W}}\)_._
3. _There exist unique_ \(W^{\pm}\in\mathbb{C}\) _such that_ \(W=\mathbf{p}^{+}W^{+}+\mathbf{p}^{-}W^{-}\)_. Furthermore,_ \(W^{\pm}=\operatorname{Sc}W\mp i\operatorname{Vec}W\)_._
4. \(W\in\sigma(\mathbb{B})\) _iff_ \(W=\mathbf{p}^{+}W^{+}\) _or_ \(W=\mathbf{p}^{-}W^{-}\)_._
5. _The product_ \(WV\) _can be written as_ \[WV=\mathbf{p}^{+}W^{+}V^{+}+\mathbf{p}^{-}W^{-}V^{-}.\] (2) _In particular_ \[W^{n}=\mathbf{p}^{+}(W^{+})^{n}+\mathbf{p}^{-}(W^{-})^{n},\quad\forall n\in \mathbb{N}.\] (3)
Denote \(\mathcal{R}(\mathbb{B}):=\mathbb{B}\setminus\sigma(\mathbb{B})\). Thus, by Proposition 1(1)-(2), \(W\in\mathcal{R}(\mathbb{B})\) iff \(W\) is invertible, and from (2) we have
\[W^{-1}=\frac{\mathbf{p}^{+}}{W^{+}}+\frac{\mathbf{p}^{-}}{W^{-}}. \tag{4}\]
From this, formula (3) remains valid for all \(n\in\mathbb{Z}\).
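For concreteness, here is a small worked example (ours, not from the source): take \(W=2+\mathbf{j}\), so that \(W\overline{W}=2^{2}+1^{2}=5\) and \(W^{-1}=\frac{2-\mathbf{j}}{5}\). In the idempotent representation \(W^{\pm}=2\mp i\), and (4) gives

\[W^{-1}=\frac{\mathbf{p}^{+}}{2-i}+\frac{\mathbf{p}^{-}}{2+i}=\mathbf{p}^{+}\frac{2+i}{5}+\mathbf{p}^{-}\frac{2-i}{5},\]

whose scalar and vectorial parts are \(\frac{2}{5}\) and \(-\frac{1}{5}\), recovering \(\frac{2-\mathbf{j}}{5}\). For \(W\in\mathbb{B}\) define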
\[W^{\dagger}:=\mathbf{p}^{+}(W^{+})^{*}+\mathbf{p}^{-}(W^{-})^{*}, \tag{5}\]
where \({}^{*}\) denotes the standard complex conjugation. Operation \(W\mapsto W^{\dagger}\) is an involution in \(\mathbb{B}\)[32, Ch. I]. For \(W,V\in\mathbb{B}\) define
\[\langle W,V\rangle_{\mathbb{B}}:=\operatorname{Sc}WV^{\dagger}=\frac{W^{+}(V^ {+})^{*}+W^{-}(V^{-})^{*}}{2}.\]
It is not difficult to see that \(\langle W,V\rangle_{\mathbb{B}}\) is a complex inner product in \(\mathbb{B}\), and we have the norm
\[|W|_{\mathbb{B}}:=\sqrt{\langle W,W\rangle_{\mathbb{B}}}=\sqrt{\frac{|W^{+}|^{ 2}+|W^{-}|^{2}}{2}}.\]
where \(|W^{\pm}|\) denotes the complex absolute value. The following inequality is immediate
\[\frac{1}{\sqrt{2}}|W^{\pm}|\leqslant|W|_{\mathbb{B}}\leqslant\frac{1}{\sqrt{2 }}\left(|W^{+}|+|W^{-}|\right). \tag{6}\]
**Remark 2**: _A direct computation shows the following relations_
\[W^{\dagger} = (\operatorname{Sc}W)^{*}-\mathbf{j}\,(\operatorname{Vec}W)^{*} \tag{7}\] \[\langle W,V\rangle_{\mathbb{B}} = (\operatorname{Sc}W)(\operatorname{Sc}V)^{*}+(\operatorname{Vec}W )(\operatorname{Vec}V)^{*},\] (8) \[|W|_{\mathbb{B}} = \sqrt{|\operatorname{Sc}W|^{2}+|\operatorname{Vec}W|^{2}},\] (9) \[WV^{\dagger} = \langle W,V\rangle_{\mathbb{B}}+\mathbf{j}\langle W,\mathbf{j}V \rangle_{\mathbb{B}}\] (10) \[|WV|_{\mathbb{B}} \leqslant \sqrt{2}|W|_{\mathbb{B}}|V|_{\mathbb{B}}. \tag{11}\]
_Hence the Bicomplex product is a continuous operation in \(\mathbb{B}\) and \(\mathcal{R}(\mathbb{B})\) is open [8, Prop. 3]._
The exponential of a Bicomplex number \(W\) is defined as follows [8]
\[e^{W}:=\mathbf{p}^{+}e^{W^{+}}+\mathbf{p}^{-}e^{W^{-}}. \tag{12}\]
The Bicomplex exponential satisfies the properties: (i) \(e^{W+V}=e^{W}e^{V}\) for all \(W,V\in\mathbb{B}\); (ii) \(e^{W}\in\mathcal{R}(\mathbb{B})\) and \((e^{W})^{-1}=e^{-W}\) for all \(W\in\mathbb{B}\) (see [8]).
### Bicomplex analytic functions
Let \(\Omega\subset\mathbb{C}\) be a bounded domain. Consider a Bicomplex-valued function of the complex variable \(z=x+iy\in\Omega\), \(F:\Omega\to\mathbb{B}\). Hence \(F=f_{1}+\mathbf{j}f_{2}\), where \(f_{1}(z)=\operatorname{Sc}F(z)\) and \(f_{2}(z)=\operatorname{Vec}F(z)\). By Proposition 1(3) we can write \(F=\mathbf{p}^{+}F^{+}+\mathbf{p}^{-}F^{-}\) where \(F^{\pm}=f_{1}\mp if_{2}\). We say that \(F\in C^{k}(\Omega;\mathbb{B})\) (\(k\in\mathbb{N}_{0}\)) iff \(f_{1},f_{2}\in C^{k}(\Omega)\). Similarly, if \(1\leqslant p\leqslant\infty\), \(F\in L_{p}(\Omega;\mathbb{B})\)
iff \(f_{1},f_{2}\in L_{p}(\Omega)\). This is equivalent to the condition \(F^{\pm}\in L_{p}(\Omega)\). For the case \(p=2\), \(L_{2}(\Omega;\mathbb{B})\) is a complex Hilbert space with the inner product and the norm
\[\langle F,G\rangle_{L_{2}(\Omega;\mathbb{B})}:=\iint_{\Omega}\langle F(z),G(z) \rangle_{\mathbb{B}}dA_{z},\quad\|F\|_{L_{2}(\Omega;\mathbb{B})}=\left(\iint_{ \Omega}|F(z)|_{\mathbb{B}}^{2}dA_{z}\right)^{\frac{1}{2}} \tag{13}\]
Since \(L_{2}(\Omega;\mathbb{B})\) can be regarded as the product space \(L_{2}(\Omega)\times L_{2}(\Omega)\), and \(L_{2}(\Omega)\) is separable [6, Sec. 4.3], then \(L_{2}(\Omega;\mathbb{B})\) is a separable complex Hilbert space. The Sobolev spaces \(W^{2,k}(\Omega;\mathbb{B})\), with \(k\in\mathbb{N}\), are defined in a similar way. Given \(z=x+iy\) with \(x,y\in\mathbb{R}\), we denote
\[\widehat{z}:=x+\mathbf{j}y\in\mathbb{B}. \tag{14}\]
Note that if \(z,z_{0}\in\mathbb{C}\), then
\[\widehat{z}-\widehat{z_{0}}=\mathbf{p}^{+}(z^{*}-z_{0}^{*})+\mathbf{p}^{-}(z-z_{0}). \tag{15}\]
Denote \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). From Proposition 1(5) we have
\[\left(\widehat{z}-\widehat{z_{0}}\right)^{n}=\mathbf{p}^{+}(z^{*}-z_{0}^{*})^ {n}+\mathbf{p}^{-}(z-z_{0})^{n},\quad\forall n\in\mathbb{N}_{0}. \tag{16}\]
**Remark 3**:
* _A direct computation shows that_ \(\mathbb{C}\ni z\mapsto\widehat{z}\in\mathbb{B}\) _is a monomorphism of algebras, so_ \(\{x+\mathbf{j}y\,|\,x,y\in\mathbb{R}\}\) _is a field isomorphic to_ \(\mathbb{C}\)_, and_ \(\widehat{z}^{\dagger}=\overline{\widehat{z}}=\widehat{z^{*}}\)_._
* _Using (_15_) and (_12_), it is straightforward to show that_ \(e^{\widehat{z}}=e^{x}\left(\cos(y)+\mathbf{j}\sin(y)\right)\)_. In particular_ \[e^{\mathbf{j}\theta}=\cos(\theta)+\mathbf{j}\sin(\theta),\quad\forall\theta \in\mathbb{R}.\] (17)
The Bicomplex _Cauchy-Riemann_ operators are defined by
\[\boldsymbol{\partial}:=\frac{1}{2}\left(\frac{\partial}{\partial x}-\mathbf{j }\frac{\partial}{\partial y}\right),\quad\overline{\boldsymbol{\partial}}:= \frac{1}{2}\left(\frac{\partial}{\partial x}+\mathbf{j}\frac{\partial}{ \partial y}\right). \tag{18}\]
The following relations hold
\[\overline{\boldsymbol{\partial}}=\mathbf{p}^{+}\frac{\partial}{\partial z}+ \mathbf{p}^{-}\frac{\partial}{\partial z^{*}},\quad\boldsymbol{\partial}= \mathbf{p}^{+}\frac{\partial}{\partial z^{*}}+\mathbf{p}^{-}\frac{\partial}{ \partial z}, \tag{19}\]
where \(\frac{\partial}{\partial z}:=\frac{1}{2}\left(\frac{\partial}{\partial x}-i \frac{\partial}{\partial y}\right)\) and \(\frac{\partial}{\partial z^{*}}:=\frac{1}{2}\left(\frac{\partial}{\partial x}+i \frac{\partial}{\partial y}\right)\) are the usual complex Cauchy-Riemann operators (see [8]). A function \(W\in C^{1}(\Omega;\mathbb{B})\) is called \(\mathbb{B}\)-holomorphic or \(\mathbb{B}\)-analytic (anti-holomorphic or anti-analytic) if \(\overline{\boldsymbol{\partial}}W=0\) (\(\boldsymbol{\partial}W=0\)).
**Remark 4**:
* _From Proposition_ 1_(_5_) and (_19_), a function_ \(W\in C^{1}(\Omega;\mathbb{B})\) _is_ \(\mathbb{B}\)_-holomorphic iff_ \(W^{+}\) _is anti-holomorphic and_ \(W^{-}\) _is holomorphic in the complex sense (and_ \(\mathbb{B}\)_-anti-holomorphic iff_ \(W^{+}\) _is holomorphic and_ \(W^{-}\) _is anti-holomorphic)._
* _By (_19_) and (_16_), the powers_ \(\{(\widehat{z}-\widehat{z_{0}})^{n}\}_{n=0}^{\infty}\) _are_ \(\mathbb{B}\)_-holomorphic and_ \(\boldsymbol{\partial}(\widehat{z}-\widehat{z_{0}})^{n}=n(\widehat{z}-\widehat{z_{0}})^{n-1}\)_. In a similar way, the powers_ \(\{(\widehat{z}^{\dagger}-\widehat{z_{0}}^{\dagger})^{n}\}_{n=0}^{\infty}\) _are_ \(\mathbb{B}\)_-anti-holomorphic. By (_12_),_ \(e^{\widehat{z}}\) _is_ \(\mathbb{B}\)_-holomorphic._
The space \(C(\Omega;\mathbb{B})\) can be endowed with a standard topology as follows. Let \(\{K_{n}\}_{n=1}^{\infty}\) be a sequence of compact subsets of \(\Omega\) such that: (I) \(K_{n}\subset\operatorname{Int}K_{n+1}\)1, \(\forall n\in\mathbb{N}\); (II) \(\Omega=\bigcup_{n=1}^{\infty}K_{n}\).
Footnote 1: \(\operatorname{Int}K_{n}\) denotes the interior of the set \(K_{n}\)
One can choose, for example, \(K_{n}=\left\{z\in\Omega\,|\,\operatorname{dist}(z,\partial\Omega)\geqslant \frac{1}{n}\right\}\cap B_{n}^{\mathbb{C}}(0)\). The topology of \(C(\Omega;\mathbb{B})\) is generated by the family of semi-norms \(\left\{\|F\|_{n}:=\max_{z\in K_{n}}|F(z)|_{\mathbb{B}}\right\}_{n=1}^{\infty}\). With this topology \(C(\Omega;\mathbb{B})\) is a Frechet space. Properties (I) and (II) imply that for any compact \(K\subset\Omega\), there exists \(N\in\mathbb{N}\) such that \(K\subset K_{N}\). Then the convergence in \(C(\Omega;\mathbb{B})\) is induced by the uniform convergence on compact subsets of \(\Omega\). The topology is independent of the choice of \(\{K_{n}\}_{n=1}^{\infty}\) (for the proof of these facts see [14, Ch. VII, Sec. 1] or [19, Prop. 4.39]).
**Remark 5**: _Using Remark 2 and (6), it is not difficult to see that a sequence \(\{W_{n}\}\subset C(\Omega;\mathbb{B})\) converges to \(W\in C(\Omega;\mathbb{B})\) in this topology iff \(\{\operatorname{Sc}W_{n}\}\) and \(\{\operatorname{Vec}W_{n}\}\) converge uniformly on compact subsets of \(\Omega\) to \(\operatorname{Sc}W\) and \(\operatorname{Vec}W\), respectively (or equivalently, if \(\{W_{n}^{\pm}\}\) converges uniformly on compact subsets to \(W^{\pm}\))._
**Proposition 6**: _Denote by \(\operatorname{Hol}(\Omega;\mathbb{B})\) the class of \(\mathbb{B}\)-holomorphic functions in \(\Omega\)._
* \(\operatorname{Hol}(\Omega;\mathbb{B})\) _is a closed subspace of_ \(C(\Omega;\mathbb{B})\) _and_ \(\boldsymbol{\partial}:\operatorname{Hol}(\Omega;\mathbb{B})\to\operatorname{Hol }(\Omega;\mathbb{B})\) _is a continuous operator._
* \(W\in\operatorname{Hol}(\Omega;\mathbb{B})\) _iff for all_ \(z_{0}\in\Omega\)_,_ \(W\) _can be written as its Taylor series in the disk_ \(B_{R}^{\mathbb{C}}(z_0)\subset\Omega\)_, with_ \(R=\operatorname{dist}(z_{0},\partial\Omega)\)_, that is,_ \[W(z)=\sum_{n=0}^{\infty}\frac{\boldsymbol{\partial}^{n}W(z_{0})}{n!}(\hat{z}-\hat{z_{0}})^{n},\quad z\in B_{R}^{\mathbb{C}}(z_{0}),\] (20) _and the series converges absolutely and uniformly on compact subsets of_ \(B_{R}^{\mathbb{C}}(z_{0})\)_._
**Proof.**
* That \(\operatorname{Hol}(\Omega;\mathbb{B})\) is closed follows from Remark 4(i) and the fact that complex holomorphic and anti-holomorphic functions are closed in \(C(\Omega)\). The continuity of \(\boldsymbol{\partial}\) follows from (19) and the corresponding continuity of the complex Cauchy-Riemann operators in the spaces of complex holomorphic and anti-holomorphic functions (see [14, Ch. VII, Sec. 2]).
* It follows from the fact that \(W^{+}\) and \(W^{-}\) are complex holomorphic and anti-holomorphic, from (19) and from Remark 5.
### The Bicomplex analytic Bergman space
We introduce the Bicomplex analytic Bergman space in \(\Omega\) as
\[\mathcal{A}^{2}(\Omega;\mathbb{B}):=\left\{W\in\operatorname{Hol}(\Omega; \mathbb{B})\,|\,W\in L_{2}(\Omega;\mathbb{B})\right\}. \tag{21}\]
Let \(W\in\mathcal{A}^{2}(\Omega;\mathbb{B})\). By Remark 4(i) and the definition of \(L_{2}(\Omega;\mathbb{B})\), \(W^{+}\in\overline{\mathcal{A}}^{2}(\Omega)\) and \(W^{-}\in\mathcal{A}^{2}(\Omega)\), where \(\mathcal{A}^{2}(\Omega)\) and \(\overline{\mathcal{A}}^{2}(\Omega)\) are the complex analytic and anti-analytic Bergman spaces. Then \(\mathcal{A}^{2}(\Omega;\mathbb{B})\) can be regarded as the direct sum of the complex Hilbert spaces \(\mathcal{A}^{2}(\Omega)\oplus\overline{\mathcal{A}}^{2}(\Omega)\). Thus, \(\mathcal{A}^{2}(\Omega;\mathbb{B})\) is a complex Hilbert space, and since \(\mathcal{A}^{2}(\Omega;\mathbb{B})\) is a subspace of the separable complex Hilbert space \(L_{2}(\Omega;\mathbb{B})\), then \(\mathcal{A}^{2}(\Omega;\mathbb{B})\) is separable [6, Prop. 3.25]. For each \(z\in\Omega\), a direct computation shows that \(W(z)\) can be written in terms of the Bergman kernels \(L_{\Omega}(z,\zeta)\) and \(K_{\Omega}(z,\zeta)\) of \(\overline{\mathcal{A}}^{2}(\Omega)\) and \(\mathcal{A}^{2}(\Omega)\) as follows
\[W(z)=\iint_{\Omega}W(\zeta)\left(\mathbf{p}^{+}L_{\Omega}(z,\zeta)+\mathbf{p}^{-}K_{\Omega}(z,\zeta)\right)dA_{\zeta}.\]
Then the function \(\mathscr{K}_{\Omega}(z,\zeta):=\mathbf{p}^{+}L_{\Omega}(z,\zeta)+\mathbf{p}^{-}K_{\Omega}(z,\zeta)\) has similar properties to a reproducing Bergman kernel. In particular, for \(\Omega=\mathbb{D}\), with \(\mathbb{D}:=B_{1}^{\mathbb{C}}(0)\), we obtain
\[\mathscr{K}_{\mathbb{D}}(z,\zeta)=\mathbf{p}^{+}\frac{1}{\pi(1-z^{*}\zeta)^{2}}+\mathbf{p}^{-}\frac{1}{\pi(1-z\zeta^{*})^{2}}=\frac{1}{\pi}\left(1-\hat{z}\,\hat{\zeta}^{\dagger}\right)^{-2}. \tag{22}\]
**Proposition 7**: _Consider \(\Omega=B_{R}^{\mathbb{C}}(0)\), for some \(R>0\). Then the powers \(\{\hat{z}^{n},\mathbf{j}\hat{z}^{n}\}_{n=0}^{\infty}\) form an orthogonal basis for the complex Hilbert space \(\mathcal{A}^{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})\)._
**Proof.** Similar to the complex case [18, Ch. I], using the characterization of the inner product in \(\mathbb{B}\) in terms of the scalar and vectorial part, a simple computation with polar coordinates shows the following relations for each \(n,m\in\mathbb{N}_{0}\)
\[\langle\widehat{z}^{n},\mathbf{j}\widehat{z}^{m}\rangle_{L_{2}(B_{R}^{ \mathbb{C}}(0);\mathbb{B})}=0,\quad\langle\Lambda\widehat{z}^{n},\Lambda \widehat{z}^{m}\rangle_{L_{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})}=\frac{2\pi R ^{n+m+2}}{n+m+2}\delta_{m,n},\ \ \mbox{where }\Lambda\in\{1,\mathbf{j}\}. \tag{23}\]
In particular \(\left\|\Lambda\hat{z}^{n}\right\|_{L_{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})}=\sqrt{\frac{\pi}{n+1}}R^{n+1}\), \(n\in\mathbb{N}_{0}\), \(\Lambda\in\{1,\mathbf{j}\}\). The proof of completeness is similar to that of the classical complex case given in [18, Ch. I], adapted to the inner product (13) and using representation (20).
The corresponding orthonormal basis is given by \(\{e_{n}(z;R),\mathbf{j}e_{n}(z;R)\}_{n=0}^{\infty}\), where
\[e_{n}(z;R):=\sqrt{\frac{n+1}{\pi}}\frac{\widehat{z}^{n}}{R^{n+1}}. \tag{24}\]
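As a quick numerical sanity check of (23) (an illustration added here, not part of the proof), note that \(\widehat{z}^{k}=r^{k}e^{k\mathbf{j}\theta}=r^{k}\left(\cos(k\theta)+\mathbf{j}\sin(k\theta)\right)\), so the \(\mathbb{B}\)-inner product of the integrands reduces to \(r^{n+m}\cos((n-m)\theta)\), which can be integrated by quadrature in polar coordinates. A minimal sketch in Python, assuming only numpy (the radius \(R\) and the grid sizes are arbitrary choices):

```python
# Numerical check of (23): <zhat^n, zhat^m> over the disk of radius R equals
# 2*pi*R^(n+m+2)/(n+m+2) when n == m and vanishes otherwise.
import numpy as np

def inner(n, m, R=1.5, Nr=800, Nt=800):
    # Midpoint rule in polar coordinates; the B-inner product of the
    # integrands reduces to r^(n+m) * cos((n-m)*theta).
    r = (np.arange(Nr) + 0.5) * (R / Nr)
    t = (np.arange(Nt) + 0.5) * (2.0 * np.pi / Nt)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    integrand = rr ** (n + m) * np.cos((n - m) * tt) * rr  # extra rr: area element
    return integrand.sum() * (R / Nr) * (2.0 * np.pi / Nt)

R = 1.5
print(inner(2, 2, R), 2 * np.pi * R**6 / 6)  # agree up to quadrature error
print(inner(2, 3, R))                        # approximately zero
```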
**Remark 8**: _The system \(\{e_{n}(z;R),\mathbf{j}e_{n}(z;R)\}_{n=0}^{\infty}\) is an orthonormal basis for the_ **complex** _Hilbert space \(\mathcal{A}^{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})\). If \(W\in\mathcal{A}^{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})\), due to (10) we have_
\[W(z)=\sum_{n=0}^{\infty}\left\{\iint_{B_{R}^{\mathbb{C}}(0)}W(\zeta)\left(e_{n}(\zeta;R)\right)^{\dagger}dA_{\zeta}\right\}e_{n}(z;R).\]
_In this sense \(\{e_{n}(z;R)\}_{n=0}^{\infty}\) is an orthonormal "basis" if we consider Bicomplex coefficients. Actually, the spaces \(L_{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})\) and \(\mathcal{A}^{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})\) are \(\mathbb{B}\)-modules and it is possible to introduce a \(\mathbb{B}\)-valued inner product taking the integral \(\iint_{B_{R}^{\mathbb{C}}(0)}W(z)V^{\dagger}(z)dA_{z}\) (note that \(\langle W,V\rangle_{L_{2}(B_{R}^{\mathbb{C}}(0);\mathbb{B})}\) is the scalar part of this integral). It is possible to develop a theory for \(\mathbb{B}\)-valued Hilbert spaces, see, e.g., [22, 32, 34]._
## 3 The Bicomplex Vekua equation
### Main results about the Vekua equation
Let \(\Omega\subset\mathbb{C}\) be a bounded domain and let \(f\in C^{2}(\Omega)\) be a scalar function that does not vanish in \(\Omega\) (i.e., \(f(z)\neq 0\) for all \(z\in\Omega\)). Set \(q_{f}=\frac{\triangle f}{f}\), where \(\triangle:=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\) is the Laplacian operator. The study of the Schrodinger equation
\[\left(\triangle-q_{f}(z)\right)u(z)=0,\quad z\in\Omega, \tag{25}\]
for the case when \(f\) is real-valued leads us to the following factorization (see [23])
\[\frac{1}{4}\left(\triangle-q_{f}(z)\right)\varphi(z)=\left(\frac{\partial}{ \partial z^{*}}+\frac{f_{z}}{f}C\right)\left(\frac{\partial}{\partial z}-\frac {f_{z}}{f}C\right)\varphi(z)=\left(\frac{\partial}{\partial z}+\frac{f_{z^{*} }}{f}C\right)\left(\frac{\partial}{\partial z^{*}}-\frac{f_{z^{*}}}{f}C \right)\varphi(z),\]
valid for real-valued functions \(\varphi\in C^{2}(\Omega)\) (here, \(f_{z}:=\frac{\partial f}{\partial z}\), \(f_{z^{*}}:=\frac{\partial f}{\partial z^{*}}\), and \(C\) denotes the operator of complex conjugation). In the case of a complex-valued function \(f\), it is known that the factorization is given in terms of Bicomplex-valued functions and the Bicomplex Cauchy-Riemann operators.
**Theorem 9** ([24], Th. 148): _Let \(f\in C^{2}(\Omega)\) be a scalar complex-valued function. Then for any scalar complex-valued function \(\varphi\in C^{2}(\Omega)\), the following equalities hold_
\[\frac{1}{4}\left(\triangle-q_{f}(z)\right)\varphi(z)=\left(\overline{\mathbf{\partial}}+\frac{\mathbf{\partial}f}{f}C_{\mathbb{B}}\right) \left(\mathbf{\partial}-\frac{\mathbf{\partial}f}{f}C_{ \mathbb{B}}\right)\varphi(z)=\left(\mathbf{\partial}+\frac{\overline {\mathbf{\partial}}f}{f}C_{\mathbb{B}}\right)\left(\overline{\mathbf{\partial}}-\frac{\overline{\mathbf{\partial}}f}{f}C_{ \mathbb{B}}\right)\varphi(z), \tag{26}\]
_where \(C_{\mathbb{B}}\) denotes the operator of Bicomplex conjugation._
The **main Bicomplex Vekua equation** associated to \(f\) is given by
\[\overline{\mathbf{\partial}}W(z)=\frac{\overline{\mathbf{ \partial}}f(z)}{f(z)}\overline{W}(z),\quad z\in\Omega. \tag{27}\]
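For orientation, it is instructive to separate (27) into scalar and vector parts (a direct computation, added here for the reader). Writing \(W=u+\mathbf{j}v\) with \(u=\operatorname{Sc}W\) and \(v=\operatorname{Vec}W\), equation (27) is equivalent to the system

\[f\frac{\partial}{\partial x}\left(\frac{u}{f}\right)=\frac{1}{f}\frac{\partial}{\partial y}\left(fv\right),\qquad f\frac{\partial}{\partial y}\left(\frac{u}{f}\right)=-\frac{1}{f}\frac{\partial}{\partial x}\left(fv\right),\]

a generalized Cauchy-Riemann system. Cross-differentiation gives \(\operatorname{div}\left(f^{2}\nabla\frac{u}{f}\right)=0\) and \(\operatorname{div}\left(f^{-2}\nabla(fv)\right)=0\), which, where \(f\neq 0\), are equivalent to \((\triangle-q_{f})u=0\) and \((\triangle-q_{\frac{1}{f}})v=0\), respectively; this is the content of Theorem 10 below.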
We consider classical solutions \(W\in C^{1}(\Omega;\mathbb{B})\) of (27). These functions are a special kind of pseudoanalytic functions (see [24, Ch. II]). The following result holds.
**Theorem 10** ([24], Th. 150): _Let \(W\in C^{1}(\Omega;\mathbb{B})\). If \(W\) is a solution of (27), then \(u=\operatorname{Sc}W\) is a solution of (25), and \(v=\operatorname{Vec}W\) is a solution of the Darboux associated equation_
\[\left(\triangle-q_{\frac{1}{f}}(z)\right)v(z)=0,\quad\text{ with }q_{\frac{1}{f}}=2\frac{f_{x}^{2}+f_{y}^{2}}{f^{2}}-q_{f}=f\triangle\, \left(\frac{1}{f}\right), \tag{28}\]
_where \(f_{x}=\frac{\partial f}{\partial x}\) and \(f_{y}=\frac{\partial f}{\partial y}\)._
The potential \(q_{\frac{1}{f}}\) is called the Darboux transformation of \(q_{f}\) [33]. Denote the space of solutions of the main Vekua equation (27) by
\[\operatorname{V}_{f}(\Omega;\mathbb{B}):=\left\{W\in C^{1}(\Omega;\mathbb{B})\,|\,W\text{ is a solution of (27)}\right\}. \tag{29}\]
**Theorem 11** ([8], Th.13): _The space \(\operatorname{V}_{f}(\Omega;\mathbb{B})\) is closed in \(C(\Omega;\mathbb{B})\)._
### The pseudoanalytic Bergman space
We introduce the pseudoanalytic Bergman space associated to the main Vekua equation (27) as follows
\[\mathcal{A}_{f}^{2}(\Omega;\mathbb{B}):=\left\{W\in\mathrm{V}_{f}(\Omega; \mathbb{B})\,|\,W\in L_{2}(\Omega;\mathbb{B})\right\}. \tag{30}\]
When \(f\equiv 1\) we obtain the analytic Bergman space \(\mathcal{A}^{2}(\Omega;\mathbb{B})\). The following theorem generalizes the proof given in [17] for the complex pseudoanalytic Bergman space, that is, with \(f\) real-valued (see also [9] for the proof in the case of the real-valued Bergman pseudoanalytic space).
**Theorem 12**: _Let \(\Omega\subset\mathbb{C}\) be a bounded domain and let \(f\in C^{1}(\overline{\Omega})\) be a function that does not vanish in \(\Omega\). Then \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) is a complex Hilbert space._
**Proof.** Consider the Bicomplex Theodorescu operator (introduced in [8]) given by
\[T_{\Omega}\Phi(z):=\mathbf{p}^{+}B_{\Omega}\Phi^{+}(z)+\mathbf{p}^{-}A_{\Omega}\Phi^{-}(z),\quad\Phi\in L_{2}(\Omega;\mathbb{B}),\]
where \(A_{\Omega}g(z):=\dfrac{1}{\pi}\iint_{\Omega}\dfrac{g(\zeta)}{\zeta-z}dA_{\zeta}\) and \(B_{\Omega}g(z):=\dfrac{1}{\pi}\iint_{\Omega}\dfrac{g(\zeta)}{\zeta^{*}-z^{*}} dA_{\zeta}\) for \(g\in L_{2}(\Omega)\). It is known that \(A_{\Omega},B_{\Omega}\in\mathcal{B}(L_{2}(\Omega),W^{1,2}(\Omega))\) (see, e.g., [2, Prop. 5.2.1]). Thus, \(T_{\Omega}\in\mathcal{B}\left(L_{2}(\Omega;\mathbb{B}),W^{1,2}(\Omega;\mathbb{ B})\right)\). Furthermore, it is known that \(\frac{\partial}{\partial z}A_{\Omega}g=g\) and \(\frac{\partial}{\partial z^{*}}B_{\Omega}g=g\) for \(g\in L_{2}(\Omega)\) ([2, Prop. 5.2.1]). Hence
\[\overline{\boldsymbol{\partial}}T_{\Omega}W=\mathbf{p}^{+}\frac{\partial}{\partial z^{*}}B_{\Omega}W^{+}+\mathbf{p}^{-}\frac{\partial}{\partial z}A_{\Omega}W^{-}=W\quad\text{ for }\;W\in L_{2}(\Omega;\mathbb{B}).\]
By the Weyl Lemma [16, Ch. 18, Cor. 4.11], if \(W\in L_{2}(\Omega;\mathbb{B})\) satisfies \(\overline{\boldsymbol{\partial}}W=0\) in the weak sense (that is, \(\frac{\partial W^{+}}{\partial z}=\frac{\partial W^{-}}{\partial z^{*}}=0\) in the weak sense), then \(W\in\mathrm{Hol}(\Omega;\mathbb{B})\). Take \(W\in L_{2}(\Omega;\mathbb{B})\) and consider
\[H=W-T_{\Omega}[\alpha_{f}\overline{W}],\quad\text{where }\alpha_{f}:=\dfrac{ \overline{\boldsymbol{\partial}}f}{f}.\]
Since \(f\in C^{1}(\overline{\Omega})\), \(\alpha_{f}\overline{W}\in L_{2}(\Omega;\mathbb{B})\) and hence \(H\in L_{2}(\Omega;\mathbb{B})\). Suppose that \(W\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\). Then
\[\overline{\boldsymbol{\partial}}H=\overline{\boldsymbol{\partial}}W-\alpha_{f }\overline{W}=0. \tag{31}\]
Thus, \(H\in\mathcal{A}^{2}(\Omega;\mathbb{B})\). Conversely, if \(H\in\mathcal{A}^{2}(\Omega;\mathbb{B})\), Eq. (31) implies that \(W\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\). Now consider a sequence \(\{W_{n}\}\subset\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) such that \(W_{n}\to W\) in \(L_{2}(\Omega;\mathbb{B})\) for some \(W\in L_{2}(\Omega;\mathbb{B})\). Then \(H_{n}=W_{n}-T_{\Omega}[\alpha_{f}\overline{W}_{n}]\in\mathcal{A}^{2}(\Omega;\mathbb{B})\). Since \(T_{\Omega}\in\mathcal{B}(L_{2}(\Omega;\mathbb{B}))\), then \(H_{n}\to H=W-T_{\Omega}[\alpha_{f}\overline{W}]\) in \(L_{2}(\Omega;\mathbb{B})\). Hence \(H\in\mathcal{A}^{2}(\Omega;\mathbb{B})\), which implies that \(W\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\). Therefore \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) is closed in \(L_{2}(\Omega;\mathbb{B})\) and, in consequence, a complex Hilbert space.
**Remark 13**: _Since \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) is a subspace of the separable Hilbert space \(L_{2}(\Omega;\mathbb{B})\), then \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) is a separable Hilbert space [6, Prop. 3.25]._
**Proposition 14**: _Suppose that \(f\in C^{1}(\overline{\Omega})\cap C^{2}(\Omega)\cap W^{2,\infty}(\Omega)\). For any compact \(K\subset\Omega\) there exists a constant \(C_{K}>0\) such that_
\[\max_{z\in K}|W(z)|_{\mathbb{B}}\leqslant C_{K}\|W\|_{L_{2}(\Omega;\mathbb{B}) },\quad\forall W\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B}). \tag{32}\]
**Proof.** Let \(W\in{\cal A}_{f}^{2}(\Omega;\mathbb{B})\) and \(K\subset\Omega\) be compact. Set \(u=\operatorname{Sc}W\) and \(v=\operatorname{Vec}W\). Note that \(u,v\in L_{2}(\Omega)\). By Theorem 10, \(u\) is a solution of (25) and \(v\) a solution of (28). Since \(f\in W^{2,\infty}(\Omega)\), \(q_{f},q_{\frac{1}{f}}\in L_{\infty}(\Omega)\), and according to [20] and [31, Sec. 2], there exist constants \(C_{1},C_{2}>0\) such that
\[\max_{z\in K}|u(z)|\leqslant C_{1}\|u\|_{L_{2}(\Omega)},\quad\text{and}\quad\max_{z\in K}|v(z)|\leqslant C_{2}\|v\|_{L_{2}(\Omega)}.\]
Taking \(C_{3}=\max\{C_{1},C_{2}\}\) we obtain that for any \(z\in K\)
\[|W(z)|_{\mathbb{B}}\leqslant|u(z)|+|v(z)|\leqslant C_{3}(\|u\|_{L_{2}(\Omega)}+\|v\|_{L_{2}(\Omega)})\leqslant 2C_{3}\|W\|_{L_{2}(\Omega;\mathbb{B})}.\]
Thus, \(\max_{z\in K}|W(z)|_{\mathbb{B}}\leqslant C_{K}\|W\|_{L_{2}(\Omega;\mathbb{B})}\), with \(C_{K}=2C_{3}\).
Under the conditions of Proposition 14, for any \(z\in\Omega\), the functionals \({\cal A}_{f}^{2}(\Omega;\mathbb{B})\ni W\mapsto\operatorname{Sc}W(z), \operatorname{Vec}W(z)\in\mathbb{C}\) are bounded. Then by the Riesz representation theorem there exist kernels \(K_{z},L_{z}\in{\cal A}_{f}^{2}(\Omega;\mathbb{B})\) such that
\[\operatorname{Sc}W(z)=\iint_{\Omega}\langle W(\zeta),K_{z}(\zeta)\rangle_{ \mathbb{B}}dA_{\zeta},\quad\operatorname{Vec}W(z)=\iint_{\Omega}\langle W( \zeta),L_{z}(\zeta)\rangle_{\mathbb{B}}dA_{\zeta}.\]
These results are similar to those obtained in [9] for the complex-valued Bergman pseudo-analytic space.
**Remark 15**: _The kernels \(K_{z}(\zeta)\) and \(L_{z}(\zeta)\) satisfy the relations_
\[\operatorname{Sc}K_{z}(\zeta)=(\operatorname{Sc}K_{\zeta}(z))^{*},\quad \operatorname{Vec}L_{z}(\zeta)=(\operatorname{Vec}L_{\zeta}(z))^{*},\quad \operatorname{Sc}L_{z}(\zeta)=(\operatorname{Vec}K_{\zeta}(z))^{*} \tag{33}\]
_Indeed, since \(K_{z}\in{\cal A}_{f}^{2}(\Omega;\mathbb{B})\) we have_
\[\operatorname{Sc}K_{z}(\zeta)=\langle K_{z},K_{\zeta}\rangle_{L_{2}(\Omega; \mathbb{B})}=\langle K_{\zeta},K_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}^{*}=( \operatorname{Sc}K_{\zeta})^{*}\,(z).\]
_The other two equalities are proved analogously._
Using (33), for any \(W\in{\cal A}_{f}^{2}(\Omega;\mathbb{B})\) we obtain the following relation
\[W(z) =\operatorname{Sc}W(z)+\mathbf{j}\operatorname{Vec}W(z)\] \[=\iint_{\Omega}\left(\operatorname{Sc}W(\zeta)(\operatorname{Sc} K_{z}(\zeta))^{*}+\operatorname{Vec}W(\zeta)(\operatorname{Vec}K_{z}(\zeta))^{*} \right)dA_{\zeta}\] \[\quad+\mathbf{j}\iint_{\Omega}\left(\operatorname{Sc}W(\zeta)( \operatorname{Sc}L_{z}(\zeta))^{*}+\operatorname{Vec}W(\zeta)(\operatorname{Vec }L_{z}(\zeta))^{*}\right)dA_{\zeta}\] \[=\iint_{\Omega}\left(\operatorname{Sc}W(\zeta)\operatorname{Sc} K_{\zeta}(z)+\operatorname{Vec}W(\zeta)\operatorname{Sc}L_{\zeta}(z)\right)dA_{\zeta}\] \[\quad+\mathbf{j}\iint_{\Omega}\left(\operatorname{Sc}W(\zeta) \operatorname{Vec}K_{\zeta}(z)+\operatorname{Vec}W(\zeta)\operatorname{Vec }L_{\zeta}(z)\right)dA_{\zeta},\]
from where we obtain the relation
\[W(z)=\iint\limits_{\Omega}\left(\operatorname{Sc}W(\zeta)K_{\zeta}(z)+ \operatorname{Vec}W(\zeta)L_{\zeta}(z)\right)dA_{\zeta}. \tag{34}\]
For all \(z,\zeta\in\Omega\), define \(K(z,\zeta)=K_{\zeta}(z)\) and \(L(z,\zeta)=L_{\zeta}(z)\). Following [9], we introduce the next definition of the Bergman kernel.
**Definition 16**: _The Bergman kernel of the space \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) with coefficient \(A\in\mathbb{B}\) is defined by_
\[\mathscr{K}_{\Omega}^{f}(A;z,\zeta):=\operatorname{Sc}(A)K(z,\zeta)+ \operatorname{Vec}(A)L(z,\zeta),\quad z,\zeta\in\Omega. \tag{35}\]
Then we have the reproducing property
\[W(z)=\iint\limits_{\Omega}\mathscr{K}_{\Omega}^{f}(W(\zeta);z,\zeta)dA_{\zeta}. \tag{36}\]
This definition for the Bergman kernel of the Vekua equation was introduced in [9] for the case when \(f\) is real valued (and \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{C})\) is a _real_ Hilbert space).
**Remark 17**: _Note that \(\mathscr{K}_{\Omega}^{f}(A;\cdot,\zeta)\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{ B})\) for all \(A\in\mathbb{B}\), \(\zeta\in\Omega\). Since \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) is a separable Hilbert space, it admits an orthonormal basis of the form \(\{\Phi_{n}\}_{n=0}^{\infty}\)[6, Th. 5.11]. Then_
\[\mathscr{K}_{\Omega}^{f}(A;z,\zeta)=\sum_{n=0}^{\infty}\left\langle\mathscr{K }_{\Omega}^{f}(A;\cdot,\zeta),\Phi_{n}\right\rangle_{L_{2}(\Omega;\mathbb{B})} \Phi_{n}(z),\]
_and_
\[\left\langle\mathscr{K}_{\Omega}^{f}(A;\cdot,\zeta),\Phi_{n}\right\rangle_{L_{2}(\Omega;\mathbb{B})} =\left\langle\operatorname{Sc}(A)K(\cdot,\zeta)+\operatorname{Vec}(A)L(\cdot,\zeta),\Phi_{n}(\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})}\] \[=\operatorname{Sc}(A)\left\langle K_{\zeta}(\cdot),\Phi_{n}(\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})}+\operatorname{Vec}(A)\left\langle L_{\zeta}(\cdot),\Phi_{n}(\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})}\] \[=\operatorname{Sc}(A)\left(\operatorname{Sc}\Phi_{n}(\zeta)\right)^{*}+\operatorname{Vec}(A)\left(\operatorname{Vec}\Phi_{n}(\zeta)\right)^{*}=\langle A,\Phi_{n}(\zeta)\rangle_{\mathbb{B}}.\]
_Thus,_
\[\mathscr{K}_{\Omega}^{f}(A;z,\zeta)=\sum_{n=0}^{\infty}\langle A,\Phi_{n}( \zeta)\rangle_{\mathbb{B}}\Phi_{n}(z). \tag{37}\]
_Since \(\mathscr{K}_{\Omega}^{f}(A;\cdot,\zeta)\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\), Parseval's identity shows that the coefficients of (37) lie in \(\ell_{2}(\mathbb{N}_{0})\), and the series (37) converges in the \(L_{2}(\Omega;\mathbb{B})\)-norm. By Proposition 14, series (37) converges in the variable \(z\) uniformly on compact subsets of \(\Omega\)._
**Remark 18**: _Since \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) is a closed subspace of \(L_{2}(\Omega;\mathbb{B})\), we have the orthogonal decomposition \(L_{2}(\Omega;\mathbb{B})=\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\oplus\left(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\right)^{\perp}\). Let \(\mathbf{P}_{\Omega}:L_{2}(\Omega;\mathbb{B})\to\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\) be the corresponding orthogonal projection, which we call the Bergman projection. Similar to the complex-valued case [9, Prop 1.4], the Bergman projection can be written as_
\[\mathbf{P}_{\Omega}\Psi(z)=\iint_{\Omega}\mathscr{K}_{\Omega}^{f}(\Psi(\zeta);z,\zeta)dA_{\zeta},\qquad\forall\Psi\in L_{2}(\Omega;\mathbb{B}). \tag{38}\]
_Indeed, let \(W={\bf P}_{\Omega}\Psi\) for \(\Psi\in L_{2}(\Omega;\mathbb{B})\). Hence \(W\in{\cal A}_{f}^{2}(\Omega;\mathbb{B})\) and we can write \(W\) using (36). Now, in the construction of (36), we obtain the equality_
\[\iint_{\Omega}\mathscr{K}_{\Omega}^{f}(W(\zeta);z,\zeta)\,dA_{\zeta}=\langle W,K_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}+{\bf j}\langle W,L_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}.\]
_Notice that the right-hand side of this equality is well-defined even for \(L_{2}\)-functions. Since the orthogonal projection is a self-adjoint operator [15, Prop. 3.3], we have_
\[{\bf P}_{\Omega}\Psi(z) =\langle{\bf P}_{\Omega}\Psi,K_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}+{\bf j}\langle{\bf P}_{\Omega}\Psi,L_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}=\langle\Psi,{\bf P}_{\Omega}K_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}+{\bf j}\langle\Psi,{\bf P}_{\Omega}L_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}\] \[=\langle\Psi,K_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}+{\bf j}\langle\Psi,L_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}=\iint_{\Omega}\mathscr{K}_{\Omega}^{f}(\Psi(\zeta);z,\zeta)dA_{\zeta}.\]
**Remark 19**: _In the case \(f\equiv 1\) we obtain the Bicomplex analytic Bergman space, and \({\bf j}W\in{\cal A}^{2}(\Omega;\mathbb{B})\) if \(W\in{\cal A}^{2}(\Omega;\mathbb{B})\). It is not difficult to see that \(\langle{\bf j}W,V\rangle_{L_{2}(\Omega;\mathbb{B})}=-\langle W,{\bf j}V\rangle_{L_{2}(\Omega;\mathbb{B})}\) for \(W,V\in L_{2}(\Omega;\mathbb{B})\), and that \(\operatorname{Sc}({\bf j}W)=-\operatorname{Vec}W\) and \(\operatorname{Vec}({\bf j}W)=\operatorname{Sc}W\). Hence_
\[\operatorname{Sc}L(z,\zeta) =\operatorname{Sc}L_{\zeta}(z)=\operatorname{Vec}\left({\bf j}L_{\zeta}(z)\right)=\langle{\bf j}L_{\zeta},L_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}=-\langle L_{\zeta},{\bf j}L_{z}\rangle_{L_{2}(\Omega;\mathbb{B})}\] \[=-\left(\operatorname{Vec}({\bf j}L_{z})(\zeta)\right)^{*}=-\left(\operatorname{Sc}L(\zeta,z)\right)^{*}.\]
_In a similar way \(\operatorname{Sc}K(z,\zeta)=(\operatorname{Vec}L(\zeta,z))^{*}\). Using these equalities together with (33) we obtain_
\[L(z,\zeta) =\operatorname{Sc}L(z,\zeta)+{\bf j}\operatorname{Vec}L(z,\zeta)= -\left(\operatorname{Sc}L(\zeta,z)\right)^{*}+{\bf j}\left(\operatorname{Sc}K (\zeta,z)\right)^{*}\] \[=-\operatorname{Vec}K(z,\zeta)+{\bf j}\operatorname{Sc}K(z,\zeta) ={\bf j}K(z,\zeta).\]
_Substituting this equality in (35) and (36) we obtain that \(K(z,\zeta)\) is the analytic Bergman kernel obtained in Subsection 2.3._
## 4 Transmutation operators
### The Vekua equation in polar coordinates
From now on, we assume that \(\Omega\) is a bounded star-shaped domain with respect to \(z=0\), that is, \(0\in\Omega\) and for any \(z\in\Omega\), the segment \([0,z]:=\{tz\,|\,0\leqslant t\leqslant 1\}\) belongs to \(\Omega\). In this case, we denote \(\varrho_{\Omega}:=\sup_{z\in\Omega}|z|\). Suppose that \(q\in C^{1}[0,\varrho_{\Omega}]\) is scalar complex-valued and consider the radial Schrodinger equation
\[\left(\triangle-q(|z|)\right)u(z)=0,\quad z\in\Omega. \tag{39}\]
In this case, the potential only depends on the radial component \(r=|z|\). In polar coordinates \(z=re^{i\theta}\), Eq. (39) has the form
\[\left(\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r }+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}-q(r)\right)u(r,\theta )=0,\quad(r,\theta)\in(0,\varrho_{\Omega})\times[0,2\pi]. \tag{40}\]
Suppose that there exists a non-vanishing scalar complex-valued solution \(\widetilde{f}\in C^{2}(\overline{\Omega})\) of (39) that depends only on the radial component \(r=|z|\), and write \(\widetilde{f}(z)=f(|z|)\) with \(f\in C^{2}[0,\varrho_{\Omega}]\). A direct computation shows that \(f\) is then a solution of the Bessel-type equation
\[f^{\prime\prime}(r)+\frac{1}{r}f^{\prime}(r)-q(r)f(r)=0,\quad 0<r<\varrho_{ \Omega}. \tag{41}\]
From now on, we identify \(f\) with \(\widetilde{f}\).
**Remark 20**: _The Bicomplex Cauchy-Riemann operators can be written in the form_
\[\overline{\boldsymbol{\partial}}=\frac{\mathrm{e}^{\mathbf{j}\theta}}{2}\left( \frac{\partial}{\partial r}+\frac{\mathbf{j}}{r}\frac{\partial}{\partial \theta}\right),\quad\boldsymbol{\partial}=\frac{e^{-\mathbf{j}\theta}}{2} \left(\frac{\partial}{\partial r}-\frac{\mathbf{j}}{r}\frac{\partial}{ \partial\theta}\right) \tag{42}\]
In consequence we obtain
\[\frac{\overline{\boldsymbol{\partial}}f(z)}{f(z)}=\frac{\mathrm{e}^{\mathbf{j }\theta}}{2}\frac{f^{\prime}(r)}{f(r)}. \tag{43}\]
Thus, the main Vekua equation associated to \(f\) can be written as
\[\frac{\mathrm{e}^{\mathbf{j}\theta}}{2}\left(\frac{\partial}{\partial r}W+ \frac{\mathbf{j}}{r}\frac{\partial}{\partial\theta}W-\frac{f^{\prime}(r)}{f(r )}\overline{W}\right)=0. \tag{44}\]
which reduces to
\[\frac{\partial}{\partial r}W+\frac{\mathbf{j}}{r}\frac{\partial}{\partial \theta}W-\frac{f^{\prime}(r)}{f(r)}\overline{W}=0,\quad(r,\theta)\in(0, \varrho_{\Omega})\times[0,2\pi]. \tag{45}\]
Eq. (45) will be called the **radial (main) Vekua equation**. If \(W=u+\mathbf{j}v\) with \(u=\mathrm{Sc}\,W,v=\mathrm{Vec}\,W\), Eq. (45) is equivalent to the system
\[f\frac{\partial}{\partial r}\left(\frac{u}{f}\right) = \frac{1}{r}\frac{\partial v}{\partial\theta} \tag{46}\] \[\frac{1}{f}\frac{\partial}{\partial r}\left(fv\right) = -\frac{1}{r}\frac{\partial u}{\partial\theta}. \tag{47}\]
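Indeed (a direct computation added here for the reader), substituting \(W=u+\mathbf{j}v\) and \(\overline{W}=u-\mathbf{j}v\) into (45) and using \(\mathbf{j}^{2}=-1\),

\[\frac{\partial W}{\partial r}+\frac{\mathbf{j}}{r}\frac{\partial W}{\partial\theta}-\frac{f^{\prime}(r)}{f(r)}\overline{W}=\left(u_{r}-\frac{1}{r}v_{\theta}-\frac{f^{\prime}}{f}u\right)+\mathbf{j}\left(v_{r}+\frac{1}{r}u_{\theta}+\frac{f^{\prime}}{f}v\right),\]

so the scalar and vector parts of (45) vanish precisely when (46) and (47) hold.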
On the other hand, since \(\frac{\partial f}{\partial x}=\cos(\theta)f^{\prime}(r)\) and \(\frac{\partial f}{\partial y}=\sin(\theta)f^{\prime}(r)\), the Darboux potential is given by
\[q_{\frac{1}{f}}(z)=2\left(\frac{f^{\prime}(r)}{f(r)}\right)^{2}-q(r).\]
Then \(q_{f}(r)=\frac{1}{f(r)}\left(f^{\prime\prime}(r)+\frac{1}{r}f^{\prime}(r)\right)\) and \(q_{\frac{1}{f}}(r)=2\left(\frac{f^{\prime}(r)}{f(r)}\right)^{2}-q_{f}(r)\). Hence, \(u\) and \(v\) satisfy the system (46)-(47) iff they are solutions of the radial Schrodinger equation with potentials \(q_{f}\) and \(q_{\frac{1}{f}}\), respectively.
### Transmutation operators for the radial Schrodinger equation
Suppose that \(q_{f}\in C^{1}[0,\varrho_{\Omega}]\). Denote by \({\bf S}_{f}:=\bigtriangleup-q_{f}(r)\) the radial Schrodinger operator with potential \(q_{f}\), and define
\[{\rm Sol}^{{\bf S}_{f}}(\Omega):=\{u\in C^{2}(\Omega)\,|\,u\mbox{ is solution of }{\bf S}_{f}u=0\ \mbox{ in }\Omega\}.\]
The following result establishes that any solution \(u\in{\rm Sol}^{{\bf S}_{f}}(\Omega)\) can be written as the image of a harmonic function \(h\in{\rm Har}(\Omega)\) under the action of an integral operator.
**Theorem 21** ([21]): _Let \(q_{f}\in C^{1}[0,\varrho_{\Omega}]\)._
* _There exists a kernel_ \(G^{f}\in C^{2}([0,\varrho_{\Omega}]\times[0,1])\) _satisfying the partial differential equation_ \[r\left(G_{rr}^{f}(r,t)-q_{f}(r)G^{f}(r,t)\right)-G_{r}^{f}(r,t)+2(1-t)G_{rt}^{f}(r,t)=0,\quad(r,t)\in[0,\varrho_{\Omega}]\times[0,1],\] (48) _with the initial conditions_ \[G^{f}(r,0)=\int_{0}^{r}\tau q_{f}(\tau)d\tau\;\forall r\in[0,\varrho_{\Omega}];\quad G^{f}(0,t)=0,\;\forall t\in[0,1].\] (49)
* _For any solution_ \(u\in{\rm Sol}^{{\bf S}_{f}}(\Omega)\)_, there exists a unique_ \(h\in{\rm Har}(\Omega)\) _such that_ \(u={\bf T}_{f}h\)_, where_ \[{\bf T}_{f}h(z):=h(z)+\int_{0}^{1}\sigma G^{f}(r,1-\sigma^{2})h(\sigma^{2}z)d\sigma.\] (50) _Then the operator_ \({\bf T}_{f}\) _transforms_ \({\rm Har}(\Omega)\) _onto_ \({\rm Sol}^{{\bf S}_{f}}(\Omega)\)_._
Furthermore, the operator \({\bf T}_{f}\) is a _transmutation operator_, in the sense of the following definition.
**Definition 22** ([27]): _Let \(E\) be a topological vector space, \(E_{1}\subset E\) a linear subspace (not necessarily closed), and \({\bf A},{\bf B}:E_{1}\to E\) linear operators. A linear invertible operator \({\bf T}:E\to E\), such that \(E_{1}\) is \({\bf T}\)-invariant, is said to be a **transmutation operator** for the pair of operators \({\bf A},{\bf B}\), if the following conditions are fulfilled:_
1. _Both the operator_ \({\bf T}\) _and its inverse_ \({\bf T}^{-1}\) _are continuous in_ \(E\)_._
2. _The following operator equality is valid_ \[{\bf AT}={\bf TB}\quad\mbox{ in }E_{1}.\] (51)
In this case, \(E=C(\Omega)\) with the topology of the uniform convergence on compact subsets and \(E_{1}=C^{2}(\Omega)\).
**Theorem 23** ([30]): _The operator \({\bf T}_{f}\) and its inverse are continuous in the Frechet space \(C(\Omega)\), and \({\bf T}_{f}\) is a transmutation operator for the pair of operators \(r^{2}{\bf S}_{f}\) and \(r^{2}\bigtriangleup\). Explicitly, the following relation holds_
\[r^{2}{\bf S}_{f}{\bf T}_{f}u={\bf T}_{f}r^{2}\bigtriangleup u,\quad\forall u \in C^{2}(\Omega). \tag{52}\]
The corresponding transmutation operator for the Darboux transformed operator \({\bf S}_{\frac{1}{f}}=\triangle-q_{\frac{1}{f}}(r)\) is denoted by \({\bf T}_{\frac{1}{f}}\).
Let \(W\in{\rm V}_{f}(\Omega;\mathbb{B})\). By Theorem 10, \(u={\rm Sc}\,W\in{\rm Sol}^{{\bf S}_{f}}(\Omega)\) and \(v={\rm Vec}\,W\in{\rm Sol}^{{\bf S}_{\frac{1}{f}}}(\Omega)\). Due to Theorem 21(ii), there exist \(h_{1},h_{2}\in{\rm Har}(\Omega)\) such that
\[W={\bf T}_{f}h_{1}+{\bf j}{\bf T}_{\frac{1}{f}}h_{2}. \tag{53}\]
The idea is to use an operator of the form
\[{\cal T}_{f}:={\bf T}_{f}\,{\rm Sc}\,+{\bf j}{\bf T}_{\frac{1}{f}}\,{\rm Vec} \tag{54}\]
to transform the space \({\rm Hol}(\Omega;\mathbb{B})\) onto \({\rm V}_{f}(\Omega;\mathbb{B})\). Similarly, we define \({\cal T}_{\frac{1}{f}}:={\bf T}_{\frac{1}{f}}\,{\rm Sc}\,+{\bf j}{\bf T}_{f}\,{\rm Vec}\). Operators of the form (54) have been used to transform holomorphic functions into solutions of the Vekua equation for the case when \(f\) is a function that only depends on the first component \(x={\rm Re}\,z\) (see [11, 28]). Before studying the transmutation property of \({\cal T}_{f}\) we establish its analytical properties.
**Remark 24**: _It is clear that the operator \({\cal T}_{f}\) is linear in the complex space \(C(\Omega;\mathbb{B})\). Consider \(C(\Omega;\mathbb{B})\) as a \(\mathbb{B}\)-module. Take \(A\in\mathbb{B}\), \(W\in C(\Omega;\mathbb{B})\) and \(a={\rm Sc}\,A\), \(b={\rm Vec}\,A\), \(u={\rm Sc}\,W\), \(v={\rm Vec}\,W\). We have_
\[{\cal T}_{f}[AW] ={\cal T}_{f}[(a+{\bf j}b)(u+{\bf j}v)]={\cal T}_{f}[(au-bv)+{\bf j}(bu+av)]\] \[={\bf T}_{f}[au-bv]+{\bf j}\left({\bf T}_{\frac{1}{f}}[bu+av]\right)=a{\bf T}_{f}u-b{\bf T}_{f}v+{\bf j}\left(b{\bf T}_{\frac{1}{f}}u+a{\bf T}_{\frac{1}{f}}v\right)\] \[=a{\cal T}_{f}W+{\bf j}b{\cal T}_{\frac{1}{f}}W.\]
_Thus, the operator \({\cal T}_{f}\) fails to be \(\mathbb{B}\)-linear. However, we have the relation_
\[{\cal T}_{f}[AW]={\rm Sc}(A){\cal T}_{f}W+{\bf j}\,{\rm Vec}(A){\cal T}_{\frac {1}{f}}W,\quad A\in\mathbb{B},W\in C(\Omega;\mathbb{B}). \tag{55}\]
_In particular_
\[{\cal T}_{f}[{\bf j}W]={\bf j}{\cal T}_{\frac{1}{f}}W. \tag{56}\]
_For this reason, we consider \(V_{f}(\Omega;\mathbb{B})\) as a complex linear space._
**Proposition 25**: _The operator \({\cal T}_{f}:C(\Omega;\mathbb{B})\to C(\Omega;\mathbb{B})\) is continuous and invertible, and its inverse is given by_
\[{\cal T}_{f}^{-1}={\bf T}_{f}^{-1}\,{\rm Sc}\,+{\bf j}{\bf T}_{\frac{1}{f}}^{ -1}\,{\rm Vec}\,. \tag{57}\]
_The inverse \({\cal T}_{f}^{-1}\) is also continuous in \(C(\Omega;\mathbb{B})\). The same property is valid for \({\cal T}_{\frac{1}{f}}\)._
**Proof.** Since \({\bf T}_{f}^{-1}\) and \({\bf T}_{\frac{1}{f}}^{-1}\) exist and are continuous [30, Sec. II], a simple computation shows that the inverse of \({\cal T}_{f}\) is given by (57). The continuity of \({\cal T}_{f}\) and \({\cal T}_{f}^{-1}\) in \(C(\Omega;\mathbb{B})\) is due to the continuity of \({\bf T}_{f},{\bf T}_{\frac{1}{f}}\), \({\bf T}_{f}^{-1}\), and \({\bf T}_{\frac{1}{f}}^{-1}\) in \(C(\Omega)\). The same holds for \({\cal T}_{\frac{1}{f}}\).
**Remark 26**: _The operators \({\bf T}_{f}\) and \({\bf T}_{\frac{1}{f}}\) satisfy that \({\bf T}_{f}(C^{2}(\Omega))=C^{2}(\Omega)={\bf T}_{\frac{1}{f}}(C^{2}(\Omega))\) (see [30, Sec. 2]). Thus, \({\cal T}_{f}\,(C^{2}(\Omega;\mathbb{B}))=C^{2}(\Omega;\mathbb{B})\). Similar for \({\cal T}_{\frac{1}{f}}\)._
### A representation for the transmutation operator of the Darboux equation
Consider the radial component of the Schrodinger operator \({\bf S}_{f}\), which is given by
\[{\bf L}_{f}:=\frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}-q_{f}(r)=\frac{1}{r} \frac{d}{dr}r\frac{d}{dr}-q_{f}(r). \tag{58}\]
We assume that \(f\) is normalized satisfying the initial conditions
\[f(0)=1,\quad\mbox{and}\ \lim_{r\to 0^{+}}rf^{\prime}(r)=0. \tag{59}\]
In such case, \(\frac{1}{f}\) satisfies the same conditions (59).
**Example 27**: _Let \(\kappa\in(0,1)\). Consider \(\Omega=\mathbb{D}\) and \(q(r)=-\kappa^{2}\) (the Helmholtz equation \(\triangle u+\kappa^{2}u=0\)). In this case, the regular solution of \(f^{\prime\prime}+\frac{1}{r}f^{\prime}=-\kappa^{2}f\) in \([0,1]\) is given by \(f(r)=J_{0}(\kappa r)\) (the Bessel function of the first kind of order zero). Since \(\kappa<1\), \(J_{0}(\kappa r)>0\) for \(r\in[0,1]\). Also, \(f(0)=J_{0}(0)=1\) and \(f^{\prime}(r)=-\kappa J_{1}(\kappa r)=-\frac{\kappa^{2}r}{2}+O(r^{3})\), \(r\to 0^{+}\), so \(f\) satisfies (59). In this case, the operator \({\bf T}_{f}\) is known explicitly [36]_
\[{\bf T}_{f}h(z)=h(z)-\int_{0}^{1}\frac{\partial}{\partial\sigma}\left[J_{0}\left(\kappa r\sqrt{1-\sigma^{2}}\right)\right]h(\sigma^{2}z)\,d\sigma.\]
_Using [1, pp. 361, Formula 9.1.28], \(f^{\prime}(r)=-\kappa J_{1}(\kappa r)\), so that \(\frac{f^{\prime}(r)}{f(r)}=-\kappa\frac{J_{1}(\kappa r)}{J_{0}(\kappa r)}\), and by (43) the corresponding Vekua equation is \(\overline{\boldsymbol{\partial}}W+\frac{\kappa}{2}e^{\mathbf{j}\theta}\frac{J_{1}(\kappa r)}{J_{0}(\kappa r)}\overline{W}=0\) in \(\mathbb{D}\). The Darboux potential is given by \(q_{\frac{1}{f}}(r)=\kappa^{2}+2\kappa^{2}\frac{J_{1}^{2}(\kappa r)}{J_{0}^{2}(\kappa r)}\)._
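Since applying this operator to the harmonic function \(h\equiv 1\) must return \(f(r)=J_{0}(\kappa r)\) (the integral telescopes: \(\int_{0}^{1}\partial_{\sigma}J_{0}(\kappa r\sqrt{1-\sigma^{2}})\,d\sigma=J_{0}(0)-J_{0}(\kappa r)\)), the explicit formula can be tested numerically. A minimal sketch in Python, assuming scipy is available (the value of \(\kappa\) and the test radii are arbitrary choices):

```python
# Sanity check for Example 27: applying the explicit transmutation operator
# T_f to the harmonic function h = 1 must reproduce f(r) = J_0(kappa*r).
# Uses J_0'(x) = -J_1(x) to differentiate the kernel analytically.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

kappa = 0.7

def T_f(h, r):
    def integrand(sigma):
        x = kappa * r * np.sqrt(1.0 - sigma**2)
        dx = -kappa * r * sigma / np.sqrt(1.0 - sigma**2)  # dx/dsigma
        return -j1(x) * dx * h(sigma**2 * r)               # d/dsigma J_0(x) * h
    val, _ = quad(integrand, 0.0, 1.0)
    return h(r) - val

for r in (0.2, 0.5, 0.9):
    print(r, T_f(lambda s: 1.0, r), j0(kappa * r))  # last two columns agree
```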
The Polya factorization of \({\bf L}_{f}\) is given by
\[{\bf L}_{f}=\frac{1}{rf}\frac{d}{dr}rf^{2}\frac{d}{dr}\frac{1}{f}. \tag{60}\]
In a similar way
\[{\bf L}_{\frac{1}{f}}=\frac{f}{r}\frac{d}{dr}\frac{r}{f^{2}}\frac{d}{dr}f. \tag{61}\]
In the case \(f\equiv 1\) we obtain the radial part of the Laplacian, \({\bf L}_{1}=\frac{1}{r}\frac{d}{dr}r\frac{d}{dr}\). If \({\bf L}\) is any of these operators, we use the notation \(\widehat{\bf L}:=r^{2}{\bf L}\). Theorem 23 can be formulated in terms of the radial operators as follows
\[\widehat{\bf L}_{f}{\bf T}_{f}u(r)={\bf T}_{f}\widehat{\bf L}_{1}u(r),\quad \forall u\in C^{2}[0,\varrho_{\Omega}] \tag{62}\]
(see [30, Sec. 7]). Set \({\bf D}_{f}:=\frac{r}{f}\frac{d}{dr}f\). Hence we have the following factorization
\[\widehat{\bf L}_{f}={\bf D}_{f}{\bf D}_{\frac{1}{f}},\quad\widehat{\bf L}_{ \frac{1}{f}}={\bf D}_{\frac{1}{f}}{\bf D}_{f}. \tag{63}\]
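These factorizations follow directly from the Polya factorizations (60) and (61); for instance (a one-line verification),

\[\mathbf{D}_{f}\mathbf{D}_{\frac{1}{f}}u=\frac{r}{f}\frac{d}{dr}\left(f\cdot rf\frac{d}{dr}\frac{u}{f}\right)=r^{2}\cdot\frac{1}{rf}\frac{d}{dr}\left(rf^{2}\frac{d}{dr}\frac{u}{f}\right)=r^{2}\mathbf{L}_{f}u=\widehat{\mathbf{L}}_{f}u,\]

and analogously for \(\widehat{\mathbf{L}}_{\frac{1}{f}}=\mathbf{D}_{\frac{1}{f}}\mathbf{D}_{f}\).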
**Remark 28**: _Let \(\lambda\geqslant 0\) and consider the equation_
\[\widehat{\bf L}_{f}u(\lambda,r)=\lambda^{2}u(\lambda,r),\quad r\in(0,\varrho_ {\Omega}]. \tag{64}\]
_Set \(u(\lambda,r)=\frac{y(\lambda,r)}{\sqrt{r}}\). A direct computation shows that \(y\) satisfies the equation_
\[y^{\prime\prime}-\frac{\lambda^{2}-\frac{1}{4}}{r^{2}}y-q_{f}(r)y=0,\quad r\in(0,\varrho_{\Omega}],\]
_which can be written as the perturbed Bessel equation_
\[y^{\prime\prime}-\frac{\ell(\ell+1)}{r^{2}}y-q_{f}(r)y=0,\quad r\in(0,\varrho_ {\Omega}], \tag{65}\]
_with \(\ell=\lambda-\frac{1}{2}\geqslant-\frac{1}{2}\). It is known that for potentials \(q_{f}\in L_{1}(0,\varrho_{\Omega})\) satisfying the condition \(\int_{0}^{\varrho_{\Omega}}r^{\mu}|q_{f}(r)|dr<\infty\) for some \(\mu\in[0,\frac{1}{2}]\), there exists a unique solution \(y\in C[0,\varrho_{\Omega}]\cap C^{2}(0,\varrho_{\Omega})\) that satisfies the asymptotic relations (see [29])_
\[y(\lambda,r)\sim r^{\lambda+\frac{1}{2}},\quad y^{\prime}(\lambda,r)\sim \left(\lambda+\frac{1}{2}\right)r^{\lambda-\frac{1}{2}},\quad r\to 0^{+}. \tag{66}\]
_In particular this is valid for \(q_{f}\in C^{1}[0,\varrho_{\Omega}]\). Since \(u=\frac{y}{\sqrt{r}}\) and \(u^{\prime}=-\frac{1}{2}\frac{u}{r}+\frac{y^{\prime}}{\sqrt{r}}\), from (66) we obtain the following asymptotic relations_
\[u(\lambda,r)\sim r^{\lambda},\quad u^{\prime}(\lambda,r)\sim\lambda r^{ \lambda-1},\quad r\to 0^{+}. \tag{67}\]
_For this reason we call \(u(\lambda,r)\) the regular solution of (64) at \(r=0\). In this way, \(f\) is the regular solution of (64) for the case \(\lambda=0\)._
Suppose that \(\lambda\geqslant 0\) and let \(v(\lambda,r)\) be the regular solution at \(r=0\) of \(\widehat{\mathbf{L}}_{\frac{1}{f}}v(\lambda,r)=\lambda^{2}v(\lambda,r)\). By (63), \(u(\lambda,r)=\mathbf{D}_{f}v(\lambda,r)\) is a solution of (64). Furthermore, a direct computation shows that
\[u =rv^{\prime}+r\frac{f^{\prime}}{f}v,\] \[u^{\prime} =rv^{\prime\prime}+v^{\prime}-rq_{\frac{1}{f}}v+r\left(\frac{f^{ \prime}}{f}\right)^{2}v+r\frac{f^{\prime}}{f}v^{\prime}\] \[=\frac{\lambda^{2}v}{r}+r\left(\frac{f^{\prime}}{f}\right)^{2}v+ r\frac{f^{\prime}}{f}v^{\prime}.\]
Since \(v^{(k)}(r)\sim\frac{d^{k}}{dr^{k}}r^{\lambda}\), \(r\to 0^{+}\) for \(k=0,1\), and \(f\) satisfies (59), we have that \(u^{(k)}(r)\sim\lambda\frac{d^{k}}{dr^{k}}r^{\lambda}\), \(r\to 0^{+}\) for \(k=0,1\). Thus, \(u=\lambda\tilde{u}\), where \(\tilde{u}\) is the regular solution at \(r=0\) of (64). A right inverse of \(\mathbf{D}_{f}\) is given by \(\mathbf{I}_{f}u(r)=\frac{1}{f(r)}\left(\int_{0}^{r}\frac{f(s)u(s)}{s}ds+C\right)\), with \(C\in\mathbb{C}\) (see the verification after the diagram below). Note that when \(f\equiv 1\), the Darboux transformation is nothing but \(\mathbf{D}_{1}=r\frac{d}{dr}\). Following the ideas from [26], we aim to show that the transmutation operator \(\mathbf{T}_{\frac{1}{f}}\) can be written, on a suitable subspace, as the composition \(\mathbf{T}_{\frac{1}{f}}=\mathbf{I}_{f}\mathbf{T}_{f}\mathbf{D}_{1}\), in such a way that the following diagram commutes.
\[\begin{CD}\widehat{\mathbf{L}}_{1}+\lambda@>{\mathbf{T}_{f}}>{}>\widehat{ \mathbf{L}}_{f}+\lambda\\ @V{}V{\mathbf{D}_{1}}V@V{}V{\mathbf{I}_{f}}V\\ \widehat{\mathbf{L}}_{1}+\lambda@>{}>{\mathbf{T}_{\frac{1}{f}}}>\widehat{ \mathbf{L}}_{\frac{1}{f}}+\lambda\end{CD}\]
The reason for using the composition with the integral operator \(\mathbf{I}_{f}\), instead of looking for the relation \(\mathbf{T}_{\frac{1}{f}}=\mathbf{D}_{\frac{1}{f}}\mathbf{T}_{f}\), is to obtain a bounded operator.
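For completeness, the right-inverse property of \(\mathbf{I}_{f}\) used above is a one-line computation:

\[\mathbf{D}_{f}\mathbf{I}_{f}u=\frac{r}{f}\frac{d}{dr}\left(f\cdot\frac{1}{f}\left(\int_{0}^{r}\frac{f(s)u(s)}{s}ds+C\right)\right)=\frac{r}{f}\cdot\frac{f(r)u(r)}{r}=u.\]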
**Remark 29**: _Given \(\lambda\geqslant 0\), the regular solution at \(r=0\) of \(\widehat{\mathbf{L}}_{1}h(\lambda,r)=\lambda^{2}h(\lambda,r)\) is just \(h(\lambda,r)=r^{\lambda}\). Hence \(\mathbf{T}_{f}\left[r^{\lambda}\right]\) is a solution of (64). Changing variables, the operator \(\mathbf{T}_{f}\) can be written as_
\[\mathbf{T}_{f}\left[r^{\lambda}\right]=r^{\lambda}+\frac{1}{2}\int_{0}^{1}G^{ f}(r,t)(1-t)^{\lambda}r^{\lambda}dt.\]
_Thus, \(\frac{1}{r^{\lambda}}\mathbf{T}_{f}\left[r^{\lambda}\right]=1+\frac{1}{2}\int_{0}^{1}G^{f}(r,t)(1-t)^{\lambda}dt\to 1\), when \(r\to 0^{+}\), by condition (49). On the other hand_
\[\frac{1}{\lambda r^{\lambda-1}}\frac{d}{dr}\mathbf{T}_{f}\left[r^{\lambda}\right]=1+\frac{1}{2}\int_{0}^{1}\left\{\frac{G_{r}^{f}(r,t)\,r}{\lambda}+G^{f}(r,t)\right\}(1-t)^{\lambda}dt,\]
_that tends to \(1\) when \(r\to 0^{+}\). Hence \(u(\lambda,r)=\mathbf{T}_{f}[r^{\lambda}]\) is the regular solution of (64) at \(r=0\). In particular \(f(r)=\mathbf{T}_{f}[1]\)._
We denote the set of all polynomial functions \(p(r)=\sum_{n=0}^{N}a_{n}r^{n}\) in \([0,\varrho_{\Omega}]\) by \(\mathcal{P}[0,\varrho_{\Omega}]\).
**Theorem 30**: _For all \(p\in\mathcal{P}[0,\varrho_{\Omega}]\), the following equality is valid_
\[\mathbf{T}_{\frac{1}{f}}p(r)=\frac{1}{f(r)}\left(\int_{0}^{r}\frac{f(s) \mathbf{T}_{f}\left[sp^{\prime}(s)\right]}{s}ds+p(0)\right),\quad r\in[0, \varrho_{\Omega}]. \tag{68}\]
**Proof.** Let \(n\in\mathbb{N}_{0}\) and \(h_{n}(r)=r^{n}\). If \(n=0\), \(h_{0}\equiv 1\), and the right-hand side of (68) is \(\frac{1}{f}\). On the other hand, by Remark 29, \(\mathbf{T}_{\frac{1}{f}}[1]\) is the regular solution of \(\widehat{\mathbf{L}}_{\frac{1}{f}}y=0\) at \(r=0\), which is \(\frac{1}{f}\), and hence the equality is valid. For \(n\geqslant 1\), denote
\(g_{n}(r)=\frac{1}{f(r)}\left(\int_{0}^{r}\frac{f(s)\mathbf{T}_{f}[sh_{n}^{\prime}(s)]}{s}ds+h_{n}(0)\right)\). We have
\[g_{n}(r) =\frac{1}{f(r)}\int_{0}^{r}\frac{f(s)\mathbf{T}_{f}\left[sh_{n}^{ \prime}(s)\right]}{s}ds\] \[=\frac{1}{f(r)}\int_{0}^{r}\frac{f(s)}{s}\mathbf{T}_{f}[ns^{n}]ds,\]
and note that
\[\frac{1}{s}\mathbf{T}_{f}[s^{n}]=s^{n-1}+\frac{1}{2}\int_{0}^{1}G^{f}(s,t)(1-t)^{n}s^{n-1}dt,\]
which is continuous on the interval \([0,\varrho_{\Omega}]\). Hence the integral in \(g_{n}(r)\) is well defined in \([0,\varrho_{\Omega}]\). Thus,
\[g_{n}(r)=\frac{n}{f(r)}\int_{0}^{r}f(s)s^{n-1}ds+\frac{n}{2f(r)}\int_{0}^{r}f(s)\left[\int_{0}^{1}G^{f}(s,t)(1-t)^{n}s^{n-1}dt\right]ds.\]
By the L'Hopital rule we have
\[\lim_{r\to 0^{+}}\frac{n}{r^{n}}\int_{0}^{r}f(s)s^{n-1}ds=\lim_{r\to 0^{+}}\frac{nf(r)r^{n-1}}{nr^{n-1}}=\lim_{r\to 0^{+}}f(r)=1.\]
For the second integral, using the L'Hopital rule we have
\[\lim_{r\to 0^{+}}\frac{1}{r^{n}}\int_{0}^{r}f(s)\left[\int_{0}^{1}G^{f}(s,t)(1-t)^{n}s^{n-1}dt\right]ds =\lim_{r\to 0^{+}}\frac{1}{nr^{n-1}}f(r)\int_{0}^{1}G^{f}(r,t)(1-t)^{n}r^{n-1}dt\] \[=\frac{1}{n}\lim_{r\to 0^{+}}f(r)\int_{0}^{1}G^{f}(r,t)(1-t)^{n}dt=0.\]
Hence \(g_{n}(r)\sim r^{n}\), \(r\to 0^{+}\). For the derivative,
\[g_{n}^{\prime}(r)=-\frac{nf^{\prime}(r)}{f^{2}(r)}\int_{0}^{r}\frac{f(s)}{s}\mathbf{T}_{f}[s^{n}]ds+\frac{n}{r}\mathbf{T}_{f}[r^{n}],\]
and then
\[\frac{g_{n}^{\prime}(r)}{nr^{n-1}}=-\frac{rf^{\prime}(r)}{f^{2}(r)}\left(\frac{1}{r^{n}}\int_{0}^{r}\frac{f(s)}{s}\mathbf{T}_{f}[s^{n}]ds\right)+\frac{1}{r^{n}}\mathbf{T}_{f}[r^{n}].\]
We have just proven that \(\lim_{r\to 0^{+}}\frac{n}{r^{n}}\int_{0}^{r}\frac{f(s)}{s}\mathbf{T}_{f}[s^{n}]ds=1\); then by (59) and Remark 29 we have that \(g_{n}^{\prime}(r)\sim nr^{n-1}\), \(r\to 0^{+}\). Hence, \(g_{n}(r)\) satisfies the asymptotic conditions (67). Finally, denote the right-hand side of (68) by \(\widetilde{T}_{2}h_{n}\) and note that \(\widehat{\mathbf{L}}_{1}[rh_{n}^{\prime}]=n^{2}rh_{n}^{\prime}\). Hence we have
\[\widehat{\mathbf{L}}_{\frac{1}{f}}\widetilde{T}_{2}h_{n}=\mathbf{D}_{\frac{1 }{f}}\mathbf{D}_{f}\widetilde{T}_{2}h_{n}=\mathbf{D}_{\frac{1}{f}}\mathbf{T}_ {f}[rh_{n}^{\prime}]\]
and then
\[\mathbf{D}_{f}\widehat{\mathbf{L}}_{\frac{1}{f}}\widetilde{T}_{2}[h_{n}]= \widehat{\mathbf{L}}_{f}\mathbf{T}_{f}[rh_{n}^{\prime}]=\mathbf{T}_{f} \widehat{\mathbf{L}}_{1}[rh_{n}^{\prime}]=n^{2}\mathbf{T}_{f}[rh_{n}^{\prime}].\]
Since \(\mathbf{D}_{f}\widetilde{T}_{2}h_{n}=\mathbf{T}_{f}[rh_{n}^{\prime}]\) we obtain that \(\mathbf{D}_{f}\left(\widehat{\mathbf{L}}_{\frac{1}{f}}\widetilde{T}_{2}h_{n}\right)=n^{2}\mathbf{D}_{f}\widetilde{T}_{2}h_{n}\) in the interval \((0,\varrho_{\Omega}]\). Then \(\widehat{\mathbf{L}}_{\frac{1}{f}}\widetilde{T}_{2}h_{n}=n^{2}\widetilde{T}_{2}h_{n}+\frac{c}{f}\) for some constant \(c\in\mathbb{C}\). Note that \(\widetilde{T}_{2}h_{n}(0)=0\) and
\[\mathbf{D}_{\frac{1}{f}}\mathbf{T}_{f}[rh_{n}^{\prime}]=rf\left(-\frac{f^{ \prime}}{f^{2}}\mathbf{T}_{f}[rh_{n}^{\prime}]+\frac{1}{f}\frac{d}{dr}\mathbf{ T}_{f}[rh_{n}^{\prime}]\right).\]
We have
\[\frac{d}{dr}\mathbf{T}_{f}[r^{n}]=nr^{n-1}+\frac{1}{2}\int_{0}^{1}(1-t)^{n}\left\{G_{r}^{f}(r,t)r^{n}+nG^{f}(r,t)r^{n-1}\right\}dt.\]
By the conditions (49), \(\frac{d}{dr}\mathbf{T}_{f}[r^{n}]\sim nr^{n-1}\), \(r\to 0^{+}\). Thus, by (59), \(\mathbf{D}_{\frac{1}{f}}\mathbf{T}_{f}[rh_{n}^{\prime}]\big{|}_{r=0}=0\), which implies \(\widehat{\mathbf{L}}_{\frac{1}{f}}\widetilde{T}_{2}h_{n}(0)=0\). Hence \(\widehat{\mathbf{L}}_{\frac{1}{f}}\widetilde{T}_{2}h_{n}=n^{2}\widetilde{T}_{2}h_{n}\). Since \(\widetilde{T}_{2}h_{n}\) satisfies (67), it must be the regular solution at \(r=0\). Thus, \(\mathbf{T}_{\frac{1}{f}}h_{n}=\widetilde{T}_{2}h_{n}\). The equality is fulfilled for all \(p\in\mathcal{P}[0,\varrho_{\Omega}]\) by the linearity of \(\mathbf{T}_{\frac{1}{f}}\) and \(\widetilde{T}_{2}\).
**Theorem 31**: _For all \(u\in C^{1}[0,\varrho_{\Omega})\), the following equality holds_
\[\mathbf{T}_{\frac{1}{f}}u(r)=\frac{1}{f(r)}\left(\int_{0}^{r}\frac{f(s)\mathbf{T}_{f}[su^{\prime}(s)]}{s}ds+u(0)\right),\quad r\in[0,\varrho_{\Omega}). \tag{69}\]
**Proof.** Let \(u\in C^{1}[0,\varrho_{\Omega})\) and take \(0<\rho<\varrho_{\Omega}\). Since \(u\in C^{1}[0,\rho]\), by the Weierstrass approximation theorem there exists a sequence \(\{p_{n}\}_{n=0}^{\infty}\subset{\cal P}[0,\varrho_{\Omega}]\) such that \(p_{n}^{(k)}\stackrel{{[0,\rho]}}{{\rightarrow}}u^{(k)}\) when \(n\rightarrow\infty\), \(k=0,1\). Since \(G^{f},G^{\frac{1}{f}}\in C\left([0,\varrho_{\Omega}]\times[0,1]\right)\), we have that \({\bf T}_{\frac{1}{f}}p_{n}\stackrel{{[0,\rho]}}{{\rightarrow}}{\bf T}_{\frac{1}{f}}u\) and \({\bf T}_{f}[sp_{n}^{\prime}]\stackrel{{[0,\rho]}}{{\rightarrow}}{\bf T}_{f}[su^{\prime}]\), from where \({\bf T}_{\frac{1}{f}}u(r)=\widetilde{T}_{2}u(r)\) for all \(r\in[0,\rho]\). Since \(\rho\) was arbitrary, we obtain the equality in the whole segment \([0,\varrho_{\Omega})\).
As a consequence of Theorem 31 we obtain the following transmutation relations.
**Proposition 32**: _The following equalities hold_
\[{\bf D}_{f}{\bf T}_{\frac{1}{f}}u={\bf T}_{f}{\bf D}_{1}u,\quad{\bf D}_{\frac{1}{f}}{\bf T}_{f}u={\bf T}_{\frac{1}{f}}{\bf D}_{1}u,\quad\forall u\in C^{1}[0,\varrho_{\Omega}). \tag{70}\]
**Remark 33**:
* _Equality (_69_) and the transmutation relations (_70_) are valid for_ \(u\in C^{1}[0,\varrho_{\Omega}]\)_._
* _If_ \(u\in C^{2}[0,\varrho_{\Omega})\)_, then_ \[\widehat{\bf L}_{f}{\bf T}_{f}u={\bf D}_{f}{\bf D}_{\frac{1}{f}}{\bf T}_{f}u= {\bf D}_{f}{\bf T}_{\frac{1}{f}}{\bf D}_{1}u={\bf T}_{f}{\bf D}_{1}^{2}u={\bf T }_{f}\widehat{\bf L}_{1}u.\]
### Transmutation operators for the radial Vekua equation
**Theorem 34**: _For all \(W\in C^{1}(\Omega;\mathbb{B})\), the following equalities hold_
\[r\left(\overline{\mathbf{\partial}}-\frac{\overline{ \mathbf{\partial}}f}{f}C_{\mathbb{B}}\right){\cal T}_{f}W = {\cal T}_{\frac{1}{f}}r\overline{\mathbf{\partial}}W, \tag{71}\] \[r\left(\mathbf{\partial}-\frac{\mathbf{\partial }f}{f}C_{\mathbb{B}}\right){\cal T}_{f}W = {\cal T}_{\frac{1}{f}}r\mathbf{\partial}W. \tag{72}\]
**Proof.** We prove the first equality (the proof of the second is analogous). Let \(W\in C^{1}(\Omega;\mathbb{B})\) and \(u=\mathop{\rm Sc}W\), \(v=\mathop{\rm Vec}W\). By (45) and (70) we have
\[r\left(\overline{\boldsymbol{\partial}}-\frac{\overline{\boldsymbol{\partial}}f}{f}C_{\mathbb{B}}\right){\cal T}_{f}W = \frac{e^{\mathbf{j}\theta}}{2}\left(r\frac{\partial}{\partial r}{\cal T}_{f}W+\mathbf{j}\frac{\partial}{\partial\theta}{\cal T}_{f}W-\frac{rf^{\prime}(r)}{f(r)}\overline{{\cal T}_{f}W}\right)\] \[= \frac{e^{\mathbf{j}\theta}}{2}\left({\bf D}_{\frac{1}{f}}{\bf T}_{f}u+\mathbf{j}\frac{\partial}{\partial\theta}{\bf T}_{f}u+\mathbf{j}\left(r\frac{\partial}{\partial r}{\bf T}_{\frac{1}{f}}v+\frac{rf^{\prime}(r)}{f(r)}{\bf T}_{\frac{1}{f}}v\right)-\frac{\partial}{\partial\theta}{\bf T}_{\frac{1}{f}}v\right)\] \[= \frac{e^{\mathbf{j}\theta}}{2}\left({\bf T}_{\frac{1}{f}}{\bf D}_{1}u+\mathbf{j}{\bf T}_{f}u_{\theta}+\mathbf{j}{\bf D}_{f}{\bf T}_{\frac{1}{f}}v-{\bf T}_{\frac{1}{f}}v_{\theta}\right)\] \[= \frac{e^{\mathbf{j}\theta}}{2}\left({\bf T}_{\frac{1}{f}}\left({\bf D}_{1}u-v_{\theta}\right)+\mathbf{j}{\bf T}_{f}\left({\bf D}_{1}v+u_{\theta}\right)\right)={\bf T}_{\frac{1}{f}}\mathop{\rm Sc}r\overline{\boldsymbol{\partial}}W+\mathbf{j}{\bf T}_{f}\mathop{\rm Vec}r\overline{\boldsymbol{\partial}}W\] \[= {\cal T}_{\frac{1}{f}}r\overline{\boldsymbol{\partial}}W.\]
**Proposition 35**: _The following equality is valid_
\[{\rm V}_{f}(\Omega;\mathbb{B})={\cal T}_{f}\left(\mathop{\rm Hol}(\Omega; \mathbb{B})\right). \tag{73}\]
**Proof.** By the transmutation property (71), it is clear that \({\cal T}_{f}W\in{\rm V}_{f}(\Omega;\mathbb{B})\) if \(W\in{\rm Hol}(\Omega;\mathbb{B})\). Conversely, if \(W\in{\rm V}_{f}(\Omega;\mathbb{B})\), by Remark 26 there exists \(V\in C^{2}(\Omega;\mathbb{B})\) such that \(W={\cal T}_{f}V\). By (71) we obtain
\[0=r\left(\overline{\boldsymbol{\partial}}-\frac{\overline{\boldsymbol{\partial }}f}{f}C_{\mathbb{B}}\right){\cal T}_{f}V={\cal T}_{\frac{1}{f}}r\overline{ \boldsymbol{\partial}}V.\]
Since \({\cal T}_{\frac{1}{f}}\) is a bijection, \(\overline{\boldsymbol{\partial}}V=0\) in \(\Omega\), that is, \(V\in{\rm Hol}(\Omega;\mathbb{B})\).
Denote the harmonic Bergman space by
\[b_{2}(\Omega)=\{h\in{\rm Har}(\Omega)\,|\,h\in L_{2}(\Omega)\},\]
and the Bergman space of solutions of \({\bf S}_{f}u=0\) as
\[{\rm Sol}_{2}^{{\bf S}_{f}}(\Omega):=\{u\in{\rm Sol}^{{\bf S}_{f}}(\Omega)\mid u\in L_{2}(\Omega)\}. \tag{74}\]
Since \({\rm Sol}^{{\bf S}_{f}}(\Omega)\) is closed in the Frechet space \(C(\Omega)\)[30, Remark 13], then \({\rm Sol}_{2}^{{\bf S}_{f}}(\Omega)\) is a Hilbert space with reproducing kernel [31, Prop. 2.3].
**Proposition 36**: _The operator \({\cal T}_{f}:{\cal A}^{2}(\Omega;\mathbb{B})\to{\cal A}_{f}^{2}(\Omega; \mathbb{B})\) is bounded and invertible with bounded inverse. The same properties are valid for \({\cal T}_{\frac{1}{f}}\)._
**Proof.** Since \({\bf T}_{f},{\bf T}_{\frac{1}{f}}\in{\cal B}\left(b_{2}(\Omega),L_{2}(\Omega)\right)\) with bounded inverses [30, Sec. 3], then \({\cal T}_{f}\in{\cal B}\left({\cal A}^{2}(\Omega;\mathbb{B}),L_{2}(\Omega;\mathbb{B})\right)\). If \(W\in{\cal A}_{f}^{2}(\Omega;\mathbb{B})\), by Theorem 10, \({\rm Sc}\,W\in{\rm Sol}_{2}^{{\bf S}_{f}}(\Omega)\) and \({\rm Vec}\,W\in{\rm Sol}_{2}^{{\bf S}_{\frac{1}{f}}}(\Omega)\). Thus, \({\bf T}_{f}^{-1}\,{\rm Sc}\,W,{\bf T}_{\frac{1}{f}}^{-1}\,{\rm Vec}\,W\in b_{2}(\Omega)\) and then \({\cal T}_{f}^{-1}W\in L_{2}(\Omega;\mathbb{B})\). By Proposition 35, \({\cal T}_{f}^{-1}W\in{\cal A}^{2}(\Omega;\mathbb{B})\). Hence \({\cal T}_{f}:{\cal A}^{2}(\Omega;\mathbb{B})\to{\cal A}_{f}^{2}(\Omega;\mathbb{B})\) is a bounded bijection. By Theorem 12, \({\cal A}_{f}^{2}(\Omega;\mathbb{B})\) is a complex Hilbert space, then it follows from the open mapping theorem [19, Cor. 5.11] that \({\cal T}_{f}^{-1}\in{\cal B}\left({\cal A}_{f}^{2}(\Omega;\mathbb{B}),{\cal A}^{2}(\Omega;\mathbb{B})\right)\).
## 5 Complete system of solutions for the radial Vekua equation
### The radial formal powers
The following result allows us to know the action of the transmutation operator \({\cal T}_{f}\) over the set of polynomials in the variable \(z\).
**Theorem 37** ([30], Th. 25): _For each \(n\in\mathbb{N}_{0}\), the function \({\bf T}_{f}[z^{n}]\) is given by_

\[{\bf T}_{f}[r^{n}e^{in\theta}]=\phi_{f}^{(n)}(r)r^{n}e^{in\theta}, \tag{75}\]

_where \(\phi_{f}^{(n)}(r)=\frac{y_{n}(r)}{r^{n+\frac{1}{2}}}\) and \(y_{n}(r)\) is the unique solution of the perturbed Bessel equation_

\[-y_{n}^{\prime\prime}(r)+\frac{\left(n-\frac{1}{2}\right)\left(n+\frac{1}{2}\right)}{r^{2}}y_{n}(r)+q_{f}(r)y_{n}(r)=0,\quad 0<r\leqslant\varrho_{\Omega}, \tag{76}\]

_satisfying the asymptotic conditions_

\[y_{n}(r)\sim r^{n+\frac{1}{2}},\quad y_{n}^{\prime}(r)\sim\left(n+\frac{1}{2}\right)r^{n-\frac{1}{2}},\quad r\to 0^{+}. \tag{77}\]
Since \(\mathbf{T}_{f}[r^{n}e^{in\theta}]=\mathbf{T}_{f}[r^{n}]e^{in\theta}\), we have the equality
\[\mathbf{T}_{f}[r^{n}]=\phi_{f}^{(n)}(r)r^{n},\quad\forall n\in\mathbb{N}_{0}. \tag{78}\]
**Remark 38**: _A numerical method for the construction of the functions \(\{\phi_{f}^{(n)}(r)\}_{n=0}^{\infty}\) based on the Spectral Parameter Power Series Method (SPPS) can be found in [13, Sec. 3] and [30, Sec. 6]._
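As an illustration (a rough alternative to SPPS, added here for concreteness), the functions \(\phi_{f}^{(n)}\) can also be approximated by integrating the perturbed Bessel equation (76) directly, with the asymptotic data (77) imposed at a small \(r_{0}>0\). A minimal sketch in Python for the Helmholtz example of Example 27, where \(\phi_{f}^{(0)}(r)=J_{0}(\kappa r)\) serves as a check (the values of \(\kappa\), \(r_{0}\), and the tolerances are arbitrary choices):

```python
# Approximate phi_f^{(n)}(r) = y_n(r) / r^(n+1/2) by integrating (76) with the
# asymptotic initial data (77) imposed at a small r0.  Here q_f(r) = -kappa^2
# (Example 27), for which phi_f^{(0)}(r) = J_0(kappa*r).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import j0

kappa, n, r0, R = 0.7, 0, 1e-3, 1.0
q_f = lambda r: -kappa**2

def rhs(r, Y):
    y, dy = Y
    # (76):  y'' = [ (n - 1/2)(n + 1/2) / r^2 + q_f(r) ] y
    return [dy, ((n - 0.5) * (n + 0.5) / r**2 + q_f(r)) * y]

y0 = [r0 ** (n + 0.5), (n + 0.5) * r0 ** (n - 0.5)]  # from (77)
sol = solve_ivp(rhs, [r0, R], y0, dense_output=True, rtol=1e-9, atol=1e-12)

r = np.linspace(0.1, 1.0, 5)
phi = sol.sol(r)[0] / r ** (n + 0.5)
print(np.c_[r, phi, j0(kappa * r)])  # for n = 0 the last two columns agree
```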
**Definition 39**: _Let \(n\in\mathbb{N}_{0}\). The_ **basic Bicomplex radial formal powers of degree \(n\)** _(associated to \(f\)), are the functions given by_
\[\mathcal{Z}_{f}^{(n)}(1;z) := \mathcal{T}_{f}[\widehat{z}^{n}], \tag{79}\] \[\mathcal{Z}_{f}^{(n)}(\mathbf{j};z) := \mathcal{T}_{f}[\mathbf{j}\widehat{z}^{n}]. \tag{80}\]
_The family of all basic radial formal powers is given by \(\{\mathcal{Z}_{f}^{(n)}(1;z),\mathcal{Z}_{f}^{(n)}(\mathbf{j},z)\}_{n=0}^{\infty}\)._
By Theorem 37 and formulas (78) and (56), the basic Bicomplex radial formal powers can be written as
\[\mathcal{Z}_{f}^{(0)}(1;z) = f(r), \tag{81}\] \[\mathcal{Z}_{f}^{(0)}(\mathbf{j};z) = \frac{1}{f(r)}\mathbf{j},\] (82) \[\mathcal{Z}_{f}^{(n)}(1;z) = r^{n}\left(\phi_{f}^{(n)}(r)\cos(n\theta)+\mathbf{j}\phi_{\frac{1}{f}}^{(n)}(r)\sin(n\theta)\right),\quad n\geqslant 1,\] (83) \[\mathcal{Z}_{f}^{(n)}(\mathbf{j};z) = r^{n}\left(-\phi_{f}^{(n)}(r)\sin(n\theta)+\mathbf{j}\phi_{\frac{1}{f}}^{(n)}(r)\cos(n\theta)\right),\quad n\geqslant 1. \tag{84}\]
**Remark 40**: _Let \(P(z)=\sum_{n=0}^{M}A_{n}\widehat{z}^{n}\) be a Bicomplex polynomial. By (55) we have_

\[\mathcal{T}_{f}P(z) =\sum_{n=0}^{M}\mathcal{T}_{f}\left[A_{n}\widehat{z}^{n}\right]=\sum_{n=0}^{M}\left(\mathrm{Sc}(A_{n})\mathcal{T}_{f}[\widehat{z}^{n}]+\mathrm{Vec}(A_{n})\mathbf{j}\mathcal{T}_{\frac{1}{f}}[\widehat{z}^{n}]\right)\] \[=\sum_{n=0}^{M}\left(\mathrm{Sc}(A_{n})\mathcal{T}_{f}[\widehat{z}^{n}]+\mathrm{Vec}(A_{n})\mathcal{T}_{f}[\mathbf{j}\widehat{z}^{n}]\right)\] \[=\sum_{n=0}^{M}\left(\mathrm{Sc}(A_{n})\mathcal{Z}_{f}^{(n)}(1;z)+\mathrm{Vec}(A_{n})\mathcal{Z}_{f}^{(n)}(\mathbf{j};z)\right),\]

_where in the second equality we use (56)._
Motivated by this result and following [8], we introduce the following definition.
**Definition 41**: _Let \(n\in\mathbb{N}_{0}\) and \(A\in\mathbb{B}\). The Bicomplex radial formal power of degree \(n\) and coefficient \(A\) is defined by_
\[\mathcal{Z}_{f}^{(n)}(A;z):=\operatorname{Sc}(A)\mathcal{Z}_{f}^{(n)}(1;z)+ \operatorname{Vec}(A)\mathcal{Z}_{f}^{(n)}(\mathbf{j};z). \tag{85}\]
_A radial formal polynomial of degree \(N\in\mathbb{N}_{0}\) is a sum of the form_
\[S_{N}(z):=\sum_{n=0}^{N}\mathcal{Z}_{f}^{(n)}(A_{n};z),\quad\text{ with }\ \{A_{n}\}_{n=0}^{N}\subset\mathbb{B}\ \text{ and }\ A_{N}\neq 0. \tag{86}\]
_The set of all radial formal polynomials of degree \(N\) is denoted by \(\mathscr{S}_{f}^{N}(\Omega;\mathbb{B})\). We denote \(\mathscr{S}_{f}(\Omega;\mathbb{B}):=\bigcup_{n=0}^{\infty}\mathscr{S}_{f}^{N} (\Omega;\mathbb{B})\)._
In particular, the basic Bicomplex formal powers correspond to the coefficients \(1\) and \(\mathbf{j}\).
**Remark 42**: _Note that for \(\alpha,\beta\in\mathbb{C}\) and \(A\in\mathbb{B}\) we have_
\[\alpha\mathcal{Z}_{f}^{(n)}(A;z)=\mathcal{Z}_{f}^{(n)}(\alpha A;z),\quad \mathcal{Z}_{f}^{(n)}(\alpha+\mathbf{j}\beta;z)=\alpha\mathcal{Z}_{f}^{(n)}(1 ;z)+\beta\mathcal{Z}_{f}^{(n)}(\mathbf{j};z).\]
_Hence \(\mathscr{S}_{f}^{N}(\Omega;\mathbb{B})=\operatorname{Span}_{\mathbb{C}}\{\mathcal{Z}_{f}^{(n)}(1;z),\mathcal{Z}_{f}^{(n)}(\mathbf{j};z)\}_{n=0}^{N}\) for \(N\in\mathbb{N}_{0}\) and \(\mathscr{S}_{f}(\Omega;\mathbb{B})=\operatorname{Span}_{\mathbb{C}}\{\mathcal{Z}_{f}^{(n)}(1;z),\mathcal{Z}_{f}^{(n)}(\mathbf{j};z)\}_{n=0}^{\infty}\). By Remark 40, \(\mathcal{T}_{f}\left[\operatorname{Span}_{\mathbb{C}}\{\widehat{z}^{n},\mathbf{j}\widehat{z}^{n}\}_{n=0}^{N}\right]=\mathscr{S}_{f}^{N}(\Omega;\mathbb{B})\)._
### Completeness in the space \(\mathbf{V}_{f}(\Omega;\mathbb{B})\)
Given \(X\subset\mathbb{C}\), the star-hull (with respect to \(z=0\)) of \(X\) is the set \(\operatorname{Star}(X):=\bigcup_{z\in X}[0,z]\), that is, the smallest star-shaped domain with respect to \(z=0\) containing \(X\). If \(K\subset\mathbb{C}\) is compact, then \(\operatorname{Star}(K)\) is also compact [30, Lemma 7].
**Lemma 43**: _For any compact \(K\subset\Omega\) and \(W\in C(\Omega;\mathbb{B})\), the following inequality holds_
\[\max_{z\in K}|\mathcal{T}_{f}W(z)|_{\mathbb{B}}\leqslant M_{1}\max_{z\in \operatorname{Star}(K)}|W(z)|_{\mathbb{B}}, \tag{87}\]
_where \(M_{1}=2\max\left\{1+\frac{1}{2}\|G^{f}\|_{C([0,\varrho_{\Omega}]\times[0,1])},1+\frac{1}{2}\|G^{\frac{1}{f}}\|_{C([0,\varrho_{\Omega}]\times[0,1])}\right\}\)._
**Proof.** Let \(K\subset\Omega\) be compact. The operator \(\mathbf{T}_{f}\) satisfies the following property [30, Prop. 8]
\[\max_{z\in K}|\mathbf{T}_{f}u(z)|\leqslant\left(1+\frac{1}{2}\|G^{f}\|_{C([0, \varrho_{\Omega}]\times[0,1])}\right)\max_{z\in\operatorname{Star}(K)}|u(z)|, \quad\forall u\in C(\Omega).\]
Thus, (87) follows from the fact that \(|\mathcal{T}_{f}W(z)|_{\mathbb{B}}\leqslant|\mathbf{T}_{f}\operatorname{Sc}W( z)|+|\mathbf{T}_{\frac{1}{f}}\operatorname{Vec}W(z)|\).
**Lemma 44** ([31], Lemma 5.10): _If \(X\subset\mathbb{C}\) is a bounded set star-shaped with respect to \(z=0\), then \(\mathbb{C}\setminus X\) is connected. In particular, if \(X=\Omega\) is a domain, then \(\Omega\) is simply connected._
**Lemma 45** (Runge's property): _Let \(V\in\operatorname{Hol}(\Omega;\mathbb{B})\) and \(K\subset\Omega\) be compact. Given \(\varepsilon>0\), there exists a Bicomplex polynomial \(P_{N}(z)=\sum_{n=0}^{N}A_{n}\widehat{z}^{n}\), with \(A_{0},\cdots,A_{N}\in\mathbb{B}\), such that_
\[\max_{z\in K}|V(z)-P_{N}(z)|_{\mathbb{B}}<\varepsilon. \tag{88}\]
**Proof.** Since \(V\in\mathrm{Hol}(\Omega;\mathbb{B})\), \(V^{+}\) is anti-holomorphic and \(V^{-}\) is holomorphic. Hence, \(\left(V^{+}\right)^{*}\) is holomorphic in \(\Omega\). Since \(\Omega\) is bounded and star-shaped with respect to \(z=0\), by Lemma 44 and the complex Runge theorem [14, Cor. 1.15], there exist polynomials \(p_{1}(z)=\sum_{n=0}^{N_{1}}a_{n}z^{n}\) and \(p_{2}(z)=\sum_{n=0}^{N_{2}}b_{n}z^{n}\) such that
\[\max_{z\in K}|\left(V^{+}(z)\right)^{*}-p_{1}(z)|<\frac{\varepsilon}{2\sqrt{2}},\quad\max_{z\in K}|V^{-}(z)-p_{2}(z)|<\frac{\varepsilon}{2\sqrt{2}}.\]
Take \(N=\max\{N_{1},N_{2}\}\), define
\[\tilde{a}_{n}:=\begin{cases}a_{n},&\text{ if }0\leqslant n\leqslant N_{1},\\ 0,&\text{ if }N_{1}<n\leqslant N,\end{cases}\qquad\tilde{b}_{n}:=\begin{cases}b_{n},& \text{ if }0\leqslant n\leqslant N_{2},\\ 0,&\text{ if }N_{2}<n\leqslant N,\end{cases}\]
and \(\tilde{p_{1}}(z):=\sum_{n=0}^{N}\tilde{a}_{n}z^{n}\), \(\tilde{p_{2}}(z):=\sum_{n=0}^{N}\tilde{b}_{n}z^{n}\). Finally, define \(P_{N}(z)=\mathbf{p}^{+}(\tilde{p_{1}}(z))^{*}+\mathbf{p}^{-}\tilde{p_{2}}(z)\). By (16), \(P_{N}\) is a Bicomplex polynomial. Thus, for \(z\in K\) we have
\[|V(z)-P_{N}(z)|_{\mathbb{B}} \leqslant\frac{1}{\sqrt{2}}\left(|V^{+}(z)-P_{N}^{+}(z)|+|V^{-}(z)-P_{N}^{-}(z)|\right)\] \[=\frac{1}{\sqrt{2}}\left(|V^{+}(z)-(\tilde{p}_{1}(z))^{*}|+|V^{-}(z)-\tilde{p}_{2}(z)|\right)\leqslant\frac{\varepsilon}{2},\]
where in the first inequality we use (6). Since \(z\in K\) was arbitrary, we conclude that \(\max_{z\in K}|V(z)-P_{N}(z)|<\varepsilon\).
**Theorem 46**: _The Bicomplex radial formal powers are a complete system of solutions for the radial Vekua equation, that is, for any solution \(W\in\mathrm{V}_{f}(\Omega;\mathbb{B})\) and any compact \(K\subset\Omega\), there exists a sequence \(\{S_{n}(z)\}\subset\mathscr{S}_{f}(\Omega;\mathbb{B})\) such that \(S_{n}\xrightarrow{K}W\). Furthermore, if \(\Omega=B_{\varrho_{\Omega}}^{\mathbb{C}}(0)\), there exist constants \(\{A_{n}\}_{n=0}^{\infty}\subset\mathbb{B}\) such that_
\[W(z)=\sum_{n=0}^{\infty}\mathcal{Z}_{f}^{(n)}(A_{n};z),\quad z\in B_{\varrho_{\Omega}}^{\mathbb{C}}(0), \tag{89}\]
_and the series converges absolutely and uniformly on compact subsets of \(B_{\varrho_{\Omega}}^{\mathbb{C}}(0)\)._
**Proof.** Let \(K\subset\Omega\) be compact and \(W\in\mathrm{V}_{f}(\Omega;\mathbb{B})\). By Proposition 35, \(V=\mathcal{T}_{f}^{-1}W\in\mathrm{Hol}(\Omega;\mathbb{B})\). For each \(N\in\mathbb{N}\), since \(\mathrm{Star}(K)\) is compact, by Lemma 45 there exists a Bicomplex polynomial \(P_{N}(z)=\sum_{n=0}^{M_{N}}B_{n}\hat{z}^{n}\) satisfying \(\max_{z\in\mathrm{Star}(K)}|V(z)-P_{N}(z)|<\frac{1}{M_{1}N}\), where \(M_{1}\) is defined as in Lemma 43. Set \(S_{N}(z)=\mathcal{T}_{f}P_{N}(z)\). By Remark 42, \(S_{N}\in\mathscr{S}_{f}(\Omega;\mathbb{B})\). Given \(z\in K\), by Lemma 43 we have
\[|W(z)-S_{N}(z)|_{\mathbb{B}}=|\mathcal{T}_{f}(V-P_{N})(z)|_{\mathbb{B}} \leqslant M_{1}\max_{z\in\mathrm{Star}(K)}|V(z)-P_{N}(z)|_{\mathbb{B}}\leqslant \frac{1}{N}.\]
Since \(z\in K\) was arbitrary, we conclude that \(\max_{z\in K}|W(z)-S_{N}(z)|_{\mathbb{B}}\leqslant\frac{1}{N}\). Thus, the sequence \(\{S_{N}(z)\}_{N=1}^{\infty}\) converges uniformly to \(W\) on \(K\).
Now, suppose that \(\Omega=B_{\varrho_{\Omega}}^{\mathbb{C}}(0)\). By Proposition 6(ii),
\[V(z)=\sum_{n=0}^{\infty}A_{n}\widehat{z}^{n},\quad\text{ with }A_{n}=\frac{\boldsymbol{\partial}^{n}V(0)}{n!},\;n\in\mathbb{N}_{0}, \tag{90}\]
and the series converges in the topology of \(C(\Omega;\mathbb{B})\). By the linearity and the continuity in \(C(\Omega;\mathbb{B})\) (Proposition 25) of \(\mathcal{T}_{f}\) we have
\[W(z)=\mathcal{T}_{f}\left[\sum_{n=0}^{\infty}A_{n}\widehat{z}^{n}\right]=\sum _{n=0}^{\infty}\mathcal{T}_{f}\left[A_{n}\widehat{z}^{n}\right]=\sum_{n=0}^{ \infty}\mathcal{Z}_{f}^{(n)}(A_{n};z)\]
and by the continuity of \(\mathcal{T}_{f}\), the series converges in the topology of \(C(\Omega;\mathbb{B})\). For the absolute convergence, consider \(0<r<\varrho_{\Omega}\) and \(z\in\overline{B_{r}^{\mathbb{C}}(0)}\). By Lemma 43 we obtain
\[|\mathcal{T}_{f}[A_{n}\widehat{z}^{n}]|_{\mathbb{B}}\leqslant M_{1}\max_{z\in \operatorname{Star}(\overline{B_{r}^{\mathbb{C}}(0)})}|A_{n}\widehat{z}^{n}|_{ \mathbb{B}}\leqslant M_{1}\sqrt{2}|A_{n}|\max_{z\in\overline{B_{r}^{\mathbb{C }}(0)}}|\widehat{z}^{n}|=M_{1}\sqrt{2}|A_{n}|r^{n}\]
(here, we use the fact that \(\operatorname{Star}(X)=X\) if \(X\) is star-shaped with respect to \(z=0\)). By Proposition 6(ii), series (90) converges absolutely in \(B_{r}^{\mathbb{C}}(0)\), hence \(\sum_{n=0}^{\infty}|\mathcal{T}_{f}[A_{n}\widehat{z}^{n}]|_{\mathbb{B}}\) is dominated by the convergent series \(M_{1}\sqrt{2}\sum_{n=0}^{\infty}|A_{n}|r^{n}\). Therefore, (89) converges absolutely in \(\overline{B_{r}^{\mathbb{C}}(0)}\).
### An orthogonal basis for the Bergman space on a disk
**Theorem 47**: _Suppose that \(\Omega=B_{\varrho_{\Omega}}^{\mathbb{C}}(0)\). The basic Bicomplex radial formal powers \(\{\mathcal{Z}_{f}^{(n)}(1;z),\mathcal{Z}_{f}^{(n)}(\mathbf{j};z)\}_{n=0}^{\infty}\) satisfy the following orthogonality relations for all \(m,n\in\mathbb{N}\):_
\[\left\langle\mathcal{Z}_{f}^{(n)}(1;\cdot),\mathcal{Z}_{f}^{(m)}( \mathbf{j};\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})} =0, \tag{91}\] \[\left\langle\mathcal{Z}_{f}^{(n)}(\Lambda;\cdot),\mathcal{Z}_{f} ^{(m)}(\Lambda;\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})} =\pi\left(\left\|\phi_{f}^{(n)}\right\|_{L_{2}(0,\varrho_{\Omega};r ^{2n+1}dr)}^{2}+\left\|\phi_{\frac{1}{f}}^{(n)}\right\|_{L_{2}(0,\varrho_{\Omega}; r^{2n+1}dr)}^{2}\right)\delta_{n,m},\] (92) \[\left\langle\mathcal{Z}_{f}^{(0)}(1;\cdot),\mathcal{Z}_{f}^{(m)} (\Lambda;\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})} =\left\langle\mathcal{Z}_{f}^{(0)}(\mathbf{j};\cdot),\mathcal{Z }_{f}^{(m)}(\Lambda;\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})}=0 \tag{93}\]
_for \(\Lambda\in\{1,\mathbf{j}\}\). Furthermore, the basic Bicomplex radial formal powers are an orthogonal basis for the Bergman space \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\)._
**Proof.** We prove (91) (the proof of (92) and (93) is analogous). Denote
\(I_{n,m}=\left\langle\mathcal{Z}_{f}^{(n)}(1;\cdot),\mathcal{Z}_{f}^{(m)}({\bf j}; \cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})}\). We have
\[I_{n,m} =\iint\limits_{B_{\varrho_{\Omega}}^{\mathbb{C}}(0)}\Big{\{}\operatorname{Sc} \mathcal{Z}_{f}^{(n)}(1;z)\left(\operatorname{Sc}\mathcal{Z}_{f}^{(m)}({\bf j}; z)\right)^{*}+\operatorname{Vec}\mathcal{Z}_{f}^{(n)}(1;z)\left(\operatorname{Vec} \mathcal{Z}_{f}^{(m)}({\bf j};z)\right)^{*}\Big{\}}dA_{z}\] \[=\int_{0}^{\varrho_{\Omega}}r\int_{0}^{2\pi}\Bigg{\{}-r^{n+m}\phi_{f}^{(n)}(r) \left(\phi_{f}^{(m)}(r)\right)^{*}\cos(n\theta)\sin(m\theta)\] \[\qquad\qquad+r^{n+m}\phi_{\frac{1}{f}}^{(n)}(r)\left(\phi_{ \frac{1}{f}}^{(m)}(r)\right)^{*}\sin(n\theta)\cos(m\theta)\Bigg{\}}d\theta dr\] \[=-\int_{0}^{\varrho_{\Omega}}r^{1+n+m}\phi_{f}^{(n)}(r)\left(\phi_{f}^{(m)}(r) \right)^{*}dr\int_{0}^{2\pi}\cos(n\theta)\sin(m\theta)d\theta\] \[\qquad+\int_{0}^{\varrho_{\Omega}}r^{1+n+m}\phi_{\frac{1}{f}}^{(n)}(r)\left(\phi _{\frac{1}{f}}^{(m)}(r)\right)^{*}dr\int_{0}^{2\pi}\sin(n\theta)\cos(m\theta) d\theta=0.\]
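For completeness, we record the diagonal computation behind the constant in (92) (case \(\Lambda=1\), \(n=m\geqslant 1\)), which also explains the weight \(r^{2n+1}dr\):

\[\left\langle\mathcal{Z}_{f}^{(n)}(1;\cdot),\mathcal{Z}_{f}^{(n)}(1;\cdot)\right\rangle_{L_{2}(\Omega;\mathbb{B})}=\int_{0}^{\varrho_{\Omega}}\!\int_{0}^{2\pi}r^{2n+1}\Big\{\phi_{f}^{(n)}(r)\big(\phi_{f}^{(n)}(r)\big)^{*}\cos^{2}(n\theta)+\phi_{\frac{1}{f}}^{(n)}(r)\big(\phi_{\frac{1}{f}}^{(n)}(r)\big)^{*}\sin^{2}(n\theta)\Big\}\,d\theta\,dr=\pi\left(\left\|\phi_{f}^{(n)}\right\|_{L_{2}(0,\varrho_{\Omega};r^{2n+1}dr)}^{2}+\left\|\phi_{\frac{1}{f}}^{(n)}\right\|_{L_{2}(0,\varrho_{\Omega};r^{2n+1}dr)}^{2}\right),\]

since \(\int_{0}^{2\pi}\cos^{2}(n\theta)\,d\theta=\int_{0}^{2\pi}\sin^{2}(n\theta)\,d\theta=\pi\) for \(n\geqslant 1\).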
Relations (91)-(93) imply that \(\{\mathcal{Z}_{f}^{(n)}(1;z),\mathcal{Z}_{f}^{(n)}({\bf j};z)\}_{n=0}^{\infty}\) is an orthogonal system. Take \(W\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\). By Proposition 36, \(V=\mathcal{T}_{f}^{-1}W\in\mathcal{A}^{2}(\Omega;\mathbb{B})\), and by Proposition 7 we can write
\[V(z)=\sum_{n=0}^{\infty}\left[a_{n}\widehat{z}^{n}+b_{n}{\bf j}\widehat{z}^{n}\right]\]
for some coefficients \(\{a_{n},b_{n}\}_{n=0}^{\infty}\subset\mathbb{C}\). This series converges in \(L_{2}(\Omega;\mathbb{B})\). Since
\(\mathcal{T}_{f}\in\mathcal{B}\left(\mathcal{A}^{2}(\Omega;\mathbb{B}), \mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\right)\), we obtain
\[W(z)=\mathcal{T}_{f}V(z)=\sum_{n=0}^{\infty}\left(a_{n}\mathcal{T}_{f}[ \widehat{z}^{n}]+b_{n}\mathcal{T}_{f}[{\bf j}\widehat{z}^{n}]\right)=\sum_{n=0 }^{\infty}\left(a_{n}\mathcal{Z}_{f}^{(n)}(1;z)+b_{n}\mathcal{Z}_{f}^{(n)}({ \bf j};z)\right).\]
Hence, \(W\) can be expanded into a Fourier series of Bicomplex radial formal powers.
When \(\Omega\) is just bounded and star-shaped (with respect to \(z=0\)), the following result establishes some conditions for the completeness of formal powers.
**Theorem 48**: _If \(\Omega=\operatorname{Int}(\overline{\Omega})\), then \(\mathscr{S}_{f}(\Omega;\mathbb{B})\) is a complete system in \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\)._
**Proof.** If \(\Omega\) is star-shaped with respect to \(z=0\), so is its closure. Indeed, given \(z\in\overline{\Omega}\), take a sequence \(\{z_{n}\}\subset\Omega\) such that \(z_{n}\to z\). Then for any \(t\in[0,1]\), \(tz_{n}\in\Omega\) and \(tz_{n}\to tz\). Hence \(tz\in\overline{\Omega}\) for all \(t\in[0,1]\). Since \(\overline{\Omega}\) is bounded, by Lemma 44, \(\mathbb{C}\setminus\overline{\Omega}\) is a domain. This condition together with the hypothesis \(\Omega=\operatorname{Int}(\overline{\Omega})\) and Lemma 44 implies that \(\Omega\) is a Caratheodory domain [16, Ch. 18, Prop. 1.9]. Hence, \(\{z^{n}\}_{n=0}^{\infty}\) is a complete system in the complex analytic Bergman space \(\mathcal{A}^{2}(\Omega)\), and in consequence, \(\{(z^{*})^{n}\}_{n=0}^{\infty}\) is complete in the anti-analytic Bergman space \(\overline{\mathcal{A}}^{2}(\Omega)\)[16, Ch. 18, Th. 1.11]. Applying a procedure similar to that of the proof of Lemma 45 (replacing the maximum norm by the \(L_{2}\)-norm), we obtain that \(\{\widehat{z}^{n},{\bf j}\widehat{z}^{n}\}_{n=0}^{\infty}\) is complete in \(\mathcal{A}^{2}(\Omega;\mathbb{B})\). Let \(W\in\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\). By Proposition 36, \(V=\mathcal{T}_{f}^{-1}W\in\mathcal{A}^{2}(\Omega;\mathbb{B})\). Thus, given \(\varepsilon>0\), there exists a Bicomplex polynomial
\(P_{N}(z)=\sum_{n=0}^{N}A_{n}\widehat{z}^{n}\) such that \(\|V-P_{N}\|_{L_{2}(\Omega;\mathbb{B})}<\frac{\varepsilon}{2M_{2}}\), where \(M_{2}=\|\mathcal{T}_{f}\|_{\mathcal{B}\left(\mathcal{A}^{2}(\Omega;\mathbb{B}),\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\right)}\). Take \(S_{N}(z)=\mathcal{T}_{f}P_{N}(z)\in\mathscr{S}_{f}^{N}(\Omega;\mathbb{B})\). Hence
\[\left\|W-S_{N}\right\|_{L_{2}(\Omega;\mathbb{B})}=\left\|\mathcal{T}_{f}\left( V-P_{N}\right)\right\|_{L_{2}(\Omega;\mathbb{B})}\leqslant M_{2}\left\|V-P_{N} \right\|_{L_{2}(\Omega;\mathbb{B})}\leqslant\frac{\varepsilon}{2}.\]
Therefore, \(\mathscr{S}_{f}(\Omega;\mathbb{B})\) is complete in \(\mathcal{A}_{f}^{2}(\Omega;\mathbb{B})\).
**Proposition 49**: _Suppose that \(\Omega=B_{\varrho_{\Omega}}^{\mathbb{C}}(0)\). Define the constants \(M_{0}^{1}\), \(M_{0}^{2}\) and \(\{M_{n}\}_{n=1}^{\infty}\) by_
\[M_{0}^{1} := \sqrt{2\pi}\left\|f\right\|_{L_{2}(0,\varrho_{\Omega};r\,dr)}, \tag{94}\] \[M_{0}^{2} := \sqrt{2\pi}\left\|\frac{1}{f}\right\|_{L_{2}(0,\varrho_{\Omega};r\,dr)},\] (95) \[M_{n} := \left(\pi\left(\left\|\phi_{f}^{(n)}\right\|_{L_{2}(0,\varrho_{ \Omega};r^{2n+1}dr)}^{2}+\left\|\phi_{\frac{1}{f}}^{(n)}\right\|_{L_{2}(0, \varrho_{\Omega};r^{2n+1}dr)}^{2}\right)\right)^{\frac{1}{2}},\quad n\geqslant 1. \tag{96}\]
_Then the Bergman kernel \(\mathscr{K}_{\Omega}^{f}(A;z,\zeta)\) can be written as_
\[\mathscr{K}_{\Omega}^{f}(A;z,\zeta) =\mathrm{Sc}(A)\frac{f^{*}(\zeta)f(z)}{(M_{0}^{1})^{2}}+\frac{ \mathbf{j}\operatorname{Vec}(A)}{(M_{0}^{2})^{2}f^{*}(\zeta)f(z)}\] \[\quad+\sum_{n=1}^{\infty}\frac{1}{M_{n}^{2}}\left(\left\langle A, \mathcal{Z}_{f}^{(n)}(1;\zeta)\right\rangle_{\mathbb{B}}\mathcal{Z}_{f}^{(n)}( 1;z)+\left\langle A,\mathcal{Z}_{f}^{(n)}(\mathbf{j};\zeta)\right\rangle_{ \mathbb{B}}\mathcal{Z}_{f}^{(n)}(\mathbf{j};z)\right).\]
_The series converges with respect to \(z\) in the \(L_{2}(\Omega;\mathbb{B})\)-norm, and uniformly on compact subsets of \(\Omega\)._
**Proof.** Since \(M_{0}^{1}=\left\|\mathcal{Z}_{f}^{(0)}(1;z)\right\|_{L_{2}(\Omega;\mathbb{B})}\), \(M_{0}^{2}=\left\|\mathcal{Z}_{f}^{(0)}(\mathbf{j};z)\right\|_{L_{2}(\Omega; \mathbb{B})}\) and \(M_{n}=\left\|\mathcal{Z}_{f}^{(n)}(\Lambda;z)\right\|_{L_{2}(\Omega;\mathbb{B} )}\), \(\Lambda\in\{1,\mathbf{j}\}\), \(n\geqslant 1\) (by (81), (82), and (92)), the result follows from Remark 17, Theorem 47, and the fact that
\[\langle A,f(\zeta)\rangle_{\mathbb{B}}=\mathrm{Sc}(A)f^{*}(\zeta)\quad\text{ and}\ \ \left\langle A,\frac{\mathbf{j}}{f(\zeta)}\right\rangle_{\mathbb{B}}=\frac{ \operatorname{Vec}(A)}{f^{*}(\zeta)}.\]
**Example 50**: _Consider \(f(r)=J_{0}(\kappa r)\), \(q_{f}=-\kappa^{2}\), \(q_{\frac{1}{f}}=3\kappa^{2}\), and \(\Omega=\mathbb{D}\), as in Example 27. By Theorem 37, \(\phi_{f}^{(n)}=\frac{y_{n}^{f}}{r^{n+\frac{1}{2}}}\) and \(\phi_{\frac{1}{f}}^{(n)}=\frac{y_{n}^{\frac{1}{f}}}{r^{n+\frac{1}{2}}}\), where \(y_{n}^{f}\) and \(y_{n}^{\frac{1}{f}}\) satisfy the perturbed Bessel equation_
\[-u_{n}^{\prime\prime}+\frac{\left(n-\frac{1}{2}\right)\left(n+\frac{1}{2} \right)}{r^{2}}u_{n}=\lambda u_{n},\quad 0<r<1, \tag{97}\]
_with \(\lambda=\kappa^{2}\) and \(\lambda=-3\kappa^{2}\), respectively. According to [13, Example 2.8], the regular solution \(u_{n}(\lambda,r)\) of (97) that satisfies the asymptotic relations \(u_{n}(\lambda,r)\sim r^{n+\frac{1}{2}}\), \(u_{n}^{\prime}(\lambda,r)\sim\left(n+\frac{1}{2}\right)r^{n-\frac{1}{2}}\), \(r\to 0^{+}\), is given by_
\[u_{n}(\lambda,r)=\Gamma\left(n+1\right)2^{n}\lambda^{-\frac{n}{2}}\sqrt{r}J_{n} (\sqrt{\lambda}r). \tag{98}\]
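As a quick check of (98), recall the small-argument behaviour \(J_{n}(x)\sim\frac{1}{n!}\left(\frac{x}{2}\right)^{n}\) as \(x\to 0^{+}\); hence

\[u_{n}(\lambda,r)=n!\,2^{n}\lambda^{-\frac{n}{2}}\sqrt{r}\,J_{n}(\sqrt{\lambda}\,r)\sim n!\,2^{n}\lambda^{-\frac{n}{2}}\sqrt{r}\,\frac{1}{n!}\left(\frac{\sqrt{\lambda}\,r}{2}\right)^{n}=r^{\,n+\frac{1}{2}},\quad r\to 0^{+},\]

in agreement with the stated asymptotic relations.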
_Hence_
\[\phi^{(n)}_{f}(r)=\frac{n!2^{n}}{\kappa^{n}r^{n}}J_{n}(\kappa r),\quad\phi^{(n)}_{\frac{1}{f}}(r)=\frac{n!2^{n}}{3^{\frac{n}{2}}\kappa^{n}r^{n}}I_{n}(\sqrt{3}\kappa r),\]
_where \(I_{n}\) stands for the modified Bessel function of the first kind. Thus, the basic radial formal powers are given by_
\[\mathcal{Z}^{(0)}_{f}(1;z) =J_{0}(\kappa r),\quad\mathcal{Z}^{(0)}_{f}(\mathbf{j};z)=\frac{ \mathbf{j}}{J_{0}(\kappa r)},\] \[\mathcal{Z}^{(n)}_{f}(1;z) =\frac{n!2^{n}}{\kappa^{n}}\left(J_{n}(\kappa r)\cos(n\theta)+ \mathbf{j}3^{-\frac{n}{2}}I_{n}(\sqrt{3}\kappa r)\sin(n\theta)\right),\] \[\mathcal{Z}^{(n)}_{f}(\mathbf{j};z) =\frac{n!2^{n}}{\kappa^{n}}\left(-J_{n}(\kappa r)\sin(n\theta)+ \mathbf{j}3^{-\frac{n}{2}}I_{n}(\sqrt{3}\kappa r)\cos(n\theta)\right),\quad n\geqslant 1,\]
_and they are an orthogonal basis for the Bergman space associated to the Vekua equation \(\overline{\boldsymbol{\partial}}W+\kappa\overline{W}=0\) in \(\mathbb{D}\)._
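The orthogonality relations of Theorem 47 can be verified numerically in this example. The following sketch (illustrative only: the quadrature scheme and all names are our own choices; it assumes SciPy's `jv` and `iv` for the Bessel functions \(J_{n}\) and \(I_{n}\), and all quantities are real since \(f\) is real-valued here) evaluates the \(L_{2}(\mathbb{D};\mathbb{B})\) inner products of relation (91):

```python
import numpy as np
from math import factorial
from scipy.special import iv, jv  # modified and ordinary Bessel functions I_n, J_n

kappa = 1.0
r = np.linspace(1e-6, 1.0, 400)           # radial quadrature nodes on (0, 1]
th = np.linspace(0.0, 2.0 * np.pi, 401)   # angular quadrature nodes on [0, 2*pi]
R, TH = np.meshgrid(r, th, indexing="ij")

def Z(n, coeff):
    """(Sc, Vec) parts of Z_f^(n)(coeff; z) for f(r) = J_0(kappa*r), coeff in {"1", "j"}."""
    if n == 0:
        return (jv(0, kappa * R), np.zeros_like(R)) if coeff == "1" \
            else (np.zeros_like(R), 1.0 / jv(0, kappa * R))
    c = factorial(n) * 2.0**n / kappa**n
    rad_f = c * jv(n, kappa * R)                                 # = c * r^n * phi_f^(n)(r)
    rad_1f = c * 3.0 ** (-n / 2.0) * iv(n, np.sqrt(3.0) * kappa * R)
    if coeff == "1":
        return rad_f * np.cos(n * TH), rad_1f * np.sin(n * TH)
    return -rad_f * np.sin(n * TH), rad_1f * np.cos(n * TH)

def inner(W1, W2):
    """L2(D;B) inner product: iint (Sc1*Sc2 + Vec1*Vec2) dA, with dA = r dr dtheta."""
    integrand = (W1[0] * W2[0] + W1[1] * W2[1]) * R
    return np.trapz(np.trapz(integrand, th, axis=1), r)

for n in range(3):
    for m in range(3):
        print(n, m, f"{inner(Z(n, '1'), Z(m, 'j')):+.2e}")       # relation (91): all ~ 0
```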
**Remark 51**: _If \(f\) is real-valued, (27) is reduced to the complex main Vekua equation_
\[\frac{\partial}{\partial z^{*}}w(z)=\frac{f_{z^{*}}(z)}{f(z)}w^{*}(z), \tag{99}\]
_where \(w\) is a complex-valued function. Since \(q_{f}\) is real-valued, the kernel \(G^{f}(r,t)\) is also real-valued [21]. Thus, the operators \(\mathbf{T}_{f}\) and \(\mathbf{T}_{\frac{1}{f}}\) and the functions \(\{\phi^{f}_{n}(r),\phi^{\frac{1}{f}}_{n}(r)\}_{n=0}^{\infty}\) are real-valued. Hence the results presented are valid for the complex equation (99), replacing the scalar and vectorial parts by the real and imaginary parts, and the unit \(\mathbf{j}\) by \(i\). In this case, the Bergman space \(\mathcal{A}^{2}_{f}(\Omega;\mathbb{C})\) is considered as a real Hilbert space. With this approach, the results concerning the completeness of \(\mathcal{A}^{2}_{f}(\Omega;\mathbb{C})\) and the existence of the Bergman kernel coincide with those obtained in [9]._
## 6 Conclusions
A construction of a pair of transmutation operators that transmute Bicomplex holomorphic functions into solutions of the radial main Vekua equation was presented. The construction was based on obtaining a relationship between the transmutation operator of the associated radial Schrodinger equation and the transmutation operator of the corresponding Darboux-transformed equation. The properties of continuity and invertibility, both on the space of classical solutions and on the pseudoanalytic Bergman space, were studied. A complete system of solutions, called the radial formal powers, was obtained by transmuting the Bicomplex powers. The completeness of the radial formal powers was established both in the sense of uniform convergence on compact subsets and in the \(L_{2}\)-norm. The existence of a reproducing Bergman kernel for the Bicomplex pseudoanalytic Bergman space was proven. In the case of the Bergman space on a disk, the radial formal powers form an orthogonal basis and can be used to approximate the Bergman kernel.
## Acknowledgments
The author expresses his gratitude to Prof. Briceyda B. Delgado for helpful discussions. |
2302.14818 | Symbiotic Dynamics in Living Liquid Crystals | An amalgamate of nematic liquid crystals and active matter, referred to as
living liquid crystals, is a promising self-healing material with futuristic
applications for targeted delivery of information and micro-cargo. We provide a
phenomenological model to study the symbiotic pattern dynamics in this
contemporary system using the Toner-Tu model for active matter (AM), the
Landau-de Gennes free energy for liquid crystals (LCs), and an experimentally
motivated coupling term that favours co-alignment of the active and nematic
components. Our extensive theoretical studies unfold two novel steady states,
chimeras and solitons, with sharp regions of distinct orientational order that
sweep through the coupled system in synchrony. The induced dynamics in the
passive nematic is unprecedented. We show that the symbiotic dynamics of the AM
and LC components can be exploited to induce and manipulate order in an
otherwise disordered system. | Aditya Vats, Pradeep Kumar Yadav, Varsha Banerjee, Sanjay Puri | 2023-02-17T10:46:19Z | http://arxiv.org/abs/2302.14818v1 | # Symbiotic Dynamics in Living Liquid Crystals
###### Abstract
An amalgamate of nematic liquid crystals and active matter, referred to as _living liquid crystals_, is a promising self-healing material with futuristic applications for targeted delivery of information and micro-cargo. We provide a phenomenological model to study the symbiotic pattern dynamics in this contemporary system using the Toner-Tu model for active matter (AM), the Landau-de Gennes free energy for liquid crystals (LCs), and an experimentally motivated coupling term that favours co-alignment of the active and nematic components. Our extensive theoretical studies unfold two novel steady states, _chimeras_ and _solitons_, with sharp regions of distinct orientational order that sweep through the coupled system in synchrony. The induced dynamics in the passive nematic is unprecedented. We show that the symbiotic dynamics of the AM and LC components can be exploited to induce and manipulate order in an otherwise disordered system.
## I Introduction
An assembly of interacting particles, ranging from microscopic to macroscopic sizes, that converts energy from the environment into mechanical energy for self-propulsion is termed active matter (AM). This term encompasses a wide variety of living and non-living systems such as bird flocks, insect swarms, animal herds and fish shoals, suspensions of bacteria, cytoskeletal filaments and protein motors, synthetic self-phoretic colloids, vibrated granular matter, and even human crowds [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. The immense diversity in the constituent particles, the lack of time-reversal symmetry and the intrinsic out-of-equilibrium behaviour have led to intriguing experimental and theoretical investigations, see [15; 10; 14] for different perspectives. Most of these works have discussed AM in isotropic Newtonian fluids. However, recent attention has also turned to AM in non-Newtonian fluids. The fluid endows the active system with unique properties including improved diffusivity and decreased viscosity. An anisotropic medium also introduces directional dependence and can help control the AM's chaotic motion. In this context, a system of great topical interest is that of _living liquid crystals_ (LLCs), where living (active) particles are introduced in nematic liquid crystals (NLCs) [16; 17; 18; 19; 20; 21; 22; 23]. The latter are classic examples of anisotropic fluids having long-range order (LRO) or quasi-LRO below a critical temperature \(T_{c}\), with a special direction of averaged molecular alignment called the _director_\(\mathbf{n}\)[24; 25]. Consequently, mechanical, optical and diffusive properties of NLCs exhibit strong directional dependence [25].
The benchmarking works on LLCs considered a low concentration of rod-like bacteria (_Bacillus subtilis_) swimmers in non-toxic NLCs confined to a quasi-two-dimensional geometry, and reported spectacular experimental phenomena that were never observed in Newtonian fluids [16; 18; 20; 26]. The swimming bacteria (flagella) serve as probes for extracting information about the NLC properties and their geometric confinement. They create perturbations in the nematic medium over nanometer scales and yield emergent textures over hundreds of micrometers. The topological defects in NLCs on the other hand, play a critical role in active transport. Experiments reveal that in the defect free regions, the bacteria always swim parallel to the local director. They accumulate at the T-shape defects (with topological charge +1/2), but are deflected from Y-shape defects (with topological charge -1/2) [20]. Such observations are presumably generic to other self-propelled particles including synthetic swimmers, provided the low-concentration limit is respected. It is believed
that LLCs will bridge the properties of active and passive matter to create new micro-fluidic devices that can transport fluids without pumps or pressure, synthetic systems which resemble cells in motion, and nanotechnologies for targeted drug deliveries, sensing and other biomedical applications.
An important direction in this emerging field is to develop models of LLCs so that joint experimental and theoretical efforts can be made to unravel potential applications. One of the first contributions in this direction has been due to Genkin et al. [20], who introduced continuum models that capture the experimentally observed pattern formation of rod-shaped bacteria in NLCs. Guided by experimental observations, the primary assumptions in the description of Genkin et al. are: (i) The volume fraction of bacteria is relatively low and does not perturb the properties of the suspending NLC; (ii) The suspended bacteria co-align with the local nematic director on a time-scale much smaller than the characteristic time of collective behavior; (iii) At each point in the quasi-two-dimensional space, interactions between bacteria are apolar and allow them to glide past without collisions. To model the NLC environment, Genkin et al. use the Beris-Edwards model comprising equations of motion for the tensor order parameter field \(\mathbf{Q}(\mathbf{r},t)\) and the velocity field \(\mathbf{u}(\mathbf{r},t)\). The transport of bacteria is governed by two coupled advection-diffusion equations for the concentrations of bacteria swimming parallel \(c^{+}\) and anti-parallel \(c^{-}\) to the director \(\mathbf{n}\)[20; 27; 28]. This model reproduces the experimentally observed accumulation and expulsion of bacteria at the defect cores. The above work is of great interest but is restricted to the dilute regime, where bacteria do not directly interact with each other. Clearly, the dense limit is significant in many applications of AM. Moreover, the pioneering experiments of Zhou et al. [16] on LLCs showed a rich and fascinating phenomenology in this limit also.
The scope of AM is vast. It studies the collective behaviour of self-propelled particles of varying sizes in a plethora of environments. The interaction of active particles amongst themselves, and with the medium, can be expected to yield exotic dynamical patterns with novel applications. An important direction of research therefore is to construct generic models of LLCs that capture pattern formation for the case when all three interactions are significant: AM-AM, LC-LC and AM-LC. In this situation, we expect a symbiotic dynamics with complex interplay of AM and LCs. We embark on this path by considering two well-established coarse-grained descriptions, the Toner-Tu (TT) model for AM and the Landau-de Gennes (LdG) free energy for NLCs, along with a coupling term motivated by experimental
observations [20]. The LdG formulation does not incorporate hydrodynamics, so there is no inherent director dynamics. The latter is usually imparted by the coarse-grained time-dependent Ginzburg-Landau (TDGL) equations, and is purely relaxational [29; 30].
Our extensive simulations reveal two novel steady states in the LLCs: (i) Sharp bands of large orientational order (in AM and NLCs) coexisting with a background of disoriented AM and isotropic NLCs. We refer to this coexistence of order and disorder as a _chimera_ state, a term which has found usage in the nonlinear dynamics literature [31; 32]. The bands sweep through the system with the speed of the active particles (say \(v_{0}\)). The band-width \(\Delta\) exhibits a power-law dependence on the AM-NLC coupling: \(\Delta\sim(c_{0}^{*}-c_{0})^{\theta}\), where \(\theta\) is a universal exponent. (ii) Localized regions with large orientational order (in AM as well as NLCs) or _solitons_ that propagate with speed \(v_{0}\). There are several 1-dimensional equations [33; 34; 35] which are known to exhibit soliton solutions, i.e., solitary waves which maintain their integrity under collision with other solitary waves. These are ubiquitous in diverse physical systems, ranging from plasmas to fluids and nerve conduction. However, there are very few examples of solitons in dimensions higher than 1. The simulations of our model for LLCs show four kinds of steady states: chimera, soliton, ordered and disordered. We have evaluated the phase boundaries analytically from the fixed points of the dynamical equations and their linear stability analysis.
## II Model and theoretical framework
Deep insights on NLCs have emerged from mean-field approaches based on the minimization of the LdG free energy [24; 36]. This is obtained as a Landau expansion in terms of a mesoscopic order parameter \(\mathbf{Q}\), and is characterized by a few phenomenological constants. The \(\mathbf{Q}\)-tensor is symmetric and traceless, with elements \(Q_{ij}=\mathcal{S}\left(n_{i}n_{j}-\delta_{ij}/2\right)\). The eigenvector corresponding to the largest eigenvalue is the director \(\mathbf{n}\), and \(\mathcal{S}\) measures the orientational order about \(\mathbf{n}\). The isotropic phase (\(T>T_{c}\)) corresponds to \(\mathcal{S}=0\), and \(\mathcal{S}=1\) describes the fully aligned nematic phase (\(T<T_{c}\)). A defect corresponds to regions of low order or \(\mathcal{S}\simeq 0\). It is easy to check that, in \(d=2\)
\[\mathrm{Tr}(\mathbf{Q})=0;\quad\mathrm{Tr}(\mathbf{Q}^{2})=2(Q_{11}^{2}+Q_{12 }^{2})=\mathcal{S}^{2}/2;\quad\mathrm{Tr}(\mathbf{Q}^{3})=0. \tag{1}\]
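The middle identity in Eq. (1) follows by direct substitution: using \(n_{x}^{2}+n_{y}^{2}=1\),

\[Q_{11}^{2}+Q_{12}^{2}=\mathcal{S}^{2}\left[\left(n_{x}^{2}-\tfrac{1}{2}\right)^{2}+n_{x}^{2}n_{y}^{2}\right]=\mathcal{S}^{2}\left[n_{x}^{4}-n_{x}^{2}+\tfrac{1}{4}+n_{x}^{2}\left(1-n_{x}^{2}\right)\right]=\frac{\mathcal{S}^{2}}{4},\]

so that \(\mathrm{Tr}(\mathbf{Q}^{2})=2\left(Q_{11}^{2}+Q_{12}^{2}\right)=\mathcal{S}^{2}/2\), independent of the director orientation.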
The LdG free energy for NLCs has been modelled as [24; 36]
\[F_{Q}[\mathbf{Q}]=\int\mathrm{d}\mathbf{r}\left\{\frac{A}{2}\mathrm{Tr}(\mathbf{Q}^{ 2})+\frac{B}{3}\mathrm{Tr}(\mathbf{Q}^{3})+\frac{C}{4}[\mathrm{Tr}(\mathbf{Q}^{2})]^{2 }+\frac{L}{2}\left|\nabla\mathbf{Q}\right|^{2}\right\}. \tag{2}\]
The Landau coefficients \(A,B,C\) and \(L\) are phenomenological parameters which are related to experimentally determined quantities like critical temperature, latent heat of transition, magnitude of the order parameter, etc. [37; 38]. For example, \(A=A_{0}(T-T_{c})\), where \(A_{0}\) is a material dependent coefficient and \(T_{c}\) is the critical temperature. At the coarse-grained level, the appropriate framework to study the dissipative dynamics that drives the system to the free energy minimum is the TDGL equation [29; 30]:
\[\frac{\partial\mathbf{Q}}{\partial t}=-\Gamma_{\mathbf{Q}}\frac{\delta F_{Q}[ \mathbf{Q}]}{\delta\mathbf{Q}}. \tag{3}\]
The parameter \(\Gamma_{Q}\) is the damping factor for the nematic component and sets the relaxation time scale for the system. The term on the right-hand side of Eq. (3) is the functional derivative of the free energy with respect to \(\mathbf{Q}\).
The minimal microscopic description for the collective motion of AM is the Vicsek model [39]. The corresponding coarse-grained formulation, provided by the elegant hydrodynamic theory of Toner and Tu (TT), yields the equation of motion for (i) the local density of the active particles \(\rho(\mathbf{r},\mathbf{t})\), and (ii) the local polarization \(\mathbf{P}(\mathbf{r},t)\) that describes their average orientation [10; 14; 40; 41; 42]. Although the original model is formulated phenomenologically using symmetry considerations, it is instructive to rewrite the equations of motion in terms of a free energy functional \(F_{a}[\rho,\mathbf{P}]\)[10; 14]:
\[\frac{\partial\rho}{\partial t} =-v_{0}\nabla\cdot(\mathbf{P}\rho)-\nabla\cdot\left(-\Gamma_{ \rho}\nabla\frac{\delta F_{a}}{\delta\rho}\right), \tag{4}\] \[\frac{\partial\mathbf{P}}{\partial t} =\lambda_{1}(\mathbf{P}\cdot\nabla)\mathbf{P}-\Gamma_{P}\frac{ \delta F_{a}}{\delta\mathbf{P}}. \tag{5}\]
Here, \(v_{0}\) is the speed of the active particles, and \(\Gamma_{\rho}\) and \(\Gamma_{P}\) set the relaxation time scales for the density and polarization fields. The first term in Eq. (4) quantifies the change in the density due to the polarization field. In the TT model, the \(\mathbf{P}\)-field acts both as the current and the orientational order parameter. Hence, it evolves in time [Eq. (5)] via both advection and flow alignment. Further, \(\lambda_{1}\) has the dimension of the speed and Galilean invariance would require \(\lambda_{1}=v_{0}\). Since this is a non-equilibrium system, \(\lambda_{1}\) is generally a phenomenological parameter different from \(v_{0}\).
The free energy functional in Eqs. (4)-(5) is given by [10; 14]:
\[F_{a}[\rho,{\bf P}]=\int{\rm d}{\bf r}\left[\frac{\alpha(\rho)}{2}|{\bf P}|^{2}+ \frac{\beta}{4}|{\bf P}|^{4}+\frac{\kappa}{2}|\nabla{\bf P}|^{2}+\frac{w}{2}|{ \bf P}|^{2}\nabla\cdot{\bf P}-\frac{v_{1}}{2}(\nabla\cdot{\bf P})\frac{\delta \rho}{\rho_{0}}+\frac{D_{\rho}}{2}(\delta\rho)^{2}\right] \tag{6}\]
where \(\alpha,\beta,\kappa,w,v_{1},D_{\rho}\) are material-dependent parameters whose precise values can be related to the microscopic properties of the active particles [43; 44]. The parameter \(\alpha(\rho)=\alpha_{0}(1-\rho/\rho_{c})\), where \(\rho_{c}\) is the critical density that is required to observe order in the active system. The gradient term \(|\nabla{\bf P}|^{2}\) models the energy cost for a deformation of the order parameter. The next two terms in the equation provide the \(|{\bf P}|^{2}\) and density contributions to the spontaneous splay \(\nabla\cdot{\bf P}\). These terms can be interpreted as the local aligning field due to the density and orientational order \(|{\bf P}|^{2}\). The last term in Eq. (6) penalizes the variation in the density about its mean value: \(\delta\rho=\rho-\rho_{0}\). A detailed discussion of these terms and their applicability can be found in [10] and [14].
Some remarks about the states seen in the TT model are in order. The order-disorder transition takes place as the parameter \(\alpha(\rho)\) goes through zero. An average density \(\rho_{0}<\rho_{c}\) results in a _disordered phase_ with \({\bf P}=0\). For \(\rho_{0}>\rho_{c}\), the system shows a state of uniform orientational order with \(|{\bf P}|^{2}\sim(\rho_{0}/\rho_{c}-1)\). This _ordered phase_ is characterized by the movement of active particles with velocity \({\bf v}=v_{0}{\bf P}\). Near the transition point (\(\rho_{0}=\rho_{c}^{+}\)), the ordered phase is unstable, and the system relaxes to a _banded phase_ that sweeps through the system with speed \(v_{0}\)[10; 42]. Additionally, solitons have also been observed in the quasi-one-dimensional case, but not in higher dimensions [44; 45; 46].
The above coarse-grained models are the ingredients of our phenomenological model for LLCs. We write the free energy of this composite system as the sum of (a) free energies of the nematic and active components, and (b) a suitably designed coupling term. Keeping in mind the experimental observations of Genkin et al. [20], we define the coupling between the nematic and active component as the dyadic product of the \({\bf Q}\)-tensor and the polarization vector \({\bf P}\). This is the lowest order term that ensures \({\bf P}\parallel{\bf n}\)[47; 48; 49; 50; 51]. With these considerations, the free energy for the LLC can be written as
\[F[{\bf Q},\rho,{\bf P}]=F_{a}+F_{Q}-c_{0}\sum_{i,j}Q_{ij}P_{i}P_{j}, \tag{7}\]
where \(c_{0}\) quantifies the strength of the AM-nematic interaction. Note that, when stated in terms of \({\bf n}\), the coupling term takes the form \(-({\bf n}\cdot{\bf P})^{2}\), which makes it easy to see that the two components prefer co-alignment [47; 48; 49; 50; 51].
We now substitute the free energy defined in Eq. (7) in Eqs. (3)-(5), and retain gradient terms up to second order to obtain the dynamical equations for LLCs in \(d=2\). These are provided in Eqs. (A1)-(A5) of Appendix A. Note that our model, which does not include the hydrodynamics of the nematic matrix, is suitable when the AM-nematic interactions are short-ranged, and the velocity of the nematogen is small as compared to the propulsion velocity of the active particle. This is the case in Ref. [46], or for AM in pre-designed director patterns [18; 23].
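As an illustration of this substitution, note that in \(d=2\) we have \(\mathrm{Tr}(\mathbf{Q}^{3})=0\), \(\mathrm{Tr}(\mathbf{Q}^{2})=2(Q_{11}^{2}+Q_{12}^{2})\) and \(|\nabla\mathbf{Q}|^{2}=2\left(|\nabla Q_{11}|^{2}+|\nabla Q_{12}|^{2}\right)\), so the free energy (7) gives

\[\frac{\delta F}{\delta Q_{11}}=2AQ_{11}+4C\left(Q_{11}^{2}+Q_{12}^{2}\right)Q_{11}-2L\nabla^{2}Q_{11}-c_{0}\left(P_{1}^{2}-P_{2}^{2}\right).\]

Eq. (3) then yields \(\partial_{t}Q_{11}=\Gamma_{Q}\left[-2AQ_{11}-4C(Q_{11}^{2}+Q_{12}^{2})Q_{11}+2L\nabla^{2}Q_{11}+c_{0}(P_{1}^{2}-P_{2}^{2})\right]\), which is Eq. (A1) once \(-2A\) is written as \(\pm 2|A|\) according to the sign of \(A=A_{0}(T-T_{c})\).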
The dimensionless form of Eqs. (A1)-(A5) can be obtained by introducing the rescaled variables
\[\mathbf{Q}=c_{Q}\mathbf{Q}^{\prime},\quad\mathbf{P}=c_{P}\mathbf{P}^{\prime},\quad\mathbf{r}=c_{r}\mathbf{r}^{\prime},\quad t=c_{t}t^{\prime}. \tag{8}\]
The appropriate scale factors are
\[c_{Q}=\sqrt{\frac{|A|}{2C}};\quad c_{P}=\sqrt{\frac{\alpha_{0}}{\beta}};\quad c _{t}=\frac{\beta}{\alpha_{0}\Gamma_{Q}}\sqrt{\frac{|A|}{2C}};\quad c_{r}= \sqrt{\frac{L}{|A|}}. \tag{9}\]
Dropping the primes on the variables, we obtain
\[\frac{\partial Q_{11}}{\partial t} = \xi_{1}\left[\pm Q_{11}-(Q_{11}^{2}+Q_{12}^{2})Q_{11}+\nabla^{2 }Q_{11}\right]+c_{0}(P_{1}^{2}-P_{2}^{2}), \tag{10}\] \[\frac{\partial Q_{12}}{\partial t} = \xi_{1}\left[\pm Q_{12}-(Q_{11}^{2}+Q_{12}^{2})Q_{12}+\nabla^{2} Q_{12}\right]+2c_{0}P_{1}P_{2},\] (11) \[\frac{1}{\Gamma}\frac{\partial P_{1}}{\partial t} = \xi_{2}\Bigg{[}\left(\frac{\rho}{\rho_{c}}-1-\mathbf{P}\cdot \mathbf{P}\right)P_{1}-\frac{v_{1}^{\prime}}{2\rho_{0}}\nabla_{x}\rho+\lambda_ {1}^{\prime}(\mathbf{P}\cdot\nabla)P_{1}+\lambda_{2}^{\prime}\nabla_{x}(| \mathbf{P}|^{2})\] (12) \[+\lambda_{3}^{\prime}P_{1}(\nabla\cdot\mathbf{P})+\kappa^{\prime }\nabla^{2}P_{1}\Bigg{]}+c_{0}(Q_{11}P_{1}+Q_{12}P_{2}),\] \[\frac{1}{\Gamma}\frac{\partial P_{2}}{\partial t} = \xi_{2}\Bigg{[}\left(\frac{\rho}{\rho_{c}}-1-\mathbf{P}\cdot \mathbf{P}\right)P_{2}-\frac{v_{1}^{\prime}}{2\rho_{0}}\nabla_{y}\rho+\lambda_ {1}^{\prime}(\mathbf{P}\cdot\nabla)P_{2}+\lambda_{2}^{\prime}\nabla_{y}(| \mathbf{P}|^{2})\] (13) \[+\lambda_{3}^{\prime}P_{2}(\nabla\cdot\mathbf{P})+\kappa^{\prime }\nabla^{2}P_{2}\Bigg{]}+c_{0}(Q_{12}P_{1}-Q_{11}P_{2}),\] \[\frac{1}{\Gamma^{\prime}}\frac{\partial\rho}{\partial t} = -v_{0}^{\prime}\nabla\cdot(\mathbf{P}\rho)+D_{\rho}^{\prime} \nabla^{2}\rho. \tag{14}\]
The dimensionless parameters in Eqs. (10)-(14) are:
\[\xi_{1}=\frac{2|A|\beta}{\alpha_{0}}\sqrt{\frac{|A|}{2C}},\quad\xi_ {2}=\frac{\alpha_{0}}{2}\sqrt{\frac{2C}{|A|}},\] \[v_{1}^{\prime}=\frac{v_{1}}{\alpha_{0}}\sqrt{\frac{\beta|A|}{ \alpha_{0}L}},\quad v_{0}^{\prime}=\frac{v_{0}}{\Gamma_{\rho}}\sqrt{\frac{ \alpha_{0}|A|}{\beta L}},\] \[\Gamma=\frac{\beta|A|\Gamma_{P}}{\alpha_{0}\Gamma_{Q}C},\quad \Gamma^{\prime}=\frac{\beta\Gamma_{\rho}}{\alpha_{0}\Gamma_{Q}}\sqrt{\frac{|A |}{2C}},\] \[\kappa^{\prime}=\frac{\kappa|A|}{\alpha_{0}L},\quad D_{\rho}^{ \prime}=\frac{D_{\rho}|A|}{L},\] \[\lambda_{1}^{\prime}=\frac{\lambda_{1}}{\Gamma_{P}}\sqrt{\frac{ |A|}{\alpha_{0}\beta L}},\quad\lambda_{2}^{\prime}=\lambda_{2}\sqrt{\frac{|A |}{\alpha_{0}\beta L}},\quad\lambda_{3}^{\prime}=\lambda_{3}\sqrt{\frac{|A|} {\alpha_{0}\beta L}}. \tag{15}\]
The \(\pm\) sign in Eqs. (10)-(11) determines whether the nematic component (in the absence of AM) is above (\(-\)) or below (\(+\)) its critical temperature \(T_{c}\). Before presenting results, let us discuss the choice of parameters. The quantities \(\xi_{1}\) and \(\xi_{2}\) depend on the relative magnitudes of \(\mathbf{Q}\) and \(\mathbf{P}\), and are set to 1 in our simulations. In dimensional units, \(v_{0}>0\) is the speed of the active particle. Further, the stable state exists only if \(v_{1}>0\)[44]. We assign the corresponding rescaled parameters the values \(v_{0}^{\prime}=0.5,v_{1}^{\prime}=0.25\). Our simulation results do not change significantly if \(v_{0}^{\prime},v_{1}^{\prime}\) are varied. The dimensional parameters \(\Gamma_{P},\Gamma_{Q}\) and \(\Gamma_{\rho}\) are the inverse relaxation scales of \(\mathbf{P},\mathbf{Q}\) and \(\rho\), respectively. The dimensionless quantities \(\Gamma\) and \(\Gamma^{\prime}\) measure the relative time-scales, and we set them to 1. Similarly, \(\kappa^{\prime}\) and \(D_{\rho}^{\prime}\) set the relative values of elastic scales, and we assign them the value 1. Finally, the \(\lambda_{i}\) are the strengths of the convective nonlinearities present due to the absence of Galilean invariance. As remarked in Appendix A, the terms with \(\lambda_{2}\) and \(\lambda_{3}\) arise from the same term in the free energy \(F_{a}\) and obey \(\lambda_{2}=-\lambda_{3}/2\). However, both these terms are allowed under symmetry considerations, and we treat \(\lambda_{2}\) and \(\lambda_{3}\) as independent parameters. In dimensional terms, the linear stability analysis of the TT equations shows that non-trivial states arise under the conditions \(\lambda_{1}/\Gamma_{P}+\lambda_{2}+\lambda_{3}<0\) and \(\lambda_{2}=-\lambda_{3}\)[10; 14]. These conditions are invariant under the above rescaling, and we consider the case with \(\lambda_{1}^{\prime}=-0.5,\lambda_{2}^{\prime}=-0.5,\lambda_{3}^{\prime}=0.5\). There is clearly a degree of freedom involved in the above choice of parameters. However, we emphasize that our numerical results do not change qualitatively on changing the above values as long as the specified signs are preserved. The coupling constant \(c_{0}\) will be allowed to vary in our simulations.
## III Results
At the core of the current theoretical modelling is to understand the interplay of the AM-NLC coupling in LLCs. We now focus on understanding the effect of the coupling strength \(c_{0}\) on the dynamical evolution of the active and nematic fields. The three cases which provide interesting outcomes are _Case 1_: \(T>T_{c}\), \(\rho_{0}=\rho_{c}^{+}\); _Case 2_: \(T<T_{c}\), \(\rho_{0}=\rho_{c}^{-}\); _Case 3_: \(T<T_{c}\), \(\rho_{0}=\rho_{c}^{+}\). Here, \(\rho_{c}^{+}\) (\(\rho_{c}^{-}\)) corresponds to density slightly above (below) the critical density \(\rho_{c}\). Without loss of generality, we choose \(\rho_{c}=0.5\). For each of the three cases, we numerically solve Eqs. (10)-(14) via Euler discretization with an isotropic Laplacian on an \(N^{2}\) lattice (\(N=128\)). We impose periodic boundary conditions in both directions [52], so as to remove the edge effects and mimic the bulk system. The discretization mesh sizes are chosen to be \(\Delta t=0.01\) and \(\Delta x=1.0\). The initial conditions for \(\mathbf{Q}\) and \(\mathbf{P}\) are chosen as small fluctuations about zero, which mimics the disordered state. The corresponding initial state for \(\rho\) is small fluctuations around the mean density \(\rho_{0}\). All statistical quantities have been averaged over 10 independent initial conditions, unless otherwise stated.
First, let us discuss the consequences of AM-LC coupling for _Case 1_. The linear stability analysis for the uncoupled system (\(c_{0}=0\)) yields a disordered state for the nematic component with \(\mathcal{S}\simeq 0\), and a banded state for the active component. Fig. 1 shows the evolution of the active and nematic components with \(\rho_{0}=\rho_{c}^{+}=0.52\) for different values of \(c_{0}\). Subfigures (a) and (b) show the density (see colour bar) of the active field at \(t=10^{2}\) and \(10^{4}\) for \(c_{0}=0.5\). The white arrows point along the \(\mathbf{P}\)-field with the length proportional to the magnitude. Clearly, the AM shows a banded state analogous to the uncoupled limit. In the banded state, there is coexistence of order (large \(P\)) and disorder (small \(P\)) in the \(\mathbf{P}\)-field. In the nonlinear dynamics literature, this has often been referred to as a _chimera_ state [31; 32]. In Figs. 1(a)-(b), the evolution to the chimera state is evident. The chimera sweeps through the system with velocity \(v_{0}\). The corresponding developments in the nematic field are shown in Fig. 1(d)-(e). The colour bar indicates the value of the orientational order parameter \(\mathcal{S}\), which has been normalized by its maximum value: \(\mathcal{S}_{m}\simeq 0.67\) in (d), \(\mathcal{S}_{m}\simeq 0.61\) in (e). The coupling imprints the chimera state on the nematic component, which also travels with speed \(v_{0}\). Note that the nematogens continue to remain passive, it is only the orientational order (and disorder) that is dynamical. A visualization of this novel LLC steady state is provided by Movie 1 of Appendix C. In Fig. 1(g), we have plotted the variation of \(\bar{\rho}\),
and \(\bar{\mathcal{S}}\) with \(y\) in the steady state. The bar indicates an average along the \(x\)-direction. The homologous variation of all the quantities confirms their spatial co-alignment. These solutions correspond to traveling waves of Eqs. (10)-(14) with speed \(v_{0}\). The resultant ordinary differential equations have to be solved numerically to obtain the inhomogeneous profiles in Fig. 1(g).
To examine the consequence of increasing coupling strength, we show the active and nematic fields for \(c_{0}=1.0\) at \(t=10^{4}\) in sub-figures (c) and (f). The band width (\(\Delta\)) broadens, and the orientational order increases (\(\mathcal{S}_{m}\simeq 1.79\)). Sub-figure (h) shows the dependence of \(\Delta^{-1}\) on \(c_{0}\). The system settles to a homogeneous state (\(\Delta^{-1}=0\)) at a critical value \(c_{0}^{\star}\simeq 2.1\). The dashed line corresponds to \(\Delta^{-1}=c_{0}^{\star}-c_{0}\), and is a good fit to the data for higher values of \(c_{0}\). (We attribute the discrepancy in the value of \(c_{0}^{\star}\) to finite system sizes used in our simulations.) In sub-figure (i), we provide the phase diagram in the \((c_{0},\rho_{0})\) plane depicting regions where the chimera and ordered states are stable solutions. We have obtained the phase boundary (dashed line) analytically using linear stability analysis, the details of which are provided in Appendix B. The smear indicates the region where the numerically obtained phase boundary lies. In this region, the final state obtained in our simulations is dependent on the initial condition and may be either chimera or ordered. This ambiguity is a consequence of the Euler discretization on finite lattices, and will go away for infinite system size and \(\Delta x,\Delta t\to 0\). In the latter limit, we will recover the analytical phase boundary. It should be noted that there is a re-entrant phase transition for a range of \(\rho_{0}\)-values, where the LLC makes a transition from ordered \(\rightarrow\) chimera \(\rightarrow\) ordered on increasing \(c_{0}\).
Next, we present the results for _Case 2_ with \(T<T_{c}\), \(\rho_{0}=\rho_{c}^{-}=0.48\). In the uncoupled limit (\(c_{0}=0\)), the \(\mathbf{Q}\)-field settles to an ordered nematic state with a non-zero value of \(\mathcal{S}\), and the \(\rho\) and \(\mathbf{P}\) fields are isotropic. The introduction of the coupling shows dramatic consequences. The active field evolves into a chimera which has so far been observed only when \(\rho_{0}=\rho_{c}^{+}\). The naturally ordered nematic state is also driven into a chimera. A prototypical evolution can be seen in Movie 2 of Appendix C. Additionally, we also observe elusive 2-dimensional _soliton_ structures for some choices of \(c_{0}\) and \(\rho_{c}^{-}\). (The probability of occurrence of solitons is around \(0.1\) in our simulations.) As mentioned earlier, there is a long history of soliton solutions in completely integrable partial differential equations [33; 34; 35]. Most known soliton equations (e.g., Korteweg-de Vries equation, nonlinear Schrodinger
equation, etc.) are 1-dimensional, and there are very few examples of higher-dimensional solitons. We observe these in our proposed model of LLCs. In Fig. 2, we have plotted the evolution of the \(\rho\) field (top row) and nematic field (bottom row) for \(c_{0}=0.1\) at \(t=800,1000,1200\). The white arrows in the active morphologies correspond to the polarization field in the high density regions (\(\rho>0.6\)). A localized lump (\(L_{1}\)) moves to the right (\(t=800\)), and undergoes a complicated nonlinear collision with lumps moving towards the left (\(t=1000\)). After this collision, \(L_{1}\) emerges and recovers its original profile. Thus, the solitons maintain their self-confined shapes while propagating and survive the collisions. This scenario can be seen clearly in Movie 3 of Appendix C. The LLC model proposed here is a dissipative system and not Hamiltonian. So the conventional explanation of soliton behavior via "complete integrability and infinite constants of motion" does not apply here. Clearly, the origin of this soliton-like behavior requires further analytical investigation, and is beyond the scope of this paper.
Finally, we present the phase diagrams for _Case 2_ and _Case 3_ in Fig. 3(a)-(b) respectively. For _Case 2_ [Fig. 3(a)], the LLC coupling drives the active system from a disordered state to structured steady states even though \(\rho_{0}=\rho_{c}^{-}\). From our linear stability analysis provided in Appendix B, the transition from the disordered to ordered state occurs when \(c_{0}+\rho_{0}/\rho_{c}-1>0\), shown by the dotted line. For intermediate values of \(c_{0}\), there is a small region exhibiting both 1-dimensional chimera and higher-dimensional soliton states, and another where only the chimera state is observed. For larger \(c_{0}\)-values, the ordering nematic drives AM and both sub-systems transit to an ordered state. For _Case 3_ [Fig. 3(b)], the nematic and active fields are both in the ordered state with \(T<T_{c}\) and \(\rho_{0}=\rho_{c}^{+}\). The region corresponding to chimera states diminishes as \((\rho_{0}-\rho_{c})\) increases. For large \(c_{0}>c_{0}^{*}(\rho_{0})\), the system transits to an ordered state. In both sub-figures, the dashed line is the analytical phase boundary obtained from the linear stability analysis provided in Appendix B. The smear, as mentioned earlier, indicates the location of the approximate phase boundaries from our numerics.
## IV Summary and conclusion
To summarize, we have explored pattern dynamics in living liquid crystals (LLCs) - an amalgamate of active matter (AM) and nematic liquid crystals (NLCs). The latter are classic examples of anisotropic materials with a special direction of average molecular alignment.
We model the LLCs using the Toner-Tu (TT) model, the Landau-de Gennes (LdG) free energy and an experimentally motivated coupling term that favours co-alignment of the local polarization in the active field and the nematic director. The early theoretical models for this contemporary system are restricted to the dilute regime where the active particles do not interact with one another. Our generic model on the other hand, includes AM-AM, NLC-NLC as well as AM-NLC interactions, which unfold novel symbiotic dynamics of the active and nematic components.
We focus on understanding this symbiotic dynamics in two-dimensional (\(d=2\)) LLCs. Such geometries have been realised experimentally in the context of pure NLCs confined to shallow wells by ensuring that the top and bottom surfaces enforce planar boundary conditions. Consequently, the nematic molecules are primarily confined in a plane and the variations along the height of the sample are negligible. Our benchmarking work yields a range of analytical and numerical results for \(d=2\) LLCs. From a fixed point analysis of the dynamical equations, we have obtained phase diagrams for a range of parameters. Our extensive theoretical studies unfold two steady states hitherto unobserved in LLCs: (i) _Chimeras_ corresponding to bands of large orientational order (in AM and NLCs) coexisting with disorder. The ordered regions in the two components are co-aligned, and sweep through the system in synchrony with the speed \(v_{0}\) of the active particles. (ii) _Solitons_ corresponding to localized regions of order (in AM and NLCs) which are robust under locomotion and collisions. While their presence in \(d=1\) is well known, the existence of solitons in higher dimensions is rare. The induced dynamics in the passive nematic is unprecedented.
Our theoretical framework demonstrates that the AM-LC coupling can discipline AM by inducing orientational order and heal NLCs by erasing topological defects. Such observations suggest the design and synthesis of new self-healing materials, which can also provide targeted delivery of information and micro-cargo without channels. Our work provides many ideas for manipulating AM and LCs for exciting futuristic applications. We hope that it will initiate joint experimental and theoretical investigations in the contemporary LLCs.
## V Author contributions
VB and SP formulated the problem. AV and PY performed the numerical simulations. AV, PY, VB and SP did the analysis and wrote the paper.
## Acknowledgements
AV and PY acknowledge UGC, India for support via research fellowships. VB acknowledges DST India for research grants. AV and VB gratefully acknowledge the HPC facility of IIT Delhi for computational resources.
## Appendix A Dynamical Model for LLCs
We substitute the free energy defined in Eq. (7) in Eqs. (3)-(5), and keep gradient terms up to second order to obtain the following model for LLCs in \(d=2\):
\[\frac{1}{\Gamma_{Q}}\frac{\partial Q_{11}}{\partial t} = \pm 2|A|Q_{11}-4C(Q_{11}^{2}+Q_{12}^{2})Q_{11}+2L\nabla^{2}Q_{11}+ c_{0}(P_{1}^{2}-P_{2}^{2}), \tag{A1}\] \[\frac{1}{\Gamma_{Q}}\frac{\partial Q_{12}}{\partial t} = \pm 2|A|Q_{12}-4C(Q_{11}^{2}+Q_{12}^{2})Q_{12}+2L\nabla^{2}Q_{12}+2 c_{0}P_{1}P_{2}, \tag{A2}\] \[\frac{1}{\Gamma_{P}}\frac{\partial P_{1}}{\partial t} = [-\alpha(\rho)-\beta\mathbf{P}\cdot\mathbf{P}]P_{1}-\frac{v_{1}}{ 2\rho_{0}}\nabla_{x}\rho+\frac{\lambda_{1}}{\Gamma_{P}}(\mathbf{P}\cdot \nabla)P_{1}+\lambda_{2}\nabla_{x}(|\mathbf{P}|^{2})+\lambda_{3}P_{1}(\nabla\cdot\mathbf{P})+\kappa\nabla^{2}P_{1}+2 c_{0}(Q_{11}P_{1}+Q_{12}P_{2}), \tag{A3}\] \[\frac{1}{\Gamma_{P}}\frac{\partial P_{2}}{\partial t} = [-\alpha(\rho)-\beta\mathbf{P}\cdot\mathbf{P}]P_{2}-\frac{v_{1}}{ 2\rho_{0}}\nabla_{y}\rho+\frac{\lambda_{1}}{\Gamma_{P}}(\mathbf{P}\cdot \nabla)P_{2}+\lambda_{2}\nabla_{y}(|\mathbf{P}|^{2})+\lambda_{3}P_{2}(\nabla\cdot\mathbf{P})+\kappa\nabla^{2}P_{2}+2 c_{0}(Q_{12}P_{1}-Q_{11}P_{2}), \tag{A4}\] \[\frac{1}{\Gamma_{\rho}}\frac{\partial\rho}{\partial t} = -\frac{v_{0}}{\Gamma_{\rho}}\nabla\cdot(\mathbf{P}\rho)+D_{\rho} \nabla^{2}\rho. \tag{A5}\]
The \(\pm\) signs in Eqs. (A1)-(A2) refer to \(T>T_{c}\)\((-)\) and \(T<T_{c}\)\((+)\), where \(T_{c}\) is the ordering temperature of the pure nematic. Notice that the free energy yields \(\lambda_{2}=w/2\) and \(\lambda_{3}=-w\) in these equations. However, both of these dynamical terms are permitted by symmetry considerations. Therefore, we treat \(\lambda_{2}\) and \(\lambda_{3}\) as unrelated phenomenological parameters.
## Appendix B Fixed Point Solutions and Linear Stability Analysis
The dimensionless Eqs. (10)-(14) govern the evolution of the LLC to its steady state. It is useful to study the fixed point (FP) solutions \((\mathbf{Q}^{*},\mathbf{P}^{*})\), as these dictate the nature of the domains and steady states formed during the evolution. To determine the FP solutions for the coupled system, we set \(\partial/\partial t=\nabla=0\) in Eqs. (10)-(14) with \(\xi_{1}=\xi_{2}=1\):
\[\pm Q_{11}^{*}-({Q_{11}^{*}}^{2}+{Q_{12}^{*}}^{2})Q_{11}^{*}+c_{0} ({P_{1}^{*}}^{2}-{P_{2}^{*}}^{2})=0, \tag{B1}\] \[\pm Q_{12}^{*}-({Q_{11}^{*}}^{2}+{Q_{12}^{*}}^{2})Q_{12}^{*}+2c_{ 0}P_{1}^{*}P_{2}^{*}=0, \tag{B2}\] \[(g_{0}-|\mathbf{P}^{*}|^{2})P_{1}^{*}+c_{0}(Q_{11}^{*}P_{1}^{*}+Q _{12}^{*}P_{2}^{*})=0, \tag{B3}\] \[(g_{0}-|\mathbf{P}^{*}|^{2})P_{2}^{*}+c_{0}(Q_{12}^{*}P_{1}^{*}-Q _{11}^{*}P_{2}^{*})=0, \tag{B4}\]
where \(g_{0}=\rho_{0}/\rho_{c}-1\). The conservation law dictates that the homogeneous FP solution of Eq. (14) is \(\rho=\rho_{0}\). A trivial solution for Eqs. (B1)-(B4) is \(Q_{11}^{*}=0\), \(Q_{12}^{*}=0\), \(P_{1}^{*}=0\),
\(P_{2}^{*}=0\), which corresponds to a disordered state for both components.
The non-trivial FPs are rotationally invariant and can be expressed as:
\[Q_{11}^{*}=r_{Q}\cos 2\theta,\quad Q_{12}^{*}=r_{Q}\sin 2\theta;\quad P_{1}^{*}=r_{ P}\cos\theta,\quad P_{2}^{*}=r_{ P}\sin\theta. \tag{B5}\]
Here, \(\theta\) is the arbitrary angle between \(\mathbf{P}^{*}\parallel\mathbf{n}^{*}\) and the \(x\)-axis. We can choose \(\theta=0\) without loss of generality. This choice of \(\theta\) corresponds to \(Q_{11}^{*}=r_{Q},\ P_{1}^{*}=r_{P}\) and \(Q_{12}^{*}=P_{2}^{*}=0\). The substitution of these values in Eqs. (B1)-(B4) simplifies them to
\[-r_{Q}^{3}+(\pm 1+c_{0}^{2})r_{Q}\pm c_{0}|g_{0}|=0, \tag{B6}\] \[r_{P}^{2}=c_{0}r_{Q}\pm|g_{0}|. \tag{B7}\]
Here, the first \(\pm\) sign in Eq. (B6) signifies \(T<T_{c}\) (\(+\)) or \(T>T_{c}\) (\(-\)). The \(\pm\) sign with \(|g_{0}|\) is dictated by whether \(\rho_{0}>\rho_{c}\) (\(+\)) or \(\rho_{0}<\rho_{c}\) (\(-\)). We solved these equations for arbitrary values of \(c_{0}\). The FPs thus obtained are given in Table 1 for all cases.
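For clarity, the reduction behind Eqs. (B6)-(B7) is elementary: with \(Q_{12}^{*}=P_{2}^{*}=0\), Eq. (B3) gives

\[\left(g_{0}-r_{P}^{2}\right)r_{P}+c_{0}r_{Q}r_{P}=0\;\Longrightarrow\;r_{P}^{2}=c_{0}r_{Q}+g_{0},\]

and substituting this into Eq. (B1), \(\pm r_{Q}-r_{Q}^{3}+c_{0}r_{P}^{2}=0\), yields Eq. (B6) after writing \(g_{0}=\pm|g_{0}|\) according to the sign of \(\rho_{0}-\rho_{c}\).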
Next, we determine the stability of the FP solutions (\(\rho_{0},\mathbf{P}^{*},\mathbf{Q}^{*}\)). The evolution of small fluctuations around these solutions (\(\rho_{0}+\Delta\rho,\mathbf{P}^{*}+\Delta\mathbf{P},\mathbf{Q}^{*}+\Delta \mathbf{Q}\)) can be obtained using Eqs. (10)-(14). It is convenient to work with Fourier-transformed fluctuations
\([\Delta\rho({\bf k},t),\Delta{\bf P}({\bf k},t),\Delta{\bf Q}({\bf k},t)]\). The corresponding linearized equations can be written in vector notation:
\[\frac{\partial\Phi({\bf k},t)}{\partial t}=W({\bf k})\cdot\Phi({\bf k},t), \tag{B8}\]
where \(\Phi({\bf k},t)=[\Delta\rho({\bf k},t),\Delta P_{1}({\bf k},t),\Delta P_{2}({ \bf k},t),\Delta Q_{11}({\bf k},t),\Delta Q_{12}({\bf k},t)]\). The quantity \(W({\bf k})\) is a \(5\times 5\) matrix whose elements, with \(k^{2}=k_{x}^{2}+k_{y}^{2}\), are

\[\begin{aligned} W_{11}&=iv^{\prime}_{0}(k_{x}P_{1}^{*}+k_{y}P_{2}^{*})-D^{\prime}_{\rho}k^{2},\qquad W_{12}=ik_{x}v^{\prime}_{0}\rho_{0},\qquad W_{13}=ik_{y}v^{\prime}_{0}\rho_{0},\qquad W_{14}=W_{15}=0,\\ W_{21}&=\frac{P_{1}^{*}}{\rho_{c}}+\frac{ik_{x}v^{\prime}_{1}}{2\rho_{0}},\qquad W_{22}=\frac{\rho_{0}}{\rho_{c}}-1-3{P_{1}^{*}}^{2}-{P_{2}^{*}}^{2}-ik_{x}(\lambda^{\prime}_{1}+2\lambda^{\prime}_{2}+\lambda^{\prime}_{3})P_{1}^{*}-ik_{y}\lambda^{\prime}_{1}P_{2}^{*}-\kappa^{\prime}k^{2}+c_{0}Q_{11}^{*},\\ W_{23}&=-2P_{1}^{*}P_{2}^{*}-2ik_{x}\lambda^{\prime}_{2}P_{2}^{*}-ik_{y}\lambda^{\prime}_{3}P_{1}^{*}+c_{0}Q_{12}^{*},\qquad W_{24}=c_{0}P_{1}^{*},\qquad W_{25}=c_{0}P_{2}^{*},\\ W_{31}&=\frac{P_{2}^{*}}{\rho_{c}}+\frac{ik_{y}v^{\prime}_{1}}{2\rho_{0}},\qquad W_{32}=-2P_{1}^{*}P_{2}^{*}-2ik_{y}\lambda^{\prime}_{2}P_{1}^{*}-ik_{x}\lambda^{\prime}_{3}P_{2}^{*}+c_{0}Q_{12}^{*},\\ W_{33}&=\frac{\rho_{0}}{\rho_{c}}-1-3{P_{2}^{*}}^{2}-{P_{1}^{*}}^{2}-ik_{y}(\lambda^{\prime}_{1}+2\lambda^{\prime}_{2}+\lambda^{\prime}_{3})P_{2}^{*}-ik_{x}\lambda^{\prime}_{1}P_{1}^{*}-\kappa^{\prime}k^{2}-c_{0}Q_{11}^{*},\qquad W_{34}=-c_{0}P_{2}^{*},\qquad W_{35}=c_{0}P_{1}^{*},\\ W_{41}&=0,\qquad W_{42}=2c_{0}P_{1}^{*},\qquad W_{43}=-2c_{0}P_{2}^{*},\qquad W_{44}=\pm 1-3{Q_{11}^{*}}^{2}-{Q_{12}^{*}}^{2}-k^{2},\qquad W_{45}=-2Q_{11}^{*}Q_{12}^{*},\\ W_{51}&=0,\qquad W_{52}=2c_{0}P_{2}^{*},\qquad W_{53}=2c_{0}P_{1}^{*},\qquad W_{54}=-2Q_{11}^{*}Q_{12}^{*},\qquad W_{55}=\pm 1-{Q_{11}^{*}}^{2}-3{Q_{12}^{*}}^{2}-k^{2}. \end{aligned} \tag{B9}\]
As usual, the eigenvalues \(\{\bar{\lambda}_{i}\}\) and eigenvectors of \(W({\bf k})\) determine the stability of a FP. If \(\mathrm{Re}\,\bar{\lambda}_{i}>0\) for any \(i\), the fluctuations grow exponentially in time in the corresponding eigen-direction, i.e., the FP is unstable. To examine the stability of the disordered solution, we set \(P_{1}^{*}=P_{2}^{*}=Q_{11}^{*}=Q_{12}^{*}=0\) in Eq. (23). It is clear that the coupling terms do not contribute at the linear level as they are quadratic in \(P_{i}\) and \(Q_{ij}\). Thus, the stability properties of the trivial disordered FP are the same as those of the LC and AM separately.
For non-trivial FPs, the analysis is more complicated and analytically unwieldy even after setting \(P_{2}^{*}=Q_{12}^{*}=0\). We determine the \(\{\bar{\lambda}_{i}({\bf k})\}\) numerically as a function of \({\bf k}\), and see
whether any of their real parts lies above \(0\). For example, consider the phase diagram in Fig. 1(i). For large values of \(\rho_{0}-\rho_{c}\), the system lies in the ordered state of _Case 1_ in Table 1. Thus, all eigenvalues have negative real parts for this state. We reduce the value of \(\rho_{0}-\rho_{c}\) at constant \(c_{0}\), and investigate where the first instability arises. This signals the onset of a non-trivial ordered state with spatial inhomogeneity, which is identified as a chimera. This is how the dashed lines in Fig. 1(i) and Fig. 3(a)-(b) are obtained.
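In practice this reduces to an eigenvalue scan over wavevectors; a schematic implementation follows (our own sketch; `assemble_W` stands for a user-supplied routine that builds the matrix of Eq. (23) at a given FP).

```python
import numpy as np

def max_growth_rate(assemble_W, fp, params, kmax=2.0, nk=64):
    """Largest Re(eigenvalue) of W(k) over a grid of wavevectors."""
    ks = np.linspace(-kmax, kmax, nk)
    best = -np.inf
    for kx in ks:
        for ky in ks:
            lam = np.linalg.eigvals(assemble_W(kx, ky, fp, params))
            best = max(best, lam.real.max())
    return best

# Sweep rho_0 - rho_c downward at fixed c_0; the first point where
# max_growth_rate(...) turns positive marks the dashed instability line.
```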
In _Case 2_, we also have a non-trivial FP where \(Q_{11}^{*}=1,\ Q_{12}^{*}=P_{1}^{*}=P_{2}^{*}=0\), i.e., the LC is ordered and the AM is disordered. The dotted line in Fig. 3(a) denotes the boundary where this state becomes unstable, foreshadowing the onset of order in both fields.
## Appendix C Movies Showing Steady States of LLCs
The movies below show the evolution of the active field (right frame) and the nematic field (left frame) from the initially disordered state to different steady states. The steady states persist for the full simulation time (\(t=50000\)), even though the movies show them only up to \(t=2000\).
* Movie 1: Evolution of the LLC into a chimera for _Case 1_. The parameters are \(T>T_{c}\) and \(\rho_{0}=\rho_{c}^{+}=0.52\) with the coupling strength \(c_{0}=0.5\).
* Movie 2: Evolution of the LLC to the chimera state for _Case 2_: \(T<T_{c}\), \(\rho_{0}=\rho_{c}^{-}=0.48\) with \(c_{0}=0.1\). We point out here that the chimera in the nematic component manifests only after the annihilation of all defects (points of vanishing \(\mathcal{S}\)).
* Movie 3: The 2-dimensional soliton for _Case 2_: \(T<T_{c}\), \(\rho_{0}=\rho_{c}^{-}=0.48\) with \(c_{0}=0.1\). The nematic component exhibits the soliton only after the annihilation of all defects. |
2306.00342 | Combining Explicit and Implicit Regularization for Efficient Learning in
Deep Networks | Works on implicit regularization have studied gradient trajectories during
the optimization process to explain why deep networks favor certain kinds of
solutions over others. In deep linear networks, it has been shown that gradient
descent implicitly regularizes toward low-rank solutions on matrix
completion/factorization tasks. Adding depth not only improves performance on
these tasks but also acts as an accelerative pre-conditioning that further
enhances this bias towards low-rankedness. Inspired by this, we propose an
explicit penalty to mirror this implicit bias which only takes effect with
certain adaptive gradient optimizers (e.g. Adam). This combination can enable a
degenerate single-layer network to achieve low-rank approximations with
generalization error comparable to deep linear networks, making depth no longer
necessary for learning. The single-layer network also performs competitively or
out-performs various approaches for matrix completion over a range of parameter
and data regimes despite its simplicity. Together with an optimizer's inductive
bias, our findings suggest that explicit regularization can play a role in
designing different, desirable forms of regularization and that a more nuanced
understanding of this interplay may be necessary. | Dan Zhao | 2023-06-01T04:47:17Z | http://arxiv.org/abs/2306.00342v1 | # Combining Explicit and Implicit Regularization for Efficient Learning in Deep Networks
###### Abstract
Works on implicit regularization have studied gradient trajectories during the optimization process to explain why deep networks favor certain kinds of solutions over others. In deep linear networks, it has been shown that gradient descent implicitly regularizes toward low-rank solutions on matrix completion/factorization tasks. Adding depth not only improves performance on these tasks but also acts as an accelerative pre-conditioning that further enhances this bias towards low-rankedness. Inspired by this, we propose an explicit penalty to mirror this implicit bias which only takes effect with certain adaptive gradient optimizers (e.g. Adam). This combination can enable a degenerate single-layer network to achieve low-rank approximations with generalization error comparable to deep linear networks, making depth no longer necessary for learning. The single-layer network also performs competitively or out-performs various approaches for matrix completion over a range of parameter and data regimes despite its simplicity. Together with an optimizer's inductive bias, our findings suggest that explicit regularization can play a role in designing different, desirable forms of regularization and that a more nuanced understanding of this interplay may be necessary.
## 1 Introduction
Much work has poured into understanding why and how highly over-parameterized, deep neural networks with more parameters than training examples generalize so effectively despite long-held notions to the contrary [12; 13]. This generalization puzzle has only deepened as deep learning models often generalize well simply by optimizing its training error on an under-determined problem.
To explain this, previous works have focused on how gradient-based optimization induces an implicit bias on optimization trajectories, particularly in deep (i.e., over-parameterized) settings, tending towards solutions with certain properties [7; 27] like those that generalize well. In contrast, while explicit regularization has seen wide-spread usage in various settings (e.g. weight decay, dropout [59]), its role in explaining generalization has been less certain given its inability to prevent over-fitting on random labels [69] or its absence in deep models that generalize well on their own.
Some works have focused on a simple test-bed to formalize and isolate the mechanisms through which implicit regularization operates--namely, _matrix completion_. Given some observed subset of an unknown, low-rank matrix \(W^{\star}\), the task is to recover the unseen entries. A key observation [27] has been how gradient descent on a shallow linear neural network, with sufficiently small learning rate and near-zero initialization, pushes towards low-rank solutions on its own. This has led to the conjecture [27] that gradient descent induces implicit regularization that minimizes the nuclear norm.
This conjecture has been put into doubt by work [7] showing that gradient descent not only promotes low-rank solutions in the shallow case on matrix completion tasks, but its implicit regularization is further strengthened with increased depth--deep linear neural networks [7] are able to produce
solutions with lower rank and more accurate completion than those from minimizing the nuclear norm. Others [6] have also shown how increased depth or over-parameterization can provide an accelerative pre-conditioning to regression problems for faster convergence in deep linear networks.
**Contributions**: Despite these findings, some questions still remain:
* Are there explicit norm-based penalties that can mirror the effect of implicit regularization so as to better harness this phenomenon for improved or more efficient learning? Previous work [7] has conjectured that implicit regularization cannot be characterized by explicit norm-based penalties, but whether these penalties can produce similar effects is unclear.
* Do implicit and explicit forms of regularization interact in any meaningful way? Can we modify the implicit biases of optimizers with explicit regularizers so as to promote better kinds of performance? Some work [11] has begun to draw inspiration from implicit regularization to create explicit regularizers, but their interactions are less clear.
* Previous works [6; 7] have shown that depth can act as a powerful pre-conditioning to accelerate convergence or enhance implicit tendencies towards certain simpler or well-generalizing solutions. Can this effect be produced without depth?
To try and shed more light on these questions, we propose an explicit penalty that takes the ratio between the nuclear norm of a matrix and its Frobenius norm (\(\|W\|_{\star}/\|W\|_{F}\)) and study its effects on the task of matrix completion. This penalty can be interpreted as an adaptation of the Hoyer measure [32] to the spectral domain or as a particular normalization of the nuclear-norm penalty that is commonly used to proxy for rank in convex relaxations of the problem.
Studying implicit regularization can be difficult as it is not always possible to account for all other sources of implicit regularization. For a more precise study of the implicit biases of optimizers and their interactions with our penalty, we use matrix completion and deep linear networks as tractable, yet expressive, and well-understood test-beds that admit a crisp formulation of the mechanism through which implicit regularization operates [7; 28]. In particular, we show the following:
1. A depth 1 linear neural network (i.e., a degenerate network without any depth) trained with this penalty can produce the same rank reduction and deliver comparable, if not better, generalization performance than a deep linear network--all the while converging faster. In short, depth is no longer necessary for learning.
2. The above result only occurs under Adam and, to some extent, its close variants. This suggests that different optimizers, each with their own inductive biases, can interact differently with explicit regularizers to modify dynamics and promote certain solutions over others.
3. With the penalty, we achieve comparable or better generalization and rank-reduction performance against various other techniques (Fig. 4) even in low data regimes (i.e., fewer observed entries during training) where other approaches may have no recovery guarantees.
4. Furthermore, the penalty under Adam enables linear neural networks of all depth levels to produce similar well-generalizing low-rank solutions largely independent of depth, exhibiting a degree of _depth invariance_.
In this specific case, it appears that the learning dynamics which occur through the inductive bias of depth can be compressed or replaced with the right combination of optimization algorithm and explicit penalty. These properties may make deep linear networks more efficient for a variety of related matrix completion, estimation, or factorization tasks, ranging from applications in efficient reinforcement learning [58] to adversarial robustness [67], NLP [1], and others.
Previous conjectures and works [6; 7; 69] have largely dismissed the necessity of explicit regularization in understanding generalization in deep learning, leaving its overall role unclear. Similarly, despite its relative popularity, Adam and other adaptive optimizers have received their share of doubt [60; 61] regarding their effectiveness in producing desirable solutions. Our results suggest a subtle but perhaps important interplay between the choice of the optimizer, its inductive bias, and the explicit penalty in producing different optimization dynamics and, hence, different kinds of solutions. If so, then in these interactions, explicit regularization may yet have a role to play.
**Paper Overview**: In Section 2, we review previous work. Section 3 details our experiments and findings. In Section 4, we extend our experiments to a common real-world benchmark and compare our method with other methods. We conclude in Section 5 with a discussion on future work.
## 2 Related Work
Implicit regularization has been studied extensively with recent work focusing on optimization trajectories in deep, over-parameterized settings [6; 7; 10]. One avenue in particular has focused on linear neural networks (LNNs) [26; 29; 37; 57] given their relative tractability and the similarity of their learning dynamics to those of non-linear networks. In settings with multiple optima, [28] has shown how LNNs trained on separable data can converge with an implicit bias to the max margin solution. Others [7] have demonstrated how gradient flow converges towards low-rank solutions where depth acts as an accelerative pre-conditioning [6]. In [3], for natural and vanilla gradient descent (GD), different kinds of pre-conditioning are shown to impact bias-variance and risk trade-offs in over-parameterized linear regression, but this is less clear for adaptive gradient optimizers.
Building upon notions of acceleration and pre-conditioning dating back to Nesterov [50] and Newton, Adam's [39] effectiveness--and its close variants [24; 44; 45]--in optimizing deep networks faster makes clear the importance of adaptive pre-conditioning. Though some [60; 61] have doubted their effectiveness due to potential issues that can harm generalization, others [31; 42; 66; 70] have demonstrated advantages from their adaptive preconditioning; despite speeding up optimization, however, the effect of pre-conditioning on generalization has been less clear as some [2; 64; 23] have argued that the "sharpness" of minima achieved can vary depending on the choice of optimizer. More recent works have characterized pre-conditioning and mechanisms of implicit regularization around "edges of stability" for optimizers where the training regime occurs within a certain sharpness threshold, defined as the maximum eigenvalue of the loss Hessian [8; 21; 22].
Matrix completion and factorization have themselves long been an important focus for areas like signal recovery [17] and recommendation systems [14] from theoretical bounds for recovery and convergence [18; 19; 46; 54] to practical algorithms and implementations [34; 47]. We refer to [20] for a comprehensive survey. These tasks and related ones have also served as test-beds for understanding implicit regularization; [7; 27] use matrix factorization and sensing to study gradient flow's implicit bias towards low-rank solutions, conjecturing that algorithmic regularization may not correspond to minimization of any norm-based penalty. [63] studies the implicit bias of mirror descent on matrix sensing, showing that the solution interpolates between the nuclear and Frobenius norms depending on the underlying matrix. In a related thread, [43] has shown that gradient flow with any commuting parameterization is equivalent to continuous mirror descent with a specific Legendre function, generalizing previous results [9; 27; 62] that have characterized the implicit bias of GD.
Interestingly, [11] illustrates how the discrete steps of GD can regularize optimization trajectories away from large gradients towards flatter minima, developing an explicit regularizer to embody and reinforce this implicit effect directly. To our knowledge, our paper is the first to propose a ratio penalty in matrix completion to study the interplay between explicit and implicit regularization.
## 3 Experiments and Findings
### Setup
Formally, we have a ground-truth matrix \(W^{\star}\in\mathbb{R}^{m\times n}\) whose observed entries are indexed by the set \(\Omega\). We define the projection \(\mathcal{P}_{\Omega}(W^{\star})\) to be a \(m\times n\) matrix such that the entries with indices in \(\Omega\) remain while the rest are masked with zeros. We are interested in the following optimization:
\[\min_{W}\mathcal{L}(W)\coloneqq\min_{W}\|\mathcal{P}_{\Omega}(W^{\star})- \mathcal{P}_{\Omega}(W)\|_{F}^{2}+\lambda R(W) \tag{1}\]
where \(\|\cdot\|_{F}\) is the Frobenius norm, \(R(\cdot)\) is an explicit penalty, and \(\lambda\geq 0\) is the tuning parameter. While we consider various penalties, our main focus is demonstrating the effects of our proposed penalty \(R(W)=||W||_{*}/||W||_{F}\). Following earlier works [4], we define a _deep linear neural network_ (DLNN) through the following over-parameterization, or deep factorization, of \(W\):
\[W=W_{N}W_{N-1}\ldots W_{1} \tag{2}\]
under the loss function in (1) where \(W_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}},i\in\{1,\ldots,N\}\) denotes the weight matrix corresponding to depth \(i\) or the \(i\)-th layer. Here, \(N\) denotes the depth of the network/factorization where \(N=2\) corresponds to _matrix factorization_ or a shallow network, \(N\geq 3\) corresponds to _deep matrix factorization_ or a deep network, and \(N=1\) is the degenerate case (no depth/factorization).
We refer to the matrix \(W\), the product of the \(N\) weight matrices in Eq.2, as the _end-product matrix_ as per [7]. As such, the end-product matrix \(W\) is the solution produced in estimating \(W^{\star}\) or, conveniently, the DLNN itself.
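To make the setup concrete, here is a minimal PyTorch sketch (our own illustration, not code from the paper) of the masked objective in Eq. (1) with the ratio penalty, parameterized through the deep factorization of Eq. (2); a one-element `weights` list recovers the degenerate \(N=1\) case.

```python
import torch

def end_product(weights):
    """End-product matrix W = W_N ... W_1 of the deep factorization in Eq. (2)."""
    W = weights[0]
    for Wi in weights[1:]:
        W = Wi @ W
    return W

def ratio_penalty(W):
    """Proposed penalty R(W) = ||W||_* / ||W||_F."""
    return torch.linalg.matrix_norm(W, ord='nuc') / torch.linalg.matrix_norm(W, ord='fro')

def loss(weights, W_star, mask, lam):
    """Masked reconstruction loss of Eq. (1) plus the ratio penalty."""
    W = end_product(weights)
    return torch.sum((mask * (W_star - W)) ** 2) + lam * ratio_penalty(W)
```

Training then simply amounts to stepping an optimizer (e.g. `torch.optim.Adam`) on the list of layer matrices.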
In our analyses, we focus on rank 5 matrices as the ground truth \(W^{*}\) and parameterize our DLNN \(W\) with \(d_{0}=\ldots=d_{N}=m=n=100\) (i.e., weight matrices \(W_{i}\in\mathbb{R}^{100\times 100},\ \forall i\)) for illustrative purposes, but our results extend to other ranks (e.g. see Appendix A.2). We follow previous work [7] and employ the effective rank [56] of a matrix to track and quantify the rank of \(W\) in our experiments, defined as: \(\mathrm{e}\mathrm{-rank}(W)=\exp\left\{H(p_{1},\ldots,p_{n})\right\}\) where \(H\) is the Shannon entropy, \(p_{i}=\sigma_{i}/\norm{\sigma}_{1}\), \(\{\sigma_{i}\}\) are the unsigned singular values of \(W\), and \(\norm{\sigma}_{1}\) is the \(\ell_{1}\) norm. We use this measure because the numerical instability of naive numerical rank estimates is a known issue [56] that would otherwise yield unreliable and unstable rank estimates. We leave a detailed discussion of experiment settings to Appendix A.1.
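For reference, a direct implementation of this effective rank measure (a short sketch; numerical details such as the zero-singular-value cutoff are our own choices):

```python
import numpy as np

def effective_rank(W, eps=1e-12):
    """e-rank(W) = exp(H(p_1, ..., p_n)) with p_i = sigma_i / ||sigma||_1."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    p = p[p > eps]                      # convention: 0 * log 0 = 0
    return float(np.exp(-(p * np.log(p)).sum()))
```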
### Depth, without penalty
We first establish a baseline by characterizing the inductive biases of un-regularized gradient descent and un-regularized Adam to better understand their dynamics in the presence of our penalty.
**Gradient Descent** Previous work [7] has shown that depth enhances gradient descent's implicit regularization towards low rank, characterized by the following trajectories on the end-product matrix \(W\) and its singular values \(\{\sigma_{i}\}\) (for details, see [6; 7]):
\[\dot{\sigma}_{i} =-N(\sigma_{i}(t)^{2})^{\frac{N-1}{N}}\cdot\mathbf{u}_{i}^{\top} \nabla_{W}\mathcal{L}(W(t))\mathbf{v}_{i} \tag{3}\] \[\mathrm{vec}(\dot{W}) =-P_{W}\mathrm{vec}\left(\nabla_{W}\mathcal{L}(W)\right) \tag{4}\]
where \(\dot{\sigma}_{i}\) is the time derivative of \(\sigma_{i}(t)\), the \(i\)-th singular value of \(W(t)\), \(\{\mathbf{u}_{i},\mathbf{v}_{i}\}\) are the left and right singular vectors of \(W(t)\) corresponding to \(\sigma_{i}(t)\), \(N\) is the network's depth, \(\nabla_{W}\mathcal{L}(W(t))\) is the loss gradient with respect to the end-product matrix \(W\) at time \(t\), \(\mathrm{vec}(\cdot)\) denotes (column-first order) vectorization, \(\dot{W}=dW/dt\) is the time evolution of the end-product matrix or (equivalently) the DLNN itself, \(P_{W}=\sum_{j=1}^{N}(W^{\top}W)^{\frac{N-j}{N}}\otimes(WW^{\top})^{\frac{j-1}{N}}\), and \(\otimes\) denotes the Kronecker product. We suppress the explicit dependency on \(t\) for simplicity and note that the full dynamics in Eq. (3) require non-degeneracy (non-trivial depth, \(N>1\)); otherwise, they reduce to just \(-\mathbf{u}_{i}^{\top}\nabla_{W}\mathcal{L}(W(t))\mathbf{v}_{i}\).
In Eq. (4), \(P_{W}\) can be seen as a pre-conditioning onto the gradient that, with sufficient depth (\(N\geq 2\)), accelerates movements already taken in the optimization [6]. As depth/over-parameterization increases, this acceleration intensifies while larger singular values and their movements become more pronounced than their smaller counterparts, driving singular value separation and a decrease in rank of the recovered matrix (Fig. 1, top row). The singular values evolve at uneven paces depending on the depth; increasing depth increases the gap in the time it takes between the \(1^{\text{st}}\) and \(5^{\text{th}}\) largest singular values to develop while also taking longer to stabilize. These effects are even more pronounced when comparing the five largest singular values to the remaining ones. Only with sufficient depth (\(N>2\)) do solutions produced by un-penalized gradient descent minimize rank so as to recover the rank of the underlying matrix and produce solutions with low test error.
Figure 1: Dynamics of _un-regularized gradient descent (GD) and Adam_. Plots show the performance of GD (top row) and Adam (bottom row) over networks of depths 2/3/4/5 for rank 5 matrices of size \(100\times 100\). Colors correspond to different depth levels and shaded regions correspond to error bands. The left column depicts generalization error as a function of depth and training iterations. The middle column depicts the change in effective rank across depths and over training iterations. The right column shows the 1\({}^{\text{st}}\) and 5\({}^{\text{th}}\) largest singular values for each depth across training iterations. For singular values, a solid line indicates the 1\({}^{\text{st}}\) largest singular value while a dotted line indicates the 5\({}^{\text{th}}\) largest within each depth level (colored lines). We omit the remaining singular values to avoid clutter.
**Adam** Analyzing Adam can be difficult given its exponentially moving average of gradients; to simplify our analysis, we borrow some assumptions from [5] to approximate Adam's dynamics via gradient flow by assuming that the discounted gradients can be well-approximated by their expectation (see Appendix A.4 for more details).
**Theorem 1**.: _Under the assumptions above and of [7], the trajectory of the singular values \(\sigma_{i}\) of the end-product matrix \(W\) can be approximately characterized as:_
\[\dot{\sigma}_{i}=-\mathrm{vec}(\mathbf{v}_{i}\mathbf{u}_{i}^{\top})^{\top}P_{W,G}\mathrm{vec}(\nabla_{W}\mathcal{L}(W)) \tag{5}\]
_Similarly, the trajectory of the end-product matrix \(W\) itself can be approximately characterized as:_
\[\mathrm{vec}(\dot{W})=-P_{W,G}\mathrm{vec}(\nabla_{W}\mathcal{L}(W)) \tag{6}\]
_where \(P_{W,G}=\sum_{j=1}^{N}((WW^{\top})^{\frac{j-1}{N}}\otimes(W^{\top}W)^{\frac{N- j}{N}})G_{j}\) is p.s.d. and \(G_{j}\) is a diagonal matrix for layers \(j\in\{1,\ldots,N\}\). Specifically, \(G_{j}=\mathrm{diag}(\mathrm{vec}(S_{j}))\), \([S_{j}]_{m,n}=[(\nabla_{W_{j}}\mathcal{L}(W)^{2}+s_{j}^{2})^{-1/2}]_{m,n}\), \(\nabla_{W_{j}}\mathcal{L}(W)=\partial\mathcal{L}(W)/\partial W_{j}\) is layer \(j\)'s loss gradient, and \(s_{j}^{2}=\mathrm{var}(\nabla_{W_{j}}\mathcal{L}(W))\)._
Proof.: See Appendix A.4.
Via this approximation, the pre-conditioning induced by Adam can be characterized as a modification of gradient descent's \(P_{W}\), which now normalizes each layer by the square-root of its squared layer-wise loss gradient \((\partial\mathcal{L}(W)/\partial W_{j})^{2}\) and the gradient variance \(s_{j}^{2}\), before summing across all depth levels. Unlike before, the variance of the loss gradient comes into play. Whereas before the pre-conditioning served as a purely accelerative effect that intensifies with depth, its normalization by the gradient variance of each layer \(W_{j}\) can now either dampen or further accelerate the trajectory.
Empirically, we see that depth enhances the implicit bias towards low-rank solutions for both Adam and gradient descent albeit differently (Fig. 1, middle column); in deeper regimes (\(N>2\)), Adam minimizes rank to exact/near-exact rank recovery more smoothly than gradient descent via faster (\(10^{4}\) vs. \(10^{5}\) iterations) and more uniform convergence (Fig. 1, bottom row). With Adam, singular value dynamics exhibit significantly more uniform evolution regardless of depth in contrast to gradient descent (Fig. 1, right), leading to different types of trajectories and solutions.
### Depth, with penalty
**Gradient Descent** We now characterize the dynamics of gradient flow with our penalty.
**Theorem 2**.: _Under the assumptions of [6], the evolution of the singular values of the end-product matrix, under gradient descent with the penalty, can be approximated in the following fashion:_
\[\dot{\sigma}_{r}=\dot{\sigma}_{r}^{\text{GF}}-\frac{\lambda N}{||W||_{F}^{2}}\left(1-\frac{||W||_{*}}{||W||_{F}}\right)\sigma_{r}^{\frac{3N-2}{N}} \tag{7}\]
_where \(\lambda\geq 0\) is the regularization parameter and \(\dot{\sigma}_{r}^{\text{GF}}\) denotes the un-regularized singular value trajectory under gradient flow in Eq. (3). Similarly, the evolution of \(W\) can be approximated via:_
\[\mathrm{vec}(\dot{W})=-P_{W}\left(\mathrm{vec}\left(\nabla_{W}\mathcal{L}(W) \right)+\lambda\frac{\mathrm{vec}(UV^{\top}-U\tilde{\Sigma}V^{\top})}{||W||_{ F}^{2}}\right) \tag{8}\]
_where \(U\), \(V\) contain the left and right singular vectors of \(W\), \(P_{W}\) is the pre-conditioning in Eq. (4), and \(\tilde{\Sigma}=\frac{||W||_{*}}{||W||_{F}}\Sigma\) where \(\Sigma\) contains the singular values of \(W\) along its diagonal._
Proof.: See Appendix A.4 for details.
The penalty can be seen as an additive component to the original trajectory in Eq. (3) that also intensifies with increased depth, making movements in larger singular values more pronounced than those of smaller ones. As a result, it can enable more pronounced singular value separation than before (a power of \(\frac{3N-2}{N}\) in \(\sigma_{r}\) vs. \(\frac{2N-2}{N}\)), depending on \(\lambda\). Increasing depth continues to push down rank but, unlike before (Eq. (3), Eq. (4)), the penalty now allows singular value trajectories to depend on their own magnitudes even without depth (Eq. (7) with \(N=1\)), providing an additional degree of freedom. The penalty also allows each singular value to depend on its relative weight within the distribution of singular values through \((1-||W||_{*}/||W||_{F})\) rather than just its own absolute magnitude.
In Eq. (8), we also see that the depth-dependent accelerative pre-conditioning \(P_{W}\) now acts on a new term: while the first term can be interpreted as the typical influence of reducing the loss via training and gradient optimization on the network's trajectory, the new term can be interpreted as a spectral-based component that can be used by \(P_{W}\) to further enhance the spectral trajectory of \(W\) at higher depths, like in Eq. (7). Looking at the diagonals, the new term can be seen as a spectrally re-scaled version of \(W\) that influences its trajectory in a way that accounts for each singular value's weight relative to its entire spectrum: \(\mathbf{u}_{i}^{\top}\left(UV^{\top}-U\tilde{\Sigma}V^{\top}\right)\mathbf{v}_{i}/||W||_{F}^{2}=\frac{1}{||W||_{F}^{2}}\left(1-\frac{||W||_{*}}{||W||_{F}}\sigma_{i}\right)\).
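The \(UV^{\top}\)-minus-rescaled-\(W\) structure of this term reflects the gradient of the ratio penalty itself: for generic \(W\) (distinct, nonzero singular values), \(\nabla_{W}R(W)=UV^{\top}/\|W\|_{F}-\|W\|_{*}W/\|W\|_{F}^{3}\). A quick sanity check of this closed form against autograd (our own sketch, assuming PyTorch):

```python
import torch

torch.manual_seed(0)
W = torch.randn(8, 8, dtype=torch.float64, requires_grad=True)
R = torch.linalg.matrix_norm(W, ord='nuc') / torch.linalg.matrix_norm(W, ord='fro')
R.backward()

with torch.no_grad():
    U, S, Vh = torch.linalg.svd(W)
    fro, nuc = torch.linalg.matrix_norm(W, ord='fro'), S.sum()
    closed_form = U @ Vh / fro - nuc * W / fro ** 3
print(torch.allclose(W.grad, closed_form))  # True for generic W
```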
Empirically, comparing the un-regularized case (Fig. 1 top row) to the regularized case (Fig. 2 top row), we see that the penalty helps increase the speed at which rank is reduced, inducing faster rates of rank reduction earlier in training. Unlike in un-regularized gradient descent where deeper networks take longer to exhibit rank reduction, the penalty enables near simultaneous reductions in rank across all depths (Fig. 2 top row, middle), making it less dependent on depth.
**Adam** With Adam, the penalty's effect differs considerably in terms of the solutions produced.
**Theorem 3**.: _Under the same assumptions, with the proposed penalty, the evolution of the end-product matrix and its singular values can be approximated via the following:_
\[\begin{split}\dot{\sigma}_{i}&=-\mathrm{vec}(\mathbf{v}_{i}\mathbf{u}_{i}^{\top})^{\top}P_{W,G}\left(\mathrm{vec}\left(\nabla_{W}\mathcal{L}(W)\right)+\lambda\frac{\mathrm{vec}(UV^{\top}-U\tilde{\Sigma}V^{\top})}{||W||_{F}^{2}}\right)\\ \mathrm{vec}(\dot{W})&=-P_{W,G}\left(\mathrm{vec}\left(\nabla_{W}\mathcal{L}(W)\right)+\lambda\frac{\mathrm{vec}(UV^{\top}-U\tilde{\Sigma}V^{\top})}{||W||_{F}^{2}}\right)\end{split} \tag{9}\]
Proof.: Follows from Eq. (5) and Eq. (6) with our penalty. See Appendix A.4 for details.
Empirically, we note a new development: there is a large degree of _depth invariance_ as rank is pushed down and low test error is achieved almost independently of depth (Fig. 2, bottom row), even at depth 2 (i.e., a shallow network). Exact rank recovery of the underlying matrix is now possible at all depths, unlike gradient descent, and the networks converge to solutions faster by an order of magnitude.
Figure 2: Dynamics of _regularized gradient descent (GD) and regularized Adam_ with our penalty. Plots show the performance of GD (top row) and Adam (bottom row) with our proposed penalty over networks of depths 2/3/4/5 for rank 5 matrix completion. Setup is otherwise identical to Fig. 1. Here, \(\lambda=10^{-4}\) but results hold for a range of values \(\lambda\in[10^{-4},10^{-1}]\).
From a shallow network (\(N=2\)), increasing the depth does not induce any material changes in the solutions produced as the penalized DLNN under Adam produces low-rank solutions that achieve exact rank recovery and low test error faster and better than previous settings.
Moreover, we see that this combination of Adam with the penalty also retains some of the accelerative effect of Adam. Specifically, we see more favorable generalization properties and smoother development of singular values whose convergence speed is at least an order of magnitude faster than under gradient descent (\(10^{4}\) vs. \(10^{5}\) iterations)--whose effects do not vary much with depth. As the singular values evolve, we see that their paths are relatively robust to depth, exhibiting near-identical behavior with significantly less dependence on depth than before (Fig. 2, right most column).
### No depth, with penalty
Given the beneficial depth-invariant effects produced by the penalty under Adam in both deep (\(N>2\)) and shallow (\(N=2\)) networks, we now consider its effects in a limiting case: a degenerate depth 1 network (i.e., no depth; \(N=1\)). It is known [69] that gradient descent in this setting converges to the minimum Frobenius (\(\ell_{2}\)) norm solution, which does not necessarily induce low-rankedness. As expected, training a depth 1 network with gradient descent fails to produce a well-generalizing or low-rank solution (Fig. 3, top row) as learning is heavily constrained without depth.
Yet, despite being ineffective under gradient descent, the penalty is again effective under Adam (Fig. 3, bottom row) even without depth, generalizing as well as if not better than deep networks (\(N>2\)). We note that replacing our penalty with other proxies of rank or spectral sparsity, like the nuclear norm, does not work (Table 1). As described earlier, under un-regularized gradient flow with no depth, network dynamics collapse as singular value trajectories reduce to \(\mathbf{u}_{i}^{\top}\nabla_{W}\mathcal{L}(W(t))\mathbf{v}_{i}\) and the depth dependent accelerative pre-conditioning vanishes (\(P_{W}=I_{mn}\)). We see this empirically (e.g. Fig. 3 top row and Table 1) as solutions from un-regularized gradient descent generalize poorly and fail at rank recovery. In contrast, a depth 1 network trained under Adam with the penalty not only achieves low test error (Fig. 3, bottom-left), but also recovers the underlying rank of the ground truth matrix--behaving qualitatively like a deep network. As such, we see that the depth invariance of Adam and the penalty in deeper networks also extends to the case of a depth-less degenerate network.
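For concreteness, a minimal end-to-end version of this depth 1 experiment (our own reconstruction; the hyperparameters here are illustrative rather than the paper's exact settings):

```python
import torch

torch.manual_seed(0)
n, r, lam = 100, 5, 1e-2
W_star = (torch.randn(n, r) @ torch.randn(r, n)).double()   # rank-5 ground truth
mask = (torch.rand(n, n) < 0.3).double()                    # observed entries
W = torch.nn.Parameter(1e-3 * torch.randn(n, n, dtype=torch.float64))
opt = torch.optim.Adam([W], lr=1e-3)

for step in range(20000):
    opt.zero_grad()
    fit = torch.sum((mask * (W_star - W)) ** 2)             # loss on observed entries
    R = torch.linalg.matrix_norm(W, ord='nuc') / torch.linalg.matrix_norm(W, ord='fro')
    (fit + lam * R).backward()
    opt.step()

test_err = torch.norm((1 - mask) * (W_star - W)) / torch.norm((1 - mask) * W_star)
print(float(test_err))
```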
Without depth, a key component that appears to help the network learn under Adam and the penalty is its variance-weighted gradient term \(\nabla_{W}\mathcal{L}(W)\cdot G\), as defined in Eq. (6), along with the term \(P_{W,G}\left(\lambda\frac{\text{vec}(UV^{\top}-U\tilde{\Sigma}V^{\top})}{||W||_{F}^{2}}\right)\), which reduces to \(G\left(\lambda\frac{\text{vec}(UV^{\top}-U\tilde{\Sigma}V^{\top})}{||W||_{F}^{2}}\right)\) without depth. Interestingly, the variance of the loss gradient and the ratio \(\eta^{2}=\text{var}(\nabla_{W}\mathcal{L}(W))/\nabla_{W}\mathcal{L}(W)^{2}\) formed from \(\nabla_{W}\mathcal{L}(W)\cdot G\) resembles an inverse signal-to-noise ratio; both quantities have come up in other works as strongly predictive of generalization capabilities [35] or as essential to finding optimal variance adaptation factors for loss gradients [5].
Figure 3: Performance comparison between choice of optimization and regularizer in a depth 1 network. Top row corresponds to gradient descent and bottom corresponds to Adam. Note that \(\lambda=0\) (red line) corresponds to the un-regularized setting. Here, \(\lambda=10^{-2}\) but results hold for values \(\lambda\in[10^{-6},10^{-1}]\).
### Comparison with other penalties and optimizers
**Other Combinations** Our results do not preclude the possibility that other optimizers and penalties can produce similar or better effects. For completeness, and inspired by the properties of our penalty (e.g. ratio-like, spectral, non-convex), we experiment with various optimizers and penalties (Table 1) to compare their interactions across shallow (\(N=1\)) and deep (\(N=3\)) settings. We note that our ratio penalty under Adam and its close variant Adamax, both initially proposed in [39], are the only combinations that largely show depth invariance in test error and rank recovery whereas others require depth to reduce rank or generalize better. Though the nuclear norm exhibits some depth invariance, it is unable to find well-generalizing solutions and fails in rank reduction under gradient descent where depth is still necessary. Other combinations also fail in enabling a depth 1 network to perform as well as deeper ones. Due to space constraints, we leave a fuller treatment and characterization of each optimizer's inductive bias and its interaction with different regularizers for future work.
**Comparative Performance** We also note our penalty's comparative performance (Fig. 4) against other methodologies for matrix completion across various sample sizes (i.e., the amount of observed entries, uniformly sampled, made available for training). A depth 1 network with Adam and the penalty (Fig. 4, **Adam:1+R**, red line) outperforms all other methods including an un-regularized DLNN in terms of test error and degree of rank compression/recovery across varying sample sizes. Even at lower sample sizes, the depth 1 network generalizes better than methods such as SoftImpute[47], OptSpace[38], the minimum nuclear norm solution [19], and DLNNs trained with gradient descent (\(N\geq 3\)) by at least an order of magnitude. It also outperforms other methods across various data regimes from small sample sizes to large ones, improving as the sample size grows.
\begin{table}
\begin{tabular}{llcccccccccccc} \hline \hline \multirow{2}{*}{Optimizer} & \multirow{2}{*}{Depth} & \multicolumn{2}{c}{Ratio} & \multicolumn{2}{c}{Sch \(\frac{1}{2}{:}\frac{2}{3}\)} & \multicolumn{2}{c}{Sch \(\frac{1}{2}{:}\frac{2}{3}\)} & \multicolumn{2}{c}{Sch \(\frac{1}{2}{:}\frac{1}{2}\)} & \multicolumn{2}{c}{Nuc} & \multicolumn{2}{c}{None} \\ \cline{3-14} & & Err & Rk & Err & Rk & Err & Rk & Err & Rk & Err & Rk & Err & Rk \\ \hline \multirow{2}{*}{Adam [39]} & 1 & **4e-7** & **5** & 0.72 & 33 & 0.80 & 45 & 0.81 & 53 & 0.36 & 6 & 1.00 & 79 \\ & 3 & **4e-7** & **5** & 3e-6 & 5 & 1e-5 & 5 & 6e-6 & 5 & 0.30 & 5 & 0.04 & 6 \\ \hline \multirow{2}{*}{Adagrad [25]} & 1 & 0.58 & 31 & 0.81 & 60 & 0.97 & 32 & 0.79 & 60 & 0.12 & 8 & 0.80 & 70 \\ & 3 & 3e-7 & 5 & 9e-7 & 5 & 1e-5 & 5 & 2e-7 & 5 & 0.05 & 6 & 4e-3 & 6 \\ \hline \multirow{2}{*}{Adamax [39]} & 1 & **4e-7** & **5** & 0.76 & 44 & 0.85 & 22 & 0.80 & 58 & 0.05 & 6 & 0.81 & 72 \\ & 3 & **7e-7** & **5** & 3e-6 & 5 & 7e+5 & 1 & 6e-6 & 5 & 0.07 & 7 & 0.01 & 7 \\ \hline \multirow{2}{*}{RMSProp} & 1 & 2e-4 & 6 & 0.08 & 4 & 1.6e+3 & 5 & 1.8e+3 & 8 & 0.05 & 8 & 0.80 & 70 \\ & 3 & 0.03 & 5 & 8e-4 & 5 & 2e-3 & 5 & 1.9 & 5 & 0.05 & 6 & 0.11 & 14 \\ \hline \multirow{2}{*}{GD} & 1 & 0.81 & 67 & 0.81 & 62 & 0.80 & 47 & 0.81 & 60 & 0.82 & 59 & 0.83 & 72 \\ & 3 & 0.51 & 3 & 0.25 & 5 & 0.56 & 3 & 0.39 & 5 & 0.24 & 4 & 1e-5 & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for rank 5 matrix completion across various optimizer/penalty/depth combinations in terms of test error (**Err**) and effective rank (**Rk**, rounded to nearest integer) of the estimated matrix. **Ratio** denotes our ratio penalty (\(||\cdot||_{*}/||\cdot||_{F}\)), **Sch** \(p{:}q\) denotes the ratio of two Schatten (quasi)norms (\(||\cdot||_{S_{p}}/||\cdot||_{S_{q}}\)) as penalty, **Nuc** denotes the nuclear norm penalty, **None** denotes no penalty, and \(a\)e\(b\) denotes \(a\cdot 10^{b}\). Best results—in terms of both absolute test error (lower the better) and rank reduction (closer to 5 the better) as well as depth invariance in terms of error and rank—are in bold. For more results, see Appendix A.3.
Figure 4: Comparative performance in test error and rank minimization for rank 5 matrix completion. \(x\)-axis stands for the number of observed entries (\(\mathbb{R}^{100\times 100}\), so out of \(100\times 100=10^{4}\) entries) and shaded regions indicate error bands. **Adam:1+R** refers to a depth 1 network trained with Adam and penalty, **CP** is the minimum nuclear norm solution, **GD:3** is a depth 3 network trained with gradient descent, **OPT** is **OptSpace**[38], and **SI** is **SoftImpute**[47]. To reduce clutter, we omit results with similar performance as **GD:3** (e.g. GD:4, GD:5).
## 4 Results on real data
Lastly, a natural question might be: how well do our results extend to real-world data? To answer this, we consider MovieLens100K [15]--a common benchmark used to evaluate different approaches for recommendation systems. It consists of ratings from 944 users on 1,683 movies, forming an interaction matrix \(M\in\mathbb{R}^{944\times 1683}\) where the goal is to predict the rest of \(M\) after observing a subset.
Unlike our earlier experiments, the values here are discrete in the range \(\{1,2,3,4,5\}\) and \(M\) is of high, or near full, rank. Given these differences and more conventional empirical approaches in recommendation systems, we apply our penalty in two ways. The first way is as before: training a depth 1 network with Adam and the penalty (Depth 1 LNN, Table 2). The second way is to impose our penalty on a classic user-item embedding model (User-Item Embedding, Table 2) [41] that combines user-specific and item-specific biases with a dot product between a latent user and latent item embedding; we apply our penalty separately on either the item or the user embedding layer. Though approaches solely utilizing explicit ratings have fallen out of favor in lieu of ones incorporating additional information and complex designs (e.g. graph-based, implicit feedback, deep non-linear networks), we nonetheless evaluate the effects of our penalty within this simple framework. We compare the results from these two approaches with a variety of approaches that use specialized architectures, deep non-linear networks, additional side information, etc., beyond \(M\).
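As an illustration of the second approach, here is a bare-bones sketch of such a biased dot-product model with the penalty imposed on an embedding table. All class and variable names are our own, and the hyperparameters (e.g. `d=32`) are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class UserItemEmbedding(nn.Module):
    def __init__(self, n_users, n_items, d=32):
        super().__init__()
        self.P = nn.Embedding(n_users, d)       # latent user factors
        self.Q = nn.Embedding(n_items, d)       # latent item factors
        self.b_u = nn.Embedding(n_users, 1)     # user-specific bias
        self.b_i = nn.Embedding(n_items, 1)     # item-specific bias
        self.mu = nn.Parameter(torch.zeros(1))  # global offset

    def forward(self, u, i):
        dot = (self.P(u) * self.Q(i)).sum(dim=-1)
        return self.mu + self.b_u(u).squeeze(-1) + self.b_i(i).squeeze(-1) + dot

def ratio_penalty(M):
    return torch.linalg.matrix_norm(M, ord='nuc') / torch.linalg.matrix_norm(M, ord='fro')

# training step: mse(model(u, i), rating) + lam * ratio_penalty(model.Q.weight),
# i.e., the penalty is applied to the item (or, alternatively, user) embedding.
```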
From Table 2, we see that Adam and the penalty (**w. Adam+penalty**) can improve performance over the baseline of gradient descent (GD) or Adam alone. Surprisingly, a depth 1 network with Adam and the penalty can outperform or come close to other more specialized approaches despite its simplicity; however, in contrast to the other methods, it does so without any specialized or additional architectures (e.g. helper models/networks), external information beyond \(M\) (e.g. implicit feedback, side information), construction of new graphs or features, non-linearities, higher-order interactions (e.g. factorization machines), or--for the depth 1 case--even any factorization at all. More precise tuning (e.g. better initialization, learning schedules) or usage of other information/features may yield further improvements on this task and others that involve matrix completion or factorization [1, 58, 67]. We leave these for fuller evaluation and further study in future work.
\begin{table}
\begin{tabular}{lcc} \hline \hline Model & Uses side info, add. features, or other info, etc.? & RMSE (\(90\%\) split) \\ \hline Depth 1 LNN & No & \\ \quad w. GD & & 2.814 \\ \quad w. GD+penalty & & 2.808 \\ \quad w. Adam & & 1.844 \\ \quad **w. Adam+penalty** & & **0.915** \\ \hline User-Item Embedding & No & \\ \quad w. GD & & 2.453 \\ \quad w. GD+penalty & & 2.535 \\ \quad w. Adam & & 1.282 \\ \quad **w. Adam+penalty** & & **0.906** \\ \hline NMF [48] & No & 0.958 \\ PMF [48] & No & 0.952 \\ SVD++ [40] & Yes & 0.913 \\ NFM [30] & No & 0.910 \\ FM [55] & No & 0.909 \\ GraphRec [53] & No & 0.898 \\ AutoSVD++ [71] & Yes & 0.904 \\ GraphRec+side feat. [53] & Yes & 0.899 \\ GraphRec+graph+side feat. [53] & Yes & 0.883 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance evaluations on MovieLens100K. Results are categorized by model, whether additional data or features (e.g. external/side information, implicit feedback, graph features, etc.) beyond the explicit ratings in the interaction matrix are used, and test error as measured by root mean squared error (RMSE, lower is better). Since various approaches use different train-test proportions, results [16, 53] on a common \(90\%\) train split are shown here. Results from using Adam with the penalty are in bold.
## 5 Discussion
The dynamics of optimization trajectories--induced together by the choice of optimizer, parameterization, loss function, and architecture--can play an important role in the solutions produced and their ability to generalize. Depth and gradient descent-like algorithms have been key ingredients to the successes of deep learning. On matrix completion/factorization, the proposed penalty helps produce well-generalizing solutions and perfect rank recovery even with a degenerate depth 1, or depth-less, network. Does that mean our penalty, together with Adam's own inductive bias, is producing an effect similar to implicit regularization under gradient descent with depth, but better?
We suspect not. While we concur with the conjecture in [7]--namely, a reductionist view which suggests that implicit regularization can be entirely encapsulated by an explicit norm-based penalty is likely an incorrect over-simplification--we believe that there is merit in studying both implicit and explicit forms of regularization to examine their interplay. Our work suggests that we may be able to partially replicate the successes of _deep_ learning by selectively combining optimization methods with explicit penalties via better model design or encoding of inductive biases, but this remains unclear.
Many questions remain open from our limited analyses which, due to space considerations, we leave for future work. For instance, how well do lessons from DLNNs translate to their non-linear counterparts or other tasks (e.g. classification)? How does this relate to learning regimes with larger learning rates or discrete trajectories (i.e., beyond gradient flow)? A more rigorous analysis of the properties (e.g. convergence) of Adam, adaptive gradient methods, and other optimizers in the presence of explicit regularizers may better our understanding. It remains unclear whether implicit regularization is a bug or a feature in deep over-parameterized networks. Nonetheless, our findings suggest the possibility that it can be harnessed and transformed to desirable effect.
## Acknowledgments and Disclosure of Funding
We thank Kellin Pelrine for his feedback and Derek Feng for his help with a previous collaboration on which this work builds. We also thank the anonymous reviewers for their valuable comments and feedback throughout the process.
|
2309.01801 | Phase Transitions for Binomial Sets Under Linear Forms | We generalize the study of the sum and difference sets of a subset of
$\mathbb{N}$ drawn from a binomial model to the following setting. Given $A
\subseteq \{0, 1, \dots, N\}$, an integer $h \geq 2$, and a linear form $L:
\mathbb{Z}^h \to \mathbb{Z}$ given by $$L(x_1, \dots, x_h) = u_1x_1 + \cdots +
u_hx_h, \quad u_i \in \mathbb{Z}_{\neq 0} \text{ for all } i \in [h],$$ we
study the size of $$L(A) = \left\{u_1a_1 + \cdots + u_ha_h : a_i \in A
\right\}$$ and its complement $L(A)^c$ when each element of $\{0, 1, \dots,
N\}$ is independently included in $A$ with probability $p(N)$. We identify two
phase transition phenomena. The first ``global" phase transition concerns the
relative sizes of $L(A)$ and $L(A)^c$, with $p(N) = N^{-\frac{h-1}{h}}$ as the
threshold. Asymptotically almost surely, it holds below the threshold that
almost all sums generated in $L(A)$ are distinct and almost all possible sums
are in $L(A)^c$, and above the threshold that almost all possible sums are in
$L(A)$. Our asymptotic formulae substantially extend work of Hegarty and Miller
and completely settle, with appropriate corrections made to its statement,
their conjecture from 2009. The second ``local" phase transition concerns the
asymptotic behavior of the number of distinct realizations in $L(A)$ of a given
value, with $p(N) = N^{-\frac{h-2}{h-1}}$ as the threshold. Specifically, it
identifies (in a sharp sense) when the number of such realizations obeys a
Poisson limit. Our main tools are recent results concerning the asymptotic
enumeration of partitions and weak compositions, classical theorems on Poisson
approximation, and the martingale machinery of Kim and Vu. | Ryan Jeong, Steven J. Miller | 2023-09-04T20:33:08Z | http://arxiv.org/abs/2309.01801v2 | # Phase transitions for binomial sets under linear forms
###### Abstract.
We revisit the study of the sum and difference sets of a subset of \(\mathbb{N}\) drawn from a binomial model, proceeding under the following more general setting. Given \(A\subseteq\{0,1,\ldots,N\}\), an integer \(h\geq 2\), and a linear form \(L:\mathbb{Z}^{h}\to\mathbb{Z}\) given by
\[L(x_{1},\ldots,x_{h})=u_{1}x_{1}+\cdots+u_{h}x_{h},\quad u_{i}\in\mathbb{Z}_{ \neq 0}\text{ for all }i\in[h],\]
we study the size of
\[L(A)=\{u_{1}a_{1}+\cdots+u_{h}a_{h}:a_{i}\in A\}\]
and its complement \(L(A)^{c}\) when each element of \(\{0,1,\ldots,N\}\) is independently included in \(A\) with probability \(p(N)\). We identify two phase transition phenomena. The first concerns the relative sizes of \(L(A)\) and \(L(A)^{c}\), with \(p(N)=N^{-(h-1)/h}\) as the threshold. Asymptotically almost surely, it holds below the threshold that almost all sums generated in \(L(A)\) are distinct and almost all possible sums are in \(L(A)^{c}\), and above the threshold that almost all possible sums are in \(L(A)\). This generalizes work of Hegarty and Miller and settles their conjecture. The second, which may be expressed in terms of a stochastic process on hypergraphs, concerns the asymptotic behavior of the number of distinct representations in \(L(A)\) of a given value, with \(p(N)=N^{-\frac{h-2}{h-1}}\) as the threshold.
## 1. Introduction
### Motivation
Some of the most basic objects that one can study in additive combinatorics are sum and difference sets of subsets of the integers. Specifically, letting \(A\subset\mathbb{Z}\) be a finite subset of the integers, the _sumset_\(A+A\) and the _difference set_\(A-A\) of \(A\) are given by
\[A+A:=\left\{a_{1}+a_{2}:a_{1},a_{2}\in A\right\},\qquad A-A:=\left\{a_{1}-a_{2}:a_{1},a_{2}\in A\right\}.\]
Sum and difference sets lie at the core of many of the most important conjectures and results in number theory. For example, if we let \(P\) denote the set of all prime numbers and \(A_{k}\) the set of all \(k^{\text{th}}\) powers of positive integers, then Goldbach's Conjecture, the Twin Primes Conjecture, and Fermat's Last Theorem are respectively equivalent to the statements that \(P+P\) contains all even numbers that are at least \(4\), that \(2\) can be generated in \(P-P\) via infinitely many representations, and that \((A_{k}+A_{k})\cap A_{k}=\emptyset\) if \(k\) is an integer that is at least \(3\).
The most fundamental statistic that one can associate with a finite combinatorial object is its size. Since addition is commutative and subtraction is not, a typical pair of integers generates two distinct differences and one distinct sum. It is therefore natural to conjecture that a generic finite subset \(A\subset\mathbb{Z}\) will satisfy \(|A-A|\geq|A+A|\). _More sums than differences (MSTD) sets_, or finite sets \(A\subseteq\mathbb{Z}\) such that \(|A+A|>|A-A|\), exist: both the first such example \(\{0,2,3,4,7,11,12,14\}\) and the claim that there does not exist an MSTD set with fewer elements than this example are attributed in folklore to Conway in the 1960s. Since the affirmative resolution of the question of whether MSTD sets exist, the construction of families of MSTD sets and studying their structural properties has been a productive direction of research.
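For instance, Conway's example can be checked in a few lines (a quick Python illustration):

```python
A = {0, 2, 3, 4, 7, 11, 12, 14}  # Conway's example
sums = {a + b for a in A for b in A}
diffs = {a - b for a in A for b in A}
print(len(sums), len(diffs))  # 26 25: more sums than differences
```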
Hegarty and Miller [10] initiated the study of the relative sizes of \(A+A\) and \(A-A\) for random sets, drawing \(A\subseteq\{0,1,\ldots,N\}\) from a binomial model in which each element is included independently with probability \(p(N)\), and identified \(p(N)=N^{-1/2}\) as a threshold. They proved that the ratio \(\frac{|A-A|}{|A+A|}\) converges in probability to \(2\)
if \(p(N)=o(N^{-1/2})\), this ratio converges in probability to a value converging to \(1\) as \(c\to\infty\) when \(p(N)=cN^{-1/2}\), and \(\frac{|(A+A)^{c}|}{|(A-A)^{c}|}\) converges in probability to \(2\) if \(N^{-1/2}=o(p(N))\). This result was later generalized in the same paper to binary linear forms (see [14] for some motivation on why studying these objects is natural), with the specific statement given in [10, Theorem 3.1].
It is natural to ask what happens if we include three or more summands: [10, Conjecture 4.2] asks for the correct generalization of [10, Theorem 3.1] to linear forms in \(h\geq 3\) variables. Since then, progress in this direction has been elusive. The article [10], which studies generalized sumsets, proceeds under restrictive assumptions on the inclusion probability \(p\). There, they identify \(p(N)=N^{-\frac{h-1}{h}}\) as the appropriate threshold probability, but they were unable to find a tractable form for the relevant limits in probability, even in the regime of fast decay. Our main objective in this paper is to break this lull and complete the story. In Theorem 1.1, we assert that all relevant quantities are strongly concentrated about their expectations, and provide asymptotic formulae for these expectations.
### Notation and Conventions
For a natural number \(N\) (we will include \(0\) in the natural numbers), we will denote \([N]:=\{1,\ldots,N\}\) and (as defined earlier) will denote \(I_{N}=\{0,\ldots,N\}\). For a positive integer1\(h\), a \(\mathbb{Z}\)_-linear form in \(h\) variables_ is a function \(L:\mathbb{Z}^{h}\to\mathbb{Z}\) of the form
Footnote 1: We use \(h\) for the number of variables of a \(\mathbb{Z}\)-linear form to be consistent with the notation used by [10].
\[L(x_{1},\ldots,x_{h})=u_{1}x_{1}+\cdots+u_{h}x_{h},\quad u_{i}\in\mathbb{Z}_{ \neq 0}\text{ for all }i\in[h].\]
Throughout this paper, we understand intervals to be their intersections with \(\mathbb{Z}\). Say we are given a \(\mathbb{Z}\)-linear form \(L\) in \(h\) variables with coefficients \(u_{1},\ldots,u_{h}\in\mathbb{Z}_{\neq 0}\). We define
\[n_{L}:=\left|\{i\in[h]:u_{i}>0\}\right|,\hskip 28.452756pts_{L}:=\sum_{i:u_{i} >0}u_{i},\hskip 28.452756ptd_{L}:=\sum_{j:u_{j}<0}|u_{j}|,\hskip 28.452756ptm_{L}: =s_{L}+d_{L}.\]
Without loss of generality, we always assume that \(u_{1}\geq\cdots\geq u_{n_{L}}>0>u_{n_{L}+1}\geq\cdots\geq u_{h}\). If we are also given a subset \(A\subseteq I_{N}\), we let
\[L(A):=\left\{u_{1}a_{1}+\cdots+u_{h}a_{h}:a_{i}\in A\right\}. \tag{1.5}\]
Notably, we observe that \(L(A)\) corresponds to \(A_{s_{L},d_{L}}\) if \(|u_{i}|=1\) for all \(i\in[h]\). From these definitions, it follows that
\[L(A)\subseteq[-d_{L}N,s_{L}N]\,. \tag{1.6}\]
As such, we will define the _complement of \(L(A)\)_ to be
\[L(A)^{c}:=[-d_{L}N,s_{L}N]\setminus L(A). \tag{1.7}\]
Thus, \(|L(A)^{c}|\) is \(|L(A)|\) subtracted from the maximum possible value of \(|L(A)|\). The setting where the multisets \([u_{1},\ldots,u_{h}]\) and \([-u_{1},\ldots,-u_{h}]\) are equal will occasionally need to be treated separately in our arguments. We call such linear forms _balanced_. With the conventions and notation that we have established, it follows for a balanced linear form that \(h\) is even, \(n_{L}=h/2\), \(s_{L}=d_{L}=m_{L}/2\), and \(u_{i}=-u_{h-i+1}\) for all \(i\in[h]\).
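Concretely, \(L(A)\) and \(L(A)^{c}\) can be computed by brute force for small instances (an illustrative Python sketch following definitions (1.5)-(1.7)):

```python
from itertools import product

def L_of_A(u, A):
    """L(A) as in (1.5)."""
    return {sum(ui * ai for ui, ai in zip(u, a)) for a in product(A, repeat=len(u))}

def L_complement(u, A, N):
    """L(A)^c as in (1.7), taken inside [-d_L N, s_L N] per (1.6)."""
    s_L = sum(ui for ui in u if ui > 0)
    d_L = sum(-ui for ui in u if ui < 0)
    return set(range(-d_L * N, s_L * N + 1)) - L_of_A(u, A)

u, N = (1, 1, -1), 10          # h = 3, s_L = 2, d_L = 1
A = {0, 1, 4, 9, 10}
print(len(L_of_A(u, A)), len(L_complement(u, A, N)))  # sizes sum to 3N + 1
```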
We let \(\mathfrak{S}_{h}\) denote the symmetric group on \(h\) elements, and let \(\sigma_{\mathrm{rev}}\in\mathfrak{S}_{h}\) denote the permutation given by \(\sigma_{\mathrm{rev}}(i)=h-i+1\) for all \(i\in[h]\). Let \(\mathfrak{R}_{h}(L),\mathfrak{R}_{h}^{\mathrm{rev}}(L)\) denote subgroups of \(\mathfrak{S}_{h}\) given by
\[\mathfrak{R}_{h}(L):=\left\{\sigma\in\mathfrak{S}_{h}:(u_{\sigma(1)},\ldots,u_{\sigma(h)})=(u_{1},\ldots,u_{h})\right\}, \tag{1.8}\]
\[\mathfrak{R}_{h}^{\mathrm{rev}}(L):=\left\langle\mathfrak{R}_{h}(L),\sigma_{\mathrm{rev}}\right\rangle. \tag{1.9}\]
Let \(\theta_{L}:=\left|\mathfrak{R}_{h}(L)\right|=\left|\mathfrak{R}_{h}^{\mathrm{rev }}(L)\right|/2\). We are interested in the number of distinct ways in which a value is generated in \(L(A)\). Towards this, let \(\approx_{L}\) denote the equivalence relation on \(\mathbb{N}^{h}\) for which
\((a_{1},\ldots,a_{h})\approx_{L}(b_{1},\ldots,b_{h})\) if and only if there exists \(\sigma\in\mathfrak{R}_{h}(L)\) such that
\[(a_{\sigma(1)},\ldots,a_{\sigma(h)})=(b_{1},\ldots,b_{h}). \tag{1.10}\]
We think of \(\approx_{L}\) as modding out redundancies (hence the notation (1.8), (1.9)) related to permuting the order in which we pass elements into \(L\). That \(\approx_{L}\) is an equivalence relation and that \((a_{1},\ldots,a_{h})\approx_{L}(b_{1},\ldots,b_{h})\implies L(a_{1},\ldots,a_{h})=L(b_{1},\ldots,b_{h})\) both follow easily from \(\mathfrak{R}_{h}(L)\) being a subgroup of \(\mathfrak{S}_{h}\). For an equivalence class \(\Lambda\in\mathbb{N}^{h}\backslash\approx_{L}\), let \(L(\Lambda)\) denote the mapping of any representative of \(\Lambda\) under \(L\). We let \(\mathfrak{D}_{L}(N,k)\) denote the collection of equivalence classes \(\Lambda\in\mathbb{N}^{h}\backslash\approx_{L}\) satisfying2
Footnote 2: Think of this as the “offset” of size \(k\) from the minimum possible value of \(L(a_{1},\ldots,a_{h})\) when \(a_{1},\ldots,a_{h}\in I_{N}\).
\[L(\Lambda)=-d_{L}N+k, S(\Lambda)\subset I_{N}. \tag{1.11}\]
We now demonstrate the sense in which balanced linear forms serve as an edge case in this article. Say that the \(\mathbb{Z}\)-linear form \(L\) is balanced. It is not hard to see that
\[L(a_{1},\ldots,a_{h})=0\iff L(a_{\sigma_{\mathrm{rev}}(1)},\ldots,a_{\sigma_{ \mathrm{rev}}(h)})=0,\]
so \(\mathfrak{D}_{L}(N,m_{L}N/2)\) fails to capture all redundancies amongst \(h\)-tuples related to permuting the order in which we pass elements into \(L\). This leads us to consider the subgroup \(\mathfrak{R}_{h}^{\mathrm{rev}}(L)\) of \(\mathfrak{S}_{h}\), using which we may define the collection \(\mathfrak{D}_{L}^{\mathrm{rev}}(N,m_{L}N/2)\) of equivalence classes on \(h\)-tuples in \(\mathbb{N}^{h}\) mapping under \(L\) to \(0\) entirely analogously. We now define
\[\mathscr{R}_{L}(N,k):=\begin{cases}\mathfrak{R}_{h}^{\mathrm{rev}}(L)&\text{$L$ balanced, $k=m_{L}N/2$},\\ \mathfrak{R}_{h}(L)&\text{otherwise};\end{cases} \tag{1.12}\] \[\mathscr{D}_{L}(N,k):=\begin{cases}\mathfrak{D}_{L}^{\mathrm{rev}}(N,m_{L}N/2)&\text{$L$ balanced, $k=m_{L}N/2$},\\ \mathfrak{D}_{L}(N,k)&\text{otherwise}.\end{cases} \tag{1.13}\]
Following the preceding discussion, we think of the subgroup \(\mathscr{R}_{L}(N,k)\) of \(\mathfrak{S}_{h}\) as capturing the redundancies related to permuting the order in which we pass elements into \(L\) to generate \(-d_{L}N+k\), and the collection \(\mathscr{D}_{L}(N,k)\) as capturing the distinct (hence the notation) ways we can map \(h\)-tuples under \(L\) to generate \(-d_{L}N+k\). We call elements of \(\mathscr{D}_{L}(N,k)\) \(L\)_-expressions_. For an \(L\)-expression \(\Lambda\) with representative \((a_{1},\ldots,a_{h})\), we call \(L(\Lambda):=L(a_{1},\ldots,a_{h})\) the _\(L\)-evaluation_ of \(\Lambda\), and we call \(S(\Lambda):=\{a_{1},\ldots,a_{h}\}\) the _ground set_ of \(\Lambda\).3 These notions are clearly well-defined.
Footnote 3: Of course, the ground set of \(\Lambda\) may have fewer than \(h\) elements if a representative of \(\Lambda\) has a repeated element.
In order to estimate or bound the probability that a particular collection of values is excluded from a sum or difference set, past papers on MSTD sets (e.g., see [14, 15]) introduced _forbiddance graphs_. Specifically, for a collection of forbidden values, the corresponding forbiddance graph is the graph whose vertex set is \(I_{N}\), with edges between vertices whose sum/difference is forbidden: the probability that no forbidden values are included in the sum/difference set is then equal to the probability that, if we choose each vertex of the forbiddance graph with probability \(p(N)\), an independent set is drawn. We generalize by letting the _forbiddance hypergraph_\(\mathcal{G}_{L}(N,k)\) be the hypergraph on the vertex set \(I_{N}\) such that \(e\subset I_{N}\) is a hyperedge of \(\mathcal{G}_{L}(N,k)\) if and only if \(e=S(\Lambda)\) for some \(\Lambda\in\mathscr{D}_{L}(N,k)\).
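For a hands-on picture of \(\mathcal{G}_{L}(N,k)\), the following sketch (illustrative only; the helper name and toy parameters are ours) enumerates its hyperedges as the distinct ground sets of \(h\)-tuples whose \(L\)-evaluation equals \(-d_{L}N+k\):

```python
from itertools import product

def forbiddance_hyperedges(coeffs, N, k):
    """Ground sets of h-tuples in I_N^h whose L-evaluation equals -d_L*N + k."""
    d = sum(-u for u in coeffs if u < 0)
    target = -d * N + k
    return {frozenset(tup)
            for tup in product(range(N + 1), repeat=len(coeffs))
            if sum(u * a for u, a in zip(coeffs, tup)) == target}

# Hyperedges of the forbiddance hypergraph for L(x1, x2, x3) = x1 + x2 - x3
# on I_6 at offset k = 4, i.e., for the forbidden value -6 + 4 = -2.
print(sorted(map(sorted, forbiddance_hyperedges((1, 1, -1), 6, 4))))
```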
This article studies what happens when we apply \(\mathbb{Z}\)-linear forms to random subsets \(A\subseteq I_{N}\) drawn from a binomial model: we include each element of \(I_{N}\) in \(A\) independently with probability \(p(N)\), where \(p:\mathbb{N}\to(0,1)\) is a function satisfying (1.4). Formally, we set \(\Omega_{N}:=\{0,1\}^{N+1}\), so subsets \(A\subseteq I_{N}\) correspond exactly with elements \((a_{0},\ldots,a_{N})\in\Omega_{N}\) in the natural way (that is, \(a_{i}=1\) corresponds to \(i\in A\) for all \(i\in I_{N}\)): we refer to \(A\) and \((a_{0},\ldots,a_{N})\) interchangeably. We let \(\mu\) denote the product of \(N+1\) instances of the Bernoulli measure with parameter \(p\), so that we work in the probability space \(\big{(}\Omega_{N},2^{\Omega_{N}},\mu\big{)}\).
We use \(d_{\mathrm{TV}}(\mu_{1},\mu_{2})\) to denote the total variation distance between probability measures \(\mu_{1}\) and \(\mu_{2}\) defined on the same space. For random variables \(X,Y\) defined on the same space, we will write \(d_{\mathrm{TV}}(X,Y)\) as shorthand for the total variation distance between the laws of \(X\) and \(Y\).
We now introduce some random variables and stochastic processes which will be of central importance throughout the article. For \(\Lambda\in\mathscr{D}_{L}(N,k)\), we let \(X_{\Lambda}\) be the Bernoulli random variable corresponding to the event \(S(\Lambda)\subseteq A\), so \(p_{\Lambda}:=\mathbb{E}[X_{\Lambda}]=p^{|S(\Lambda)|}\). We also let \(Y_{\Lambda}\stackrel{d}{=}\mathrm{Pois}(p_{\Lambda})\) be mutually independent Poisson random variables. We let \(\mathscr{X}_{L}(N,k):=(X_{\Lambda})_{\Lambda\in\mathscr{D}_{L}(N,k)}\) be the dependent Bernoulli process, and let \(\mathscr{Y}_{L}(N,k):=(Y_{\Lambda})_{\Lambda\in\mathscr{D}_{L}(N,k)}\) be the Poisson process on \(\mathscr{D}_{L}(N,k)\) with intensity \(p_{\mathscr{D}_{L}(N,k)}\): this is a measure on \(2^{\mathscr{D}_{L}(N,k)}\) defined by \(p_{\mathscr{D}_{L}(N,k)}(B):=\sum_{\Lambda\in B}p_{\Lambda}\) for \(B\subseteq\mathscr{D}_{L}(N,k)\).
Similarly, for \(e\in E(\mathcal{G}_{L}(N,k))\), we let \(X_{e}\) be the Bernoulli random variable corresponding to the event \(V(e)\subseteq A\), so \(p_{e}:=\mathbb{E}[X_{e}]=p^{|V(e)|}\). We also let \(Y_{e}\stackrel{d}{=}\mathrm{Pois}(p_{e})\) be mutually independent Poisson random variables. Let \(\mathfrak{X}_{k}:=(X_{e})_{e\in E(\mathcal{G}_{L}(N,k))}\) be the dependent Bernoulli process, and let \(\mathfrak{Y}_{k}:=(Y_{e})_{e\in E(\mathcal{G}_{L}(N,k))}\) be the Poisson process on \(E(\mathcal{G}_{L}(N,k))\) with intensity \(p_{\mathcal{G}_{L}(N,k)}\): this is a measure on \(2^{E(\mathcal{G}_{L}(N,k))}\) defined by \(p_{\mathcal{G}_{L}(N,k)}(B):=\sum_{e\in B}p_{e}\) for \(B\subseteq E(\mathcal{G}_{L}(N,k))\).
The asymptotic formulae that we derive in Theorem 1.1(ii) involve the density of the Irwin-Hall distribution of order \(h\) (e.g., see [10, 11]), which is the distribution for the sum of \(h\) independent random variables uniformly distributed on \([0,1]\). We denote the density of the Irwin-Hall distribution of order \(h\) at a real number \(x\) by \(\mathsf{IH}_{h}(x)\). Letting \(x_{+}:=\max\{0,x\}\) denote the positive part of the real number \(x\), this density is given by
\[\mathsf{IH}_{h}(x):=\begin{cases}\frac{1}{(h-1)!}\sum_{j=0}^{h}(-1)^{j}\binom{ h}{j}(x-j)_{+}^{h-1}&x\leq h,\\ 0&x>h.\end{cases} \tag{1.14}\]
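For numerical experiments, (1.14) can be implemented directly. The sketch below (illustrative only) also confirms via a crude midpoint Riemann sum that the density integrates to \(1\):

```python
from math import comb, factorial

def irwin_hall_pdf(h, x):
    """Density IH_h(x) of the sum of h independent Uniform[0,1] variables, per (1.14)."""
    if x < 0 or x > h:
        return 0.0
    return sum((-1) ** j * comb(h, j) * max(x - j, 0.0) ** (h - 1)
               for j in range(h + 1)) / factorial(h - 1)

# Midpoint Riemann sum over [0, h]; the printed value should be very close to 1.
h, n = 4, 100_000
print(sum(irwin_hall_pdf(h, h * (i + 0.5) / n) for i in range(n)) * h / n)
```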
We now record those conventions that we assume, unless stated otherwise, throughout the rest of the paper. Throughout the analysis, we assume that we have fixed an arbitrary integer \(h\geq 2\) and a \(\mathbb{Z}\)-linear form \(L:\mathbb{Z}^{h}\to\mathbb{Z}\) with coefficients \(u_{1},\ldots,u_{h}\in\mathbb{Z}_{\neq 0}\) satisfying \(\gcd(u_{1},\ldots,u_{h})=1\). We employ standard asymptotic notation throughout the article: for sake of completeness, we describe this notation in Appendix A. We also assume that \(N\) is a large positive integer which tends to infinity, and that all asymptotic notation is with respect to \(N\). As such, we henceforth abbreviate much of the notation introduced here by dropping arguments or subscripts of \(L\), \(N\), and \(h\). In this section, we were careful to make it clear what the quantities we defined depended on; whenever we use such an abbreviation, it will be unambiguous what it is referring to. We omit floor and ceiling symbols when doing so does not affect asymptotics. In Sections 2-7, we will assume that (1.4) holds, and frequently abbreviate \(p(N)\) to \(p\).
### Main Results
Our main interest is to understand what happens to \(|L(A)|\) and \(|L(A)^{c}|\) as we vary the rate at which the inclusion probability \(p(N)\) decays as \(N\) grows. Our techniques yield two broad classes of results.
#### 1.3.1. Phase Transition
Our main findings can be summarized by the following theorem.
**Theorem 1.1**.: _Let \(p:\mathbb{N}\to(0,1)\) be a function satisfying (1.4). Fix an integer \(h\geq 2\) and a \(\mathbb{Z}\)-linear form \(L:\mathbb{Z}^{h}\to\mathbb{Z}\) with coefficients \(u_{1},\ldots,u_{h}\in\mathbb{Z}_{\neq 0}\) such that \(\gcd(u_{1},\ldots,u_{h})=1\). Let \(A\subseteq I_{N}\) be a random subset where each element of \(I_{N}\) is independently included in \(A\) with probability \(p(N)\). The following three situations arise._
* _If_ \(p(N)\lll N^{-\frac{h-1}{h}}\)_, then_ (1.15) \[|L(A)|\sim\frac{\big{(}N\cdot p(N)\big{)}^{h}}{\theta_{L}}.\]
* _If_ \(p(N)=cN^{-\frac{h-1}{h}}\) _for some constant_ \(c>0\)_, then_ (1.16) \[|L(A)| \sim\left(\sum_{i=1}^{h}|u_{i}|-2\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx\right)N,\] (1.17) \[|L(A)^{c}| \sim\left(2\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx\right)N.\]
* _If_ \(p(N)\ggg N^{-\frac{h-1}{h}}\)_, then_ (1.18) \[|L(A)^{c}|\sim\frac{2\cdot\Gamma\left(\frac{1}{h-1}\right)\ ^{h-1}\!\sqrt{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p(N)^{\frac{h}{h-1}}}.\]
**Theorem 1.2**.: _Let \(p:\mathbb{N}\to(0,1)\) be any function satisfying (1.4). Fix nonnegative integers \(s,d,h\) such that \(h\geq 2\) and \(s+d=h\). Let \(A\subseteq I_{N}\) be a random subset where each element of \(I_{N}\) is independently included in \(A\) with probability \(p(N)\). Then the following three situations arise._
* _If_ \(p(N)\lll N^{-\frac{h-1}{h}}\)_, then_ \[|A_{s,d}|\sim\frac{(N\cdot p(N))^{h}}{s!d!}.\]
* _If_ \(p(N)=cN^{-\frac{h-1}{h}}\) _for some constant_ \(c>0\)_, then_ \[|A_{s,d}|\sim\left(h-2\int_{0}^{h/2}\exp\left(-\frac{c^{h}}{s!d!}\mathsf{IH}_{h}(x)\right)dx\right)N,\qquad|A_{s,d}^{c}|\sim\left(2\int_{0}^{h/2}\exp\left(-\frac{c^{h}}{s!d!}\mathsf{IH}_{h}(x)\right)dx\right)N.\]
* _If_ \(p(N)\ggg N^{-\frac{h-1}{h}}\)_, then_ \[|A_{s,d}^{c}|\sim\frac{2\cdot\Gamma\left(\frac{1}{h-1}\right)\ ^{h-1}\!\sqrt{(h-1)!s!d!}}{(h-1)\cdot p(N)^{\frac{h}{h-1}}}.\]
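As a quick empirical sanity check on Theorem 1.2(ii), take the difference set \(A_{1,1}=A-A\) (so \(h=2\), \(s=d=1\)): since \(\mathsf{IH}_{2}(x)=x\) on \([0,1]\), the predicted complement size is \((2N/c^{2})(1-e^{-c^{2}})\). The Monte Carlo sketch below (illustrative only; the parameters are arbitrary, and finite-size fluctuations of a few percent should be expected) compares this prediction with a single sample:

```python
import random
from math import exp

N, c = 200_000, 1.0
p = c / N ** 0.5                      # critical decay p(N) = c * N^{-1/2} for h = 2
A = [i for i in range(N + 1) if random.random() < p]
diffs = {a - b for a in A for b in A}
missing = (2 * N + 1) - len(diffs)    # |A_{1,1}^c|, the complement within [-N, N]
print("empirical:", missing, " predicted:", 2 * N / c**2 * (1 - exp(-c**2)))
```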
Theorem 1.2 generalizes [11, Theorem 1.1] from sum and difference sets with two summands to generalized sumsets with \(h\) summands, under any number \(0\leq s\leq h\) of sums. It is also naturally of interest to understand how the relative asymptotic almost sure sizes of generalized sumsets and their complements change when we modify their structure, i.e., the number of sums amongst their \(h\) summands. In this direction, we have Corollary 1.3 as an immediate corollary of Theorem 1.2(i, iii). We note that Corollary 1.3(i) is a strictly more general statement than the first part of [11, Theorem 1.2]. We omit the corresponding result for Theorem 1.2(ii) (which would yield a strictly more general statement than the second part of [11, Theorem 1.2]), as it is less natural.
**Corollary 1.3**.: _Fix nonnegative integers \(s_{1},d_{1},s_{2},d_{2},h\) such that \(h\geq 2\) and \(s_{1}+d_{1}=s_{2}+d_{2}=h\). Let \(A\subseteq I_{N}\) be a random subset where each element of \(I_{N}\) is independently included in \(A\) with probability \(p(N)\). Then the following two situations arise._
* _If_ \(p(N)\lll N^{-\frac{h-1}{h}}\)_, then_ \[\frac{|A_{s_{1},d_{1}}|}{|A_{s_{2},d_{2}}|}\sim\frac{s_{2}!d_{2}!}{s_{1}!d_{1}!}, \frac{|A_{s_{1},d_{1}}^{c}|}{|A_{s_{2},d_{2}}^{c}|}\sim 1.\]
* _If_ \(p(N)\ggg N^{-\frac{h-1}{h}}\)_, then_ \[\frac{|A_{s_{1},d_{1}}|}{|A_{s_{2},d_{2}}|}\sim 1, \frac{|A_{s_{1},d_{1}}^{c}|}{|A_{s_{2},d_{2}}^{c}|}\sim\ ^{h-1}\!\sqrt{ \frac{s_{1}!d_{1}!}{s_{2}!d_{2}!}}.\]
#### 1.3.2. Limiting Point Process Behavior
In order to estimate the probability that a particular candidate value is included in \(L(A)\), which is the key step in our expectation computations, we invoke the Stein-Chen method based on dependency graphs: see, for example, [1, 2, 3, 4, 5, 6, 7]. As such, it is natural to ask if there is some kind of interesting local point process limit. In this direction, the tools and arguments used to prove Theorem 1.1 quickly lend themselves to the following result, which presents a phase transition concerning the distribution of the dependent Bernoulli process corresponding to the number of distinct representations in \(L(A)\) of a given value.
**Theorem 1.4**.: _Assume the setup of Theorem 1.1. Let \(C>0\) be a constant. The following two situations arise._
* _If_ \(p(N)\lll N^{-\frac{h-2}{h-1}}\)_, then uniformly over_ \(k\in[0,mN]\)_,_ \(d_{\rm TV}\left(\mathscr{X}_{k},\mathscr{Y}_{k}\right)\lll 1\)_._
* _If_ \(p(N)\gtrsim N^{-\frac{h-2}{h-1}}\)_, then uniformly over_ \(k\in\left[CN,(m-C)N\right]\)_,_ \(d_{\rm TV}\left(\mathscr{X}_{k},\mathscr{Y}_{k}\right)=\Omega(1)\)_._
We may interpret Theorem 1.4, which identifies \(p(N)=N^{-\frac{h-2}{h-1}}\) as a threshold function for this Poisson convergence (observe that this dominates the threshold function \(N^{-\frac{h-1}{h}}\) in Theorem 1.1) holding over \(k\in[0,mN]\), as follows. If we are below the threshold, we may think of the number of ways that \(-dN+k\) is generated in \(L(A)\) as being modeled by a collection of several independent ways of representing \(-dN+k\) under \(L\). On the threshold, the extent to which this interpretation is faithful to the truth weakens as the constant \(c\) grows, until it collapses everywhere outside of the fringes of \([0,mN]\) as we continue to move above the threshold.
**Remark 1.5**.: The condition \(\gcd(u_{1},\dots,u_{h})=1\) assumed in the statements of Theorems 1.1 and 1.4 can be relaxed. Indeed, let \(L(x_{1},\dots,x_{h})=u_{1}x_{1}+\dots+u_{h}x_{h}\) be a \(\mathbb{Z}\)-linear form on \(h\) variables for which \(\gcd(u_{1},\dots,u_{h})>1\). Let \(\tilde{L}(x_{1},\dots,x_{h})=v_{1}x_{1}+\dots+v_{h}x_{h}\) be the \(\mathbb{Z}\)-linear form on \(h\) variables for which \(v_{i}=\frac{u_{i}}{\gcd(u_{1},\dots,u_{h})}\) for all \(i\in[h]\). It is straightforward to show that
\[\mathscr{D}_{L}(N,k)=\begin{cases}\emptyset&\gcd(u_{1},\dots,u_{h})\nmid k,\\ \mathscr{D}_{\tilde{L}}\left(N,k/\gcd(u_{1},\dots,u_{h})\right)&\gcd(u_{1}, \dots,u_{h})\mid k,\end{cases} \tag{1.19}\]
and that
\[|L(A)|=|\tilde{L}(A)|,\qquad\qquad|L(A)^{c}|=|\tilde{L}(A)^{c}|-\left(\gcd(u_ {1},\dots,u_{h})-1\right)\sum_{i=1}^{h}|u_{i}|.\]
In this sense, we will have solved the problem for all \(\mathbb{Z}\)-linear forms once we have solved it under the setting in which \(\gcd(u_{1},\dots,u_{h})=1\). We have thus assumed this condition in our main results.
**Remark 1.6**.: By considering the case \(h=2\), we observe that Theorem 1.1(i) and Theorem 1.1(iii) are respectively consistent with [13, Theorem 3.1(i)] and [13, Theorem 3.1(iii)]. Letting \(L(x_{1},x_{2})=u_{1}x_{1}+u_{2}x_{2}\) be a 2-linear form for which \(|u_{1}|\geq|u_{2}|\), \(\gcd(u_{1},u_{2})=1\), and \((u_{1},u_{2})\neq(1,1)\), combining Theorem 1.1(ii) with [13, Theorem 3.1(ii)] yields the following identity, which appears to be new:
\[2\int_{0}^{\frac{|u_{1}|+|u_{2}|}{2}}\exp\left(-\frac{c^{2}\sum_{t_{1}=0}^{|u_{1}|-1}\sum_{t_{2}=0}^{|u_{2}|-1}\mathsf{IH}_{2}\left(x-t_{1}-t_{2}\right)}{|u_{1}u_{2}|}\right)dx\]
\[=\frac{2|u_{1}u_{2}|\left(1-e^{-c^{2}/|u_{1}|}\right)}{c^{2}}+\left(|u_{1}|-|u _{2}|\right)e^{-c^{2}/|u_{1}|}.\]
The validity of this identity is easy to confirm if \(|u_{1}|=|u_{2}|=1\).
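The identity can also be checked numerically; the sketch below (illustrative only, with the arbitrary choices \((u_{1},u_{2})=(2,-1)\) and \(c=1.3\)) evaluates the left-hand side by a midpoint Riemann sum and compares it with the closed form on the right:

```python
from math import exp

def ih2(x):
    """IH_2: the triangular density of Uniform[0,1] + Uniform[0,1] on [0, 2]."""
    return 1.0 - abs(x - 1.0) if 0.0 <= x <= 2.0 else 0.0

u1, u2, c = 2, 1, 1.3                 # |u1| = 2, |u2| = 1, i.e., coefficients (2, -1)
n = 200_000
upper = (u1 + u2) / 2
lhs = 2 * upper / n * sum(
    exp(-c**2 * sum(ih2(upper * (i + 0.5) / n - t1 - t2)
                    for t1 in range(u1) for t2 in range(u2)) / (u1 * u2))
    for i in range(n))
rhs = 2 * u1 * u2 * (1 - exp(-c**2 / u1)) / c**2 + (u1 - u2) * exp(-c**2 / u1)
print(lhs, rhs)                       # the two printed values should agree closely
```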
Finally, we note that there are other natural choices for the definitions of our main objects of study in Subsection 1.2: we have proceeded as such to be consistent with the presentation of [13], which pursues the \(h=2\) case of our problem. We mention some of these alternatives here, and discuss the changes they would induce to our main results.
One natural modification is to define a coarser equivalence relation on \(\mathbb{N}^{h}\) for which two \(h\)-tuples are equivalent if and only if they have the same ground set, and then proceed exactly as before. We may think of this as studying minimal subsets of \(I_{N}\) which generate a candidate sum \(-dN+k\) in \(L(A)\) instead of distinct representations in \(L(A)\) generating \(-dN+k\). By tracing our arguments, it can be shown that in this setting, Theorem 1.4 translates to the following result concerning a dependent Bernoulli process on hypergraphs, also with \(p(N)=N^{-\frac{h-2}{h-1}}\) as the corresponding threshold function.
**Theorem 1.7**.: _Assume the setup of Theorem 1.4. The following two situations arise._
* _If_ \(p(N)\lll N^{-\frac{h-2}{h-1}}\)_, then uniformly over_ \(k\in[0,mN]\)_,_ \(d_{\mathrm{TV}}\left(\mathfrak{X}_{k},\mathfrak{Y}_{k}\right)\lll 1\)_._
* _If_ \(p(N)\gtrsim N^{-\frac{h-2}{h-1}}\)_, then uniformly over_ \(k\in\left[CN,(m-C)N\right]\)_,_ \(d_{\mathrm{TV}}\left(\mathfrak{X}_{k},\mathfrak{Y}_{k}\right)=\Omega(1)\)_._
Another natural modification is to instead define \(L(A)\) by
\[L(A):=\left\{u_{1}a_{1}+\cdots+u_{h}a_{h}:a_{i}\in A,a_{1},\ldots,a_{h}\text{ distinct}\right\}. \tag{1.20}\]
By tracing our arguments, it can be checked that all of our main results carry over without issue to this setting (indeed, much of the analysis becomes simpler, as we no longer have to deal with repeated summands). Notably, if we proceed under this definition, the forbiddance hypergraphs we define will be \(h\)-uniform, and Theorem 1.7 translates to a result concerning stochastic processes on uniform hypergraphs.
The rest of the paper is organized as follows. In Section 2, we establish much of the machinery that we will invoke in the proofs of our main results. In Section 3, we perform some standard computations that we will make use of in later sections. In Sections 4-6, we prove Theorem 1.1. In Section 7, we prove Theorem 1.4. We conclude in Section 8 with directions for future research.
## 2. Preliminaries
In this section, we gather several results that we invoke in the proofs of our main results.
### Asymptotic Enumeration
In this subsection, we present a number of results concerning the asymptotics of \(L\)-expressions which we invoke later in our arguments. The enumerative combinatorial arguments we used to prove these results do not provide much insight for the rest of the main body of the paper. Thus, we simply give the statements here, and defer proofs of these results to Appendix B to avoid distracting from the flavor of this article. We first introduce the following quantities, which we abbreviate as \(\lambda_{k}\), and which will resurface in Sections 5 and 6. Set
\[\lambda_{L}(N,k):=\frac{\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(k/N-\sum_{i=1}^{h}t_{i}\right)}{2^{\mathbf{1}\left\{L\text{ balanced, }k=mN/2\right\}}\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}. \tag{2.1}\]
**Lemma 2.1**.: _Fix \(g:\mathbb{N}\to\mathbb{N}\) satisfying \(1\lll g(N)\lll N\). Uniformly over \(k\in\left[0,g(N)\right]\),_
\[|\mathscr{D}_{k}|=|\mathscr{D}_{mN-k}|\lll N^{h-1}. \tag{2.2}\]
_Uniformly over \(k\in\left[g(N),mN/2\right]\),_
\[|\mathscr{D}_{k}|=|\mathscr{D}_{mN-k}|\sim\lambda_{k}N^{h-1}. \tag{2.3}\]
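For intuition, one can compare the exact count \(|\mathscr{D}_{k}|\) against the prediction \(\lambda_{k}N^{h-1}\) at moderate \(N\). The sketch below (illustrative only) does this for \(L(x_{1},x_{2},x_{3})=x_{1}+x_{2}-x_{3}\), where \(\theta_{L}=2\) and \(\approx_{L}\) simply permits swapping the first two coordinates; at such small \(N\) the two quantities differ by lower-order terms, and the agreement improves as \(N\) grows:

```python
from itertools import product
from math import comb, factorial

def irwin_hall_pdf(h, x):
    """Density IH_h(x), per (1.14)."""
    if x < 0 or x > h:
        return 0.0
    return sum((-1) ** j * comb(h, j) * max(x - j, 0.0) ** (h - 1)
               for j in range(h + 1)) / factorial(h - 1)

coeffs, N, k = (1, 1, -1), 60, 50
target = -N + k                        # here d_L = 1, so the target value is -N + k
# One canonical representative per equivalence class: sort the first two entries.
classes = {tuple(sorted(t[:2])) + t[2:]
           for t in product(range(N + 1), repeat=3)
           if t[0] + t[1] - t[2] == target}
lam = irwin_hall_pdf(3, k / N) / 2     # (2.1): all |u_i| = 1, theta_L = 2, not balanced
print(len(classes), "vs", lam * N ** 2)
```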
**Lemma 2.2**.: _The following statements hold uniformly over \(k\in\left[0,mN/2\right]\)._
1. _The number of_ \(L\)_-expressions_ \(\Lambda\in\mathscr{D}_{k}\) _such that_ \(|S(\Lambda)|=h\) _is_ \(\gtrsim k^{h-1}\)_._
2. _The number of_ \(2\)_-tuples_ \(\left(\Lambda(1),\Lambda(2)\right)\in\mathscr{D}_{k}^{2}\) _such that_ \[S(\Lambda(1))\cap S(\Lambda(2))\neq\emptyset, \big{|}S(\Lambda(1))\cup S(\Lambda(2))\big{|}=2h-1\] _is_ \(\gtrsim k^{2h-3}\)_._
**Lemma 2.3**.: _Fix positive integers \(t,\ell\) such that \(t\in[4]\) and \(\ell\leq\max\{h,th-1\}\). Uniformly over \(k\in[0,mN]\), the number of \(t\)-tuples \(\left(\Lambda(1),\ldots,\Lambda(t)\right)\in\mathscr{D}_{k}^{t}\) such that \(\Lambda(1),\ldots,\Lambda(t)\) satisfy_
\[S(\Lambda(i))\cap\left(\bigcup_{j\neq i}S(\Lambda(j))\right)\neq\emptyset\text{ for all }i\in[t],\qquad\left|\bigcup_{i=1}^{t}S(\Lambda(i))\right|=\ell \tag{2.4}\]
_is \(\lesssim k^{\ell-\lceil(\ell+1)/h\rceil}\) if \(t\geq 2\), and \(\lesssim k^{\ell-1}\) if \(t=1\)._
### Poisson Approximation
Over the course of our calculations in Sections 5 and 6, we will make use of the following definitions and theorems. Note that it is very natural to suspect some kind of Poisson weak limit in this setting: loosely speaking, the possible ways in which a candidate sum \(-dN+k\) may be generated in \(L(A)\) correspond to a collection of weakly dependent events, rendering the Stein-Chen method a promising line of attack.
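The following Monte Carlo sketch (illustrative only; all parameters are arbitrary) demonstrates this heuristic in the simplest case \(L(x_{1},x_{2})=x_{1}+x_{2}\): the probability that a candidate value is missing from \(A+A\) is already well approximated by \(e^{-\mu_{k}}\) at modest \(N\):

```python
import random
from math import exp

N, k, p, trials = 60, 45, 0.12, 20_000
# Distinct representations of k as a + b with 0 <= a <= b <= N.
reps = [(a, k - a) for a in range(max(0, k - N), k // 2 + 1)]
mu = sum(p if a == b else p * p for a, b in reps)   # mu_k: sum of p^{|S(Lambda)|}
hits = 0
for _ in range(trials):
    A = {i for i in range(N + 1) if random.random() < p}
    hits += all(not (a in A and b in A) for a, b in reps)
print("empirical Pr[k not in A+A]:", hits / trials, " e^{-mu}:", exp(-mu))
```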
**Definition 2.4** ([1], Equation (1.1)).: Let \(I\) be a finite index set. For each \(\alpha\in I\), let \(X_{\alpha}\) be a Bernoulli random variable. We say that the random variables \(\{X_{\alpha}:\alpha\in I\}\) are _positively related_ if, for every \(\alpha\in I\), there exist random variables \(\left\{Y_{\beta\alpha}:\beta\in I\setminus\{\alpha\}\right\}\) defined on the same probability space such that
\[\mathcal{L}\left(Y_{\beta\alpha};\beta\in I\right) =\mathcal{L}\left(X_{\beta};\beta\in I\ |\ X_{\alpha}=1\right), Y_{\beta\alpha}\geq X_{\beta}\ \text{for all}\ \beta\in I\setminus\{\alpha\}.\]
**Theorem 2.5** ([1], Theorem 1).: _Let \(I\) be a finite index set. For each \(\alpha\in I\), let \(X_{\alpha}\) be a Bernoulli random variable with parameter \(p_{\alpha}\), and choose a subset \(B_{\alpha}\) of \(I\). Define_
\[b_{1} :=\sum_{\alpha\in I}\sum_{\beta\in B_{\alpha}}p_{\alpha}p_{\beta},\] \[b_{2} :=\sum_{\alpha\in I}\sum_{\alpha\neq\beta\in B_{\alpha}}\Pr[X_{ \alpha}X_{\beta}=1],\] \[b_{3} :=\sum_{\alpha\in I}\mathbb{E}\left[\big{|}\mathbb{E}[X_{\alpha} \ |\ X_{\beta}:\beta\notin B_{\alpha}]-p_{\alpha}\big{|}\right].\]
_Let \(W:=\sum_{\alpha\in I}X_{\alpha}\), and let \(Z:\stackrel{{ d}}{{=}}\mathrm{Pois}(\lambda)\) with rate \(\lambda:=\mathbb{E}[W]=\sum_{\alpha\in I}p_{\alpha}\). Then_
\[\left|\Pr[W=0]-e^{-\lambda}\right|<\min\{1,\lambda^{-1}\}(b_{1}+b_{2}+b_{3}).\]
Whenever Theorem 2.5 is invoked in this article, we will take \(B_{\alpha}\) to be the _dependency set_ of \(\alpha\) for every \(\alpha\in I\): \(\beta\in B_{\alpha}\) if and only if \(X_{\alpha}\) and \(X_{\beta}\) are dependent. With this choice of the subsets \(B_{\alpha}\), we may interpret \(b_{1}\) as measuring the total size of the dependence sets and \(b_{2}\) as twice the expected number of dependent pairs that arise. It also follows immediately that \(b_{3}=0\) (in general, \(b_{3}\) might be thought of as measuring how honest we were in selecting \(B_{\alpha}\) to be the dependency set of \(\alpha\)). We now state the theorems that we will invoke when proving our results at the process level.
**Theorem 2.6** ([11], Corollary 2.3).: _Assume the setup of Theorem 2.5, with the additional assumption that for each \(\alpha\in I\), \(B_{\alpha}\) is the dependency set of \(\alpha\). For each \(\alpha\in I\), let \(Y_{\alpha}:\stackrel{{ d}}{{=}}\mathrm{Pois}(p_{\alpha})\) be mutually independent. Let \(\mathscr{X}:=(X_{\alpha})_{\alpha\in I}\) denote the dependent Bernoulli process, and \(\mathscr{Y}:=(Y_{\alpha})_{\alpha\in I}\) denote the Poisson process on \(I\) with intensity \(p_{(\cdot)}\). Then_
\[d_{\mathrm{TV}}\left(\mathscr{X},\mathscr{Y}\right) \leq\min\{1,\lambda^{-1}\}\left(b_{1}+b_{2}\right).\]
**Theorem 2.7** ([1], Theorem 3.E).: _Let \(\{X_{\alpha}:\alpha\in I\}\) be positively related, with \(p_{\alpha}:=\mathbb{E}[X_{\alpha}]\). Let \(W:=\sum_{\alpha\in I}X_{\alpha}\), and let \(\lambda:=\mathbb{E}[W]\). Set_
\[\epsilon :=\frac{\mathrm{Var}(W)}{\lambda}-1, \gamma :=\frac{\mathbb{E}\left[(W-\mathbb{E}[W])^{4}\right]}{\lambda}-1,\]
_and also4_
Footnote 4: The expression for \(\psi\) we provide is not the same as that given in [1, Theorem 3.E]. Indeed, our expression is at least the expression they provide, which quickly follows by applying Jensen’s inequality to their final summand.
\[\psi :=\left(\frac{\gamma}{\lambda\epsilon}\right)_{+}+3\epsilon+\frac{ \sum_{\alpha\in I}p_{\alpha}^{2}}{\lambda^{2}\epsilon}+\frac{3\mathrm{Var}(W) \max_{\alpha\in I}p_{\alpha}}{\lambda\epsilon}.\]
_If \(\epsilon>0\), then_
\[d_{\mathrm{TV}}\left(W,\mathrm{Pois}(\lambda)\right)\geq\frac{\epsilon}{11+3\psi}.\]
### Martingale Concentration
When proving strong concentration results in Sections 5 and 6, we will rely on the martingale machinery developed in [21]. For each \(A\in\Omega\), \(n\in I_{N}\), and \(x\in\{0,1\}\), we define the functions5
Footnote 5: Our expression for \(V_{n}(A)\) deviates from that in [11, Equation (2.27)], which contains a mistake.
\[C_{n}(x,A) :=\left|\mathbb{E}\left[\left|L(A)^{c}\right|\,\middle|\,a_{0},\ldots,a_{n-1},a_{n}=x\right]-\mathbb{E}\left[\left|L(A)^{c}\right|\,\middle|\,a_{0},\ldots,a_{n-1}\right]\right|,\] \[V_{n}(A) :=\int_{0}^{1}C_{n}^{2}(x,A)\ d\mu_{n}(x)=(1-p)C_{n}^{2}(0,A)+pC_{n}^{2}(1,A).\]
We also define the functions
\[C(A) :=\max_{\begin{subarray}{c}n\in I_{N},\\ x\in\{0,1\}\end{subarray}}C_{n}(x,A), V(A) :=\sum_{n=0}^{N}V_{n}(A).\]
With this setup in place, we can extract the following result.
**Theorem 2.8** ([21], Lemma 3.1).: _For any \(\lambda,V,C>0\) such that \(\lambda\leq 4V/C^{2}\), we have that_
\[\Pr\left(\left|\left|L(A)^{c}\right|-\mathbb{E}\left[\left|L(A)^{c}\right| \right]\right|\geq\sqrt{\lambda V}\right)\leq 2e^{-\lambda/4}+\Pr(C(A)\geq C)+ \Pr(V(A)\geq V).\]
To make some simplifications, we also introduce, for each \(A\in\Omega\) and \(n\in I_{N}\), the function
\[\begin{split}\Delta_{n}(A)&:=\mathbb{E}\left[\left|L(A)^{c}\right|\,\middle|\,a_{0},\ldots,a_{n-1},a_{n}=0\right]-\mathbb{E}\left[\left|L(A)^{c}\right|\,\middle|\,a_{0},\ldots,a_{n-1},a_{n}=1\right]\\ &=\mathbb{E}\left[\left|L\left(A\setminus\{n\}\right)^{c}\right|-\left|L\left(A\cup\{n\}\right)^{c}\right|\,\middle|\,a_{0},\ldots,a_{n-1}\right].\end{split} \tag{2.5}\]
Here, \(\Delta_{n}(A)\) is the change in the conditional expectation of \(\left|L(A)^{c}\right|\), conditioning on \(a_{0},\ldots,a_{n}\), when we flip the value of \(a_{n}\) from \(1\) to \(0\). Proceeding as in [11, Equations (2.34)-(2.40)] now yields that we can write the functions \(C(A)\) and \(V(A)\) in terms of the functions \(\Delta_{n}(A)\), namely as
\[C(A) =(1-p)\max_{n\in I_{N}}\Delta_{n}(A)\sim\max_{n\in I_{N}}\Delta_{n}(A), \tag{2.6}\] \[V(A) =p(1-p)\sum_{n=0}^{N}\left(\Delta_{n}(A)\right)^{2}\sim p\sum_{n=0}^{N}\left(\Delta_{n}(A)\right)^{2}. \tag{2.7}\]
We will make use of these functions in Sections 5 and 6.
## 3. Standard Computations
In this section, we isolate those computations which we will need later. Our computations are over values \(k\in\left[0,mN/2\right]\). Writing \(W_{k}:=\sum_{\Lambda\in\mathscr{D}_{k}}X_{\Lambda}\) for the number of \(L\)-expressions with \(L\)-evaluation \(-d_{L}N+k\) whose ground sets are contained in \(A\), it is not hard to show (e.g., via (B.15) with a standard argument) that \(W_{k}\stackrel{d}{=}W_{mN-k}\). Indeed, under this correspondence, it holds at the process level that we have the equality in distribution
\[\left(X_{\Lambda}:\Lambda\in\mathscr{D}_{k}\right)\stackrel{{ d}}{{=}}\left(X_{\Lambda}:\Lambda\in\mathscr{D}_{mN-k} \right). \tag{3.1}\]
These observations allow us to extend these computations to values \(k\in\left[mN/2,mN\right]\). We now fix some \(k\in\left[0,mN/2\right]\). We take \(\mathscr{D}_{k}\) as our finite index set of interest. Up to constants, we bound the expectation, variance, third moment, and fourth central moment of \(W_{k}\). All asymptotic statements are to be understood as holding uniformly over all \(k\in\left[0,mN/2\right]\). Asymptotic statements involving
a condition on \(k\) are to be understood as holding uniformly over such \(k\) when working over a regime where the condition applies. We have
\[\mu_{k}:=\mathbb{E}[W_{k}]=\sum_{\Lambda\in\mathscr{D}_{k}}p^{|S(\Lambda)|}=\sum_{\ell=1}^{h}\sum_{\begin{subarray}{c}\Lambda\in\mathscr{D}_{k}\\ |S(\Lambda)|=\ell\end{subarray}}p^{\ell}\begin{cases}\gtrsim k^{h-1}p^{h},\\ \lesssim\sum_{\ell=1}^{h}k^{\ell-1}p^{\ell}\overset{k\gtrsim 1/p}{\lesssim}k^{h-1}p^{h},\end{cases}\]

\[\begin{split}\operatorname{Var}(W_{k})&=\sum_{\begin{subarray}{c}\left(\Lambda(1),\Lambda(2)\right)\in\mathscr{D}_{k}^{2}\\ \Lambda(1)\neq\Lambda(2)\end{subarray}}\operatorname{Cov}(X_{\Lambda(1)},X_{\Lambda(2)})+\sum_{\Lambda\in\mathscr{D}_{k}}\operatorname{Var}(X_{\Lambda})\\ &=\sum_{\begin{subarray}{c}\left(\Lambda(1),\Lambda(2)\right)\in\mathscr{D}_{k}^{2},\,\Lambda(1)\neq\Lambda(2)\\ S(\Lambda(1))\cap S(\Lambda(2))\neq\emptyset\end{subarray}}\left(p^{|S(\Lambda(1))\cup S(\Lambda(2))|}-p^{|S(\Lambda(1))|+|S(\Lambda(2))|}\right)+\sum_{\Lambda\in\mathscr{D}_{k}}\operatorname{Var}(X_{\Lambda})\\ &\sim\sum_{\ell=1}^{2h-1}\sum_{\begin{subarray}{c}\left(\Lambda(1),\Lambda(2)\right)\in\mathscr{D}_{k}^{2}\\ S(\Lambda(1))\cap S(\Lambda(2))\neq\emptyset\\ |S(\Lambda(1))\cup S(\Lambda(2))|=\ell\end{subarray}}p^{\ell}+\mathbb{E}[W_{k}]\begin{cases}\gtrsim k^{2h-3}p^{2h-1}+k^{h-1}p^{h},\\ \lesssim\sum_{\ell=h}^{2h-1}k^{\ell-2}p^{\ell}+\sum_{\ell=1}^{h-1}k^{\ell-1}p^{\ell}\overset{k\gtrsim(1/p)^{\frac{h-1}{h-2}}}{\lesssim}k^{2h-3}p^{2h-1}+k^{h-1}p^{h},\end{cases}\end{split}\tag{3.2}\]

\[\begin{split}\mathbb{E}[W_{k}^{3}]&=\sum_{\left(\Lambda(1),\Lambda(2),\Lambda(3)\right)\in\mathscr{D}_{k}^{3}}\mathbb{E}\left[X_{\Lambda(1)}X_{\Lambda(2)}X_{\Lambda(3)}\right]\lesssim\sum_{\ell=1}^{3h-2}k^{\ell-\lceil(\ell+1)/h\rceil}p^{\ell}+\mathbb{E}[W_{k}^{2}]\\ &\lesssim\sum_{\ell=2h}^{3h-2}k^{\ell-3}p^{\ell}+\sum_{\ell=h}^{2h-1}k^{\ell-2}p^{\ell}+\sum_{\ell=1}^{h-1}k^{\ell-1}p^{\ell}+\operatorname{Var}(W_{k})+\mathbb{E}[W_{k}]\\ &\overset{k\gtrsim 1/p}{\lesssim}k^{3h-5}p^{3h-2}+k^{2h-3}p^{2h-1}+k^{h-1}p^{h},\end{split}\tag{3.3}\]

\[\begin{split}\mathbb{E}\left[(W_{k}-\mathbb{E}[W_{k}])^{4}\right]&\leq\sum_{\left(\Lambda(1),\ldots,\Lambda(4)\right)\in\mathscr{D}_{k}^{4}}\left|\mathbb{E}\left[(X_{\Lambda(1)}-p_{\Lambda(1)})\cdots(X_{\Lambda(4)}-p_{\Lambda(4)})\right]\right|\\ &\lesssim\sum_{\ell=1}^{4h-2}\sum_{\begin{subarray}{c}\left(\Lambda(1),\ldots,\Lambda(4)\right)\in\mathscr{D}_{k}^{4}\\ \left|\bigcup_{i=1}^{4}S(\Lambda(i))\right|=\ell\end{subarray}}\mathbb{E}\left[X_{\Lambda(1)}X_{\Lambda(2)}X_{\Lambda(3)}X_{\Lambda(4)}\right]\lesssim\sum_{\ell=1}^{4h-2}k^{\ell-\lceil(\ell+1)/h\rceil}p^{\ell}+\mathbb{E}[W_{k}^{3}]\\ &\overset{k\gtrsim 1/p}{\lesssim}k^{4h-6}p^{4h-2}+k^{3h-4}p^{3h-1}+k^{2h-3}p^{2h-1}+k^{h-1}p^{h}\overset{k\gtrsim(1/p)^{\frac{h-1}{h-2}}}{\lesssim}k^{4h-6}p^{4h-2}.\end{split}\tag{3.4}\]
We have invoked Lemmas 2.2 and 2.3 several times in these computations. In particular, observe that all summands in (3.2), (3.3), and (3.4) corresponding to tuples which fail to satisfy the former condition of (2.4) vanish, so we take the upper limits of the sums in (3.3) and (3.4) to be \(3h-2\) and \(4h-2\), respectively. For use in Section 7, we also record the following, which is easily observed by proceeding slightly more frugally in the variance computations starting from (3.2).
\[\operatorname{Var}(W_{k})-(1-p)\mu_{k}\geq\sum_{\ell=1}^{2h-1}\sum_{\begin{subarray}{c}\left(\Lambda(1),\Lambda(2)\right)\in\mathscr{D}_{k}^{2}\\ S(\Lambda(1))\cap S(\Lambda(2))\neq\emptyset\\ |S(\Lambda(1))\cup S(\Lambda(2))|=\ell\end{subarray}}p^{\ell}\gtrsim k^{2h-3}p^{2h-1}+k^{h-1}p^{h}. \tag{3.5}\]
Finally, we bound the quantities that were introduced in Theorem 2.5. Let \(B_{\Lambda}\subseteq\mathscr{D}_{k}\) denote the dependency set of \(\Lambda\) for every \(\Lambda\in\mathscr{D}_{k}\), so that \(\Lambda^{\prime}\in B_{\Lambda}\) if and only if \(S(\Lambda)\cap S(\Lambda^{\prime})\neq\emptyset\). Recall that \(b_{3}\equiv 0\) in this setting: we may bound the other two constants by
\[b_{1}(k) =\sum_{\Lambda\in\mathscr{D}_{k}}\sum_{\Lambda^{\prime}\in B_{ \Lambda}}p_{\Lambda}p_{\Lambda^{\prime}}=\sum_{t=1}^{h}\sum_{\begin{subarray}{ c}(\Lambda,\Lambda^{\prime})\in\mathscr{D}_{k}^{2}\\ |S(\Lambda)\cap S(\Lambda^{\prime})|=t\end{subarray}}p_{\Lambda}p_{\Lambda^{ \prime}}\leq\sum_{t=1}^{h}\sum_{\begin{subarray}{c}(\Lambda,\Lambda^{\prime}) \in\mathscr{D}_{k}^{2}\\ |S(\Lambda)\cap S(\Lambda^{\prime})|=t\end{subarray}}\Pr\left[X_{\Lambda}X_{ \Lambda^{\prime}}=1\right]\] \[\leq\sum_{t=1}^{h}\sum_{\ell=2}^{2h-t}\sum_{\begin{subarray}{c}( \Lambda,\Lambda^{\prime})\in\mathscr{D}_{k}^{2}\\ S(\Lambda)\cap S(\Lambda^{\prime})\neq\emptyset\\ |S(\Lambda)\cup S(\Lambda^{\prime})|=\ell\end{subarray}}\Pr\left[X_{\Lambda} X_{\Lambda^{\prime}}=1\right]\lesssim\sum_{t=1}^{h}\sum_{\ell=2}^{2h-t}k^{\ell- \lceil(\ell+1)/h\rceil}p^{\ell}=\sum_{\ell=h}^{2h-1}k^{\ell-2}p^{\ell}+\sum_{ \ell=2}^{h-1}k^{\ell-1}p^{\ell}\] \[\stackrel{{ k\gtrsim 1/p}}{{\lesssim}}k^{2h-3}p^{2h-1}+k^{h- 2}p^{h-1}\stackrel{{ k\gtrsim(1/p)^{\frac{h}{h-1}}}}{{\lesssim}}k^{2h-3}p^{2h-1},\] \[b_{2}(k) \leq\sum_{\Lambda\in\mathscr{D}_{k}}\sum_{\Lambda^{\prime}\in B_ {\Lambda}}\Pr\left[X_{\Lambda}X_{\Lambda^{\prime}}=1\right]=\sum_{t=1}^{h} \sum_{\begin{subarray}{c}(\Lambda,\Lambda^{\prime})\in\mathscr{D}_{k}^{2}\\ |S(\Lambda)\cap S(\Lambda^{\prime})|=t\end{subarray}}\Pr\left[X_{\Lambda}X_{ \Lambda^{\prime}}=1\right]\] \[\stackrel{{ k\gtrsim 1/p}}{{\lesssim}}k^{2h-3}p^{2h-1}+k^{h- 2}p^{h-1}\stackrel{{ k\gtrsim(1/p)^{\frac{h}{h-1}}}}{{\lesssim}}k^{2h-3}p^{2h -1}.\]
Combining these two computations yields
\[b_{1}(k)+b_{2}(k)+b_{3}(k)\stackrel{{ k\gtrsim 1/p}}{{\lesssim}}k^{2h-3}p^{2h- 1}+k^{h-2}p^{h-1}\stackrel{{ k\gtrsim(1/p)^{\frac{h}{h-1}}}}{{ \lesssim}}k^{2h-3}p^{2h-1}. \tag{3.6}\]
## 4. Subcritical Decay
We commence the proof of Theorem 1.1. In this section, we prove Theorem 1.1(i). Throughout this section, we assume that
\[Np^{\frac{h}{h-1}}\lll 1. \tag{4.1}\]
We prove that an upper bound and a lower bound for \(|L(A)|\) are both asymptotically equivalent to \((Np)^{h}/\theta_{L}\). Certainly, \(A^{h}\) corresponds to a subset \(\Lambda(A)\) of \(\mathbb{N}^{h}/\!\approx_{L}\) (in the natural way), and
\[|L(A)|\leq|\Lambda(A)|=\frac{|A|(|A|-1)\cdots(|A|-h+1)}{\theta_{L}}+O\left(|A|^{ h-1}\right)\sim\frac{(Np)^{h}}{\theta_{L}}, \tag{4.2}\]
where the \(O(|A|^{h-1})\) term corresponds to \(L\)-expressions in \(\Lambda(A)\) whose ground sets have size less than \(h\), and the asymptotic equivalence follows from (for example) a Chernoff bound since \(A\) is a binomial random subset of \(I_{N}\) whose expected size, \((N+1)p\), tends to infinity. We obtain a lower bound for \(|L(A)|\) by subtracting, from \(|\Lambda(A)|\), the sum of the number of \(L\)-expressions in \(\Lambda(A)\) with \(L\)-evaluation \(0\) (i.e., \(|\mathscr{D}_{dN}\cap\Lambda(A)|\)) and the number of ordered pairs of \(L\)-expressions in \(\Lambda(A)\) with the same nonzero \(L\)-evaluation. For this latter summand, we count, for each \(k\in[0,mN]\setminus\{dN\}\) and \(t\in[h]\), the number of ordered pairs \(\left(\Lambda,\Lambda^{\prime}\right)\in\mathscr{D}_{k}^{2}\) of such \(L\)-expressions satisfying
\[\Lambda,\Lambda^{\prime}\in\Lambda(A),\qquad\Lambda\neq\Lambda^{\prime}, \qquad|S(\Lambda)\cap S(\Lambda^{\prime})|=t. \tag{4.3}\]
Using the computations in Section 3, the expectation of the value that we subtract is
\[\mathbb{E}\left[\left|\mathscr{D}_{dN}\cap\Lambda(A)\right|\right]+\sum_{k\in I_{mN}\setminus\{dN\}}\sum_{t=1}^{h}\mathbb{E}\big{[}\text{num.\ ordered pairs }\left(\Lambda,\Lambda^{\prime}\right)\in\mathscr{D}_{k}^{2}\text{ satisfying (4.3)}\big{]}\lesssim\sum_{\ell=1}^{h-1}(Np)^{\ell}+\sum_{\ell=h}^{2h-1}N^{\ell-1}p^{\ell}\lll(Np)^{h},\]

where the final bound uses (4.1) together with \(Np\ggg 1\). By Markov's inequality, the quantity that we subtract is therefore \(o\big{(}(Np)^{h}\big{)}\) with probability \(1-o(1)\), so the lower bound for \(|L(A)|\) is also asymptotically equivalent to \((Np)^{h}/\theta_{L}\), proving Theorem 1.1(i).
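Before moving on, a quick numerical illustration of the subcritical count (informal, with arbitrary parameter choices) for \(L(x_{1},x_{2})=x_{1}+x_{2}\), where \(\theta_{L}=2\): with \(p=N^{-0.6}\lll N^{-1/2}\), a single sample of \(|A+A|\) should land within a few percent of \((Np)^{2}/2\):

```python
import random

N = 1_000_000
p = N ** -0.6                          # subcritical: p << N^{-(h-1)/h} = N^{-1/2}
A = [i for i in range(N + 1) if random.random() < p]
sums = {a + b for a in A for b in A}
print("empirical:", len(sums), " predicted:", (N * p) ** 2 / 2)
```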
## 5. Critical Decay

In this section, we prove Theorem 1.1(ii). Throughout this section, we assume that

\[p(N)=cN^{-\frac{h-1}{h}}\text{ for some constant }c>0. \tag{5.1}\]

In Sections 5 and 6, we also fix a function \(g:\mathbb{N}\to\mathbb{N}\) satisfying

\[1/p\lll g(N)\lll N, \tag{5.2}\]

and frequently abbreviate \(g(N)\) to \(g\). Let \(\tilde{L}:\mathbb{Z}^{h}\to\mathbb{Z}\) be the \(\mathbb{Z}\)-linear form with coefficients \(-u_{1},\ldots,-u_{h}\). It follows that \(L(A)=-\tilde{L}(A)\), and for any \(k\),
\[sN-k\notin L(A)\iff-sN+k\notin\tilde{L}(A). \tag{5.3}\]
In many forthcoming arguments, we will prove statements over values \(-dN+k\) for \(k\in\left[0,mN/2\right]\), then take advantage of the symmetry apparent in (5.3) to prove the corresponding statement over values \(-dN+k\) for \(k\in\left[mN/2,mN\right]\). Additionally, in Sections 5 and 6, we define the following function \(\delta:\mathbb{N}\to\mathbb{R}_{\geq 0}\), which we think of as the worst-case multiplicative margin of error for the asymptotic equivalence given by Lemma 2.1:
\[\delta(N):=\max_{k\in\left[g(N),mN/2\right]}\left|\frac{\mu_{k}}{\lambda_{k}N ^{h-1}p^{h}}-1\right|. \tag{5.4}\]
By Lemma 2.1 and Section 3 (the first \(\sim\) holds because all summands with \(\ell<h\) are of lower order and \(g(N)\ggg 1/p\)), it holds uniformly over \(k\in\left[g(N),mN/2\right]\) that
\[\mu_{k}=\sum_{\ell=1}^{h}\sum_{\begin{subarray}{c}\Lambda\in\mathscr{D}_{k}\\ |S(\Lambda)|=\ell\end{subarray}}p^{\ell}\sim|\mathscr{D}_{k}|\cdot p^{h}\sim \lambda_{k}N^{h-1}p^{h}. \tag{5.5}\]
Therefore, it follows from Lemma 2.1 that \(\delta(N)\lll 1\).
### Expectation
In this subsection, we compute \(\mathbb{E}\left[|L(A)|\right]\) and \(\mathbb{E}\left[|L(A)^{c}|\right]\). We begin with the following observation, which we invoke in the computation. We exclude \(k=mN/2\) from the range where (5.6) holds uniformly: this is for convenience, as it ensures \(\mathscr{D}_{k}=\mathfrak{D}_{k}\) for all such \(k\).
**Lemma 5.1**.: _Uniformly over all \(k\in\left[g(N),mN/2\right)\),_
\[\left|\Pr\left[-dN+k\notin L(A)\right]-e^{-c^{h}\lambda_{k}}\right|\lll e^{-c^ {h}\lambda_{k}}. \tag{5.6}\]
Proof.: By definition, \(-dN+k\notin L(A)\) and \(W_{k}=0\) are the same event. It follows from Theorem 2.5 that uniformly over \(k\in\left[g(N),mN/2\right)\),
(5.7) \[\begin{split}& e^{c^{h}\lambda_{k}}\left|\Pr\left[-dN+k\notin L(A)\right]-e^{-c^{h}\lambda_{k}}\right|=e^{c^{h}\lambda_{k}}\left|(\Pr\left[W_{k}=0\right]-e^{-\mu_{k}})+(e^{-\mu_{k}}-e^{-c^{h}\lambda_{k}})\right|\\ &\qquad\leq\exp\left(\max_{k\in\left[g(N),mN/2\right)}c^{h}\lambda_{k}\right)\left(b_{1}(k)+b_{2}(k)+b_{3}(k)\right)+e^{c^{h}\lambda_{k}}\left|e^{-\mu_{k}}-e^{-c^{h}\lambda_{k}}\right|\\ &\qquad\lesssim k^{2h-3}p^{2h-1}+k^{h-2}p^{h-1}+\left|e^{c^{h}\lambda_{k}\left(1-\frac{\mu_{k}}{c^{h}\lambda_{k}}\right)}-1\right|\lesssim N^{2h-3}p^{2h-1}+N^{h-2}p^{h-1}+o(1)\lll 1,\end{split}\]

where the \(o(1)\) term follows from \(\delta(N)\lll 1\) together with the uniform boundedness of \(\lambda_{k}\), and the final bound uses (5.1). This establishes (5.6).
**Theorem 5.2**.: _If (5.1) holds, then_

\[\mathbb{E}\left[|L(A)|\right] \sim\left(\sum_{i=1}^{h}|u_{i}|-2\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx\right)N,\] \[\mathbb{E}\left[|L(A)^{c}|\right] \sim\left(2\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx\right)N.\]
Proof.: It is clear that it suffices to prove one of the two statements. We prove the latter. We write
\[\mathbb{E}\left[|L(A)^{c}|\right]=\mathbb{E}\left[\left|L(A)^{c}\cap\left[-dN,-dN+mN/2\right]\right|\right]+\mathbb{E}\left[\left|L(A)^{c}\cap\left[sN-mN/2,sN\right]\right|\right]. \tag{5.8}\]
Using (5.2), the first summand in (5.8) can be written as
\[o(N)+\mathbb{E}\left[\left|L(A)^{c}\cap\left[-dN+g(N),-dN+mN/2 \right]\right|\right]. \tag{5.9}\]
We express the latter summand of (5.9) as
\[\sum_{k=g(N)}^{mN/2}\Pr\left[-dN+k\notin L(A)\right]=O(1)+\sum_{k=g(N)}^{mN/2}\left(\Pr\left[-dN+k\notin L(A)\right]-e^{-c^{h}\lambda_{k}}\right)+\sum_{k=g(N)}^{mN/2}e^{-c^{h}\lambda_{k}}\] \[\sim O(1)+\sum_{k=g(N)}^{mN/2}e^{-c^{h}\lambda_{k}}\] \[\sim O(1)+\int_{g(N)}^{mN/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x/N-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)\ dx \tag{5.10}\] \[\sim N\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx. \tag{5.11}\]
We have invoked Lemma 5.1 in (5.10). It follows from (5.9) that (5.11) is asymptotically equivalent to the first summand of (5.8). We now handle the second summand of (5.8). Proceeding as in (5.9) and invoking (5.3), this is
\[o(N)+\sum_{k=g}^{mN/2}\Pr\left[sN-k\notin L(A)\right]=o(N)+\sum_{k=g}^{mN/2}\Pr\left[-sN+k\notin\tilde{L}(A)\right] \tag{5.12}\] \[\sim N\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx. \tag{5.13}\]
We have applied (5.11) to \(\tilde{L}(A)\).7 Using (5.8), (5.11) and (5.13), we conclude that
Footnote 7: This is valid, since we established (5.11) for an arbitrary \(\mathbb{Z}\)-linear form on \(h\) variables.
\[\mathbb{E}\left[|L(A)^{c}|\right]\sim 2N\int_{0}^{m/2}\exp\left(-\frac{c^{h}\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(x-\sum_{i=1}^{h}t_{i}\right)}{\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx,\]
which suffices to prove the theorem.
### Concentration
We now show that the random variables \(|L(A)|\) and \(|L(A)^{c}|\) are strongly concentrated about their expectations. With Theorem 5.2, this proves Theorem 1.1(ii).
**Proposition 5.3**.: _If (5.1) holds, then \(|L(A)|\sim\mathbb{E}\left[|L(A)|\right]\) and \(|L(A)^{c}|\sim\mathbb{E}\left[|L(A)^{c}|\right]\)._
Proof.: We fix \(n\in I_{N}\). We define
\[\mathcal{I}_{n}:=\left[-dN+\min\{n,N-n\},sN-\min\{n,N-n\}\right]. \tag{5.14}\]
We also define
\[\tilde{g}(N):=(1/p)^{\frac{h}{h-1}}\log(1/p), \tag{5.15}\]
and abbreviate \(\tilde{g}(N)\) to \(\tilde{g}\). We think of \([-dN+\tilde{g},sN-\tilde{g}]\) and \([-dN+\tilde{g},sN-\tilde{g}]^{c}\), respectively, as the "middle" and "fringes" of the interval \([-dN,sN]\).8 Since \(A\setminus\{n\}\subset A\cup\{n\}\), \(L\left(A\setminus\{n\}\right)\subseteq L\left(A\cup\{n\}\right)\), and thus it follows that the integrand in (2.5) can be written as
Footnote 8: Our choice of \(\tilde{g}\), as well as much of the following argument, will seem unnatural. In the proof of Theorem 6.6, we will follow, with some modifications mentioned there, exactly the same argument to prove the strong concentration of \(|L(A)^{c}|\) about its expectation. As such, many of our choices will make more sense later.
\[\left|L\left(A\setminus\{n\}\right)^{c}\right|-\left|L\left(A\cup\{n\}\right)^{c}\right|=\left|L\left(A\setminus\{n\}\right)^{c}\setminus L\left(A\cup\{n\}\right)^{c}\right|=\left|L\left(A\cup\{n\}\right)\setminus L\left(A\setminus\{n\}\right)\right|. \tag{5.16}\]
The final expression in (5.16) can be understood as the number of new elements that are added to \(L(A)\) due to the inclusion of \(n\). Any such new element certainly must use \(n\) as a summand in any sum that generates it. Therefore, it holds that
\[L\left(A\cup\{n\}\right)\setminus L\left(A\setminus\{n\}\right)\subseteq \mathcal{I}_{n}.\]
From the linearity of conditional expectation,
\[\begin{split}\Delta_{n}(A)&=\mathbb{E}\left[\left|L\left(A\cup\{n\}\right)\setminus L\left(A\setminus\{n\}\right)\right|\,\middle|\,a_{0},\ldots,a_{n-1}\right]\\ &=\mathbb{E}\left[\left|\left(L\left(A\cup\{n\}\right)\setminus L\left(A\setminus\{n\}\right)\right)\cap\left(\mathcal{I}_{n}\cap[-dN+\tilde{g},sN-\tilde{g}]\right)\right|\,\middle|\,a_{0},\ldots,a_{n-1}\right]\\ &\qquad+\mathbb{E}\left[\left|\left(L\left(A\cup\{n\}\right)\setminus L\left(A\setminus\{n\}\right)\right)\cap\left(\mathcal{I}_{n}\setminus[-dN+\tilde{g},sN-\tilde{g}]\right)\right|\,\middle|\,a_{0},\ldots,a_{n-1}\right]\\ &=:\Delta_{n,1}(A)+\Delta_{n,2}(A).\end{split} \tag{5.17}\]
From (5.1), we see that \(\tilde{g}\ggg N\), so it certainly holds that \(\Delta_{n,1}(A)=0\). We now turn to showing that \(\Delta_{n,2}(A)\) is modest with high probability. If \(n\in[\tilde{g},N-\tilde{g}]\),
\[\min\{n,N-n\}\geq\tilde{g},\]
from which it follows from (5.14) that
\[\mathcal{I}_{n}\setminus[-dN+\tilde{g},sN-\tilde{g}]=\left[-dN+\min\{n,N-n\}, sN-\min\{n,N-n\}\right]\setminus[-dN+\tilde{g},sN-\tilde{g}]=\emptyset.\]
This implies
\[\Delta_{n,2}(A)=0\text{ for all }n\in[\tilde{g},N-\tilde{g}]. \tag{5.18}\]
Combining (5.18) with a union bound yields that with probability \(1-o(1)\),9
Footnote 9: We write like this so that the argument can be adapted without modification to the proof of Theorem 6.6. The same comment applies for (5.24).
\[\Delta_{n}(A)=\Delta_{n,1}(A)+\Delta_{n,2}(A)=0\text{ for all }n\in[\tilde{g},N- \tilde{g}]. \tag{5.19}\]
We now consider values \(n\notin[\tilde{g},N-\tilde{g}]\). Here, we observe that
\[\Delta_{n,2}(A)\leq\left(\left|A\cap[0,\tilde{g}]\right|+\left|A\cap[N-\tilde {g},N]\right|\right)^{h-1}. \tag{5.20}\]
Indeed, including the element \(n\) in \(A\setminus\{n\}\) generates no more than
\[\big{(}\big{|}A\cap[0,\tilde{g}]\big{|}+\big{|}A\cap[N-\tilde{g},N]\big{|}\big{)} ^{h-1}\]
new elements from \(\mathcal{I}_{n}\setminus[-dN+\tilde{g},sN-\tilde{g}]\): as before, any new element in \(L(A)\) resulting from including \(n\) in \(A\setminus\{n\}\) must use \(n\) as a summand in any sum which generates it, and it is not hard to see that the remaining \(h-1\) summands in any such sum must lie in
\[(A\cap[0,\tilde{g}])\cup(A\cap[N-\tilde{g},N]),\]
since the resulting sum would lie in \([-dN+\tilde{g},sN-\tilde{g}]\) otherwise. By a Chernoff bound,
\[\begin{split}\Pr\Big{(}|A\cap[0,\tilde{g}]|&\leq 2 (1/p)^{\frac{1}{h-1}}\log(1/p)\Big{)}\geq 1-\exp\left(-(1/p)^{\frac{1}{h-1}} \log(1/p)/2\right),\\ \Pr\Big{(}\big{|}A\cap[N-\tilde{g},N]|&\leq 2 (1/p)^{\frac{1}{h-1}}\log(1/p)\Big{)}\geq 1-\exp\left(-(1/p)^{\frac{1}{h-1}} \log(1/p)/2\right).\end{split} \tag{5.21}\]
From (5.21) with a union bound, we deduce that with probability at most
\[2\exp\left(-(1/p)^{\frac{1}{h-1}}\log(1/p)/2\right),\]
we have that
\[\big{(}\big{|}A\cap[0,\tilde{g}]\big{|}+\big{|}A\cap[N-\tilde{g},N]\big{|} \big{)}^{h-1}>\left(4(1/p)^{\frac{1}{h-1}}\log(1/p)\right)^{h-1}=4^{h-1}(1/p) \left[\log(1/p)\right]^{h-1}. \tag{5.22}\]
Now, (5.20), (5.22), and a union bound over \(n\notin[\tilde{g},N-\tilde{g}]\) together imply that
\[\Pr\Big{(}\Delta_{n,2}(A)>4^{h-1}(1/p)\left[\log(1/p)\right]^{h- 1}\text{ for some }n\notin[\tilde{g},N-\tilde{g}]\Big{)}\] \[\leq\Pr\Big{(}\big{(}\big{|}A\cap[0,\tilde{g}]\big{|}+\big{|}A \cap[N-\tilde{g},N]\big{|}\big{)}^{h-1}>4^{h-1}(1/p)\left[\log(1/p)\right]^{h -1}\text{ for some }n\notin[\tilde{g},N-\tilde{g}]\Big{)}\] \[\lesssim\tilde{g}\exp\left(-\frac{\log(1/p)}{2p^{\frac{1}{h-1}}} \right)\lesssim\frac{\log(1/p)}{p^{\frac{h}{h-1}}}\exp\left(-\frac{\log(1/p)}{ 2p^{\frac{1}{h-1}}}\right)\leq\left(\frac{\log(1/p)}{p^{\frac{1}{h-1}}}\right) ^{h}\exp\left(-\frac{\log(1/p)}{2p^{\frac{1}{h-1}}}\right)\lll 1.\]
Therefore, with probability \(1-o(1)\),
\[\Delta_{n,2}(A)\lesssim(1/p)\left[\log(1/p)\right]^{h-1}\text{ for all }n\notin[\tilde{g},N-\tilde{g}]. \tag{5.23}\]
Combining (5.23) with a union bound yields that with probability \(1-o(1)\),
\[\Delta_{n}(A)=\Delta_{n,1}(A)+\Delta_{n,2}(A) \lesssim o(1)+(1/p)\left[\log(1/p)\right]^{h-1}\] \[\lesssim(1/p)\left[\log(1/p)\right]^{h-1}\text{ for all }n\notin[ \tilde{g},N-\tilde{g}]. \tag{5.24}\]
Therefore, by combining (5.19) and (5.24) under a union bound and recalling (2.6), we deduce that with probability \(1-o(1)\),
\[C(A)\sim\max_{n\in I_{N}}\Delta_{n}(A)\lesssim(1/p)\left[\log(1/p)\right]^{h- 1}. \tag{5.25}\]
On this event with \(1-o(1)\) probability, we also deduce from (5.19) and (5.24) that, recalling (2.7),
\[\begin{split} V(A)&\sim p\sum_{n=0}^{N}\big{(}\Delta_{n}(A)\big{)}^{2}=p\sum_{n\in[\tilde{g},N-\tilde{g}]}\big{(}\Delta_{n}(A)\big{)}^{2}+p\sum_{n\in I_{N}\setminus[\tilde{g},N-\tilde{g}]}\big{(}\Delta_{n}(A)\big{)}^{2}\\ &\lesssim p\cdot o(1)+p\cdot\tilde{g}\left((1/p)\left[\log(1/p)\right]^{h-1}\right)^{2}\\ &\lesssim o(1)+\frac{\log(1/p)}{p^{\frac{1}{h-1}}}(1/p)^{2}\left(\log(1/p)\right)^{2(h-1)}\lesssim(1/p)^{2+\frac{1}{h-1}}\left(\log(1/p)\right)^{2h-1}.\end{split} \tag{5.26}\]
We finish the proof by invoking Theorem 2.8. Specifically, we take
\[\lambda\asymp\log(1/p),\ \ V\asymp(1/p)^{2+\frac{1}{h-1}}\left(\log(1/p)\right)^{2h-1 },\ \ C\asymp(1/p)\left(\log(1/p)\right)^{h-1}, \tag{5.27}\]
and take the constant factors implicit in our expressions for \(C\) and \(V\) to agree with those implied in (5.25) and (5.26), respectively. It follows from our choices in (5.27) that
\[\lambda\lesssim\log(1/p)\lesssim V/C^{2}=(1/p)^{\frac{1}{h-1}}\log(1/p), \tag{5.28}\] \[\sqrt{\lambda V}\asymp\sqrt{\log(1/p)\cdot\left(1/p\right)^{2+\frac{1}{h-1}}\left(\log(1/p)\right)^{2h-1}}=(1/p)^{1+\frac{1}{2(h-1)}}\left[\log(1/p)\right]^{h}\lll(1/p)^{\frac{h}{h-1}}. \tag{5.29}\]
From (5.1) and (5.29), it follows that \(\sqrt{\lambda V}\lll N\). Since the right-hand sides of (1.16) and (1.17) are \(\Omega(N)\), it follows from Theorem 2.8 and (5.29) that to prove the proposition, it suffices to show
\[2e^{-\lambda/4}+\Pr(C(A)\geq C)+\Pr(V(A)\geq V)\lll 1. \tag{5.30}\]
Certainly, \(2e^{-\lambda/4}\lll 1\), since it is immediate from (5.27) that \(\lambda\ggg 1\). The latter two terms on the LHS of (5.30) are \(o(1)\) by (5.25) and (5.26).
Theorem 5.2 and Proposition 5.3 together prove Theorem 1.1(ii).
## 6. Supercritical Decay
Finally, we prove Theorem 1.1(iii). The \(h=2\) case was done by [11, Theorem 3.1(iii)], so we are free to assume \(h\geq 3\). We assume in this section that
\[Np^{\frac{h}{h-1}}\ggg 1. \tag{6.1}\]
### Reduction
We set up expressions for the setting in which (6.1) holds. We let \(f:\mathbb{N}\to\mathbb{R}_{\geq 0}\) be a function satisfying
\[1\lll f(N)\lll\min\left\{Np^{\frac{h}{h-1}},\left(1/\delta(N) \right)^{\frac{1}{h-1}}\right\},\] \[\exp\left(\frac{f(N)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1} ^{h}|u_{i}|}\right)f(N)^{2h-3}p^{\frac{1}{(h-1)(h-2)}}\lll 1; \tag{6.2}\]
by taking \(f\) to be sufficiently slowly growing, it is easy to see that such a function satisfying all of the conditions in (6.2) exists. We define
\[\tau(N):=\frac{f(N)}{Np^{\frac{h}{h-1}}}. \tag{6.3}\]
From (6.2) and (6.3), it follows that \(\tau\lll 1\). We will frequently abbreviate \(\tau(N)\) to \(\tau\). We think of \([(-d+\tau)N,(s-\tau)N]\) as the "middle" of the interval \([-dN,sN]\), and \([(-d+\tau)N,(s-\tau)N]^{c}\) as the "fringes." In this subsection, we reduce the computation of \(\mathbb{E}\left[|L(A)^{c}|\right]\) to the fringes. We begin by deriving an asymptotic lower bound for the expected number of elements in \([-dN,sN]\) which are missing sums in \(L(A)\).
**Lemma 6.1**.: _It holds that_
\[\mathbb{E}\left[|L(A)^{c}|\right]\gtrsim\left(1/p\right)^{\frac{h}{h-1}}.\]
Proof.: It follows from (6.1) that \(N\ggg(1/p)^{\frac{h}{h-1}}\), so
\[\mathcal{I}:=\left[-dN,-dN+(1/p)^{\frac{h}{h-1}}\right]\subseteq[-dN,sN].\]
Certainly, any sum in \(L(A)\) which is generated by adding a term greater than \((1/p)^{\frac{h}{h-1}}\) or subtracting a term less than \(N-(1/p)^{\frac{h}{h-1}}\) fails to lie in \(\mathcal{I}\). Thus, any sum in \(L(A)\) which lands in \(\mathcal{I}\) must
add only terms that are at most \((1/p)^{\frac{h}{h-1}}\) and subtract only terms that are at least \(N-(1/p)^{\frac{h}{h-1}}\). Now, it follows from a Chernoff bound and the latter statement of (1.4) that for all \(i\in[n]\) and \(j\in\{n+1,\ldots,h\}\),
\[\left|A\cap\left[0,(1/p)^{\frac{h}{h-1}}/u_{i}\right]\right|\sim(1/p)^{\frac{ 1}{h-1}}/u_{i},\quad\left|A\cap\left[N-(1/p)^{\frac{h}{h-1}}/|u_{j}|,N\right] \right|\sim(1/p)^{\frac{1}{h-1}}/|u_{j}|. \tag{6.4}\]
By considering the number of \(L\)-expressions whose \(L\)-evaluations lie in \(\mathcal{I}\), we deduce that with probability \(1-o(1)\), the number of elements in \(\mathcal{I}\) that lie in \(L(A)\) is at most10
Footnote 10: The \(o(1)\) term in (6.5) corresponds to \(h\)-tuples \((a_{1},\ldots,a_{h})\) with nondistinct elements, which are of a lower order.
\[\frac{1+o(1)}{\theta_{L}}\cdot\prod_{i=1}^{n}\left|A\cap\left[0,(1/p)^{\frac{ h}{h-1}}/u_{i}\right]\right|\cdot\prod_{j=n+1}^{h}\left|A\cap\left[N-(1/p)^{ \frac{h}{h-1}}/|u_{j}|,N\right]\right|. \tag{6.5}\]
Furthermore, on this high probability event, (6.5) is bounded by
\[\frac{1+o(1)}{\theta_{L}}\cdot\prod_{i=1}^{n}\left(\frac{(1+o(1))(1/p)^{\frac {1}{h-1}}}{u_{i}}\right)\cdot\prod_{j=n+1}^{h}\left(\frac{(1+o(1))(1/p)^{\frac {1}{h-1}}}{|u_{j}|}\right)=\frac{(1+o(1))(1/p)^{\frac{h}{h-1}}}{\theta_{L} \cdot\prod_{i=1}^{h}|u_{i}|}.\]
Therefore, with probability \(1-o(1)\), the number of elements in \(\mathcal{I}\) missing in \(L(A)\) is at least
\[(1/p)^{\frac{h}{h-1}}-\frac{(1+o(1))(1/p)^{\frac{h}{h-1}}}{\theta_{L}\cdot \prod_{i=1}^{h}|u_{i}|}=\left[1-\frac{1+o(1)}{\theta_{L}\cdot\prod_{i=1}^{h}|u _{i}|}\right](1/p)^{\frac{h}{h-1}}\gtrsim(1/p)^{\frac{h}{h-1}}, \tag{6.6}\]
since \(\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|\geq 2\) (if \(|u_{i}|=1\) for all \(i\in[h]\), then since \(h\geq 3\), either \(1\) or \(-1\) appears as a coefficient at least twice, so \(\theta_{L}\geq 2\); otherwise \(\prod_{i=1}^{h}|u_{i}|\geq 2\)). This establishes the lemma.
We now prove Lemma 6.2, which shows that the lower bound of Lemma 6.1 dominates the number of elements in the middle of \([-dN,sN]\) that are missing from \(L(A)\).
**Lemma 6.2**.: _The expected number of elements in the interval \([(-d+\tau)N,(s-\tau)N]\) that are missing from \(L(A)\) satisfies_
\[\mathbb{E}\left[\left|L(A)^{c}\cap\left[(-d+\tau)N,(s-\tau)N\right]\right| \right]\lll 1/p.\]
Proof.: Fix \(k\in\left[\tau N,mN/2\right]\): it follows from (6.2) and (6.3) that \(k^{h-1}p^{h}\gtrsim 1\) uniformly over such \(k\). We take \(\mu_{k}\) and \(b_{2}(k)\) as defined in the proof of Lemma 6.4 and Section 3, respectively. Uniformly over \(k\in\left[\tau N,mN/2\right]\setminus\{dN\}\),
\[-\frac{\mu_{k}^{2}}{2b_{2}(k)}\lesssim-\frac{k^{2h-2}p^{2h}}{k^{2h-3}p^{2h-1} }\lesssim-kp. \tag{6.7}\]
If there exists \(\Lambda\in\mathscr{D}_{k}\) for which \(S(\Lambda)\subseteq A\), then \(-dN+k\in L(A)\). Let \(E_{\Lambda}\) denote the event \(S(\Lambda)\subseteq A\). Now, (6.7) and the extended Janson inequality (e.g., see [1, Theorem 8.1.2]) together imply that for some constant \(C>0\),
\[\Pr\left[-dN+k\notin L(A)\right]\leq\Pr\left[\bigwedge_{\Lambda\in\mathscr{D}_{k}}\overline{E_{\Lambda}}\right]\leq e^{-\frac{\mu_{k}^{2}}{2b_{2}(k)}}\leq e^{-Ckp}, \tag{6.8}\]
where this holds uniformly over \(k\in\left[\tau N,mN/2\right]\). We deduce that
\[\mathbb{E}\left[\left|L(A)^{c}\cap\left[(-d+\tau)N,\left(s-m/2\right)N \right]\right|\right]=\sum_{k=\tau N}^{mN/2}\Pr\left[-dN+k\notin L(A)\right] \leq\sum_{k=\tau N}^{mN/2}e^{-Ckp}\]
\[\leq\int_{\tau N-1}^{\infty}e^{-Cxp}\ dx=\frac{1}{Cp}\exp\left(-Cp( \tau N-1)\right)\lesssim(1/p)\exp\left(C(p-f(N))\right)\lll 1/p, \tag{6.9}\]
with the final statement due to (6.2). It now follows from (5.3) that
\[\mathbb{E}\left[\left|L(A)^{c}\cap\left[\left(-d+m/2\right)N,(s- \tau)N\right]\right|\right] =\sum_{k\in[\tau N,mN/2]}\Pr[sN-k\notin L(A)]\] \[=\sum_{k\in[\tau N,mN/2]}\Pr[-sN+k\notin\tilde{L}(A)]\lll 1/p, \tag{6.10}\]
where we have applied (6.9) with respect to \(\tilde{L}\) to observe (6.10). Altogether, we conclude that
\[\mathbb{E}\left[\left|L(A)^{c}\cap\left[(-d+\tau)N,(s-\tau)N\right]\right|\right]\] \[\qquad=\mathbb{E}\left[\left|L(A)^{c}\cap\left[(-d+\tau)N,\left(s-m/2\right)N\right]\right|\right]+\mathbb{E}\left[\left|L(A)^{c}\cap\left[\left(-d+m/2\right)N,(s-\tau)N\right]\right|\right]\lll 1/p.\]
This yields the desired statement.
It now follows from Lemmas 6.1 and 6.2 that
\[\mathbb{E}\left[\left|L(A)^{c}\right|\right] =\mathbb{E}\left[\left|L(A)^{c}\cap\left[\left(-d+\tau\right)N,(s -\tau)N\right]\right|\right]+\mathbb{E}\left[\left|L(A)^{c}\cap\left[\left(-d+ \tau\right)N,(s-\tau)N\right]^{c}\right|\right]\] \[\sim\mathbb{E}\left[\left|L(A)^{c}\cap\left[\left(-d+\tau\right)N,(s-\tau)N\right]^{c}\right|\right], \tag{6.11}\]
which provides the desired reduction.
**Remark 6.3**.: It is straightforward to adapt Lemma 6.2 if we were to include the condition that some particular value \(n\in I_{N}\) cannot be in any subset of \(\mathscr{S}_{k}\). Indeed, it is not hard to see that the asymptotic claim implicit in (6.7) would still hold uniformly over \(k\in\left[\tau N,mN/2\right]\), since for any such \(k\), the number of \(h\)-tuples \((a_{1},\ldots,a_{h})\) with at least one instance of \(n\) such that \(L(a_{1},\ldots,a_{h})=-dN+k\) is \(O(k^{h-2})\). More specifically, by tracing and adapting the proof of Lemma 6.2, we can show that there exists a constant \(C>0\) (also independent of \(n\)) for which it holds for all \(k\in\left[\left(1/p\right)^{\frac{h}{h-1}},mN/2\right]\) that
\[\Pr\left[-dN+k\notin L\left(A\setminus\{n\}\right)\right]\leq e^{-Ckp},\qquad \qquad\Pr\left[sN-k\notin L\left(A\setminus\{n\}\right)\right]\leq e^{-Ckp}.\]
We make use of this remark in the proof of Proposition 6.6.
### Expectation
In this subsection, we will compute \(\mathbb{E}\left[\left|L(A)^{c}\right|\right]\). We begin with the following observation, which is the analogue of Lemma 5.1 for this regime.
**Lemma 6.4**.: _Uniformly over all \(k\in\left[g(N),\tau N\right]\),_
\[\left|\Pr\left[-dN+k\notin L(A)\right]-e^{-\lambda_{k}N^{h-1}p^{h}}\right| \lll e^{-\lambda_{k}N^{h-1}p^{h}}.\]
Proof.: We proceed as in the proof of Lemma 5.1. Uniformly over \(k\in\left[g(N),\tau N\right]\),
\[\mu_{k}\sim|\mathscr{D}_{k}|\cdot p^{h}\sim\lambda_{k}N^{h-1}p^{h}=\frac{p^{h }k^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}. \tag{6.12}\]
Here, (6.12) follows as in Lemma 5.1, with the final equality due to the definition of \(\mathsf{IH}_{h}\). It now follows from Theorem 2.5 that uniformly over \(k\in\left[g(N),\tau N\right]\),
\[e^{\lambda_{k}N^{h-1}p^{h}}\left|\Pr\left[-dN+k\notin L(A)\right] -e^{-\lambda_{k}N^{h-1}p^{h}}\right|\] \[=e^{\lambda_{k}N^{h-1}p^{h}}\left|(\Pr\left[W_{k}=0\right]-e^{-\mu_ {k}})+(e^{-\mu_{k}}-e^{-\lambda_{k}N^{h-1}p^{h}})\right|\]
\[\leq\exp\left(\frac{p^{h}k^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)\left(b_{1}(k)+b_{2}(k)+b_{3}(k)\right)+e^{\lambda_{k}N^{h-1}p^{h}}\left|e^{-\mu_{k}}-e^{-\lambda_{k}N^{h-1}p^{h}}\right|\] \[\lesssim\exp\left(\frac{\left(\tau Np^{\frac{h}{h-1}}\right)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)\left(k^{2h-3}p^{2h-1}+k^{h-1}p^{h}\right)+\left|\exp\left(\lambda_{k}N^{h-1}p^{h}\left(1-\frac{\mu_{k}}{\lambda_{k}N^{h-1}p^{h}}\right)\right)-1\right| \tag{6.13}\] \[\leq\exp\left(\frac{f(N)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)k^{h-1}p^{h}\left(k^{h-2}p^{h-1}+1\right)+o(1)\] \[\leq\exp\left(\frac{f(N)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)\left(\tau Np^{\frac{h}{h-1}}\right)^{h-1}\left(\left(\tau Np^{\frac{h-1}{h-2}}\right)^{h-2}+1\right)+o(1)\] \[=\exp\left(\frac{f(N)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)f(N)^{h-1}\left(\left(f(N)p^{\frac{h-1}{h-2}-\frac{h}{h-1}}\right)^{h-2}+1\right)+o(1)\] \[\leq\exp\left(\frac{f(N)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)f(N)^{2h-3}\left(p^{\frac{1}{(h-1)(h-2)}}+p\right)+o(1)\] \[\lesssim\exp\left(\frac{f(N)^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)f(N)^{2h-3}p^{\frac{1}{(h-1)(h-2)}}+o(1)\lll 1.\]
Here, (6.13) follows since uniformly over \(k\in\left[g(N),\tau N\right]\),
\[\left|\lambda_{k}N^{h-1}p^{h}\left(1-\frac{\mu_{k}}{\lambda_{k}N^{h-1}p^{h}} \right)\right|\leq f(N)^{h-1}\delta(N)\ll 1. \tag{6.14}\]
This proves the lemma.
Equipped with Lemma 6.4, we are ready to compute the expectation that we seek.
**Proposition 6.5**.: _We have that_
\[\mathbb{E}\left[|L(A)^{c}|\right]\sim\mathbb{E}\left[\left|L(A)^{c}\cap\left[(-d+\tau)N,(s-\tau)N\right]^{c}\right|\right]\sim\frac{2\cdot\Gamma\left(\frac{1}{h-1}\right)\sqrt[h-1]{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p(N)^{\frac{h}{h-1}}}.\]
Proof.: The first asymptotic equivalence is simply a restating of (6.11). We prove the latter. We begin with the left fringe. Using (5.2), we write
\[\mathbb{E}\left[\left|L(A)^{c}\cap\left[-dN,(-d+\tau)N\right] \right|\right] \tag{6.15}\] \[=o\left((1/p)^{\frac{h}{h-1}}\right)+\mathbb{E}\left[\left|L(A)^ {c}\cap\left[-dN+g(N),(-d+\tau)N\right]\right|\right].\]
We express the latter term as
\[\mathbb{E}\left[\left|L(A)^{c}\cap\left[-dN+g(N),(-d+\tau)N\right]\right|\right]=\sum_{k=g(N)}^{\tau N}\Pr\left[-dN+k\notin L(A)\right] \tag{6.16}\]
\[=\sum_{k=g(N)}^{\tau N}\left(\Pr\left[-dN+k\notin L(A)\right]-e^{-\lambda_{k}N^{h-1}p^{h}}+e^{-\lambda_{k}N^{h-1}p^{h}}\right)\sim\sum_{k=g(N)}^{\tau N}e^{-\lambda_{k}N^{h-1}p^{h}}=\sum_{k=g(N)}^{\tau N}\exp\left(-\frac{p^{h}k^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right) \tag{6.17}\]
\[\sim\int_{g(N)}^{\tau N}\exp\left(-\frac{p^{h}x^{h-1}}{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}\right)dx=\sqrt[h-1]{\frac{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}{p^{h}}}\int_{g(N)\sqrt[h-1]{p^{h}/((h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|)}}^{\tau N\sqrt[h-1]{p^{h}/((h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|)}}e^{-x^{h-1}}\,dx\sim\frac{\sqrt[h-1]{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{p^{\frac{h}{h-1}}}\int_{0}^{\infty}e^{-x^{h-1}}\,dx \tag{6.18}\]
\[=\frac{\sqrt[h-1]{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p^{\frac{h}{h-1}}}\int_{0}^{\infty}x^{-\frac{h-2}{h-1}}e^{-x}\,dx=\frac{\Gamma\left(\frac{1}{h-1}\right)\sqrt[h-1]{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p^{\frac{h}{h-1}}}. \tag{6.19}\]
We used Lemma 6.4 for the asymptotic equivalence in (6.17), and the asymptotic equivalence in (6.18) follows from the dominated convergence theorem, since (5.2) and (6.2) imply that the lower and upper limits of the integral respectively tend to zero and infinity. It follows from (6.15) and (6.19) that
\[\mathbb{E}\left[\Big{|}L(A)^{c}\cap\left[-dN,(-d+\tau)N\right]\Big{|}\right] \sim\frac{\Gamma\left(\frac{1}{h-1}\right)\sqrt[h-1]{(h-1)!\cdot\theta_{L} \cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p^{\frac{h}{h-1}}}.\]
We now handle the right fringe. Proceeding as in (6.15) to obtain (6.20), and invoking (5.3),
\[\mathbb{E}\left[\Big{|}L(A)^{c}\cap\left[(s-\tau)N,sN\right]\Big{|}\right]=o\left((1/p)^{\frac{h}{h-1}}\right)+\sum_{k=g(N)}^{\tau N}\Pr\left[sN-k\notin L(A)\right] \tag{6.20}\]
\[=o\left((1/p)^{\frac{h}{h-1}}\right)+\sum_{k=g(N)}^{\tau N}\Pr\left[-sN+k\notin\tilde{L}(A)\right]\sim\frac{\Gamma\left(\frac{1}{h-1}\right)\sqrt[h-1]{(h-1)!\cdot\theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p^{\frac{h}{h-1}}}, \tag{6.21}\]
where we have applied (6.19) to \(\tilde{L}(A)\). Using (6.19) and (6.21), we conclude that
\[\mathbb{E}\left[\Big{|}L(A)^{c}\cap\left[(-d+\tau)N,(s-\tau)N\right]^{c}\Big{|} \right]\sim\frac{2\cdot\Gamma\left(\frac{1}{h-1}\right)\sqrt[h-1]{(h-1)!\cdot \theta_{L}\cdot\prod_{i=1}^{h}|u_{i}|}}{(h-1)\cdot p^{\frac{h}{h-1}}},\]
which was the desired result.
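As a sanity check, the asymptotic expectation can be compared against simulation. The sketch below is an illustration only, for the symmetric form \(L(a_{1},a_{2},a_{3})=a_{1}+a_{2}+a_{3}\), i.e. \(L(A)=A+A+A\) with \(h=3\); the value \(\theta_{L}=3!=6\) used for this fully symmetric form is an assumption of the sketch, not a computation from the text, and at moderate sizes only rough agreement should be expected.

```python
import math
import random

# Illustration only: empirical E[|L(A)^c|] vs. Proposition 6.5 for
# L(A) = A + A + A (h = 3).  We assume theta_L = 3! = 6 for this fully
# symmetric form and prod |u_i| = 1.
random.seed(1)
h, theta_L, prod_u = 3, math.factorial(3), 1
N, p, trials = 3000, 0.015, 20

predicted = (2 * math.gamma(1 / (h - 1))
             * (math.factorial(h - 1) * theta_L * prod_u) ** (1 / (h - 1))
             / ((h - 1) * p ** (h / (h - 1))))

total = 0
for _ in range(trials):
    A = [a for a in range(N + 1) if random.random() < p]
    sums = {a1 + a2 + a3 for a1 in A for a2 in A for a3 in A}
    total += sum(1 for k in range(3 * N + 1) if k not in sums)

print(total / trials, predicted)  # same order of magnitude
```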
### Concentration
We now show that the random variable \(|L(A)^{c}|\) is strongly concentrated about its expectation. Together with Proposition 6.5, this will prove Theorem 1.1(iii).
**Proposition 6.6**.: _If (6.1) holds, then \(|L(A)^{c}|\sim\mathbb{E}\left[|L(A)^{c}|\right]\)._
Proof.: We proceed exactly as in the proof of Theorem 5.3, but with the following modifications. First, we show that \(\Delta_{n,1}(A)\) is small with high probability. The expectation of \(\Delta_{n,1}(A)\) satisfies
\[\mathbb{E}\left[\Delta_{n,1}(A)\right]=\mathbb{E}\left[\left|\left(L\left(A\cup\{n\}\right)\setminus L\left(A\setminus\{n\}\right)\right)\cap(\mathcal{I}_{n}\cap[-dN+\tilde{g},sN-\tilde{g}])\right|\right]=\sum_{k\in\mathcal{I}_{n}\cap[-dN+\tilde{g},sN-\tilde{g}]}\Pr\left[\left(k\in L\left(A\cup\{n\}\right)\right)\wedge\left(k\notin L\left(A\setminus\{n\}\right)\right)\right]\]
\[\leq\sum_{k=\max\left\{\tilde{g},\min\{n,N-n\}\right\}}^{mN-\tilde{g}}\Pr\left[-dN+k\notin L\left(A\setminus\{n\}\right)\right]\leq\sum_{k=\max\left\{\tilde{g},\min\{n,N-n\}\right\}}^{mN-\tilde{g}}e^{-Ckp} \tag{6.22}\]
\[\lesssim(1/p)\exp\left(-Cp\left(\max\left\{\tilde{g},\min\{n,N-n\}\right\}-1\right)\right)\lesssim(1/p)\exp\left(-(C/2)p\max\left\{\tilde{g},\min\{n,N-n\}\right\}\right). \tag{6.23}\]
We appealed to Remark 6.3 to observe the latter inequality in (6.22) (we may do so, since it follows from (5.15) and (5.2) that \(\tilde{g}\gg(1/p)^{\frac{h}{h-1}}\)), where \(C>0\) is an appropriate constant. The former inequality in (6.23) follows from computations entirely analogous to those performed in the proof of Lemma 6.2. Therefore, we have from (6.23) and Markov's inequality that
\[\Pr\left(\Delta_{n,1}(A)\gtrsim(1/p)\exp\left(-(C/4)p\max\left\{ \tilde{g},\min\{n,N-n\}\right\}\right)\right)\] \[\lesssim\frac{(1/p)\exp\left(-(C/2)p\max\left\{\tilde{g},\min\{n, N-n\}\right\}\right)}{(1/p)\exp\left(-(C/4)p\max\left\{\tilde{g},\min\{n,N-n\} \right\}\right)}\lesssim\exp\left(-(C/4)p\max\left\{\tilde{g},\min\{n,N-n\} \right\}\right). \tag{6.24}\]
By a union bound, it now follows from (6.24) that
\[\Pr\left(\Delta_{n,1}(A)\geq\exp\left(-(C/4)p\max\left\{\tilde{g},\min\{n,N-n\}\right\}\right)\text{ for some }n\in I_{N}\right)\] \[\lesssim\sum_{n=0}^{N}\exp\left(-(C/4)p\max\left\{\tilde{g},\min \{n,N-n\}\right\}\right)\] \[=\sum_{n=0}^{\tilde{g}}\exp\left(-\frac{Cp\tilde{g}}{4}\right)+ \sum_{n=N-\tilde{g}}^{N}\exp\left(-\frac{Cp\tilde{g}}{4}\right)+\sum_{n=\tilde {g}+1}^{N-\tilde{g}-1}\exp\left(-\frac{Cpn}{2}\right)\] \[\lesssim\frac{\log(1/p)}{p^{\frac{h}{h-1}}}\exp\left(-\frac{C \log(1/p)}{4p^{\frac{1}{h-1}}}\right)+\int_{\tilde{g}}^{\infty}\exp\left(- \frac{Cpx}{2}\right)\ dx\] \[\lesssim\left((1/p)^{\frac{1}{h-1}}\log(1/p)\right)^{h}\exp\left( -(1/p)^{\frac{1}{h-1}}(C/4)\log(1/p)\right)+(1/p)\exp\left(-(1/p)^{\frac{1}{h -1}}(C/4)\log(1/p)\right)\] \[\leq o(1)+\left((1/p)^{\frac{1}{h-1}}\log(1/p)\right)^{h-1}\exp \left(-(1/p)^{\frac{1}{h-1}}(C/2)\log(1/p)\right)\ll 1.\]
So with probability \(1-o(1)\),
\[\Delta_{n,1}(A) \leq\exp\left(-(C/4)p\max\left\{\tilde{g},\min\{n,N-n\}\right\}\right)\] \[\leq\exp\left(-(1/p)^{\frac{1}{h-1}}(C/4)\log(1/p)\right)\ll 1 \text{ for all }n\in I_{N}. \tag{6.25}\]
Furthermore, on this event, which occurs with probability \(1-o(1)\), it holds that
\[\begin{split}& p\sum_{n=0}^{N}\big{(}\Delta_{n,1}(A)\big{)}^{2}\\ &\leq p\left[\sum_{n=0}^{\tilde{g}}\exp\left(-\frac{C\log(1/p)}{2p^ {\frac{1}{h-1}}}\right)+\sum_{n=N-\tilde{g}}^{N}\exp\left(-\frac{C\log(1/p)}{2p ^{\frac{1}{h-1}}}\right)+\sum_{n=\tilde{g}+1}^{N-\tilde{g}}\exp\left(-Cpn \right)\right]\\ &\lesssim p\tilde{g}\exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1 }}}\right)+p\cdot\frac{1}{p}\exp\left(-\frac{C\log(1/p)}{p^{\frac{1}{h-1}}} \right)\\ &=p\tilde{g}\exp\left(-\frac{C\log(1/p)}{2p^{\frac{1}{h-1}}} \right)+o(1)\lesssim\frac{\log(1/p)}{p^{\frac{1}{h-1}}}\exp\left(-\frac{C\log (1/p)}{2p^{\frac{1}{h-1}}}\right)+o(1)\lll 1.\end{split} \tag{6.26}\]
Finally, if we assume (6.1), then (5.29) and Lemma 6.1 together imply that it suffices to show (5.30). With these adjustments, tracing the proof of Theorem 5.3 yields the desired result.
This completes the proof of Theorem 1.1.
## 7. Limiting Poisson Behavior
The methods we used to prove Theorem 1.1 lend themselves quickly to another phase transition concerning the asymptotic behavior of certain locally dependent Bernoulli processes of interest, which is captured in Theorem 1.4. In the following computations, we assume that \(k\in\left[0,mN/2\right]\). As remarked in Section 3, we may extend these results to \(k\in\left[mN/2,mN\right]\) by appealing to (3.1). We let \(F:\mathbb{N}\to\mathbb{R}_{\geq 0}\) be a function satisfying
\[p^{\frac{2}{2h-3}}\lll F(N)\lll 1. \tag{7.1}\]
We begin by proving an upper bound for the total variation distance between the processes \(\mathscr{X}_{k}\) and \(\mathscr{Y}_{k}\). By invoking Theorem 2.6 and substituting results from Section 3, entirely straightforward manipulations show that if \(p(N)\lll N^{-\frac{h-2}{h-1}}\) (this assumption is needed only for the latter case), then, noting that (7.1) implies \(F(N)(1/p)^{\frac{2h-1}{2h-3}}\ggg 1/p\),
\[d_{\mathrm{TV}}\left(\mathscr{X}_{k},\mathscr{Y}_{k}\right)\leq\min\{1,\mu_{k}^{-1}\}\left(b_{1}(k)+b_{2}(k)\right)\lesssim\begin{cases}\sum_{\ell=h}^{2h-1}k^{\ell-2}p^{\ell}+\sum_{\ell=2}^{h-1}k^{\ell-1}p^{\ell}&k\leq F(N)(1/p)^{\frac{2h-1}{2h-3}}\\ \frac{k^{2h-3}p^{2h-1}+k^{h-2}p^{h-1}}{k^{h-1}p^{h}}&k>F(N)(1/p)^{\frac{2h-1}{2h-3}}\end{cases}\lll 1. \tag{7.2}\]
This proves Theorem 1.4(i). We now turn to proving a lower bound for the total variation distance, which will show that the regime of \(p(N)\) for which we established the convergence of \(\mathscr{X}_{k}\) to a Poisson process for all \(k\in[0,mN]\), in the sense of (7.2), is sharp. In what follows, we assume that
\[p(N)\geq cN^{-\frac{h-2}{h-1}} \tag{7.3}\]
for some constant \(c>0\). We fix another constant \(C>0\), and we study values \(k\in\left[CN,mN/2\right]\), for which we will be able to obtain a meaningful lower bound. For bookkeeping purposes, we record
\[N\gtrsim(1/p)^{\frac{h-1}{h-2}}, k\asymp N. \tag{7.4}\]
To derive such a lower bound on the total variation distance, we invoke Theorem 2.7 to bound \(d_{\mathrm{TV}}\left(W_{k},\mathrm{Pois}(\mu_{k})\right)\) from below. First, observe that the collection of random variables \(\{X_{\Lambda}:\Lambda\in\mathscr{D}_{k}\}\) is positively related. Indeed, for \(\Lambda,\Lambda^{\prime}\in\mathscr{D}_{k}\), let \(Y_{\Lambda^{\prime}\Lambda}\) be the Bernoulli random variable
corresponding to the event \(S(\Lambda^{\prime})\setminus S(\Lambda)\subseteq A\); if \(\Lambda=\Lambda^{\prime}\), then \(Y_{\Lambda^{\prime}\Lambda}\equiv 1\). It is not hard to see that
\[\mathcal{L}\left(Y_{\Lambda^{\prime}\Lambda};\Lambda^{\prime}\in\mathscr{D}_{k} \right)=\mathcal{L}\left(X_{\Lambda^{\prime}};\Lambda^{\prime}\in\mathscr{D}_{ k}\ \mid X_{\Lambda}=1\right),\qquad Y_{\Lambda^{\prime}\Lambda}\geq X_{\Lambda^{\prime}}\text{ for all }\Lambda^{\prime}\in\mathscr{D}_{k}\setminus\{\Lambda\}.\]
We define the following. The statements for \(\epsilon_{k}\) hold uniformly over \(k\in\left[CN,mN/2\right]\) and follow quickly from computations in Section 3, notably using (3.5) to prove the \(\gtrsim\) implicit in the \(\asymp\):
\[\gamma_{k}:=\frac{\mathbb{E}\left[(W_{k}-\mathbb{E}[W_{k}])^{4}\right]}{\mu_ {k}}-1;\qquad\qquad\epsilon_{k}:=\frac{\operatorname{Var}(W_{k})}{\mu_{k}}-1 \asymp N^{h-2}p^{h-1}=\Omega(1).\]
Thus, we may invoke Theorem 2.7. Appealing to (7.4), we proceed with the relevant computations.
\[\left(\frac{\gamma_{k}}{\mu_{k}\epsilon_{k}}\right)_{+} \leq\frac{\gamma_{k}+1}{\mu_{k}\epsilon_{k}}\lesssim\frac{N^{4h-6} p^{4h-2}+N^{3h-4}p^{3h-1}+N^{2h-3}p^{2h-1}+N^{h-1}p^{h}}{N^{3h-4}p^{3h-1}}\] \[=N^{h-2}p^{h-1}+1+\frac{1}{N^{h-1}p^{h}}+\frac{1}{N^{2h-3}p^{2h-1 }}\asymp N^{h-2}p^{h-1}+1\asymp\epsilon_{k};\] \[\frac{\sum_{\Lambda\in\mathscr{D}_{k}}p_{\Lambda}^{2}}{\mu_{k}^{ 2}\epsilon_{k}} \leq\frac{\sum_{\Lambda\in\mathscr{D}_{k}}p_{\Lambda}}{\mu_{k}^{2 }\epsilon_{k}}=\frac{\mu_{k}}{\mu_{k}^{2}\epsilon_{k}}=\frac{1}{\mu_{k} \epsilon_{k}}\asymp\frac{1}{N^{2h-3}p^{2h-1}}\lll 1;\] \[\frac{3\operatorname{Var}(W_{k})\max_{\Lambda\in\mathscr{D}_{k}}p _{\Lambda}}{\mu_{k}\epsilon_{k}} \lesssim\frac{3pN^{2h-3}p^{2h-1}}{N^{2h-3}p^{2h-1}}=3p\lll 1.\]
Therefore, it follows that
\[\psi_{k}:=\left(\frac{\gamma_{k}}{\mu_{k}\epsilon_{k}}\right)_{+}+3\epsilon_{ k}+\frac{\sum_{\Lambda\in\mathscr{D}_{k}}p_{\Lambda}^{2}}{\mu_{k}^{2}\epsilon_{k}} +\frac{3\operatorname{Var}(W_{k})\max_{\Lambda\in\mathscr{D}_{k}}p_{\Lambda}} {\mu_{k}\epsilon_{k}}\asymp\epsilon_{k}.\]
Altogether, we conclude that uniformly over \(k\in\left[CN,mN/2\right],\)
\[d_{\operatorname{TV}}\left(\mathscr{X}_{k},\mathscr{Y}_{k}\right)\geq d_{ \operatorname{TV}}\left(W_{k},\operatorname{Pois}(\mu_{k})\right)\geq\frac{ \epsilon_{k}}{11+3\psi_{k}}\geq\frac{\epsilon_{k}}{11+O(1)\epsilon_{k}}= \Omega(1),\]
which proves Theorem 1.4(ii).
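The lower bound, too, is visible in simulation. The following sketch is an illustration only, with ad hoc parameters of ours: for \(h=3\), \(L(a_{1},a_{2},a_{3})=a_{1}+a_{2}+a_{3}\), \(k\asymp N\), and \(p\) of order \(N^{-1/2}\), it estimates the total variation distance between the empirical law of \(W_{k}\) and a Poisson law of the same mean, which stays bounded away from zero.

```python
import math
import random
from collections import Counter

# Illustration only: for h = 3 and k of order N, with p of order
# N^{-1/2}, the representation count W_k is detectably non-Poisson.
# Here W_k counts 3-element subsets {a1 < a2 < a3} of A summing to k.
random.seed(2)
N, k, trials = 400, 450, 2000
p = 1.5 / math.sqrt(N)

def w_k(A):
    S = set(A)
    count = 0
    for i, a1 in enumerate(A):
        for a2 in A[i + 1:]:
            a3 = k - a1 - a2
            if a3 in S and a3 > a2:  # enforces a1 < a2 < a3
                count += 1
    return count

samples = [w_k([a for a in range(N + 1) if random.random() < p])
           for _ in range(trials)]
mu = sum(samples) / trials
emp = Counter(samples)
tv = 0.5 * sum(abs(emp.get(j, 0) / trials
                   - math.exp(-mu) * mu ** j / math.factorial(j))
               for j in range(max(samples) + 25))
print(mu, tv)  # tv remains of constant order as N grows
```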
## 8. Future Directions
### Further Generalizations
Our work
1. presents and utilizes the Stein-Chen method as a powerful tool for the study of sum/difference sets and some of their natural generalizations, as it yields tight estimates on the probability that certain values are missing from said sets;
2. provides sharp statements concerning when we can control the dependence resulting from several overlapping ways of representing the same sum.
The second author has been involved in several projects investigating MSTD sets, some of which were mentioned in the introduction. These projects took place in a variety of settings, ranging from working with different groups (e.g., we may generalize our present setting by assuming that we draw elements independently from subsets \(G_{N}\) of a group \(G\); here, we took \(G_{N}=I_{N}\) and \(G=\mathbb{Z}\)) to working with different binary operations. In some of these settings, the desired outcome was to show that a positive proportion of sets were MSTD or to show that asymptotically almost all sets failed to be MSTD, and the main difficulty arose from dependencies resulting from overlapping representations. It may thus be fruitful to revisit these works and see if our techniques may be quickly adapted to yield improvements on known results.
A slightly less immediate generalization is to introduce dependencies into the binomial model itself. Specifically, we might assume that \(A\) is drawn from a probability measure \(\mathbb{P}_{N}\) on \(\left(\Omega_{N},2^{\Omega_{N}}\right)\) for which the marginal laws are all \(\operatorname{Ber}\left(p(N)\right)\), but the joint law need not be the product measure. Natural generalizations of Theorems 1.1 and 1.4 for this setting would be nice.
### Global Process Behavior
Theorem 1.4 provides a sharp threshold for when the total variation distance between \(\mathscr{X}_{k}\) and \(\mathscr{Y}_{k}\) vanishes uniformly across all \(k\in[0,mN]\). In this sense, we have shown that the dependent Bernoulli process corresponding to the number of ways a sum is generated in \(L(A)\) converges to a Poisson point process. It might be worth asking if a nice point process limit can be obtained across the entire set \(L(A)\), i.e., for the dependent Bernoulli process \((X_{k})_{k\in[0,mN]}\), where \(X_{k}\) corresponds to the event \(k\in L(A)\).
## Acknowledgments
This research was supported, in part, by NSF Grants DMS1947438 and DMS2241623. The first author was introduced to this topic while he was a student at the 2022 SMALL REU at Williams College, and he thanks Professor Eyvindur Ari Palsson and the second author for giving him a chance to participate in the program. We thank the University of Pennsylvania and Williams College for their support over the course of this project. Parts of this work were carried out when the first author was attending the 2023 Princeton Machine Learning Theory Summer School, and he thanks the organizers for providing a hospitable environment to work in. The first author also thanks Professor Robin Pemantle for a productive conversation on the problem.
## Appendix A Asymptotic Notation
Let \(X\) be a real-valued random variable depending on some positive integer parameter \(N\), and let \(f(N)\) be some real-valued function. We write \(X\sim f(N)\) to denote the fact that, for any \(\epsilon_{1},\epsilon_{2}>0\), it holds for all sufficiently large values of \(N\) that
\[\Pr\big{(}X\notin[(1-\epsilon_{1})f(N),(1+\epsilon_{1})f(N)]\big{)}<\epsilon_{ 2}.\]
We will also use this notation for deterministic functions \(f(N),g(N)\): here, we write \(f(N)\sim g(N)\) if \(\lim_{N\to\infty}f(N)/g(N)=1\). We write \(f(N)\gtrsim g(N)\) to indicate that there exists a constant \(C>0\) for which \(f(N)\geq Cg(N)\) for all sufficiently large \(N\), and \(f(N)\lesssim g(N)\) to indicate that there exists a constant \(C>0\) for which \(f(N)\leq Cg(N)\) for all sufficiently large \(N\). By \(f(N)=O(g(N))\), we mean that there exists a constant \(C>0\) for which \(f(N)\leq Cg(N)\) for all \(N\) large. We write \(f(N)=\Omega(g(N))\) if \(g(N)=O(f(N))\). By \(f(N)=\Theta(g(N))\), we mean that both \(f(N)=O(g(N))\) and \(g(N)=O(f(N))\) hold. Finally, if \(\lim_{N\to\infty}f(N)/g(N)=0\) then we write \(f(N)=o(g(N))\), which is equivalent to \(f(N)\lll g(N)\). We write \(f(N)=\omega(g(N))\) if \(g(N)=o(f(N))\), which is equivalent to \(f(N)\ggg g(N)\).
## Appendix B Proofs for Section 2
We work towards a proof of Lemma 2.1. We begin with the following standard definitions.
**Definition B.1**.: A _partition_ of a positive integer \(k\) is a finite nonincreasing sequence of positive integers \(\lambda_{1},\ldots,\lambda_{h}\) such that \(\sum_{i=1}^{h}\lambda_{i}=k\). A _weak composition_ of \(k\) into \(h\) parts is an \(h\)-tuple \((\lambda_{1},\ldots,\lambda_{h})\) of nonnegative integers such that \(\sum_{i=1}^{h}\lambda_{i}=k\).
**Definition B.2**.: A sequence \(a_{0},a_{1},\ldots,a_{n}\) of real numbers is _unimodal_ if there exists an index \(0\leq j\leq n\) for which it holds that
\[a_{0}\leq a_{1}\leq\cdots\leq a_{j}\geq a_{j+1}\geq\cdots\geq a_{n}.\]
The sequence is _symmetric_ if \(a_{i}=a_{n-i}\) for all \(0\leq i\leq n\).
For later use, observe that a sequence \(a_{0},a_{1},\ldots,a_{n}\) that is both unimodal and symmetric must have that \(a_{\lfloor n/2\rfloor}=a_{\lceil n/2\rceil}\), and that this value must be the maximum of the sequence.
**Lemma B.3**.: _Fix a function \(g:\mathbb{N}\to\mathbb{N}\) satisfying \(1\lll g(N)\). For \(k\in I_{hN}\), let \(p_{h}(k,N)\) denote the number of partitions of \(k\) into at most \(h\) parts, each at most \(N\). Uniformly over \(k\in\left[g(N),hN-g(N)\right]\),_
(B.1) \[p_{h}\left(k,N\right)\sim\frac{\mathsf{IH}_{h}\left(k/N\right)}{h!}N^{h-1}.\]
Proof.: We handle the cases \(k\notin\left[N,(h-1)N\right]\) and \(k\in\left[N,(h-1)N\right]\) separately. It is well known that the sequence
(B.2) \[p_{h}(0,N),\ p_{h}(1,N),\ \ldots,\ p_{h}(hN-1,N),\ p_{h}(hN,N)\]
has the Gaussian binomial coefficient \(\binom{N+h}{h}_{q}\) as its generating function (e.g., see [1, Theorem 3.1]), and that this sequence is both unimodal and symmetric (e.g., see [1, 20]). Appealing to [13, Theorem 2.4] and [16, Theorem 3.2], we observe the following, where the first statement holds uniformly over all integers \(k\geq g(N)\) and \(\alpha\) is a nonnegative real number:
(B.3) \[[q^{k}]\binom{k+h}{h}_{q}\sim\frac{k^{h-1}}{(h-1)!h!}=\frac{\mathsf{IH}_{h}(k/ N)}{h!}N^{h-1},\ \ \ \ \ \left[q^{\lfloor\alpha N\rfloor}\right]\binom{N+h}{h}_{q}\sim\frac{ \mathsf{IH}_{h}\left(\alpha\right)}{h!}N^{h-1}.\]
Since any partition of an integer \(k\) satisfying \(g(N)\leq k\leq N\) certainly has parts at most \(k\), it follows for such values of \(k\) that \(p_{h}(k,N)\) is simply the number of partitions of \(k\) into \(h\) parts. Thus, the first statement of (B.3) implies that (B.1) holds uniformly over all integers \(k\) satisfying \(g(N)\leq k\leq N\), from which the symmetry of the sequence (B.2) implies that (B.1) holds uniformly over all \(k\in[N,(h-1)N]^{c}\cap\left[g(N),hN-g(N)\right]\).
We now turn to the case where \(k\in\left[N,(h-1)N\right]\). We observe that \(\mathsf{IH}_{h}(x)\) is uniformly continuous on \(x\in[1,h-1]\) and positive on \(x\in(0,h)\) (the positivity can be deduced inductively using [16, Remark 3.3], for example). It thus follows that \(\mathsf{IH}_{h}(k/N)\) has a fixed positive minimum on \(k\in\left[N,(h-1)N\right]\). From these observations and the latter statement of (B.3), it is straightforward to show using a simple continuity argument11 that uniformly over \(k\in\left[N,(h-1)N\right]\),
Footnote 11: Briefly, appeal to the latter statement of (B.3) at finitely many points \(\alpha\in[1,h-1]\), including \(h/2\). Use the uniform continuity of \(\mathsf{IH}_{h}(x)\) on \(x\in[1,h-1]\) so that values of \(\mathsf{IH}_{h}(\alpha)\) between consecutive points \(\alpha\) are small compared to \(\min_{x\in[1,h-1]}\mathsf{IH}_{h}(x)\). The unimodality and symmetry of (B.2) now imply that we can estimate \([q^{k}]\binom{N+h}{h}_{q}\) by the values of \(\frac{\mathsf{IH}_{h}\left(\alpha\right)}{h!}N^{h-1}\) for those points \(\alpha\) above and below \(k/N\) in a manner that is uniform over \(k\in\left[N,(h-1)N\right]\).
(B.4) \[[q^{k}]\binom{N+h}{h}_{q}\sim\frac{\mathsf{IH}_{h}\left(k/N\right)}{h!}N^{h-1}.\]
Combining (B.4) with the result for \(k\notin\left[N,(h-1)N\right]\) proves the lemma.
**Remark B.4**.: For \(k\leq N\), the expression in Lemma B.3 can be written using (1.14) as
\[\frac{\mathsf{IH}_{h}\left(k/N\right)}{h!}N^{h-1}=\frac{\left(k/N\right)^{h-1 }}{(h-1)!h!}N^{h-1}=\frac{k^{h-1}}{(h-1)!h!}.\]
This is consistent with classical results on the number of partitions of a positive integer with a bounded number of parts (e.g., see [1, Theorem 4.1]).
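A direct count makes the statement concrete. The sketch below (an illustration only) computes \(p_{h}(k,N)\) by recursing on the largest part and compares it with \(k^{h-1}/((h-1)!\,h!)\) in the range \(k\leq N\), where the expression from Remark B.4 applies.

```python
import math
from functools import lru_cache

# Illustration only: exact p_h(k, N) vs. the asymptotic of Lemma B.3
# for k <= N, where Remark B.4 gives k^(h-1) / ((h-1)! h!).
@lru_cache(maxsize=None)
def p_h(k, h, N):
    """Partitions of k into at most h parts, each at most N."""
    if k == 0:
        return 1
    if h == 0:
        return 0
    # recurse on the largest part j; the remaining parts are at most j
    return sum(p_h(k - j, h - 1, min(j, N)) for j in range(1, min(k, N) + 1))

h, N = 4, 150
for k in [30, 60, 90, 120, 150]:
    approx = k ** (h - 1) / (math.factorial(h - 1) * math.factorial(h))
    print(k, p_h(k, h, N), round(approx, 1))
```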
**Lemma B.5**.: _Fix a function \(g:\mathbb{N}\to\mathbb{N}\) satisfying \(1\lll g(N)\lll N\), jointly coprime nonzero integers \(u_{1},\ldots,u_{h}\), and integers \(b_{1},\ldots,b_{h}\). Let \(u=(u_{1},\ldots,u_{h})\) and \(b=(b_{1},\ldots,b_{h})\). For \(k\in I_{hN}\), let \(c_{h,u,b}(k,N)\) denote the number of weak compositions \((a_{1},\ldots,a_{h})\) of \(k\) with \(h\) parts, each of which is at most \(N\), that satisfy_
(B.5) \[a_{i}\equiv b_{i}\pmod{u_{i}}\text{ for all }i\in[h].\]
_Uniformly over \(k\in\big{[}g(N),hN-g(N)\big{]}\),_
(B.6) \[c_{h,u,b}(k,N)\sim\frac{\mathsf{IH}_{h}(k/N)}{\prod_{i=1}^{h}|u_{i}|}N^{h-1}.\]
_Additionally, uniformly over \(k_{1}<g(N)\) and \(k_{2}>hN-g(N)\),_
(B.7) \[\mathsf{IH}_{h}(k_{1}/N)\lll\mathsf{IH}_{h}(1+k_{1}/N),\qquad\mathsf{IH}_{h}(k_{2}/N)\lll\mathsf{IH}_{h}(k_{2}/N-1).\]
Proof.: Let \(c_{h}(k,N)\) denote the number of weak compositions \((a_{1},\ldots,a_{h})\) of \(k\) with \(h\) parts, each of which is at most \(N\). It follows from [22, Theorem 3] that the sequence
(B.8) \[c_{h}(0,N),\ c_{h}(1,N),\ \ldots,\ c_{h}(hN,N)\]
is unimodal and symmetric. Additionally, the sequence12
Footnote 12: Of course, the sequence (B.9) is not unimodal in general. Take \(b_{i}=0\) and \(u_{i}=2\) for all \(i\in[h]\), for instance.
(B.9) \[c_{h,u,b}(0,N),\ c_{h,u,b}(1,N),\ \ldots,\ c_{h,u,b}(hN,N)\]
can easily be deduced to be symmetric from the observation that for \(k\in[0,hN]\),
(B.10) \[a_{1}+\cdots+a_{h}=k\iff(N-a_{1})+\cdots+(N-a_{h})=hN-k.\]
Lemma B.3 implies that uniformly over \(k\in\big{[}g(N),hN/2\big{]}\),
(B.11) \[c_{h}(k,N)\sim\mathsf{IH}_{h}(k/N)N^{h-1},\]
since it is easy to show that \(O(k^{h-2})\) partitions of \(k\) into at most \(h\) parts are such that their parts are not all distinct.13 The asymptotic equivalence (B.11) is now seen to hold uniformly over \(k\in\big{[}g(N),hN-g(N)\big{]}\) by the symmetry of (B.8).
Footnote 13: To derive (B.11) from here, treat the cases \(k\in\big{[}g(N),N\big{]}\) and \(k\in\big{[}N,hN/2\big{]}\) separately. Appeal to Remark B.4 for the former, and the existence of a fixed positive minimum for \(\mathsf{IH}_{h}(k/N)\) with \(k=O(N)\) for the latter.
We deviate from our usual practice of assuming a fixed \(h\geq 2\), and prove (B.6) by induction on \(h\geq 2\). We begin with the induction basis \(h=2\). For all \(k\in\big{[}g(N),2N-g(N)\big{]}\), the values of \(a_{1}\) yielding a (unique) weak composition \((a_{1},a_{2})\) of \(k\) with parts at most \(N\) comprise an interval with at least \(g=\omega(1)\) elements. It is not hard to show that uniformly over all such \(k\), \(\sim\frac{1}{|u_{1}u_{2}|}\) of them satisfy (B.5) (e.g., appeal to the result of [10, Exercise 8.1.4(b)]). Combined with (B.11), this establishes (B.6) for \(h=2\).
Now consider \(h\geq 3\). Let \(\tilde{g}:\mathbb{N}\to\mathbb{N}\) be a function satisfying \(1\lll\tilde{g}\lll g\). We can reformulate the condition that \((a_{1},\ldots,a_{h})\) is a weak composition of \(k\) with \(h\) parts via
(B.12) \[\sum_{i=1}^{h}a_{i}=k\iff\sum_{i=2}^{h}a_{i}=k-a_{1}.\]
It is easy to see that for all \(k\in\big{[}g(N),hN/2\big{]}\), the possible choices of \(a_{1}\) yielding the existence of \(a_{2},\ldots,a_{h}\) for which \((a_{1},\ldots,a_{h})\) is a weak composition of \(k\) with \(h\) parts, each at most \(N\), comprise an interval with at least \(g=\omega(1)\) elements. For such an integer \(k\), let \(\mathcal{I}_{k}\) denote this interval of possible choices for \(a_{1}\), but with both endpoints compressed by an additive margin of \(\tilde{g}\). So \(k-a_{1}\in\big{[}\tilde{g}(N),hN/2\big{]}\) for all \(a_{1}\in\mathcal{I}_{k}\). The following statements are to be understood as holding uniformly over \(k\in\big{[}g(N),hN/2\big{]}\). Since \(\tilde{g}\lll g\), \(\mathcal{I}_{k}\) has \(\omega(1)\) elements. Every14\(|u_{1}|^{\text{th}}\) element \(a_{1}\in\mathcal{I}_{k}\) satisfies (B.5) for \(a_{1}\): each such \(a_{1}\) yields \(c_{h-1}(k-a_{1},N)\) weak compositions of \(k\) with \(h\) parts, each at most \(N\). From (B.8), we observe the unimodality and symmetry of the sequence
Footnote 14: Technically, this is starting from one of the first \(|u_{1}|\) elements of \(\mathcal{I}_{k}\).
(B.13) \[c_{h-1}(\tilde{g}(N),N),\ldots,c_{h-1}(hN-\tilde{g}(N),N).\]
Using this observation alongside (B.11), it follows from a simple continuity argument that \(\sim\frac{1}{|u_{1}|}\) of the number of weak compositions \((a_{1},\ldots,a_{h})\) satisfying (B.12) for which \(a_{1}\in\mathcal{I}_{k}\) also satisfy (B.5) for \(a_{1}\). The induction hypothesis (which may be invoked since \(1\lll\tilde{g}\lll N\)) with (B.11) implies that uniformly over all \(a_{1}\in\mathcal{I}_{k}\), \(\sim\frac{1}{\prod_{i=2}^{h}|u_{i}|}\) of the weak compositions \((a_{2},\ldots,a_{h})\) of \(k-a_{1}\) with \(h-1\) parts at most \(N\) satisfy (B.5). Together with the preceding deduction, we conclude that \(\sim\frac{1}{\prod_{i=1}^{h}|u_{i}|}\) of the weak compositions \((a_{1},\ldots,a_{h})\) of \(k\) with \(h\) parts, each at most \(N\), and with \(a_{1}\in\mathcal{I}_{k}\) satisfy (B.5). Since \(\tilde{g}\lll g\leq k\), it follows that the number of weak compositions \((a_{1},\ldots,a_{h})\) of \(k\) with \(h\) parts, each at most \(N\), for which \(a_{1}\notin\mathcal{I}_{k}\) is \(o(k^{h-1})\). Altogether, with (B.11), we conclude that
(B.14) \[c_{h,u,b}(k,N)\sim\frac{\mathsf{IH}_{h}(k/N)}{\prod_{i=1}^{h}|u_{i}|}N^{h-1},\]
uniformly over \(k\in\big{[}g(N),hN/2\big{]}\).15 Since the sequence (B.9) is symmetric, we conclude that (B.6) holds uniformly over \(k\in\big{[}g(N),hN-g(N)\big{]}\), completing the induction.
Footnote 15: As was done to prove (B.11), (B.14) can be observed by treating \(k\in\big{[}g(N),N\big{]}\) and \(k\in\big{[}N,hN/2\big{]}\) separately.
Now, (B.7) can be seen to hold from an entirely elementary continuity argument.
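The congruence count of Lemma B.5 can likewise be checked directly for small parameters. The sketch below is an illustration only, with the ad hoc choices \(h=3\), \(u=(2,3,5)\) (jointly coprime), and \(b=(1,0,2)\); since \(k\leq N\) here, \(\mathsf{IH}_{3}(k/N)=(k/N)^{2}/2\).

```python
# Illustration only: brute-force c_{h,u,b}(k, N) for h = 3, u = (2, 3, 5),
# b = (1, 0, 2), against Lemma B.5's prediction
# IH_3(k/N) * N^2 / (|u_1| |u_2| |u_3|), with IH_3(x) = x^2 / 2 for x <= 1.
N, k = 120, 100
u, b = (2, 3, 5), (1, 0, 2)

count = 0
for a1 in range(b[0] % u[0], N + 1, u[0]):
    for a2 in range(b[1] % u[1], N + 1, u[1]):
        a3 = k - a1 - a2
        if 0 <= a3 <= N and a3 % u[2] == b[2] % u[2]:
            count += 1

approx = (k / N) ** 2 / 2 * N ** 2 / (u[0] * u[1] * u[2])
print(count, round(approx, 1))
```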
We are now ready to prove Lemma 2.1.
Proof of Lemma 2.1.: Fix \(k\in\big{[}0,mN/2\big{]}\). The fact that \(|\mathscr{D}_{k}|=|\mathscr{D}_{mN-k}|\) follows quickly from
(B.15) \[L\left(a_{1},\ldots,a_{h}\right)=-dN+k\iff L\left(N-a_{1},\ldots,N-a_{h} \right)=-dN+(mN-k)=sN-k,\]
which naturally lends itself to a bijection between \(\mathscr{D}_{k}\) and \(\mathscr{D}_{mN-k}\). We now prove (2.2) and (2.3) for \(|\mathscr{D}_{k}|\), after which we are done. Rearranging the LHS of (B.15) gives
(B.16) \[\sum_{i=1}^{n}u_{i}a_{i}+\sum_{i=n+1}^{h}|u_{i}|\left(N-a_{i}\right)=k.\]
For values \(a_{1},\ldots,a_{h}\in I_{N}\), (B.16) is equivalent to the statement that
(B.17) \[\big{(}u_{1}a_{1},\ldots,u_{n}a_{n},|u_{n+1}|(N-a_{n+1}),\ldots,|u_{h}|(N-a_{h })\big{)}\]
forms a weak composition of \(k\). For any \(k\in\big{[}0,g(N)\big{]}\), there are at most \(g(N)^{h-1}\lll N^{h-1}\) weak compositions with \(h\) parts of \(k\), so (2.2) follows. We now fix \(k\in\big{[}g(N),mN/2\big{]}\), and prove (2.3) by counting the number of \(h\)-tuples \((a_{1},\ldots,a_{h})\) in \(I_{N}^{h}\) for which (B.17) forms a weak composition of \(k\), then arguing that for essentially all such \(h\)-tuples, all \(h\) parts are distinct. Dividing our initial count by \(2^{\mathbf{1}\{L\text{ balanced, }k=mN/2\}}\cdot\theta_{L}\) to correct for overcounting (resulting from counting \(h\)-tuples rather than sets) now yields an asymptotic expression for \(|\mathscr{D}_{k}|\). All of this will be done using an argument which applies uniformly over \(k\in[g(N),mN/2]\).
We begin with the initial count. We partition the collection of \(h\)-tuples \((a_{1},\ldots,a_{h})\in I_{N}^{h}\) for which (B.17) is a weak composition of \(k\) into classes based on the \(h\)-tuple
(B.18) \[\big{(}\lfloor u_{1}a_{1}/N\rfloor,\ldots,\lfloor u_{n}a_{n}/N\rfloor,\lfloor u _{n+1}|(N-a_{n+1})/N\rfloor,\ldots,\lfloor u_{h}|(N-a_{h})/N\rfloor\big{)}\,.\]
Set \(t_{i}\in I_{|u_{i}|-1}\) for \(i\in[h]\), so that those \(h\)-tuples \((a_{1},\ldots,a_{h})\) in the class where (B.18) is \((t_{1},\ldots,t_{h})\) are exactly those which satisfy the inequalities
(B.19) \[0\leq u_{i}a_{i}-t_{i}N\leq N-1\text{ for }i\in[n],\qquad 0\leq|u_{i}|(N-a_{i})-t_{i }N\leq N-1\text{ for }i>n.\]
This suggests, for all such \(h\)-tuples \((a_{1},\ldots,a_{h})\), the reformulation of (B.16) via
(B.20) \[\sum_{i=1}^{n}\left(u_{i}a_{i}-t_{i}N\right)+\sum_{i=n+1}^{h}\left(|u_{i}| \left(N-a_{i}\right)-t_{i}N\right)=k-\left(\sum_{i=1}^{h}t_{i}\right)N,\]
from which we observe that the number of \(h\)-tuples \((a_{1},\ldots,a_{h})\) in the class where (B.18) is \((t_{1},\ldots,t_{h})\) is exactly the number of weak compositions of \(k-\left(\sum_{i=1}^{h}t_{i}\right)N\) with \(h\) parts, each at most \(N-1\) (due to (B.19)), and satisfying (B.5) for the values \(b_{1}=-t_{1}N,\ldots,b_{h}=-t_{h}N\). Invoking16 Lemma B.5 for all choices of \(t_{i}\in I_{|u_{i}|-1}\) for \(i\in[h]\) and summing yields that the corresponding number of \(h\)-tuples \((a_{1},\ldots,a_{h})\in I_{N}^{h}\) for which (B.17) is a weak composition of \(k\) is
Footnote 16: Appeal to (B.6) if \(k-\left(\sum_{i=1}^{h}t_{i}\right)N\in\left[g(N),hN-g(N)\right]\) and (B.7) if not. Of course, \((N-1)^{h-1}\sim N^{h-1}\).
(B.21) \[\sim\frac{\sum_{t_{1}=0}^{|u_{1}|-1}\cdots\sum_{t_{h}=0}^{|u_{h}|-1}\mathsf{IH}_{h}\left(k/N-\sum_{i=1}^{h}t_{i}\right)}{\prod_{i=1}^{h}|u_{i}|}N^{h-1}.\]
The number of these \(h\)-tuples \((a_{1},\ldots,a_{h})\) for which either the \(h\) parts are not distinct or \(t_{i}=|u_{i}|\) for some \(i\in[h]\) is \(O(k^{h-2})\). Combining this observation with (B.21), we conclude that (2.3) holds uniformly over \(k\in\left[g(N),mN/2\right]\).17
Footnote 17: As in the proof of Lemma B.5, treat \(k\in\left[g(N),N\right]\) and \(k\in\left[N,mN/2\right]\) separately to see this.
We now prove Lemmas 2.2 and 2.3.
Proof of Lemma 2.2.: Since \(\gcd(u_{1},\ldots,u_{h})=1\), there is a solution to the Diophantine equation
(B.22) \[L(a_{1},\ldots,a_{h})=-dN+k.\]
It is not hard to show (briefly, add appropriate multiples of \(\operatorname{lcm}(u_{1},\ldots,u_{h})\) to the entries of a particular solution of (B.22) to form new solutions of (B.22)) that uniformly over \(k\in\left[0,mN/2\right]\), we may form \(\gtrsim k^{h-1}\) solutions to (B.22) such that the \(h\) elements are distinct and lie in \(I_{N}\). Since there are \(O(1)\) such solutions in each equivalence class of \(\mathscr{D}_{k}\), this proves (1).
We prove (2) by deriving the lower bound for the number of ordered pairs of \(h\)-tuples
(B.23) \[\left((a_{1},\ldots,a_{h}),(b_{1},\ldots,b_{h})\right)\in(I_{N}^{h})^{2}\]
such that \(a_{1}=b_{1}\), the two \(h\)-tuples have a union of size \(2h-1\), and both map under \(L\) to \(-dN+k\). This is since \(O(1)\) such ordered pairs of \(h\)-tuples (B.23) correspond18 to an ordered pair \(\left(\Lambda(1),\Lambda(2)\right)\in\mathscr{D}_{k}^{2}\) such that \(|S(\Lambda(1))\cup S(\Lambda(2))|=2h-1\) and \(S(\Lambda(1))\cap S(\Lambda(2))\neq\emptyset\). It can be similarly shown that uniformly over \(k\in\left[0,mN/2\right]\), the number of such ordered pairs we may form is \(\gtrsim k^{2h-3}\) (e.g., fix \(a_{1}=b_{1}\) such that \(u_{1}a_{1}\leq k/2\) and the resulting Diophantine equation (B.22) has a solution, then choose the remaining \(2(h-1)\) values).
Footnote 18: Surjectively map such ordered pairs of \(h\)-tuples to the ordered pairs of their equivalence classes in \(\mathscr{D}_{k}\).
Proof of Lemma 2.3.: It suffices to derive these asymptotic upper bounds for the number of matrices
(B.24) \[\mathcal{A}=\left(a_{i,j}\right)_{1\leq i\leq t,1\leq j\leq h}\in I_{N}^{t \times h},\]
where the matrices \(\mathcal{A}\) that we consider are those such that there are exactly \(\ell\) distinct values amongst the entries of \(\mathcal{A}\), there do not exist \(i_{1},i_{2}\in[t]\) and \(\sigma\in\mathscr{R}_{k}\) such that
(B.25) \[(a_{i_{1},1},\ldots,a_{i_{1},h})=(a_{i_{2},\sigma(1)},\ldots,a_{i_{2},\sigma(h )}),\]
and that
(B.26) \[\{a_{i,1},\ldots,a_{i,h}\}\cap\left(\bigcup_{j\neq i}\{a_{j,1},\ldots,a_{j,h} \}\right)\neq\emptyset\text{ for all }i\in[t],\]
and that
(B.27) \[L\left(a_{i,1},\ldots,a_{i,h}\right)=-dN+k.\]
Indeed, there is an injective map19 from the \(t\)-tuples we would like to count to the collection of all such matrices \(\mathcal{A}\). Observe that for any such \(\mathcal{A}\), at least \(2\ell-th\) of the \(\ell\) values occur once in the entries of \(\mathcal{A}\), so the number of rows of \(\mathcal{A}\) with a value that is included exactly once in \(\mathcal{A}\) is at least
Footnote 19: Given \((\Lambda_{1},\ldots,\Lambda_{t})\), take representatives and make them the rows of the matrix \(\mathcal{A}\) that it is mapped to.
(B.28) \[\lceil(2\ell-th)/h\rceil=\lceil 2\ell/h\rceil-t.\]
We partition the collection of all such matrices \(\mathcal{A}\) based on the \(\ell\) subsets20 of \(\big{(}a_{i,j}\big{)}_{1\leq i\leq t,1\leq j\leq h}\) corresponding to the \(\ell\) distinct values. We break into cases based on the values of \(\ell\) and \(t\). The following discussion is to be understood as having fixed one of the blocks of the partition. We will occasionally assume that we are working with a specific fixed block of the partition.
Footnote 20: More precisely, the partition is based on the \(\ell\) subsets of \([t]\times[h]\) corresponding to these \(\ell\) values.
**Case 1: \(1\leq\ell\leq h-1\) or \(t=1\) and \(\ell=h\).** Let \(r=1\), and let \(v_{1}\) be one of the \(\ell\) distinct subsets.
**Case 2: \(t\geq 2\) and \(h\leq\ell\leq 2h-1\).** We break into two subcases.
1. If there exist values \(v_{1},v_{2}\) such that \(v_{2}\) is in a row that \(v_{1}\) is not in, then let \(r=2\).
2. If no such \(v_{1},v_{2}\) exist, then necessarily \(\ell=h\), and all \(\ell\) values lie in all \(t\) rows of \(\mathcal{A}\). The entries of any such matrix \(\mathcal{A}\) must satisfy, for some \(\sigma\notin\mathscr{R}_{k}\) (due to (B.25)), (B.29) \[L\left(a_{1,1},\ldots,a_{1,h}\right)=L(a_{1,\sigma(1)},\ldots,a_{1,\sigma(h)} )=-dN+k.\] If such a permutation \(\sigma\notin\mathscr{R}_{k}\) exists, it must be that the two equations \(u_{1}a_{1,1}+\cdots+u_{h}a_{1,h}=-dN+k\), \(u_{\sigma^{-1}(1)}a_{1,1}+\cdots+u_{\sigma^{-1}(h)}a_{1,h}=-dN+k\) are multiples of each other. By the definition of \(\mathscr{R}_{k}\), there must exist \(j\in[h]\) for which \(u_{j}\neq u_{\sigma^{-1}(j)}\), so it follows that \(-dN+k=0\). It is not hard to see that \(|u_{j}|=|u_{\sigma^{-1}(j)}|\) (since \([u_{1},\ldots,u_{h}]\neq[u_{\sigma(1)},\ldots,u_{\sigma(h)}]\) otherwise), so \(u_{i}=-u_{\sigma^{-1}(i)}\) for all \(i\in[h]\). Thus, \(L\) must also be balanced, and it therefore holds that \(\sigma\in\mathscr{R}_{k}\). We conclude that no such matrices \(\mathcal{A}\) satisfying the conditions of this subcase exist.
**Case 3: \(2h\leq\ell\leq 3h-1\).** We are safe to assume \(t\geq 3\). We break into two subcases.
1. If there exist values \(v_{1},v_{2},v_{3}\) such that \(v_{2}\) is in a row that \(v_{1}\) is not in and \(v_{3}\) is in a row that neither \(v_{1}\) nor \(v_{2}\) are in, then let \(r=3\). It is not hard to show that if there exists a value occurring in exactly one row of \(\mathcal{A}\), such values \(v_{1},v_{2},v_{3}\) exist. It follows from (B.28) that this holds for all \(\ell\) such that \(2h+1\leq\ell\leq 3h-1\), and for \(\ell=2h\) if \(t=3\).
2. Otherwise, it must hold that \(\ell=2h\), \(t=4\), every value occurs exactly twice in \(\mathcal{A}\), and these \(2h\) values are arranged in such a way that \(h\) values lie once in each of two rows of \(\mathcal{A}\), and the other \(h\) values lie once in each of the other two rows of \(\mathcal{A}\). Arguing as in Case 2(2), it follows that no such matrices \(\mathcal{A}\) satisfying the conditions of this subcase exist.
**Case 4: \(\ell\geq 3h\).** We are safe to assume \(t=4\). We break into two subcases.
1. If there exist values \(v_{1},\ldots,v_{4}\) such that for every \(j\in[4]\), \(v_{j}\) is in a row of \(\mathcal{A}\) that no \(v_{i}\) for \(i<j\) is in, then let \(r=4\). By (B.28), this can be done for all \(\ell\geq 3h+1\).
2. Otherwise, \(\ell=3h\). By (B.28), two rows of \(\mathcal{A}\) have values occurring exactly once in \(\mathcal{A}\). The remaining two rows of \(\mathcal{A}\) thus contain the same values. Arguing as in Case 2(2), it follows that no such matrices \(\mathcal{A}\) satisfying the conditions of this subcase exist.
In all cases where \(r\) was defined, we count the matrices \(\mathcal{A}\) by choosing the \(\ell-r\) values that are not \(v_{1},\ldots,v_{r}\), where we have \(O(k)\) choices for each value. Afterwards, (B.27) determines the values of these remaining \(r\) subsets. It is easy to check that in all of these cases, this gives the bounds given in the statement of the lemma. The lemma follows since there are \(O(1)\) blocks in this partition. |
2307.01361 | Quadruple Inequalities: Between Cauchy-Schwarz and Triangle | We prove a set of inequalities that interpolate the Cauchy-Schwarz inequality
and the triangle inequality. Every nondecreasing, convex function with a
concave derivative induces such an inequality. They hold in any metric space
that satisfies a metric version of the Cauchy-Schwarz inequality, including all
CAT(0) spaces and, in particular, all Euclidean spaces. Because these
inequalities establish relations between the six distances of four points, we
call them quadruple inequalities. In this context, we introduce the quadruple
constant - a real number that quantifies the distortion of the Cauchy-Schwarz
inequality by a given function. Additionally, for inner product spaces, we
prove an alternative, more symmetric version of the quadruple inequalities,
which generalizes the parallelogram law. | Christof Schötz | 2023-07-03T21:30:40Z | http://arxiv.org/abs/2307.01361v3 | # Quadruple Inequalities: Between Cauchy-Schwarz and Triangle
###### Abstract
We prove a set of inequalities that interpolate the Cauchy-Schwarz inequality and the triangle inequality. Every nondecreasing, convex function with a concave derivative induces such an inequality. They hold in any metric space that satisfies a metric version of the Cauchy-Schwarz inequality, including all CAT(0) spaces and, in particular, all Euclidean spaces. Because these inequalities establish relations between the six distances of four points, we call them quadruple inequalities. In this context, we introduce the quadruple constant, a real number that quantifies the distortion of the Cauchy-Schwarz inequality by a given function. Additionally, for inner product spaces, we prove an alternative, more symmetric version of the quadruple inequalities, which generalizes the parallelogram law.
###### Contents
* 1 Introduction
* 1.1 Relating Cauchy-Schwarz and Triangle
* 1.2 Contributions
* 1.3 Related Literature
* 1.3.1 Convex Analysis
* 1.3.2 Quadruple Inequality
* 1.3.3 Metric Geometry
* 1.3.4 Martingale Theory
* 1.3.5 Statistics
* 1.4 Outline
* 2 Quadruple Transformations
* 2.1 Properties
* 2.2 Lower Bounds on the Quadruple Constant
* 2.3 Stability
* 3 Nondecreasing, Convex Functions with Concave Derivative
* 3.1 Properties
* 3.2 Approximation
* 3.3 Stability
* 4 Proof Outline
* 4.1 Parametrization
* 4.2 Remaining Proof Steps
* 5 Implications and Discussion
* 5.1 Symmetries
* 5.2 Bounds for the Right-Hand Side
* 5.3 Corollaries for Special Cases
* 5.4 Quadruple Constants and Further Research
* A Proof of Theorem 3
* B Proof of Theorem 23
* B.1 Summary of Properties
* B.2 Lemma 33: Elimination of \(r\)
* B.2.1 Proof that Lemma 33 implies Theorem 23
* B.3 Proof of Lemma 33 (ii)
* B.4 Proof of Lemma 33 (i) for \(c\geq a\geq b\geq sc\)
* B.5 Proof of Lemma 33 (i) for \(c\geq a\geq b\), \(b\leq sc\)
* B.6 Proof of Lemma 33 (i) for \(c\geq a\), \(a\leq b\)
* B.7 Proof of Lemma 33 (i) for \(a\geq c\), \(b\leq 2sc\), \(sc\geq a-b\)
* B.8 Proof of Lemma 33 (i) for \(a\geq c\), \(b\geq 2sc\)
* B.9 Proof of Lemma 33 (i) for \(a\geq c\), \(b\leq 2sc\), \(sc\leq a-b\)
* C Auxiliary Results
* C.1 Merging Terms
* C.2 Mechanical Proofs
## 1 Introduction
### Relating Cauchy-Schwarz and Triangle
The Cauchy-Schwarz inequality states that in any inner product space \((V,\langle\cdot,\cdot\rangle)\), we have
\[\langle u,v\rangle\leq\|u\|\|v\| \tag{1}\]
for all \(u,v\in V\), where \(\|u\|=\sqrt{\langle u,u\rangle}\). In any metric space \((\mathcal{Q},d)\) the triangle inequality
\[\overline{y,z}\leq\overline{y,p}+\overline{p,z} \tag{2}\]
is true for all \(p,y,z\in\mathcal{Q}\), where we use the short notation \(\overline{y,z}:=d(y,z)\). Now, consider the inequality
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline {z,p})\leq L_{\tau}\,\overline{q,p}\,\tau^{\prime}(\overline{y,z}) \tag{3}\]
for \(y,z,q,p\in\mathcal{Q}\), a differentiable function \(\tau\colon[0,\infty)\to\mathbb{R}\) with derivative \(\tau^{\prime}\), and a constant \(L_{\tau}\in[0,\infty)\). We call (3) a _quadruple inequality_[15] as it establishes a relationship between the six distances among four points, see Figure 1. If we plug the identity \(\tau=\tau_{1}:=(x\mapsto x)\) and \(L_{\tau}=2\) into (3), we obtain
\[\overline{y,q}-\overline{y,p}-\overline{z,q}+\overline{z,p}\leq 2\,\overline{q,p}\,. \tag{4}\]
The triangle inequality (2) implies (4). Furthermore, in a symmetric distance space \((\mathcal{Q},d)\)[16], where \(d\) does not necessarily fulfill the triangle inequality, (4) also implies (2) by setting \(z=q\). Next, let us evaluate (3) with \(\tau=\tau_{2}:=(x\mapsto x^{2})\) and \(L_{\tau}=1\). We get
\[\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}\leq 2\,\overline{q,p}\,\overline{y,z}\,. \tag{5}\]
If we assume that the metric space \((\mathcal{Q},d)\) is induced by an inner product space \((V,\langle\cdot,\cdot\rangle)\), i.e., \(\mathcal{Q}=V\) and \(d(q,p)=\|q-p\|\), then (5) becomes
\[2\,\langle q-p\,,\,z-y\rangle\leq 2\|q-p\|\|y-z\|\,. \tag{6}\]
Thus, in this case, (5) is equivalent to (1). Hence, we can consider (5) to be a generalization of the Cauchy-Schwarz inequality to metric spaces. Equation (5) is not true in every metric space. But it is true and characteristic [1] in non-positively curved geodesic spaces, which are called \(\mathsf{CAT}(0)\) spaces or, if they are complete, Hadamard or NPC spaces. In these spaces, (5) is also known as _four point cosq condition_[1] or _Reshetnyak's Quadruple Comparison_[14, Proposition 2.4].
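The equivalence with the Cauchy-Schwarz inequality makes (5) easy to test in Euclidean space; the following minimal sketch (an illustration of ours, not part of the paper) verifies it on random points in \(\mathbb{R}^{3}\).

```python
import random

# Minimal sanity check: in Euclidean space, the quadruple inequality (5)
# is the Cauchy-Schwarz inequality in disguise, cf. (6).
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

random.seed(3)
for _ in range(10_000):
    y, z, q, p = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
    lhs = (dist(y, q) ** 2 - dist(y, p) ** 2
           - dist(z, q) ** 2 + dist(z, p) ** 2)
    assert lhs <= 2 * dist(q, p) * dist(y, z) + 1e-9
print("inequality (5) held in all trials")
```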
### Contributions
The functions \(\tau_{1},\tau_{2}\) are both nondecreasing, convex, and have a concave derivative. They can be considered as edge cases of all functions with these properties: As a linear function, \(\tau_{1}\) can be thought of as "least convex" of all convex functions. Similarly, \(\tau_{2}\), which has a linear and strictly increasing derivative, is a "most convex" function among all functions with a concave derivative. As our main result, we show that in all metric spaces with the property (5), inequality (3) is true for all functions "between" \(\tau_{1}\) and \(\tau_{2}\), i.e., for all nondecreasing, convex functions with a concave derivative. In this sense, we "interpolate" the triangle and the Cauchy-Schwarz inequality.
**Theorem 1.** Let \((\mathcal{Q},d)\) be a metric space. Let \(y,z,q,p\in\mathcal{Q}\). Assume
\[\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}\leq 2\,\overline{q,p}\,\overline{y,z}\,.\]
Let \(\tau\colon[0,\infty)\to\mathbb{R}\) be differentiable. Assume \(\tau\) is nondecreasing, convex and has a concave derivative \(\tau^{\prime}\). Then
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p})\leq 2\,\overline{q,p}\,\tau^{\prime}(\overline{y,z})\,. \tag{7}\]
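For a concrete instance, the following sketch (an illustration only) tests (7) in \(\mathbb{R}^{3}\), which satisfies the assumed inequality (5), with the Huber loss \(\tau_{\mathsf{H},\delta}\) defined in Section 1.3.5: it is nondecreasing and convex with concave derivative \(\tau^{\prime}(x)=\min(x,\delta)\), so Theorem 1 applies.

```python
import random

# Illustration of Theorem 1 in R^3 with the Huber loss (delta = 1):
# tau is nondecreasing, convex, and tau'(x) = min(x, delta) is concave.
delta = 1.0
tau = lambda x: 0.5 * x * x if x <= delta else delta * (x - 0.5 * delta)
dtau = lambda x: min(x, delta)  # the derivative tau'

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

random.seed(4)
for _ in range(10_000):
    y, z, q, p = [[random.gauss(0, 2) for _ in range(3)] for _ in range(4)]
    lhs = (tau(dist(y, q)) - tau(dist(y, p))
           - tau(dist(z, q)) + tau(dist(z, p)))
    assert lhs <= 2 * dist(q, p) * dtau(dist(y, z)) + 1e-9
print("inequality (7) held in all trials")
```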
We call functions \(\tau\) that satisfy (3) when (5) is true quadruple transformations:
**Definition 2** (Quadruple transformation).: Let \(\tau\colon(0,\infty)\to\mathbb{R}\) be differentiable. We call \(\tau\) a _quadruple transformation_ if there is a constant \(L_{\tau}\in[0,\infty)\) such that the following condition holds: For every metric space \((\mathcal{Q},d)\) and pairwise distinct points \(y,z,q,p\in\mathcal{Q}\) such that
\[\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}\leq 2\,\overline{q,p}\,\overline{y,z}\,,\]
we also have
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p})\leq L_{\tau}\,\overline{q,p}\,\tau^{\prime}(\overline{y,z})\,.\]
Define the _quadruple constant_ of a quadruple transformation \(\tau\) as the smallest possible choice of \(L_{\tau}\) and denote it as \(L_{\tau}^{*}\). Let \(\mathcal{T}\) denote the _set of all quadruple transformations_.
Let \(\mathcal{S}\) be the set of all nondecreasing, convex, and differentiable functions \(\tau\colon[0,\infty)\to\mathbb{R}\) with concave derivative. Then, by Theorem 1, \(\mathcal{S}\subseteq\mathcal{T}\). We show that, for any \(\tau\in\mathcal{S}\), the quadruple constant \(L_{\tau}^{*}\) is in the interval \([1,2]\), see Proposition 10. Further lower bounds on \(L_{\tau}^{*}\) are discussed in Proposition 9. Slightly stronger versions of Theorem 1 are presented in Theorem 23 and Theorem 24.
Figure 1: Four points and their six distances.
Let \(\mathcal{S}_{0}\) be the set of functions in \(\mathcal{S}\) with \(\tau(0)=0\). For \(\tau\in\mathcal{S}_{0}\), the right-hand side of (7) can be bounded by \(2\tau(\overline{q,p})+2\tau(\overline{y,z})\), see Corollary 25. In inner product spaces, we derive a stronger upper bound:
**Theorem 3.** Let \((V,\langle\cdot\,,\,\cdot\rangle)\) be an inner product space with induced metric \(d\). Let \(y,z,q,p\in V\).
Let \(\tau\in\mathcal{S}_{0}\). Then
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p})\leq\tau(\overline{q,p})+\tau(\overline{y,z})\,. \tag{8}\]
Note that \(\tau(a)+\tau(b)\) can be much larger than \(a\tau^{\prime}(b)\) for \(a,b\in[0,\infty)\).
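Again, a quick numerical test (an illustration only) of (8) in an inner product space, here \(\mathbb{R}^{4}\) with \(\tau(x)=x^{3/2}\in\mathcal{S}_{0}\):

```python
import random

# Illustration of Theorem 3 in R^4 with tau(x) = x^{3/2}: nondecreasing,
# convex, with concave derivative, and tau(0) = 0, so tau lies in S_0.
def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

tau = lambda x: x ** 1.5
random.seed(5)
for _ in range(10_000):
    y, z, q, p = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
    lhs = (tau(dist(y, q)) - tau(dist(y, p))
           - tau(dist(z, q)) + tau(dist(z, p)))
    assert lhs <= tau(dist(q, p)) + tau(dist(y, z)) + 1e-9
print("inequality (8) held in all trials")
```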
### Related Literature
For a history of the Cauchy-Schwarz inequality and many of its extensions, [10] is highly recommended. The book [11] is an excellent reference for many metric-related concepts.
#### 1.3.1 Convex Analysis
Theorem 3 is related to _Karamata's inequality_[12]: Let \(f\colon\mathbb{R}\to\mathbb{R}\) be a convex and nondecreasing function. Let \(a_{1}\geq\cdots\geq a_{n}\) and \(b_{1}\geq\cdots\geq b_{n}\) be real numbers with
\[\sum_{i=1}^{k}a_{i}\leq\sum_{i=1}^{k}b_{i} \tag{9}\]
for \(k=1,\ldots,n\). Then
\[\sum_{i=1}^{n}f(a_{i})\leq\sum_{i=1}^{n}f(b_{i})\,. \tag{10}\]
If we set \(f=\tau\), \(n=4\), \(a_{1}=\overline{y,q}\), \(a_{2}=\overline{z,p}\), \(a_{3}=a_{4}=0\), \(b_{1}=\overline{q,p}\), \(b_{2}=\overline{y,z}\), \(b_{3}=\overline{y,p}\), \(b_{4}=\overline{z,q}\), arranging each list in nonincreasing order where necessary, then Karamata's inequality proves Theorem 3 for configurations of distances that fulfill (9). But this does not cover all cases.
#### 1.3.2 Quadruple Inequality
Theorem 1 extends [10, Theorem 3], which states that \(\tau_{\alpha}:=(x\mapsto x^{\alpha})\) with \(\alpha\in[1,2]\) is a quadruple transformation and, together with [10, Appendix E], implies \(L^{*}_{\tau_{\alpha}}=2^{2-\alpha}\). Moreover, in [10, Appendix E] it is shown that \(\alpha\in[1,2]\) are the only positive real exponents with \(\tau_{\alpha}\in\mathcal{T}\). Note that \(\tau_{\alpha}\in\mathcal{S}\) for \(\alpha\in[1,2]\). The proof of Theorem 1 requires new ideas compared to the one of [10, Theorem 3], e.g., as we cannot take derivatives with respect to \(\alpha\). Our generalization to all functions \(\tau\in\mathcal{S}\) comes at the cost of a larger constant on the right-hand side: Theorem 1 applied to \(\tau=\tau_{\alpha}\) yields a constant factor \(L_{\tau_{\alpha}}=2\), which is strictly greater than \(L^{*}_{\tau_{\alpha}}\) for \(\alpha>1\).
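The constant \(L^{*}_{\tau_{\alpha}}=2^{2-\alpha}\) can be probed numerically. The sketch below is an illustration only: random search over quadruples in the Euclidean plane yields a lower bound on the supremum defining \(L^{*}_{\tau_{\alpha}}\), typically somewhat below the true constant.

```python
import random

# Illustration only: Monte Carlo lower bound on the quadruple constant
# of tau_alpha(x) = x^alpha over quadruples in the Euclidean plane;
# by [10], the true value is L* = 2^(2 - alpha).
alpha = 1.5

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

random.seed(6)
best = 0.0
for _ in range(200_000):
    y, z, q, p = [(random.uniform(-1, 1), random.uniform(-1, 1))
                  for _ in range(4)]
    num = (dist(y, q) ** alpha - dist(y, p) ** alpha
           - dist(z, q) ** alpha + dist(z, p) ** alpha)
    den = dist(q, p) * alpha * dist(y, z) ** (alpha - 1)  # qp * tau'(yz)
    if den > 1e-12:
        best = max(best, num / den)
print(best, 2 ** (2 - alpha))  # best is a lower bound for 2^(2 - alpha)
```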
#### 1.3.3 Metric Geometry
Aside from \(\mathsf{CAT}(\mathsf{0})\) spaces briefly discussed above, some further ideas in metric geometry seem relevant in the context of quadruple inequalities.
A function \(\varphi\colon[0,\infty)\to[0,\infty)\) is called _metric preserving_ if \(\varphi\circ d\) is a metric for any metric space \((\mathcal{Q},d)\). See [11] for an overview. As the quadruple inequality (3) with \(\tau=\tau_{2}\) is a condition for Theorem 1, we may think of the main result as stating that a Cauchy-Schwarz-like inequality is preserved under transformation with \(\tau\). But note that the right-hand side of (3) is not written in terms of \(\tau\circ d\).
A metric space \((\mathcal{Q},d)\) has the _Euclidean \(k\)-point property_[12, Definition 50.1] if any \(k\)-tuple of points in \(\mathcal{Q}\) has an isometric embedding in the Euclidean space \(\mathbb{R}^{k-1}\). If \((\mathcal{Q},d)\) has the Euclidean \(4\)-point property, then (5) is fulfilled. For \(\gamma\in(0,\infty)\), let \(\varphi_{\gamma}(x):=x^{\gamma}\). This function is metric preserving for \(\gamma\leq 1\). According to [12, Theorem 52.1], \((\mathcal{Q},\varphi_{\gamma}\circ d)\) has the Euclidean \(4\)-point
property for all \(\gamma\leq 1/2\). Furthermore, \(\gamma=1/2\) is the largest exponent with this property. Thus, \((\mathcal{Q},\varphi_{\gamma}\circ d)\) fulfills (5) for \(\gamma\in(0,1/2]\). In particular,
\[\overline{y,q}^{2\gamma}-\overline{y,p}^{2\gamma}-\overline{z,q}^{2\gamma}+\overline{z,p}^{2\gamma}\leq 2\,\overline{q,p}^{\gamma}\,\overline{y,z}^{\gamma}\,. \tag{11}\]
As \(\bar{d}=d^{2\gamma}\) is a metric -- \(x\mapsto x^{2\gamma}\) is metric preserving for \(\gamma\in(0,1/2]\) -- we obtain from (4),
\[\overline{y,q}^{2\gamma}-\overline{y,p}^{2\gamma}-\overline{z,q}^{2\gamma}+\overline{z,p}^{2\gamma}\leq 2\min\!\left(\overline{q,p}^{2\gamma},\overline{y,z}^{2\gamma}\right)\,, \tag{12}\]
which also implies (11).
The Euclidean 4-point property can be weakened for \(\mathsf{CAT}(0)\) spaces. A metric space \((\mathcal{Q},d)\) fulfills the \(\mathsf{CAT}(0)\) 4-point condition [11, Definition II.1.10] if, for all \(y,z,q,p\in\mathcal{Q}\), there are \(\bar{y},\bar{z},\bar{q},\bar{p}\in\mathbb{R}^{2}\) such that
\[\overline{y,q}=\left\|\bar{y}-\bar{q}\right\|, \overline{y,p}=\left\|\bar{y}-\bar{p}\right\|, \overline{z,q}=\left\|\bar{z}-\bar{q}\right\|, \overline{z,p}=\left\|\bar{z}-\bar{p}\right\|,\] \[\overline{q,p}\leq\left\|\bar{q}-\bar{p}\right\|, \overline{y,z}\leq\left\|\bar{y}-\bar{z}\right\|.\]
Every \(\mathsf{CAT}(0)\) space fulfills the \(\mathsf{CAT}(0)\) four-point condition, see [10] or [11, Proposition II.1.11].
Another famous 4-point property is _Ptolemy's inequality_: A metric space \((\mathcal{Q},d)\) is called _Ptolemaic_ if, for all \(y,z,q,p\in\mathcal{Q}\), we have
\[\overline{q,p}\,\overline{y,z}\leq\overline{y,q}\,\overline{z,p}+\overline{y,p}\,\overline{z,q}\,. \tag{13}\]
Every inner product space is Ptolemaic. If a normed vector space is Ptolemaic, then it is an inner product space. All \(\mathsf{CAT}(0)\) spaces are Ptolemaic. A complete Riemannian manifold is Ptolemaic if and only if it is \(\mathsf{CAT}(0)\)[1, Theorem 1.1]. Each geodesically connected metric space satisfying the \(\tau_{2}\)-quadruple inequality is Ptolemaic, but a geodesically connected Ptolemaic metric space is not necessarily \(\mathsf{CAT}(0)\)[10].
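For completeness, a quick check (an illustration only) of Ptolemy's inequality (13) on random points in the plane:

```python
import random

# Illustration: Ptolemy's inequality (13) in the Euclidean plane.
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

random.seed(7)
for _ in range(10_000):
    y, z, q, p = [(random.uniform(-1, 1), random.uniform(-1, 1))
                  for _ in range(4)]
    assert (dist(q, p) * dist(y, z)
            <= dist(y, q) * dist(z, p) + dist(y, p) * dist(z, q) + 1e-12)
print("Ptolemy's inequality held in all trials")
```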
Strongly related to Theorem 3 is the concept of roundness of a metric space: A value \(\alpha\in(0,\infty)\) is called _roundness exponent_ of a metric space \((\mathcal{Q},d)\) if, for all \(y,z,q,p\in\mathcal{Q}\),
\[\overline{y,q}^{\alpha}-\overline{y,p}^{\alpha}-\overline{z,q}^{\alpha}+ \overline{z,p}^{\alpha}\leq\overline{q,p}^{\alpha}+\overline{y,z}^{\alpha}\,. \tag{14}\]
Let \(R=R(\mathcal{Q},d)\) be the set of all roundness exponents of \((\mathcal{Q},d)\). The _roundness_\(r=r(\mathcal{Q},d)\) of \((\mathcal{Q},d)\) is the supremum of the roundness exponents \(r:=\sup R\). By the triangle inequality and the metric preserving property of \((x\mapsto x^{\alpha})\) for \(\alpha\in(0,1]\), we have \((0,1]\subseteq R\) for all metric spaces. The function spaces \(L_{p}(0,1)\) have roundness \(p\) for \(p\in[1,2]\)[12]. For a geodesic metric space, roundness \(r=2\) is essentially equivalent to being \(\mathsf{CAT}(0)\), see [10, Remark 7]. A metric space is called _ultrametric_ if the triangle inequality can be strengthened to \(\overline{y,z}\leq\max(\overline{y,p},\overline{z,p})\) for all points \(y,z,p\). Every ultrametric space can be isometrically embedded in a Hilbert space, see, e.g., [13, Corollary 5.4]. A metric space is ultrametric if and only if \(r=\infty\), [13, Theorem 5.1]. Then \(R=(0,\infty)\), [13, Proposition 2.7]. In general, \(R\) is not necessarily an interval [12, Remark p. 254]. But if \((\mathcal{Q},d)\) is a (subset of a) Banach space with the metric \(d\) induced by its norm, then \(R=(0,r]\) with \(r\in[1,2]\), [14, Proposition 4.1.2]. A metric space is called _additive_ if
\[\overline{y,q}+\overline{z,p}\leq\max(\overline{y,p}+\overline{z,q},\overline {q,p}+\overline{y,z}) \tag{15}\]
for all points \(y,z,q,p\). Every ultrametric space is additive. Every additive metric space is Ptolemaic. Additive metric spaces have roundness \(r\geq 2\)[13, Proposition 4.1].
#### 1.3.4 Martingale Theory
Nondecreasing, convex functions with concave derivative play an important role in the Topchii-Vatutin inequality of martingales, see [14, Theorem 2] and [1]: For a suitably integrable martingale \((M_{n})_{n\in\mathbb{N}_{0}}\), we have
\[\mathbb{E}[\tau(|M_{n}|)-\tau(|M_{0}|)]\leq 2\sum_{k=1}^{n}\mathbb{E}[\tau(|M_{ k}-M_{k-1}|)] \tag{16}\]
for all \(\tau\in\mathcal{S}_{0}\), where \(\mathbb{E}[\cdot]\) denotes the expectation. In this context, the functions \(\tau\in\mathcal{S}_{0}\) are named _weakly convex_. Moreover, [14, Lemma 6] gives a weaker version of Theorem 3: Let \(\tau\in\mathcal{S}_{0}\). For \(a,b\in[0,\infty)\) with \(a\geq b\), it was shown that \(\tau(a+b)+\tau(a-b)\leq 2\tau(a)+2\tau(b)\).
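As a sanity check of (16), the following minimal sketch (our own construction, not taken from [14]) simulates a symmetric random walk, which is a martingale with bounded increments, and uses the Huber function of section 1.3.5, an element of \(\mathcal{S}_{0}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def tau(x, delta=1.0):
    # Huber function: nondecreasing, convex, with concave derivative
    return np.where(x <= delta, 0.5 * x**2, delta * (x - 0.5 * delta))

n_paths, n_steps = 100_000, 20
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))  # martingale increments
M_n = steps.sum(axis=1)                                   # M_n with M_0 = 0

lhs = tau(np.abs(M_n)).mean() - tau(0.0)        # E[tau(|M_n|) - tau(|M_0|)]
rhs = 2 * n_steps * tau(1.0)                    # 2 * sum_k E[tau(|M_k - M_{k-1}|)]
print(f"lhs = {lhs:.3f} <= rhs = {rhs:.3f}: {bool(lhs <= rhs)}")
```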
#### 1.3.5 Statistics
Theorem 1 can be applied to prove rates of convergence for certain kinds of means [14]: We may want to calculate a mean value of some sample points in a metric space. One candidate for this is the _Fréchet mean_ [10], also called _barycenter_. It is the (set of) minimizer(s) of the expected squared distance to the sample points. If \(Y\) is a random variable with values in a metric space \((\mathcal{Q},d)\), the Fréchet mean is \(\arg\min_{q\in\mathcal{Q}}\mathbb{E}[\overline{Y,q}^{2}]\), where we assume \(\mathbb{E}[\overline{Y,q}^{2}]<\infty\) for all \(q\in\mathcal{Q}\). Similarly, one can define the Fréchet median [11] as \(\arg\min_{q\in\mathcal{Q}}\mathbb{E}[\overline{Y,q}]\), or a more general \(\tau\)-Fréchet mean [14] as \(\arg\min_{q\in\mathcal{Q}}\mathbb{E}[\tau(\overline{Y,q})]\) for functions \(\tau\colon[0,\infty)\to\mathbb{R}\). Given a sequence of independent random variables \(Y_{1},Y_{2},\dots\) with the same distribution as \(Y\), a standard task in statistics is to bound the distance between the sample statistic and its corresponding population version. In our case, assume the \(\tau\)-Fréchet mean is unique and define
\[m :=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[\tau( \overline{Y,q})]\,, \hat{m}_{n} :=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1} ^{n}\tau(\overline{Y_{i},q})\,.\]
We want to bound \(\overline{\hat{m}_{n},m}\) depending on \(n\). One can employ quadruple inequalities such as (3) to obtain a suitable upper bound [14, Theorem 1]. This approach is particularly useful if we do not want to make the assumption that the diameter of the metric space \(\sup_{q,p\in\mathcal{Q}}\overline{q,p}\) is finite. With Theorem 1, one can obtain such a bound for \(\tau\)-Fréchet means with \(\tau\in\mathcal{S}\) (under some conditions). We emphasize that this is only possible with (3) and not with (8). Noteworthy examples of \(\tau\in\mathcal{S}\) in this context, aside from \(\tau=\tau_{\alpha}\), are the Huber loss \(\tau_{\mathsf{H},\delta}\)[15] and the Pseudo-Huber loss \(\tau_{\mathsf{pH},\delta}\)[12] for \(\delta\in(0,\infty)\),
\[\tau_{\mathsf{H},\delta}(x) :=\begin{cases}\frac{1}{2}x^{2}&\text{ for }x\leq\delta\,,\\ \delta(x-\frac{1}{2}\delta)&\text{ for }x>\delta\,,\end{cases} \tau_{\mathsf{pH},\delta}(x) :=\delta^{2}\left(\sqrt{1+\frac{x^{2}}{\delta^{2}}}-1\right)\,,\]
as well as \(x\mapsto\ln(\cosh(x))\)[1]. These functions are of great interest in robust statistics and image processing as their respective minimizers combine properties of the classical mean (\(\tau_{2}\)-Fréchet mean) and the median (\(\tau_{1}\)-Fréchet mean).
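To make the role of these losses concrete, here is a minimal sketch (with hypothetical sample data) that computes \(\tau\)-Fréchet means on the real line; the Huber mean interpolates between the outlier-sensitive mean and the robust median:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])  # 5 outliers

def huber(x, delta=1.0):
    return np.where(x <= delta, 0.5 * x**2, delta * (x - 0.5 * delta))

def tau_frechet_mean(tau):
    # On the real line, the tau-Frechet mean minimizes the empirical tau-risk.
    return minimize_scalar(lambda q: np.mean(tau(np.abs(sample - q)))).x

print("tau_2 (mean)  :", sample.mean())
print("tau_1 (median):", np.median(sample))
print("Huber         :", tau_frechet_mean(huber))
```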
### Outline
In the remaining sections, we first discuss the set \(\mathcal{T}\), i.e., the set of quadruple transformations, see section 2. We continue with a discussion of the set \(\mathcal{S}\), i.e., nondecreasing, convex functions with concave derivative, in section 3. Thereafter, we prove our main result, i.e., \(\mathcal{S}\subseteq\mathcal{T}\). The basic ideas of the proof and variations of the main result are presented in section 4. The technical details can be found in appendix B and C. The proof of Theorem 3 can be found in appendix A. In section 5 we discuss implications of the main results and open questions.
## 2 Quadruple Transformations
We explore some properties of quadruple functions \(\tau\in\mathcal{T}\) and their quadruple constant \(L^{*}_{\tau}\).
### Properties
**Lemma 4** (Constant functions).:
* For \(c\in\mathbb{R}\), let \(\tau_{c}:=(x\mapsto c)\). Then \(\tau_{c}\in\mathcal{T}\) with \(L^{*}_{\tau_{c}}=0\).
* If \(\tau\in\mathcal{T}\) with \(L_{\tau}=0\), then there is \(c\in\mathbb{R}\) such that \(\tau=\tau_{c}\).
Proof.: The first part is trivial. For the second part, let \(u\in(0,\infty)\) and \(v\in(0,3u]\). Take \(y,z,q,p\in\mathbb{R}^{2}\) such that \(\overline{y,q}=\overline{y,p}=\overline{z,p}=u\) and \(\overline{z,q}=v\), or \(\overline{y,q}=\overline{y,p}=\overline{z,q}=u\) and \(\overline{z,p}=v\), see Figure 2,
to obtain
\[|\tau(u)-\tau(v)|\leq 0 \tag{17}\]
from (3). This can only be fulfilled for constant functions.
**Lemma 5**.: Let \(\tau\in\mathcal{T}\). Then
* \(\tau^{\prime}(x)\geq 0\) for all \(x\in(0,\infty)\) and
* \(\inf_{x\in(0,\infty)}\tau(x)>-\infty\).
Proof.:
* Let \(y,z,q,p\in\mathbb{R}^{2}\) form a square with diagonal \(x\in(0,\infty)\), see Figure 3 (i). Then, by (3), \[0\leq L_{\tau}x\tau^{\prime}(x)\,.\] (18) As \(x>0\) and \(L_{\tau}\geq 0\) with equality only if \(\tau\) is constant, we have \(0\leq\tau^{\prime}(x)\).
* By (i), we only have to check that \(\tau(\varepsilon)\) is bounded below for \(\varepsilon\in(0,1]\). Let \(\varepsilon>0\). In the Euclidean plane \(\mathbb{R}^{2}\), set \(y=(0,1),z=(0,0),q=(-\sqrt{3}/2,1/2),p=(\cos(\alpha),\sin(\alpha))\), where \(\alpha\in[0,\pi/2]\) is chosen such that \(\overline{y,p}=\varepsilon\), see Figure 3 (ii). Then \(\overline{y,q}=\overline{z,q}=\overline{y,z}=\overline{z,p}=1\) and \(\overline{q,p}\leq 2\). Then, using (3) and (i), we obtain \[\tau(1)-\tau(\varepsilon)\leq 2L_{\tau}\tau^{\prime}(1)\,.\] (19) This yields a constant lower bound of \(\tau(1)-2L_{\tau}\tau^{\prime}(1)>-\infty\) on \(\tau(\varepsilon)\) for all \(\varepsilon\in(0,1]\).
Next, we extend the domain of \(\tau\in\mathcal{T}\) in a consistent way to include \(0\).
Figure 3: Constructions for the proof of Lemma 5.
Figure 2: Construction for the proof of Lemma 4 (ii).
**Proposition 6**.: Let \(\tau\in\mathcal{T}\). Define \(\tau(0):=\lim_{x\searrow 0}\tau(x)\) and \(\tau^{\prime}(0):=\liminf_{x\searrow 0}\tau^{\prime}(x)\). Then, given (5), inequality (3) holds for any quadruple of points (not necessarily pairwise distinct).
Proof.: By Lemma 5, \(\tau(0)\) is well-defined. We have to show that (5) implies (3) in the cases where at least one distance is zero. In the case \(y=z\), the left-hand side of (3) vanishes and the right-hand side is nonnegative by Lemma 5 (i). Next, consider the case \(y,z,q,p\in\mathcal{Q}\) with \(y\neq z\) but at least two points being identical. As any triplet of points in a metric space can be isometrically embedded in the Euclidean plane \(\mathbb{R}^{2}\), this can also be done for \((y,z,q,p)\). Furthermore, we can find a sequence of quadruples of points \((y_{n},z_{n},q_{n},p_{n})_{n\in\mathbb{N}}\subseteq(\mathbb{R}^{2})^{4}\) with \(\|y_{n}-z_{n}\|=\overline{y,z}\) and all other distances strictly positive and convergent to the respective distance of the points \(y,z,q,p\). By continuity of \(\tau\), the definition of \(\tau(0)\), and the constant value of \(\|y_{n}-z_{n}\|\), (3) holds in the limit \(n\to\infty\).
**Proposition 7** (Power functions).: Let \(\tau_{\alpha}=(x\mapsto x^{\alpha})\) for \(\alpha\in(0,\infty)\). We have \(\tau_{\alpha}\in\mathcal{T}\) if and only if \(\alpha\in[1,2]\). In this case, the quadruple constant is \(L_{\tau_{\alpha}}^{*}=2^{2-\alpha}\).
Proof.: [1, Theorem 3 and Appendix E].
### Lower Bounds on the Quadruple Constant
**Lemma 8**.: The quadruple constant \(L_{\tau}^{*}\) is well-defined in the sense that if \(\tau\in\mathcal{T}\) there is a smallest value \(L_{\tau}^{*}\in[0,\infty)\) such that (3) is true for all metric spaces and quadruples of points therein.
Proof.: Let \(\tau\in\mathcal{T}\). Let \((L_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)\) be a decreasing sequence with \(L_{\infty}:=\lim_{n\to\infty}L_{n}\). Assume (3) is true for all \(L_{\tau}=L_{n}\). We need to show that (3) is also true for \(L_{\tau}=L_{\infty}\). If it were false, there would be a metric space \((\mathcal{Q},d)\) with points \(y,z,q,p\in\mathcal{Q}\) such that
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau( \overline{z,p})>L_{\infty}\,\overline{q,p}\,\tau^{\prime}(\overline{y,z})\,. \tag{20}\]
As the inequality is strict, it would also hold when \(L_{\infty}\) is replaced by a \(L_{n}\) for a sufficiently large \(n\), which is a contradiction.
**Proposition 9**.: Let \(\tau\in\mathcal{T}\). Let \(u,v\in(0,\infty)\). Then, assuming the denominator is not \(0\),
1. \[L_{\tau}^{*}\geq 2\frac{\tau(u)-\tau(0)}{u\tau^{\prime}(u)}\,,\] (21)
2. \[L_{\tau}^{*}\geq\frac{\tau(2u)-\tau(0)}{2u\tau^{\prime}(u)}\,,\] (22)
3. \[L_{\tau}^{*}\geq\frac{\tau^{\prime}(u)-\tau^{\prime}(v)}{\tau^{\prime}(|u-v|)}\,,\] (23)
4. \[L_{\tau}^{*}\geq\frac{\tau^{\prime}(u)+\tau^{\prime}(v)}{\tau^{\prime}(u+v)}\,.\] (24)
Proof.: For all parts, we apply (3) in the metric space of the real line with Euclidean metric, see Figure 4.
1. Set \(z=q=0\), \(y=u\), \(p=u/2\). Then (3) becomes \[\tau(u)-\tau(0)\leq L_{\tau}\frac{u}{2}\tau^{\prime}(u)\,.\]
2. Set \(z=q=0\), \(y=u\), \(p=2u\). Then (3) becomes \[\tau(2u)-\tau(0)\leq 2L_{\tau}u\tau^{\prime}(u)\,.\]
3. Let \(\varepsilon\in(0,\min(u,v))\). Set \(q=0\), \(p=\varepsilon\), \(y=u\), \(z=v\). Then (3) becomes \[\tau(u)-\tau(u-\varepsilon)-\tau(v)+\tau(v-\varepsilon)\leq L_{\tau}\, \varepsilon\,\tau^{\prime}(|u-v|)\,.\] Thus, \[L_{\tau}\tau^{\prime}(|u-v|)\geq\frac{\tau(u)-\tau(u-\varepsilon)}{\varepsilon }-\frac{\tau(v)-\tau(v-\varepsilon)}{\varepsilon}\xrightarrow{\varepsilon \to 0}\tau^{\prime}(u)-\tau^{\prime}(v)\,.\]
4. Let \(\varepsilon\in(0,\min(u,v))\). Let \(q=0\), \(p=\varepsilon\), \(y=u\), \(z=-v\). Then (3) becomes \[\tau(u)-\tau(u-\varepsilon)-\tau(v)+\tau(v+\varepsilon)\leq L_{\tau}\, \varepsilon\,\tau^{\prime}(u+v)\,.\] Thus, \[L_{\tau}\tau^{\prime}(u+v)\geq\frac{\tau(u)-\tau(u-\varepsilon)}{\varepsilon}+ \frac{\tau(v+\varepsilon)-\tau(v)}{\varepsilon}\xrightarrow{\varepsilon\to 0} \tau^{\prime}(u)+\tau^{\prime}(v)\,.\]
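For example, evaluating (24) at \(u=v\) for the power functions \(\tau_{\alpha}\) with \(\alpha\in[1,2]\) recovers the constant of Proposition 7, so this lower bound is attained:

\[L_{\tau_{\alpha}}^{*}\geq\frac{\tau_{\alpha}^{\prime}(u)+\tau_{\alpha}^{\prime}(u)}{\tau_{\alpha}^{\prime}(2u)}=\frac{2\alpha u^{\alpha-1}}{\alpha(2u)^{\alpha-1}}=2^{2-\alpha}\,.\]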
**Proposition 10**.: Let \(\tau\in\mathcal{T}\). Assume \(\tau\) is not constant. Then \(L_{\tau}^{*}\geq 1\).
Figure 4: Constructions for the proof of Proposition 9.
Proof.: Let \(u\in(0,\infty)\) be such that \(\tau^{\prime}(u)\neq 0\). Then Lemma 5 (i) implies \(\tau^{\prime}(u)>0\). Let \(\varepsilon\in(0,u)\). In the metric space of the real line with Euclidean metric, we choose \(z=q=0\), \(p=\varepsilon\), \(y=u\), see Figure 5. Then (3) becomes
\[\tau(u)-\tau(u-\varepsilon)-\tau(0)+\tau(\varepsilon)\leq L_{\tau}\,\varepsilon \,\tau^{\prime}(u)\,.\]
As \(\tau\) is nondecreasing by Lemma 5 (i), we have \(\tau(0)=\lim_{x\searrow 0}\tau(x)\leq\tau(\varepsilon)\). Thus,
\[L_{\tau}\tau^{\prime}(u)\geq\frac{\tau(u)-\tau(u-\varepsilon)}{\varepsilon} \stackrel{{\varepsilon\to 0}}{{\longrightarrow}}\tau^{\prime}(u)\,.\]
### Stability
We can construct potentially new elements of \(\mathcal{T}\) from given ones by taking limits or certain linear combinations, as the next two propositions show.
**Proposition 11**.: Let \((\tau_{n})_{n\in\mathbb{N}}\subseteq\mathcal{T}\) with constants \((L_{\tau_{n}})_{n\in\mathbb{N}}\subseteq[0,\infty)\). Let \(\tau\colon(0,\infty)\to\mathbb{R}\) be a differentiable function. Assume \(\lim_{n\to\infty}\tau_{n}(x)=\tau(x)\) and \(\limsup_{n\to\infty}\tau_{n}^{\prime}(x)\leq\tau^{\prime}(x)\) for all \(x\in(0,\infty)\). Set \(L_{\tau}:=\limsup_{n\to\infty}L_{\tau_{n}}\). Assume \(L_{\tau}<\infty\). Then \(\tau\in\mathcal{T}\) with constant \(L_{\tau}\).
Proof.: This is a direct consequence of the assumed limit properties of \(\tau\), \(\tau^{\prime}\), and \(L_{\tau}\), and of the quadruple inequality for \(\tau_{n}\) with constant \(L_{\tau_{n}}\):
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+ \tau(\overline{z,p}) =\lim_{n\to\infty}\left(\tau_{n}(\overline{y,q})-\tau_{n}( \overline{y,p})-\tau_{n}(\overline{z,q})+\tau_{n}(\overline{z,p})\right)\] \[\leq\lim_{n\to\infty}L_{\tau_{n}}\,\overline{q,p}\,\tau_{n}^{ \prime}(\overline{y,z})\] \[\leq L_{\tau}\,\overline{q,p}\,\tau^{\prime}(\overline{y,z})\,.\]
**Proposition 12**.:
* Let \(\tau\in\mathcal{T}\) with constant \(L_{\tau}\). Let \(a\in[0,\infty)\). Then \((x\mapsto a\tau(x))\in\mathcal{T}\) with constant \(aL_{\tau}\).
* Let \(\tau,\tilde{\tau}\in\mathcal{T}\) with constant \(L_{\tau}\) and \(L_{\tilde{\tau}}\), respectively. Then \((x\mapsto\tau(x)+\tilde{\tau}(x))\in\mathcal{T}\) with constant \(L_{\tau}+L_{\tilde{\tau}}\).
Proof.: Both parts follow directly from the definition of \(\mathcal{T}\) and linearity of the derivative operator.
Figure 5: Constructions for the proof of Proposition 10.
## 3 Nondecreasing, Convex Functions with Concave Derivative
We define \(\mathcal{S}\) to be the set of nondecreasing, convex functions \(\tau\colon[0,\infty)\to\mathbb{R}\) with concave derivative \(\tau^{\prime}\colon[0,\infty)\to\mathbb{R}\) where \(\tau^{\prime}(0)\) is the right derivative
\[\tau^{\prime}(0)=\lim_{h\searrow 0}\frac{\tau(h)-\tau(0)}{h}\,. \tag{25}\]
Let \(\mathcal{C}^{k}([0,\infty))\) denote the space of \(k\)-times continuously differentiable functions \([0,\infty)\to\mathbb{R}\), where derivatives at \(0\) are taken as right derivatives. Denote \(\mathcal{S}_{0}:=\{\tau\in\mathcal{S}\colon\tau(0)=0\}\). Denote \(\mathcal{S}^{k}:=\mathcal{S}\cap\mathcal{C}^{k}([0,\infty))\) and \(\mathcal{S}_{0}^{k}:=\mathcal{S}_{0}\cap\mathcal{C}^{k}([0,\infty))\) for \(k\in\mathbb{N}\cup\{\infty\}\).
### Properties
In this section, we establish some simple properties of functions \(\tau\in\mathcal{S}^{k}\) that will be useful in the proof of the main theorem.
**Lemma 13**.: Let \(\tau\in\mathcal{S}\). Then \(\tau\) is continuously differentiable. Furthermore \(\tau^{\prime}\) is nonnegative, continuous, nondecreasing, and differentiable almost everywhere. If \(\tau\in\mathcal{S}^{2}\), then \(\tau^{\prime\prime}\) is nonnegative and nonincreasing. If \(\tau\in\mathcal{S}^{3}\), then \(\tau^{\prime\prime\prime}\) is nonpositive.
Proof.: As \(\tau^{\prime}\) is concave, \(\tau^{\prime}\) is continuous and the differentiable function \(\tau\) is continuously differentiable. As \(\tau\) is nondecreasing, \(\tau^{\prime}\) is nonnegative. As \(\tau\) is convex, \(\tau^{\prime}\) is nondecreasing. As \(\tau^{\prime}\) is concave, \(\tau^{\prime}\) is differentiable almost everywhere. Assume \(\tau^{\prime\prime}\) exists. As \(\tau\) is convex, \(\tau^{\prime\prime}\) is nonnegative. As \(\tau^{\prime}\) is concave, \(\tau^{\prime\prime}\) is nonincreasing. Assume \(\tau^{\prime\prime\prime}\) exists. As \(\tau^{\prime}\) is concave, \(\tau^{\prime\prime\prime}\) is nonpositive.
The next lemma shows that all functions \(\tau\in\mathcal{S}^{3}\) are between a nondecreasing linear function and a parabola that opens upward.
**Lemma 14** (Polynomial bounds).: Let \(\tau\in\mathcal{S}^{3}\). Let \(x,y\in[0,\infty)\). Then
1. \[\tau(x)+y\tau^{\prime}(x)\leq\tau(x+y)\leq\tau(x)+y\tau^{\prime}(x)+\frac{1}{2 }y^{2}\tau^{\prime\prime}(x)\,,\]
2. \[\tau^{\prime}(x)\leq\tau^{\prime}(x+y)\leq\tau^{\prime}(x)+y\tau^{\prime\prime }(x)\,.\]
Proof.:
1. Apply a second and a third order Taylor expansion to \(y\mapsto\tau(x+y)\) at \(0\) and use Lemma 13.
2. Apply a first and a second order Taylor expansion to \(y\mapsto\tau^{\prime}(x+y)\) at \(0\) and use Lemma 13.
In the following lemma, we provide useful bounds for the proof of the main theorem and give a hint about the form of the right-hand side of the quadruple inequality (3).
**Lemma 15** (Difference bound).: Let \(\tau\in\mathcal{S}\).
1. Let \(x,y\in[0,\infty)\). Assume \(x\geq y\). Then \[\frac{x-y}{2}\left(\tau^{\prime}(x)+\tau^{\prime}(y)\right)\leq\tau(x)-\tau(y )\leq(x-y)\tau^{\prime}\bigg{(}\frac{x+y}{2}\bigg{)}\.\]
2. Let \(x,y\in[0,\infty)\). Then \[\tau(x+y)-\tau(|x-y|)\leq 2\min(x,y)\tau^{\prime}(\max(x,y))\,.\]
Proof.:
1. For the lower bound, as \(\tau^{\prime}\) is concave, \[\tau(x)-\tau(y) =\int_{y}^{x}\tau^{\prime}(u)\mathrm{d}u\] \[\geq(x-y)\int_{0}^{1}(1-t)\tau^{\prime}(y)+t\tau^{\prime}(x)\, \mathrm{d}t\] \[=\frac{x-y}{2}\left(\tau^{\prime}(x)+\tau^{\prime}(y)\right)\,.\] For the upper bound, concavity of \(\tau^{\prime}\) implies the existence of an affine linear function \(h\) with \(h(u)\geq\tau^{\prime}(u)\) for all \(u\in[0,\infty)\) and \[h\bigg{(}\frac{x+y}{2}\bigg{)}=\tau^{\prime}\bigg{(}\frac{x+y}{2}\bigg{)}\,\,.\] (26) Thus, \[\tau(x)-\tau(y) \leq\int_{y}^{x}h(u)\mathrm{d}u\] \[=\frac{x-y}{2}\left(h(x)+h(y)\right)\] \[=(x-y)h\bigg{(}\frac{x+y}{2}\bigg{)}\,\,.\]
2. Follows directly from the upper bound in (i).
In further preparation for the main proof, we collect some properties of concave functions such as \(\tau^{\prime}\).
**Lemma 16** (Concave derivative).: Let \(\tau\in\mathcal{S}\).
1. Let \(x,y\in[0,\infty)\). Then \[\tau^{\prime}(x+y)\leq\tau^{\prime}(x)+\tau^{\prime}(y)\leq 2\tau^{\prime} \bigg{(}\frac{x+y}{2}\bigg{)}\,\,.\]
2. Let \(a,x\in[0,\infty)\). Then \[\tau^{\prime}(ax)\geq a\tau^{\prime}(x)\text{ for }a\leq 1\,,\] \[\tau^{\prime}(ax)\leq a\tau^{\prime}(x)\text{ for }a\geq 1\,.\]
3. Let \(x,y\in[0,\infty)\). Assume \(y\geq x\). Then \[x\tau^{\prime}(y)\leq y\tau^{\prime}(x)\,.\]
Proof.: These are all well-known properties of nonnegative, concave functions.
1. Use Lemma 46 and Jensen's inequality.
2. Use \((1-t)\tau^{\prime}(x_{0})+t\tau^{\prime}(x_{1})\leq\tau^{\prime}((1-t)x_{0}+ tx_{1})\) on points \(x_{0}=0\), \(x_{1}=x\), \(t=a\) and on
\(x_{0}=0\), \(x_{1}=ax\), \(t=1/a\), respectively, and note that \(\tau^{\prime}(0)\geq 0\).
3. Apply (ii) with \(a=y/x\).
The next lemma provides another tool for the main proof.
**Lemma 17** (Square root and derivative).: Let \(\tau\in\mathcal{S}_{0}^{2}\). Let \(x,y\in[0,\infty)\). Assume \(x\geq y\). Then
\[\tau(\sqrt{xy})\leq x\tau^{\prime}\bigg{(}\frac{1}{2}y\bigg{)}\.\]
Proof.: Define \(f(x,y):=x\tau^{\prime}(\frac{1}{2}y)-\tau(\sqrt{xy})\). Its partial derivative with respect to \(x\) is
\[\partial_{x}f(x,y) =\tau^{\prime}\bigg{(}\frac{1}{2}y\bigg{)}-\frac{\sqrt{y}}{2\sqrt {x}}\tau^{\prime}(\sqrt{xy})\] \[\geq\tau^{\prime}\bigg{(}\frac{1}{2}y\bigg{)}-\tau^{\prime} \bigg{(}\frac{1}{2}y\bigg{)}=0\,,\]
by Lemma 16 (ii) with \(\frac{\sqrt{y}}{2\sqrt{x}}\in[0,1]\). Thus,
\[f(x,y) \geq f(y,y)\] \[=y\tau^{\prime}\bigg{(}\frac{1}{2}y\bigg{)}-\tau(y)\] \[\geq 0\,,\]
where the last inequality is due to Lemma 15 (i) with \(\tau(0)=0\).
### Approximation
In the proof of the main theorem, we will first show \(\mathcal{S}^{3}\subseteq\mathcal{T}\) and then approximate the remaining functions in \(\mathcal{S}\) via smooth functions. The following lemma shows that this is possible.
**Lemma 18** (Smooth approximation).: Let \(\tau\in\mathcal{S}\). Then there is a sequence \((\tau_{n})_{n\in\mathbb{N}}\subseteq\mathcal{S}^{\infty}\) such that \(\tau(x)=\lim_{n\to\infty}\tau_{n}(x)\) and \(\tau^{\prime}(x)=\lim_{n\to\infty}\tau^{\prime}_{n}(x)\).
Proof.: We will smooth \(\tau^{\prime}\) by convolution with a mollifier. The convolution is executed in the group of positive real numbers under multiplication endowed with its Haar measure \(\mu(A)=\int_{A}\frac{1}{x}\mathrm{d}x\) for \(A\subseteq(0,\infty)\) measurable.
For \(n\in\mathbb{N}\), let \(\varphi_{n}\in\mathcal{C}^{\infty}((0,\infty))\) be a sequence of nonnegative functions with support in \([\exp(-1/n),\exp(1/n)]\) and
\[\int_{0}^{\infty}\frac{\varphi_{n}(x)}{x}\mathrm{d}x=1\,. \tag{27}\]
Let \(\tau\in\mathcal{S}\) with derivative \(\tau^{\prime}\). For \(n\in\mathbb{N}\), \(x\in[0,\infty)\), we define
\[\tau_{n}(x):=\tau(0)+\int_{0}^{x}\int_{0}^{\infty}\frac{\varphi_{n}(z)}{z} \tau^{\prime}\Big{(}\frac{y}{z}\Big{)}\,\mathrm{d}z\mathrm{d}y\,. \tag{28}\]
Then, for \(y\in[0,\infty)\),
\[\tau^{\prime}_{n}(y)=\int_{0}^{\infty}\frac{\varphi_{n}(z)}{z}\tau^{\prime} \Big{(}\frac{y}{z}\Big{)}\,\mathrm{d}z=\int_{\mathbb{R}}\varphi_{n}(\mathrm{e }^{t})\tau^{\prime}\Big{(}\mathrm{e}^{\log(y)-t}\Big{)}\,\mathrm{d}t\,. \tag{29}\]
Thus, \(s\mapsto\tau^{\prime}_{n}(\mathrm{e}^{s})\) is the convolution of \(t\mapsto\varphi_{n}(\mathrm{e}^{t})\) with \(t\mapsto\tau^{\prime}(\mathrm{e}^{t})\). Using standard results on convolutions, the mollified function has the following properties:
1. \(\tau^{\prime}_{n}\) is infinitely differentiable on \((0,\infty)\), because \(\varphi_{n}\) is,
2. \(\tau^{\prime}_{n}\) is nonnegative, nondecreasing, and concave, because \(\tau^{\prime}\) is and \(\varphi_{n}\) is nonnegative,
3. \(\lim_{n\to\infty}\tau^{\prime}_{n}(x)=\tau^{\prime}(x)\) because \(\tau^{\prime}\) is continuous.
Furthermore, \(\tau_{n}\) is convex, as \(\tau^{\prime}_{n}\) is nondecreasing and \(\lim_{n\to\infty}\tau_{n}(x)=\tau(x)\) by dominated convergence. Thus, \((\tau_{n})_{n\in\mathbb{N}}\subseteq\mathcal{S}^{\infty}\), and the sequence has the desired point-wise limits.
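As a numerical illustration of (29), the following minimal sketch (the bump profile and the kink function \(\tau^{\prime}(x)=\min(x,1)\) are our own choices for the demonstration) evaluates the multiplicative mollification directly:

```python
import numpy as np

def mollified_derivative(y, n, m=4000):
    # tau'_n(y) = integral of phi_n(e^t) * tau'(y * e^{-t}) dt, cf. (29),
    # with a smooth bump supported on [-1/n, 1/n] in the log variable t.
    t = np.linspace(-1.0 / n, 1.0 / n, m + 2)[1:-1]  # open interval, no 0-division
    w = np.exp(-1.0 / (1.0 - (n * t) ** 2))          # C^infinity bump profile
    dt = t[1] - t[0]
    w /= w.sum() * dt                                # normalization as in (27)
    dtau = lambda x: np.minimum(x, 1.0)              # derivative of a Huber-type tau
    return float(np.sum(w * dtau(y * np.exp(-t))) * dt)

print(mollified_derivative(1.0, n=100))  # tends to tau'(1) = 1 as n grows
```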
### Stability
The set \(\mathcal{S}\) enjoys similar stability properties as \(\mathcal{T}\). Thus, after having shown \(\mathcal{S}\subseteq\mathcal{T}\), we cannot easily find further functions in \(\mathcal{T}\) from the constructions presented in section 2.3.
**Proposition 19**.: Let \((\tau_{n})_{n\in\mathbb{N}}\subseteq\mathcal{S}\). Let \(\tau\colon[0,\infty)\to\mathbb{R}\) be differentiable. Assume that \(\tau(x)=\lim_{n\to\infty}\tau_{n}(x)\) for all \(x\in[0,\infty)\). Then \(\tau\in\mathcal{S}\).
Proof.: As \(\tau_{n}\) is nondecreasing and convex, so is \(\tau\). As \(\tau^{\prime}_{n}\) is concave, by [10, Theorem 25.7], for all \(x\in(0,\infty)\),
\[\lim_{n\to\infty}\tau^{\prime}_{n}(x)=\tau^{\prime}(x)\,. \tag{30}\]
Thus, as all \(\tau^{\prime}_{n}\) are concave, \(\tau^{\prime}\) is concave on \((0,\infty)\). As \(\tau\) is convex, \(\frac{\tau(x+h)-\tau(x)}{h}\) is increasing in both \(x\) and \(h\). Thus,
\[\liminf_{x\searrow 0}\tau^{\prime}(x) =\inf_{x\in(0,\infty)}\tau^{\prime}(x)\] \[=\inf_{x,h\in(0,\infty)}\frac{\tau(x+h)-\tau(x)}{h}\] \[=\liminf_{h\searrow 0}\frac{\tau(h)-\tau(0)}{h}\] \[=\tau^{\prime}(0)\,.\]
Therefore, \(\tau^{\prime}\) is continuous at \(0\) and concavity extends to \([0,\infty)\).
**Proposition 20**.:
1. Let \(\tau\in\mathcal{S}\). Let \(a\in[0,\infty)\). Then \((x\mapsto a\tau(x))\in\mathcal{S}\).
2. Let \(\tau,\tilde{\tau}\in\mathcal{S}\). Then \((x\mapsto\tau(x)+\tilde{\tau}(x))\in\mathcal{S}\).
Proof.: Both parts follow directly from the definition of \(\mathcal{S}\) and linearity of the derivative operator.
## 4 Proof Outline
In the first step of the proof of Theorem 1, we represent general \(4\)-point metric spaces with \(6\) real-valued parameters. We refer to this representation as a _parametrization_. It converts our problem from the domain of metric geometry to the domain of real analysis. The rest of the proof consists of a complex sequence of elementary calculus arguments. This sequence may be difficult to discover by hand; to aid the process, we made extensive use of computer-assisted numerical assessments. The inequality (3) and transformations of it were evaluated on a grid
of the parameter space and for a finite set of functions \(\tau\). This computational tool played a crucial role in guiding the proof. It helped to identify steps that would not be useful and indicated steps with potential merit.
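For concreteness, the following minimal sketch (our own reconstruction, not the original tooling) performs such a numerical assessment of (3) on random quadruples in the Euclidean plane, where (5) holds automatically, using the pseudo-Huber function of section 1.3.5 with \(\delta=1\):

```python
import numpy as np

rng = np.random.default_rng(0)
tau = lambda x: np.sqrt(1.0 + x**2) - 1.0   # pseudo-Huber (delta = 1), in S_0
dtau = lambda x: x / np.sqrt(1.0 + x**2)    # its nonnegative, concave derivative
d = np.linalg.norm

worst = -np.inf
for _ in range(100_000):
    y, z, q, p = rng.normal(size=(4, 2))    # random quadruple of points
    lhs = tau(d(y - q)) - tau(d(y - p)) - tau(d(z - q)) + tau(d(z - p))
    rhs = 2.0 * d(q - p) * dtau(d(y - z))
    worst = max(worst, lhs - rhs)
print("largest lhs - rhs:", worst)          # nonpositive up to rounding
```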
### Parametrization
There are different ways to parameterize a general \(k\)-tuple of points of a metric space. A trivial way is to take the \(\binom{k}{2}\) pairwise distances as the real parameters. Then the parameter space is a subset of \([0,\infty)^{\binom{k}{2}}\) with certain constraints that ensure the triangle inequality. In this parametrization, the representation of distances is simple and the parameter space is more complex. We will base our parametrization on the Euclidean cosine formula. We start with a parametrization of 3 points, see Figure 6. The parameter space for this construction is \([0,\infty)^{2}\times[-1,1]\) without further constraints. In contrast to the purely distance based parametrization described above, this parametrization yields a more complex representation of all distances, but a very simple parameter space.
**Proposition 21** (3-point parametrization).:
1. Let \((\mathcal{Q},d)\) be a metric space and \(y,q,p\in\mathcal{Q}\). Set \(a:=\overline{y,p}\) and \(b:=\overline{q,p}\). Isometrically embed \(y,q,p\) in the Euclidean plane \(\mathbb{R}^{2}\) and set \(s:=\cos(\angle{ypq})\) with the angle \(\angle{ypq}\) measured in the Euclidean plane. Then, \(\overline{y,q}^{2}=a^{2}+b^{2}-2sab\) and \(a,b\in[0,\infty),s\in[-1,1]\).
2. For all \(a,b\in[0,\infty),s\in[-1,1]\) there is a metric space with a triplet of points \(y,q,p\) such that \(\overline{y,p}=a\), \(\overline{q,p}=b\), and \(\overline{y,q}^{2}=a^{2}+b^{2}-2sab\).
Proof.: The first part is true by construction. For given parameters \(a,b\in[0,\infty),s\in[-1,1]\), we can easily construct a triangle in \(\mathbb{R}^{2}\) with sides \(a,b,\sqrt{a^{2}+b^{2}-2sab}\) and an angle \(\arccos(s)\) between the sides with length \(a\) and \(b\). The corners of this triangle are the points \(y,q,p\).
For the proof of Theorem 1, we use a 4-point parametrization, see Figure 7. It is based on repeated application of the Euclidean cosine formula. Its parameter space is a subset of \([0,\infty)^{3}\times[-1,1]^{3}\) with rather complex constraints. We later relax this parametrization to a construction with a parameter space \([0,\infty)^{3}\times[-1,1]^{2}\) without further constraints. That construction is not a parametrization, but results shown in its parameter space are stronger than those shown in the parametrization.
**Proposition 22** (4-point parametrization).:
1. Let \((\mathcal{Q},d)\) be a metric space and \(y,z,q,p\in\mathcal{Q}\). Set \(a:=\overline{z,p}\), \(c:=\overline{y,p}\), \(b:=\overline{q,p}\), \(s:=\cos(\angle{ypq})\), \(r:=\cos(\angle{zpq})\), \(t:=\cos(\angle{ypz})\), with all angles measured as in an isometric 3-point embedding in the Euclidean plane. Then, \[\overline{y,z}^{2}=a^{2}+c^{2}-2tac\,,\qquad\overline{y,q}^{2}=c^{2}+b^{2}-2scb\,,\qquad\overline{z,q}^{2}=a^{2}+b^{2}-2rab\,.\]
Figure 6: A 3-point parametrization. We denote \(\sphericalangle s:=\arccos(s)\).
Furthermore, \(a,b,c\in[0,\infty)\), \(r,s,t\in[-1,1]\), and \[\begin{split}-tac&\leq b^{2}-rab-scb+\sqrt{a^{2}-2rab+b ^{2}}\sqrt{c^{2}-2scb+b^{2}}\,,\\ -rab&\leq c^{2}-tac-scb+\sqrt{a^{2}-2tac+c^{2}}\sqrt{c ^{2}-2scb+b^{2}}\,,\\ -scb&\leq a^{2}-rab-tac+\sqrt{a^{2}-2rab+b^{2}}\sqrt{ a^{2}-2tac+c^{2}}\,.\end{split}\] (31)
2. For all \(a,b,c\in[0,\infty)\), \(r,s,t\in[-1,1]\) that fulfill (31), there is a metric space \((\mathcal{Q},d)\) with a quadruple of points \(y,z,q,p\in\mathcal{Q}\) such that \[\begin{split} a&=\overline{z,p}\,,\qquad \qquad\qquad c=\overline{y,p}\,,\qquad\qquad\qquad b=\overline{q,p}\,,\\ \overline{y,z}^{2}&=a^{2}+c^{2}-2tac\,,\qquad \overline{y,q}^{2}=c^{2}+b^{2}-2scb\,,\qquad\overline{z,q}^{2}=a^{2}+b^{2}-2 rab\,.\end{split}\]
Proof.:
1. The inequalities (31) are due to the triangle inequality, \[\overline{y,z}\leq\overline{y,q}+\overline{z,q}\,,\qquad\overline{z,q}\leq \overline{y,z}+\overline{y,q}\,,\qquad\overline{y,q}\leq\overline{y,z}+\overline {z,q}\,.\]
2. Define a four point set \(\mathcal{Q}\) with elements named \(y,z,q,p\). Define \(d\colon\mathcal{Q}\times\mathcal{Q}\to[0,\infty)\) with the equations given in the lemma, extended by symmetry and \(d(x,x)=0\) for all \(x\in\mathcal{Q}\). By construction, \(d\) is a semimetric [1] (vanishing distance for non-identical points allowed) so that identifying points with distance \(0\) yields a metric space.
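A minimal computational sketch of this construction (function names are ours): given parameters that fulfill (31), it builds the six distances and verifies all triangle inequalities of the resulting space.

```python
import itertools
import math

def quadruple_distances(a, b, c, r, s, t):
    """Distances of the 4-point parametrization of Proposition 22."""
    d = {("z", "p"): a, ("y", "p"): c, ("q", "p"): b,
         ("y", "z"): math.sqrt(a*a + c*c - 2*t*a*c),
         ("y", "q"): math.sqrt(c*c + b*b - 2*s*c*b),
         ("z", "q"): math.sqrt(a*a + b*b - 2*r*a*b)}
    d.update({(v, u): w for (u, v), w in list(d.items())})  # symmetry
    return d

def is_metric(d, pts="yzqp", tol=1e-12):
    return all(d[(x, y)] <= d[(x, w)] + d[(w, y)] + tol
               for x, y, w in itertools.permutations(pts, 3))

# e.g. a unit square (a = b = c = 1, r = s = t = 0) fulfills (31):
print(is_metric(quadruple_distances(1, 1, 1, 0, 0, 0)))  # True
```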
With this parametrization and
\[a^{2}-c^{2}-(a^{2}-2rab+b^{2})+(c^{2}-2scb+b^{2})=2b\,(ra-sc)\, \tag{32}\]
(5) can be expressed as
\[b\,(ra-sc)\leq b\sqrt{a^{2}+c^{2}-2tac}\,, \tag{33}\]
and (3) becomes
\[\tau(a)-\tau(c)-\tau_{\surd}\big{(}a^{2}-2rab+b^{2}\big{)}+\tau_{\surd}\big{(}c^{2}-2scb+b^{2}\big{)}\leq L_{\tau}\,b\,\tau^{\prime}\Big{(}\sqrt{a^{2}+c^{2}-2tac}\Big{)}\,, \tag{34}\]
where we use the shorthand \(\tau_{\surd}(x):=\tau(\sqrt{x})\). Thus, Theorem 1 is equivalent to showing that (33) implies (34) for all \(a,b,c\in[0,\infty)\), \(r,s,t\in[-1,1]\) that fulfill (31). We will prove a stronger but simpler-looking result in section B of the appendix:
Figure 7: A \(4\)-point parametrization. We denote \(\sphericalangle x:=\arccos(x)\).
**Theorem 23**.: Let \(a,b,c\geq 0\), \(r,s\in[-1,1]\), and \(\tau\in\mathcal{S}_{0}^{3}\). Then
\[\tau(a)-\tau(c)-\tau_{\surd}\big{(}a^{2}-2rab+b^{2}\big{)}+\tau_{\surd}\big{(}c^{2}-2scb+b^{2}\big{)}\leq 2b\tau^{\prime}(\max(ra-sc,|a-c|))\,. \tag{35}\]
### Remaining Proof Steps
From Theorem 23, we obtain a slightly stronger result than Theorem 1 by relaxing (5):
**Theorem 24**.: Let \(\tau\in\mathcal{S}\). Let \((\mathcal{Q},d)\) be a metric space. Let \(y,z,q,p\in\mathcal{Q}\). Assume, there is \(L\in[2,\infty)\) such that
\[\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}\leq L\,\overline{q,p}\;\overline{y,z}\,. \tag{36}\]
Then
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline {z,p})\leq L\,\overline{q,p}\,\tau^{\prime}(\overline{y,z}). \tag{37}\]
Proof that Theorem 23 implies Theorem 24.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Using (32), the metric version of (35) is
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p}) \tag{38}\] \[\leq 2\,\overline{q,p}\,\tau^{\prime}\bigg{(}\max\bigg{(}\frac{\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}}{2\,\overline{q,p}},|\overline{z,p}-\overline{y,p}|\bigg{)}\bigg{)}\,.\]
Using (36) and the triangle inequality, we bound the right-hand side of (38) by
\[2\,\overline{q,p}\,\tau^{\prime}\bigg{(}\max\bigg{(}\frac{L\,\overline{y,z}}{ 2},\overline{y,z}\bigg{)}\bigg{)}. \tag{39}\]
With Lemma 16 (ii) and \(L/2\geq 1\), we obtain (37).
To extend the result shown for \(\tau\in\mathcal{S}_{0}^{3}\) to \(\tau\in\mathcal{S}\), we use Lemma 18 and Proposition 11 to remove the smoothness requirement, and Lemma 4 (i) and Proposition 12 (ii) to remove the requirement \(\tau(0)=0\).
Theorem 1 follows from Theorem 24 by fixing \(L=2\). The remaining part of the main proof, i.e., the proof of Theorem 23, is given in the appendix section B. Figure 8 gives an overview of how the different intermediate results presented above and below relate to each other.
## 5 Implications and Discussion
With Theorem 1, we have shown a new set of rather fundamental inequalities in metric spaces that are related to the Cauchy-Schwarz and triangle inequalities. In this section, we discuss some immediate consequences of the main result.
### Symmetries
Figure 9 illustrates the symmetries in quadruple inequalities. Sides of the same color contribute essentially in the same way in the inequality. In (3), the diagonals of Figure 9 come up in non-exchangeable terms. But they can be swapped in the assumption (5). Thus, if the conditions of Theorem 1 are fulfilled, we have
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p})\leq 2\min(\overline{q,p}\,\tau^{\prime}(\overline{y,z}),\overline{y,z}\,\tau^{\prime}(\overline{q,p}))\,. \tag{40}\]
Furthermore, swapping \(y\) and \(z\) or \(q\) and \(p\) does not influence the right-hand side but changes the sign on the left-hand side. Thus, assuming
\[\big{|}\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}\big{|}\leq 2\,\overline{q,p}\;\overline{y,z} \tag{41}\]
Figure 8: Overview of theorems and lemmas in the main proof.
Figure 9: Four points as a quadrilateral. The sides of the quadrilateral show up on the left of the quadruple inequalities (the terms of opposite sides have the same sign); the diagonals form the right-hand side.
we get
\[|\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p})|\leq 2\min(\overline{q,p}\,\tau^{\prime}(\overline{y,z}),\overline{y,z}\,\tau^{\prime}(\overline{q,p}))\,. \tag{42}\]
Further bounds of the right-hand side are shown in the next subsection.
### Bounds for the Right-Hand Side
**Corollary 25.** Let \(\tau\in\mathcal{S}_{0}\). Let \((\mathcal{Q},d)\) be a metric space. Let \(y,z,q,p\in\mathcal{Q}\). Assume
\[\overline{y,q}^{2}-\overline{y,p}^{2}-\overline{z,q}^{2}+\overline{z,p}^{2}\leq 2\,\overline{q,p}\;\overline{y,z}\,. \tag{43}\]
Then the value
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p}) \tag{44}\]
is bounded from above by all of the following values:
1. \(2\min(\overline{q,p},\overline{y,z})\,\tau^{\prime}(\max(\overline{q,p},\overline{y,z}))\),
2. \(2\,\overline{q,p}^{\beta}\,\overline{y,z}^{1-\beta}\,\tau^{\prime}\big{(}\overline{q,p}^{1-\beta}\,\overline{y,z}^{\beta}\big{)}\) for all \(\beta\in[0,1]\),
3. \(2\,(\beta\,\overline{q,p}+(1-\beta)\overline{y,z})\,\tau^{\prime}((1-\beta)\overline{q,p}+\beta\overline{y,z})\) for all \(\beta\in[0,1]\),
4. \(2\sqrt{\overline{q,p}\,\overline{y,z}}\,\tau^{\prime}\big{(}\sqrt{\overline{q,p}\,\overline{y,z}}\big{)}\),
5. \((\overline{q,p}+\overline{y,z})\,\tau^{\prime}\Big{(}\frac{\overline{q,p}+\overline{y,z}}{2}\Big{)}\),
6. \(4\tau\big{(}\sqrt{\overline{q,p}\,\overline{y,z}}\big{)}\),
7. \(4\tau\Big{(}\frac{\overline{q,p}+\overline{y,z}}{2}\Big{)}\),
8. \(2\tau(\overline{q,p})+2\tau(\overline{y,z})\).
Proof.: We first apply Theorem 1 twice, to \((y,z,q,p)\) and to \((q,p,y,z)\), to obtain
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p})\leq 2\min(\overline{q,p},\overline{y,z})\,\tau^{\prime}(\max(\overline{q,p},\overline{y,z}))\,. \tag{45}\]
This shows (i). Let \(a,b\in[0,\infty)\) and \(\beta\in[0,1]\). Then, by Lemma 16 (ii) and the weighted arithmetic-geometric mean inequality,
\[\min(a,b)\tau^{\prime}(\max(a,b)) \leq a^{\beta}b^{1-\beta}\tau^{\prime}(a^{1-\beta}b^{\beta})\] \[\leq(\beta a+(1-\beta)b)\,\tau^{\prime}((1-\beta)a+\beta b)\,.\]
Applying these inequalities to (45) shows (ii) and (iii), and their special cases (iv) and (v), where \(\beta=1/2\). By Lemma 15 (i) with \(y=0\), we have
\[x\tau^{\prime}(x)\leq 2\tau(x) \tag{46}\]
for all \(x\in[0,\infty)\). Thus,
\[\min(a,b)\tau^{\prime}(\max(a,b)) \leq\sqrt{ab}\,\tau^{\prime}(\sqrt{ab})\] \[\leq 2\tau(\sqrt{ab})\,,\]
which yields (vi). The remaining parts (vii) and (viii), can be obtained using (46) and Jensen's
inequality:
\[\min(a,b)\tau^{\prime}(\max(a,b)) \leq\frac{a+b}{2}\tau^{\prime}\bigg{(}\frac{a+b}{2}\bigg{)}\] \[\leq 2\tau\bigg{(}\frac{a+b}{2}\bigg{)}\] \[\leq\tau(a)+\tau(b)\,.\]
### Corollaries for Special Cases
We apply Theorem 1, Theorem 3, Proposition 7, and Corollary 25 for a triple of points (a quadruple of points with two identical points), on the real line, and for parallelograms in inner product spaces to demonstrate the main results.
**Corollary 26** (For three points).: Let \((\mathcal{Q},d)\) be a metric space. Let \(y,q,p\in\mathcal{Q}\).
1. Let \(\tau\in\mathcal{S}_{0}\). Then \[\tau(\overline{y,q})-\tau(\overline{y,p})+\tau(\overline{q,p})\leq 2\,\overline{ q,p}\,\tau^{\prime}(\overline{y,q})\,.\] (47)
2. Let \(\alpha\in[1,2]\). Then \[\overline{y,q}^{\alpha}-\overline{y,p}^{\alpha}+\overline{q,p}^{\alpha}\leq\alpha 2^{2-\alpha}\,\overline{q,p}\;\overline{y,q}^{\alpha-1}\,. \tag{48}\]
Proof.: By the triangle inequality, \(|\overline{y,q}-\overline{q,p}|\leq\overline{y,p}\). After squaring this inequality, we obtain (5) with \(z=q\), which is
\[\overline{y,q}^{2}-\overline{y,p}^{2}+\overline{q,p}^{2}\leq 2\,\overline{q,p}\;\overline{y,q}\,. \tag{49}\]
Thus, Theorem 1 implies (47) and Proposition 7 implies (48).
**Corollary 27** (On the real line).: Let \(a,b,c\in[0,\infty)\).
1. Let \(\tau\in\mathcal{S}_{0}\). Then \[\tau(a+b+c)-\tau(a+b)-\tau(b+c)+\tau(b) \leq 2c\tau^{\prime}(a)\] (50) \[\tau(a+b+c)-\tau(a+b)-\tau(b+c)+\tau(b) \leq\tau(c)+\tau(a)\,.\] (51)
2. Let \(\alpha\in[1,2]\). Then \[(a+b+c)^{\alpha}-(a+b)^{\alpha}-(b+c)^{\alpha}+b^{\alpha} \leq\alpha 2^{2-\alpha}ca^{\alpha-1}\] (52) \[(a+b+c)^{\alpha}-(a+b)^{\alpha}-(b+c)^{\alpha}+b^{\alpha} \leq c^{\alpha}+a^{\alpha}\,.\] (53)
Proof.: In \((\mathbb{R},|\cdot-\cdot|)\), (5) is fulfilled. Choose \(y=0\), \(z=a\), \(p=a+b\), \(q=a+b+c\) and apply Theorem 1 to obtain (50). For (51), apply Theorem 3. To get (52), use Proposition 7. Inequality (53) is (51) with \(\tau=\tau_{\alpha}=(x\mapsto x^{\alpha})\).
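As a concrete instance of (52), take \(\alpha=2\) and \(a=b=c=1\); the bound is then attained with equality:

\[(1+1+1)^{2}-(1+1)^{2}-(1+1)^{2}+1^{2}=2=2\cdot 2^{0}\cdot 1\cdot 1^{1}\,.\]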
**Corollary 28** (In inner product spaces).: Let \((V,\langle\cdot\,,\,\cdot\rangle)\) be an inner product space with induced norm \(\|\cdot\|\). Let \(u,v\in V\).
1. Let \(\tau\in\mathcal{S}_{0}\). Then \[\tau(\|u\|)-\tau(\|v\|)\leq\|u-v\|\tau^{\prime}(\|u+v\|)\,.\] (54)
2. Let \(\alpha\in[1,2]\). Then \[\|u\|^{\alpha}-\|v\|^{\alpha}\leq\alpha 2^{1-\alpha}\|u-v\|\|u+v\|^{\alpha-1}\,.\] (55)
3. Let \(\tau\in\mathcal{S}_{0}\). Then \[\tau(\|u+v\|)+\tau(\|u-v\|)\leq 2\tau(\|u\|)+2\tau(\|v\|)\,.\] (56)
4. Let \(\alpha\in[1,2]\). Then \[\|u+v\|^{\alpha}+\|u-v\|^{\alpha}\leq 2\|u\|^{\alpha}+2\|v\|^{\alpha}\,.\] (57)
Proof.: For the first two parts, set \(y=0\), \(z=u+v\), \(q=u\), \(p=v\); for the last two parts, set \(y=0\), \(z=v\), \(q=u+v\), \(p=u\). Then apply Theorem 1, Theorem 3, and Proposition 7.
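These bounds are easy to sanity-check numerically; a minimal sketch with the pseudo-Huber function (names ours):

```python
import numpy as np

rng = np.random.default_rng(2)
tau = lambda x: np.sqrt(1.0 + x**2) - 1.0   # pseudo-Huber, an element of S_0
dtau = lambda x: x / np.sqrt(1.0 + x**2)
n = np.linalg.norm

for _ in range(10_000):
    u, v = rng.normal(size=(2, 3))
    assert tau(n(u)) - tau(n(v)) <= n(u - v) * dtau(n(u + v)) + 1e-12          # (54)
    assert tau(n(u + v)) + tau(n(u - v)) <= 2*tau(n(u)) + 2*tau(n(v)) + 1e-12  # (56)
print("inequalities (54) and (56) held on all samples")
```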
**Remark 29**.: Recall the _parallelogram law_: Let \((V,\langle\cdot,\,\cdot\rangle)\) be an inner product space with induced norm \(\|\cdot\|\). Let \(u,v\in V\). Then
\[\|u+v\|^{2}+\|u-v\|^{2}=2\|u\|^{2}+2\|v\|^{2}\,. \tag{58}\]
In particular, (57) with \(\alpha=2\) holds with equality. Thus, we can say that Corollary 28 generalizes the parallelogram law.
### Quadruple Constants and Further Research
We summarize our findings on the quadruple inequality and quadruple constant for symmetric and non-symmetric right-hand side, and for power functions as well as nondecreasing, convex functions with concave derivative.
Let \(A\subseteq(0,\infty)^{6}\) be the set of points \(x=(x_{1},\ldots,x_{6})\) such that there is a metric space \((\{y_{x},z_{x},q_{x},p_{x}\},d)\) with
\[x_{1}=\overline{y_{x},q_{x}}\,,\quad x_{2}=\overline{y_{x},p_{x}}\,,\quad x_{ 3}=\overline{z_{x},q_{x}}\,,\quad x_{4}=\overline{z_{x},p_{x}}\,,\quad x_{5}= \overline{q_{x},p_{x}}\,,\quad x_{6}=\overline{y_{x},z_{x}} \tag{59}\]
that fulfills (5). For \(\alpha\in(0,\infty)\), define
\[L_{\alpha} :=\sup_{x\in A}\frac{\overline{y_{x},q_{x}}^{\alpha}-\overline{y_{x},p_{x}}^{\alpha}-\overline{z_{x},q_{x}}^{\alpha}+\overline{z_{x},p_{x}}^{\alpha}}{\overline{q_{x},p_{x}}\,\overline{y_{x},z_{x}}^{\alpha-1}}\,,\] \[K_{\alpha} :=\sup_{x\in A}\frac{\overline{y_{x},q_{x}}^{\alpha}-\overline{y_{x},p_{x}}^{\alpha}-\overline{z_{x},q_{x}}^{\alpha}+\overline{z_{x},p_{x}}^{\alpha}}{\big{(}\sqrt{\overline{q_{x},p_{x}}\,\overline{y_{x},z_{x}}}\big{)}^{\alpha}}\,,\] \[J_{\alpha} :=\sup_{x\in A}\frac{\overline{y_{x},q_{x}}^{\alpha}-\overline{y_{x},p_{x}}^{\alpha}-\overline{z_{x},q_{x}}^{\alpha}+\overline{z_{x},p_{x}}^{\alpha}}{\overline{q_{x},p_{x}}^{\alpha}+\overline{y_{x},z_{x}}^{\alpha}}\,.\]
Furthermore, define
\[L_{\mathcal{S}_{0}} :=\sup_{\tau\in\mathcal{S}_{0}}\sup_{x\in A}\frac{\tau(\overline{y_{x},q_{x}})-\tau(\overline{y_{x},p_{x}})-\tau(\overline{z_{x},q_{x}})+\tau(\overline{z_{x},p_{x}})}{\overline{q_{x},p_{x}}\,\tau^{\prime}(\overline{y_{x},z_{x}})}\,,\] \[K_{\mathcal{S}_{0}} :=\sup_{\tau\in\mathcal{S}_{0}}\sup_{x\in A}\frac{\tau(\overline{y_{x},q_{x}})-\tau(\overline{y_{x},p_{x}})-\tau(\overline{z_{x},q_{x}})+\tau(\overline{z_{x},p_{x}})}{\tau\big{(}\sqrt{\overline{q_{x},p_{x}}\,\overline{y_{x},z_{x}}}\big{)}}\,,\] \[J_{\mathcal{S}_{0}} :=\sup_{\tau\in\mathcal{S}_{0}}\sup_{x\in A}\frac{\tau(\overline{y_{x},q_{x}})-\tau(\overline{y_{x},p_{x}})-\tau(\overline{z_{x},q_{x}})+\tau(\overline{z_{x},p_{x}})}{\tau(\overline{q_{x},p_{x}})+\tau(\overline{y_{x},z_{x}})}\,.\]
**Proposition 30**.:
1. \(L_{\alpha}=\alpha 2^{2-\alpha}\) _for_ \(\alpha\in[1,2]\) _and_ \(L_{\alpha}=\infty\) _for_ \(\alpha\in(0,\infty)\setminus[1,2]\)_._
2. \(K_{\alpha}=2\) _for_ \(\alpha\in(0,1]\)_,_ \(K_{\alpha}\in[2,\alpha 2^{2-\alpha}]\) _for_ \(\alpha\in[1,2]\)_,_ \(K_{\alpha}=\infty\) _for_ \(\alpha\in(2,\infty)\)_._
3. \(J_{\alpha}=1\) _for_ \(\alpha\in(0,1]\)_,_ \(J_{\alpha}\in[1,\alpha 2^{1-\alpha}]\) _for_ \(\alpha\in[1,2]\)_,_ \(J_{\alpha}=\infty\) _for_ \(\alpha\in(2,\infty)\)_._
4. \(L_{\mathcal{S}_{0}}=2\)_._
5. \(K_{\mathcal{S}_{0}}\in[2,4]\)_._
6. \(J_{\mathcal{S}_{0}}\in[1,2]\)_._
Proof.: From Proposition 7, we know \(L_{\alpha}=\alpha 2^{2-\alpha}\) for \(\alpha\in[1,2]\) and \(L_{\alpha}=\infty\) for \(\alpha\in(0,\infty)\setminus[1,2]\). As direct consequences, we obtain \(K_{\alpha}\leq\alpha 2^{2-\alpha}\) and \(J_{\alpha}\leq\alpha 2^{1-\alpha}\) for \(\alpha\in[1,2]\).
If we set \(y=p\) and \(z=q\) and assume \(\overline{y,q}=a\in(0,\infty)\), we have
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p}) =2\tau(a)\,,\] \[\tau\big{(}\sqrt{\overline{q,p}\,\overline{y,z}}\big{)} =\tau(a)\,,\] \[\tau(\overline{q,p})+\tau(\overline{y,z}) =2\tau(a)\,,\]
for any function \(\tau\colon(0,\infty)\to\mathbb{R}\) with \(\tau(0)=0\). Thus, \(K_{\alpha},K_{\mathcal{S}_{0}}\geq 2\), \(J_{\alpha},J_{\mathcal{S}_{0}}\geq 1\).
If \(\tau\) is metric preserving, like \(\tau_{\alpha}=(x\mapsto x^{\alpha})\) with \(\alpha\in(0,1]\), then
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau( \overline{z,p})\leq 2\min(\tau(\overline{q,p}),\tau(\overline{y,z})). \tag{60}\]
This shows \(K_{\alpha}\leq 2\) and \(J_{\alpha}\leq 1\) for \(\alpha\in(0,1]\).
Let \(y=(0,0)\), \(z=(0,\epsilon)\), \(q=(1,\epsilon)\), \(p=(1,0)\) in the Euclidean plane \(\mathbb{R}^{2}\), see Figure 10. Then
\[\tau(\overline{y,q})-\tau(\overline{y,p})-\tau(\overline{z,q})+\tau(\overline{z,p}) =2\tau(\sqrt{\epsilon^{2}+1})-2\tau(1)\,,\] \[\tau\big{(}\sqrt{\overline{q,p}\,\overline{y,z}}\big{)} =\tau(\epsilon)\,,\] \[\tau(\overline{q,p})+\tau(\overline{y,z}) =2\tau(\epsilon)\,,\]
for any function \(\tau\colon[0,\infty)\to\mathbb{R}\). Assume \(\tau(0)=0\), \(\tau^{\prime}(1)>0\), and \(\lim_{\epsilon\searrow 0}\frac{\epsilon}{\tau^{\prime}(\epsilon)}=\infty\). Then, by
\begin{table}
\begin{tabular}{c||c|c|c|c} \(\bullet\) & \(\alpha\in(0,1]\) & \(\alpha\in[1,2]\) & \(\alpha\in(2,\infty)\) & \(\mathcal{S}_{0}\) \\ \hline\hline \(L_{\bullet}\) & \(\infty\) & \(\alpha 2^{2-\alpha}\) & \(\infty\) & \(2\) \\ \(K_{\bullet}\) & \(2\) & \([2,\alpha 2^{2-\alpha}]\) & \(\infty\) & \([2,4]\) \\ \(J_{\bullet}\) & \(1\) & \([1,\alpha 2^{1-\alpha}]\) & \(\infty\) & \([1,2]\) \\ \end{tabular}
\end{table}
Table 1: Range of quadruple constants shown in Proposition 30.
Figure 10: Construction in the proof of Proposition 30.
l'Hôpital's rule,
\[\lim_{\epsilon\searrow 0}\frac{\tau(\sqrt{\epsilon^{2}+1})-\tau(1)}{\tau(\epsilon)}= \lim_{\epsilon\searrow 0}\frac{\epsilon\tau^{\prime}(\sqrt{\epsilon^{2}+1})}{ \sqrt{\epsilon^{2}+1}\,\tau^{\prime}(\epsilon)}=\infty\,. \tag{61}\]
In particular, \(K_{\alpha}=\infty\) and \(J_{\alpha}=\infty\) for \(\alpha\in(2,\infty)\).
By Theorem 1 and Corollary 25, we have \(L_{\mathcal{S}_{0}}\leq 2\), \(K_{\mathcal{S}_{0}}\leq 4\), and \(J_{\mathcal{S}_{0}}\leq 2\). As here we take the supremum over \(\tau\in\mathcal{S}_{0}\), we also get \(L_{\mathcal{S}_{0}}\geq 2\), e.g., for \(\tau=\tau_{1}=(x\mapsto x)\).
**Remark 31**.: We have \(\alpha 2^{1-\alpha}\in[1,\frac{2}{\mathrm{e}\ln(2)}]\) for \(\alpha\in[1,2]\) and \(\frac{2}{\mathrm{e}\ln(2)}\approx 1.06\). The maximum is attained at \(\alpha=\ln(2)^{-1}\approx 1.44\).
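For reference, the calculation behind Remark 31, using \(2^{1/\ln 2}=\mathrm{e}\):

\[\frac{\mathrm{d}}{\mathrm{d}\alpha}\,\alpha 2^{1-\alpha}=2^{1-\alpha}(1-\alpha\ln 2)=0\iff\alpha=\frac{1}{\ln 2}\,,\qquad\frac{1}{\ln 2}\,2^{1-1/\ln 2}=\frac{2}{\mathrm{e}\ln 2}\,.\]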
For future work it remains to find the precise values of \(K_{\alpha}\) and \(J_{\alpha}\) for \(\alpha\in[1,2]\), and of \(K_{\mathcal{S}_{0}}\) and \(J_{\mathcal{S}_{0}}\). Furthermore, it would be interesting to extend the result for an explicit form of the quadruple constant from the functions \(\tau_{\alpha}=(x\mapsto x^{\alpha})\), where we have \(L_{\tau_{\alpha}}^{*}=2^{2-\alpha}\) for \(\alpha\in[1,2]\), to functions \(\tau\in\mathcal{S}\), where we so far know \(L_{\tau}^{*}\in[1,2]\).
## Appendix A Proof of Theorem 3
The proof is inspired by the proofs of [1, Proposition 4.1.1, Proposition 4.1.2].
For any four points \(y,z,q,p\in V\), we have
\[\|y-q\|^{2}-\|y-p\|^{2}-\|z-q\|^{2}+\|z-p\|^{2}=2\,\langle y-z\,,\,p-q\rangle \leq\|q-p\|^{2}+\|y-z\|^{2}\,. \tag{62}\]
Let \(u,v\in V\). Consider a parallelogram with vertices \((0,(u-v)/2,u,(u+v)/2)\). It has the diagonals \(u\) and \(v\) and the largest diagonal is not smaller than the largest side. As \(\tau\in\mathcal{S}_{0}\), \(\tau_{\surd}\) is nonnegative, nondecreasing, and concave, see Lemma 32. Thus, we can apply Lemma 47 to
\[x_{1}=x_{2}=\left\|\frac{u-v}{2}\right\|^{2}\,,\quad x_{3}=x_{4}=\left\|\frac{ u+v}{2}\right\|^{2}\,,\quad x_{5}=\|u\|^{2}\,,\quad x_{6}=\|v\|^{2}\,, \tag{63}\]
where \(x_{1}+x_{2}+x_{3}+x_{4}\geq x_{5}+x_{6}\) is ensured by (62). We obtain
\[\tau(\|u\|)+\tau(\|v\|)\leq 2\tau\bigg{(}\bigg{\|}\frac{u-v}{2}\bigg{\|} \bigg{)}+2\tau\bigg{(}\bigg{\|}\frac{u+v}{2}\bigg{\|}\bigg{)}\,\,. \tag{64}\]
To extend the result from parallelograms to any quadrilateral, we note that \(\tau\) is nondecreasing and convex, and apply Lemma 46: For every \(x\in V\),
\[2\tau\bigg{(}\bigg{\|}\frac{u-v}{2}\bigg{\|}\bigg{)} \leq\tau(\|x\|)+\tau(\|u-v-x\|)\,\,, \tag{65}\] \[2\tau\bigg{(}\bigg{\|}\frac{u+v}{2}\bigg{\|}\bigg{)} \leq\tau(\|u-x\|)+\tau(\|v+x\|)\,\,. \tag{66}\]
By appropriate choice of \(u,v,x\) for a given quadrilateral with vertices \(y,z,q,p\), see Figure 11, we have shown
\[\tau(\|y-q\|)+\tau(\|z-p\|)\leq\tau(\|q-p\|)+\tau(\|y-z\|)+\tau(\|y-p\|)+\tau( \|z-q\|) \tag{67}\]
and finished the proof of Theorem 3.
## Appendix B Proof of Theorem 23
If we want to show \(f(x)\leq 0\) for all \(x\in A\), where \(A\subseteq\mathbb{R}^{n}\), for a continuous function \(f\colon A\to\mathbb{R}\), it is enough to prove the inequality on a dense subset of \(A\). We use this fact in the following. If we write an expression with a quotient, we silently restrict the domains of the real parameters in all statements about this expression to a domain on which the denominator is not \(0\). The restricted domain will always be dense in the unrestricted domain.
### Summary of Properties
Recall \(\tau_{\surd}(x)=\tau(\sqrt{x})\). The next lemma shows some simple properties of \(\tau_{\surd}\) and its derivatives.
**Lemma 32**.: Let \(\tau\in\mathcal{S}_{0}\).
* Then \(\tau_{\surd}\) is nonnegative, nondecreasing, and concave.
* If \(\tau_{\surd}\) is differentiable, then \(\tau_{\surd}^{\prime}\) is nonnegative and nonincreasing.
* If \(\tau_{\surd}\) is twice differentiable, then \(\tau_{\surd}^{\prime\prime}\) is nonpositive.
Proof.: As \(\tau\) is nonnegative, so is \(\tau_{\surd}\). We have
\[\tau_{\surd}^{\prime}(x) =\frac{\tau^{\prime}(\sqrt{x})}{2\sqrt{x}}\,,\] \[\tau_{\surd}^{\prime\prime}(x) =\frac{\tau^{\prime\prime}(\sqrt{x})-\frac{\tau^{\prime}(\sqrt{x})}{\sqrt{x}}}{4x}\,.\]
As \(\tau^{\prime}\) is nonnegative, so is \(\tau_{\surd}^{\prime}\). Thus, \(\tau_{\surd}\) is nondecreasing. Furthermore, as \(\tau^{\prime\prime}\) is nonincreasing,
\[\tau^{\prime}(\sqrt{x})-\tau^{\prime}(0)=\int_{0}^{\sqrt{x}}\tau^{\prime\prime}(u)\mathrm{d}u\geq\sqrt{x}\,\tau^{\prime\prime}(\sqrt{x})\,. \tag{68}\]
Thus, with \(\tau^{\prime}(0)\geq 0\), we obtain \(\tau^{\prime}(\sqrt{x})\geq\sqrt{x}\,\tau^{\prime\prime}(\sqrt{x})\). Hence, \(\tau_{\surd}^{\prime\prime}\) is nonpositive, \(\tau_{\surd}^{\prime}\) is nonincreasing, and \(\tau_{\surd}\) is concave.
Properties of \(\tau\in\mathcal{S}_{0}^{3}\) (Lemma 13) and the corresponding \(\tau_{\surd}\) (Lemma 32) are summarized in Table 2 for reference. There and in the proof below, we use the following shorthand notation for properties of functions \(f\colon[0,\infty)\to\mathbb{R}\):
* \(f\geq 0\): \(f\) is nonnegative.
* \(f\nearrow\): \(f\) is nondecreasing.
* \(f\searrow\): \(f\) is nonincreasing.
* \(f\frown\): \(f\) is concave.
Figure 11: Four points in a vector space \(V\). Their relative position is described by three vectors \(u,v,x\in V\).
### Lemma 33: Elimination of \(r\)
We will show that the following lemma implies Theorem 23.
**Lemma 33** (Elimination of \(r\)).: Let \(\tau\in\mathcal{S}_{0}^{3}\).
1. For all \(a,b,c\in[0,\infty)\), \(s\in[-1,\min(1,2\frac{a}{c}-1)]\), we have \[\tau(a)-\tau(c)-\tau(|a-b|)+\tau_{\surd}(c^{2}-2scb+b^{2})\leq 2b\tau^{\prime}(a- sc)\enspace.\] (69)
2. For all \(a,b,c\in[0,\infty)\) with \(a\geq c\), we have \[\tau(a)-\tau(c)-\tau_{\surd}\big{(}(a-b)^{2}+4cb\big{)}+\tau(c+b)\leq 2b\tau^{\prime}(a-c)\,. \tag{70}\]
#### B.2.1 Proof that Lemma 33 implies Theorem 23
For this proof, we first show some auxiliary lemmas. We distinguish the cases \(ra-sc\leq|a-c|\) and \(ra-sc\geq|a-c|\) as well as \(a\geq c\) and \(c\geq a\). Some trivial implications of these cases are recorded in following lemma.
**Lemma 34**.: Let \(a,b,c\geq 0\), \(r,s\in[-1,1]\). Then
\[ra-sc\geq a-c \Leftrightarrow s\leq(r-1)\frac{a}{c}+1 \Leftrightarrow r\geq(s-1)\frac{c}{a}+1\,,\] \[ra-sc\geq c-a \Leftrightarrow s\leq(r+1)\frac{a}{c}-1 \Leftrightarrow r\geq(s+1)\frac{c}{a}-1\,.\]
Denote
\[\ell_{\tau}(a,b,c,r,s):=\tau(a)-\tau(c)-\tau_{\surd}\big{(}a^{2}-2rab+b^{2} \big{)}+\tau_{\surd}\big{(}c^{2}-2scb+b^{2}\big{)}\,\] \[F_{\tau}(a,b,c,r,s):=\ell_{\tau}(a,b,c,r,s)-2b\tau^{\prime}(ra-sc)\.\]
For \(ra-sc\geq|a-c|\), we want to show \(F_{\tau}(a,b,c,r,s)\leq 0\). Because of the next lemma, we can reduce the number of values of \(r\) for which we need to check this inequality.
**Lemma 35** (Convexity in \(r\)).: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\geq 0\), \(s,r\in[-1,1]\). Assume \(ra-sc\geq 0\).
Then
\[\partial_{r}^{2}F_{\tau}(a,b,c,r,s)\geq 0\,.\]
Proof.: As \(\tau_{\surd}^{\prime\prime},\tau^{\prime\prime\prime}\leq 0\), we have
\[\partial_{r}^{2}\tau_{\surd}\big{(}a^{2}-2rab+b^{2}\big{)}=4a^{2}b^{2}\tau_{\surd}^{\prime\prime}(a^{2}-2rab+b^{2})\leq 0\,,\qquad\partial_{r}^{2}\tau^{\prime}(ra-sc)=a^{2}\tau^{\prime\prime\prime}(ra-sc)\leq 0\,.\]
Thus,
\[\partial_{r}^{2}F_{\tau}(a,b,c,r,s)=-\partial_{r}^{2}\tau_{\surd}\big{(}a^{2} -2rab+b^{2}\big{)}-2b\partial_{r}^{2}\tau^{\prime}(ra-sc)\geq 0\,.\]
In the case \(|a-c|\geq ra-sc\), the right-hand side of (35) does not depend on \(r\) or \(s\). Thus, we will only need to check the inequality with the left-hand side \(\ell_{\tau}\) maximized in \(r\) and \(s\).
**Lemma 36** (Maximizing the left-hand side for \(|a-c|\geq ra-sc\)).: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\), \(r,s\in[-1,1]\). Assume \(|a-c|\geq ra-sc\).
1. If \(a\geq c\) and \(a^{2}\leq c^{2}+2ab-2cb\), then \[\ell_{\tau}(a,b,c,r,s)\leq\tau(a)-\tau(c)-\tau(|a-b|)+\tau(|c-b|)\,.\]
2. If \(a\geq c\) and \(a^{2}\geq c^{2}+2ab-2cb\), then \[\ell_{\tau}(a,b,c,r,s)\leq\tau(a)-\tau(c)-\tau_{\surd}((a-b)^{2}+4cb)+\tau(c+b)\,.\]
3. If \(a\leq c\), then \[\ell_{\tau}(a,b,c,r,s)\leq\tau(a)-\tau(c)-\tau(|a-b|)+\tau_{\surd}((c+b)^{2}-4ab)\,.\]
Proof.: As \(\tau_{\surd}\nearrow\), \(s\mapsto\ell_{\tau}(a,b,c,r,s)\searrow\) and \(r\mapsto\ell_{\tau}(a,b,c,r,s)\nearrow\), i.e., for \(s_{0},r_{0}\in[-1,1]\),
\[\max_{s\geq s_{0},r\leq r_{0}}\ell_{\tau}(a,b,c,r,s)=\ell_{\tau}(a,b,c,r_{0},s _{0})\,.\]
Case 1: \(a\geq c\). For \(r\in[-1,1]\), set \(s_{\sf min}(r):=(r-1)\frac{a}{c}+1\), cf. Lemma 34. Define
\[f(r) :=\ell_{\tau}(a,b,c,r,s_{\sf min}(r))\] \[=\tau(a)-\tau(c)-\tau_{\surd}(a^{2}-2rab+b^{2})+\tau_{\surd}(c^{2}-2rab+2ab-2cb+b^{2})\,.\]
Then
\[\frac{f^{\prime}(r)}{2ab}=\tau_{\surd}^{\prime}(a^{2}-2rab+b^{2})-\tau_{\surd}^{\prime}(c^{2}-2rab+2ab-2cb+b^{2})\,.\]
Case 1.1: \(a^{2}\leq c^{2}+2ab-2cb\). As \(\tau_{\surd}^{\prime}\searrow\), we have
\[a^{2}-2rab+b^{2} \leq c^{2}-2rab+2ab-2cb+b^{2}\,,\] \[\tau_{\surd}^{\prime}(a^{2}-2rab+b^{2}) \geq\tau_{\surd}^{\prime}(c^{2}-2rab+2ab-2cb+b^{2})\,.\]
Thus, \(f^{\prime}(r)\geq 0\) and \(f\) is maximal at \(r=r_{\sf max}=1\), with \(s_{\sf min}(r)=1\). Hence,
\[\ell_{\tau}(a,b,c,r,s)\leq f(1)=\tau(a)-\tau(c)-\tau(|a-b|)+\tau(|c-b|)\,.\]
Case 1.2: \(a^{2}\geq c^{2}+2ab-2cb\). As \(\tau_{\surd}^{\prime}\searrow\), we have
\[a^{2}-2rab+b^{2} \geq c^{2}-2rab+2ab-2cb+b^{2}\,,\] \[\tau_{\surd}^{\prime}(a^{2}-2rab+b^{2}) \leq\tau_{\surd}^{\prime}(c^{2}-2rab+2ab-2cb+b^{2})\,.\]
Thus, \(f^{\prime}(r)\leq 0\) and \(f\) is maximal at \(r=r_{\sf min}=1-2\frac{c}{a}\), with \(s_{\sf min}(r)=-1\). Hence,
\[\ell_{\tau}(a,b,c,r,s)\leq f(r_{\sf min})=\tau(a)-\tau(c)-\tau_{\surd}((a-b)^{2}+4cb)+\tau(c+b)\,.\]
Case 2: \(a\leq c\). For \(r\in[-1,1]\), set \(s_{\sf min}(r):=(r+1)\frac{a}{c}-1\), cf. Lemma 34. Define
\[f(r) :=\ell_{\tau}(a,b,c,r,s_{\sf min}(r))\] \[=\tau(a)-\tau(c)-\tau_{\surd}(a^{2}-2rab+b^{2})+\tau_{\surd}(c^{2}-2rab-2ab+2cb+b^{2})\,.\]
Then
\[\frac{f^{\prime}(r)}{2ab}=\tau_{\surd}^{\prime}(a^{2}-2rab+b^{2})-\tau_{\surd}^{\prime}(c^{2}-2rab-2ab+2cb+b^{2})\,.\]
As \(a\leq c\), we have \(a^{2}\leq c^{2}-2ab+2cb\) and thus, as \(\tau_{\surd}^{\prime}\searrow\),
\[\tau_{\surd}^{\prime}(a^{2}-2rab+b^{2})\geq\tau_{\surd}^{\prime}(c^{2}-2rab-2ab+2cb+b^{2})\,.\]
Thus, \(f^{\prime}(r)\geq 0\) and \(f\) is maximal at \(r=r_{\sf max}=1\). Hence,
\[\ell_{\tau}(a,b,c,r,s)\leq f(1)=\tau(a)-\tau(c)-\tau(|a-b|)+\tau_{\surd}((c+b)^{2}-4ab)\,.\]

### Proof of Lemma 33 (ii)

**Lemma 37**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\). Assume \(a\geq c\). Then
\[\tau(a)-\tau(c)-\tau_{\surd}((a-b)^{2}+4cb)+\tau(c+b)\leq 2b\tau^{\prime}(a-c)\,.\]
Proof.: Define
\[f(a,b,c):=\tau(a)-\tau(c)-\tau_{\surd}((a-b)^{2}+4cb)+\tau(c+b)-2b\tau^{\prime}(a-c)\,.\]
By Lemma 52 and Lemma 53, \(\partial_{b}f(a,b,c)\leq 0\). Thus, as \(b\geq 0\), we have
\[f(a,b,c)\leq f(a,0,c)=0\,.\]
### Proof of Lemma 33 (i) for \(c\geq a\geq b\geq sc\)
**Lemma 38**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\geq 0\), \(s\in[-1,1]\). Assume \((s+1)c\leq 2a\), \(c\geq a\geq b\geq sc\). Then
\[\tau(a)-\tau(c)-\tau(a-b)+\tau_{\surd}(c^{2}-2scb+b^{2})\leq 2b\tau^{\prime}(a-sc)\,.\]
Proof.: Define
\[f(a,b,c,s) :=\tau(a)-\tau(c)-\tau(a-b)+\tau_{\surd}(c^{2}-2scb+b^{2})-2b\tau^{\prime}(a-sc)\,,\] \[\partial_{b}f(a,b,c,s) =\tau^{\prime}(a-b)+2(b-sc)\tau_{\surd}^{\prime}(c^{2}-2scb+b^{2})-2\tau^{\prime}(a-sc)\,.\]
By Lemma 54, \(\partial_{b}f(a,b,c,s)\leq 0\). Thus, as \(b\geq b_{\sf min}:=\max(0,sc)\), we have \(f(a,b,c,s)\leq f(a,b_{\sf min},c,s)\). If \(s\leq 0\), then \(b_{\sf min}=0\) and
\[f(a,0,c,s)=\tau(a)-\tau(c)-\tau(a)+\tau_{\surd}(c^{2})=0\,.\]
For \(s\geq 0\), we have \(b_{\sf min}=sc\). Define
\[g(a,c,s) :=f(a,sc,c,s)=\tau(a)-\tau(c)-\tau(a-sc)+\tau_{\surd}((1-s^{2})c^{2})-2sc\tau^{\prime}(a-sc)\,, \tag{77}\] \[\partial_{a}g(a,c,s) =\tau^{\prime}(a)-\tau^{\prime}(a-sc)-2sc\tau^{\prime\prime}(a-sc)\,. \tag{78}\]
Set \(d:=sc\) and define
\[h(a,d):=\tau^{\prime}(a)-\tau^{\prime}(a-d)-2d\tau^{\prime\prime}(a-d)= \partial_{a}g(a,c,s)\,. \tag{79}\]
Then \(\partial_{a}h(a,d)\leq 0\) by Lemma 55. As \(a\geq b\geq sc=d\), this means
\[h(a,d)\leq h(d,d)=\tau^{\prime}(d)-\tau^{\prime}(0)-2d\tau^{\prime\prime}(0)\leq-d\tau^{\prime\prime}(0)\leq 0\,,\]
where the middle estimate uses \(\tau^{\prime}(d)\leq\tau^{\prime}(0)+d\tau^{\prime\prime}(0)\), which holds as \(\tau^{\prime}\) is concave, and the last one uses \(\tau^{\prime\prime}(0)\geq 0\).
Thus, \(\partial_{a}g(a,c,s)=h(a,d)\leq 0\). Therefore, as \(a\geq a_{\sf min}:=\frac{s+1}{2}c\),
\[g(a,c,s) \leq g(a_{\sf min},c,s)\] \[=\tau\Big{(}\frac{s+1}{2}c\Big{)}-\tau(c)-\tau\Big{(}\frac{1-s}{2}c\Big{)}+\tau_{\surd}((1-s^{2})c^{2})-2sc\tau^{\prime}\Big{(}\frac{1-s}{2}c\Big{)}\,.\]
Set \(u:=\frac{1}{2}(c+sc)\), \(v:=\frac{1}{2}(c-sc)\) with \(0\leq v\leq u\) due to \(s\in[0,1]\), and define
\[\ell(u,v):=\tau(u)-\tau(u+v)-\tau(v)+\tau_{\surd}(4uv)-2(u-v)\tau^{\prime}(v)=g(a_{\sf min},c,s)\,. \tag{80}\]
By Lemma 56, \(\partial_{u}\ell(u,v)\leq 0\). Thus, as \(u\geq v\), we have
\[\ell(u,v) \leq\ell(v,v)\] \[=\tau(v)-\tau(2v)-\tau(v)+\tau_{\surd}(4v^{2})-2(v-v)\tau^{\prime}(v)\] \[=0\,.\]
Therefore, we obtain \(f(a,b,c,s)\leq f(a,sc,c,s)=g(a,c,s)\leq g(a_{\sf min},c,s)=\ell(u,v)\leq 0\), and we have finally shown \(f(a,b,c,s)\leq 0\).
### Proof of Lemma 33 (i) for \(c\geq a\geq b\), \(b\leq sc\)
**Lemma 39**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\geq 0\), \(s\in[-1,1]\). Assume \(a\leq c\), \((s+1)c\leq 2a\), \(a\geq b\), and \(b\leq sc\). Then
\[\tau(a)-\tau(c)-\tau(a-b)+\tau_{\surd}(c^{2}-2scb+b^{2})\leq 2b\tau^{\prime}(a- sc)\,.\]
Proof.: Define
\[f(a,b,c,s):=\tau(a)-\tau(c)-\tau(a-b)+\tau_{\surd}(c^{2}-2scb+b^{2})-2b\tau^{\prime}(a-sc)\,.\]
By Lemma 60, \(\partial_{a}f(a,b,c,s)\leq 0\). Thus, as \(a\geq a_{\sf min}:=(1+s)c/2\), we have
\[f(a,b,c,s) \leq f(a_{\sf min},b,c,s)\] \[=:g(b,c,s)\,.\]
By Lemma 40, \(\partial_{b}g(b,c,s)+\partial_{c}g(b,c,s)\leq 0\). Set \(u:=c-b\). Define \(h(b,u,s):=g(b,u+b,s)\). Then \(\partial_{b}h(b,u,s)=\partial_{b}g(b,c,s)+\partial_{c}g(b,c,s)\leq 0\). Thus, as \(b\geq 0\), we have
\[h(b,u,s)\leq h(0,u,s)=g(0,u,s)=\tau\bigg{(}\frac{1}{2}(1+s)u\bigg{)}-\tau(u)- \tau\bigg{(}\frac{1}{2}(1+s)u\bigg{)}+\tau_{\surd}(u^{2})=0\,.\]
Thus,
\[f(a,b,c,s)\leq g(b,c,s)=h(b,u,s)\leq 0\,.\]
**Lemma 40**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b,c\geq 0\), \(s\in[-1,1]\). Assume \(b\leq sc\). Define
\[g(b,c,s):=\tau\bigg{(}\frac{1}{2}(1+s)c\bigg{)}-\tau(c)-\tau\bigg{(}\frac{1}{2}(1+s)c-b\bigg{)}+\tau_{\surd}(c^{2}-2scb+b^{2})-2b\tau^{\prime}\bigg{(}\frac{1}{2}(1-s)c\bigg{)}\,.\]
Then
\[\partial_{b}g(b,c,s)+\partial_{c}g(b,c,s)\leq 0\,.\]
Proof.: We have
\[\partial_{b}g(b,c,s) =\tau^{\prime}\bigg{(}\frac{1}{2}(1+s)c-b\bigg{)}-2(sc-b)\tau^{ \prime}_{\surd}(c^{2}-2scb+b^{2})-2\tau^{\prime}\bigg{(}\frac{1}{2}(1-s)c \bigg{)}\,\,,\] \[\partial_{c}g(b,c,s) =\frac{1}{2}(1+s)\tau^{\prime}\bigg{(}\frac{1}{2}(1+s)c\bigg{)}- \tau^{\prime}(c)-\frac{1}{2}(1+s)\tau^{\prime}\bigg{(}\frac{1}{2}(1+s)c-b \bigg{)}+\] \[2(c-sb)\tau^{\prime}_{\surd}(c^{2}-2scb+b^{2})-(1-s)b\tau^{ \prime\prime}\bigg{(}\frac{1}{2}(1-s)c\bigg{)}\,\,.\]
Define
\[f(b,c,s):=\frac{(1-s)c}{2}\tau^{\prime}(c-b)+2(1-s)(b+c)\tau^{\prime}_{\surd}((c-b)^{2})-2\tau^{\prime}\bigg{(}\frac{(1-s)c}{2}\bigg{)}-\frac{(1-s)c}{2}\tau^{\prime}(c)-b(1-s)\tau^{\prime\prime}\bigg{(}\frac{c-b}{2}\bigg{)}\,,\]
the right-hand side of Lemma 68.
By Lemma 59, \(\partial_{u}h(u,v)\leq 0\). Thus, as \(u\geq 0\), we have
\[h(u,v)\leq h(0,v)=\tau(0)-\tau(v)+\tau(v)=0\,.\]
Thus, \(f(a,b,c,s)\leq g(a,c,s)\leq h(u,v)\leq 0\).
### Proof of Lemma 33 (i) for \(a\geq c\), \(b\leq 2sc\), \(sc\geq a-b\)
**Lemma 42**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\), \(s\in[-1,1]\). Assume \(\frac{1}{2}b\leq sc\), \(sc\geq a-b\), and \(a\geq c\). Then
\[\tau(a)-\tau(c)-\tau(a-b)+\tau_{\nearrow}(c^{2}-2scb+b^{2})\leq 2b\tau^{ \prime}\bigg{(}\frac{a-sc}{2}\bigg{)}. \tag{82}\]
Proof.: Define
\[x_{1}^{+}:=a^{2}\,,\qquad x_{2}^{+}:=c^{2}-2scb+b^{2}\,,\qquad x_{1}^{-}:=c^{2} \,,\qquad x_{2}^{-}:=(a-b)^{2}\]
to write the left-hand side of (82) as
\[\tau_{\nearrow}(x_{1}^{+})+\tau_{\nearrow}(x_{2}^{+})-\tau_{\nearrow}(x_{1}^{- })-\tau_{\nearrow}(x_{2}^{-})\,.\]
As \(a\geq c\),
\[x_{1}^{+}+x_{2}^{+}-x_{1}^{-}-x_{2}^{-}=2b\,(a-sc)\geq 0\,. \tag{83}\]
Because \(a\geq c\) and \(b\leq 2sc\), we have \(a-b\geq a-2sc\geq a-2c\geq-c\). Together with \(a-b\leq a\), we obtain \(x_{1}^{+}\geq\max(x_{1}^{-},x_{2}^{-})\).
Case 1: \(x_{2}^{+}\geq\min(x_{1}^{-},x_{2}^{-})\): By first applying Lemma 49 and then using (83), we obtain
\[\tau_{\nearrow}(x_{1}^{+})+\tau_{\nearrow}(x_{2}^{+})-\tau_{\nearrow}(x_{1}^{ -})-\tau_{\nearrow}(x_{2}^{-})\leq 2\tau_{\nearrow}(b\,(a-sc))\.\]
Case 2: \(x_{2}^{+}\leq\min(x_{1}^{-},x_{2}^{-})\): By (83), we have
\[x_{1}^{+}+x_{2}^{+}\geq x_{1}^{-}+x_{2}^{-}\,. \tag{84}\]
By first applying Lemma 50, then using (83), and finally \(\tau_{\nearrow}\frown\), we obtain
\[\tau_{\nearrow}(x_{1}^{+})+\tau_{\nearrow}(x_{2}^{+})-\tau_{ \nearrow}(x_{1}^{-})-\tau_{\nearrow}(x_{2}^{-})\] \[\leq\tau_{\nearrow}(2b\,(a-sc))\] \[\leq 2\tau_{\nearrow}(b\,(a-sc))\.\]
Finally: The condition \(0\leq a-sc\leq b\) together with Lemma 17 implies
\[\tau_{\nearrow}(b\,(a-sc))\leq b\tau^{\prime}\bigg{(}\frac{a-sc}{2}\bigg{)}\.\qed\]
### Proof of Lemma 33 (i) for \(a\geq c\), \(b\geq 2sc\)
**Lemma 43**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\), \(s\in[-1,1]\). Assume \(b\geq 2sc\), \(a\geq sc\). Then
\[\tau(a)-\tau(c)-\tau(|a-b|)+\tau_{\surd}(c^{2}-2scb+b^{2})\leq 2b\tau^{\prime}\bigg{(}\frac{a-sc}{2}\bigg{)}\,.\]
Proof.: Define \(f(x,b,c,s):=\tau(x)-\tau(c)-\tau(|x-b|)+\tau_{\surd}(c^{2}-2scb+b^{2})-2b\tau^{\prime}\big{(}\frac{x-sc}{2}\big{)}\). One checks that \(\partial_{x}f(x,b,c,s)\leq 0\) for \(x\geq sc\) and that \(f(sc,b,c,s)\leq 0\). Thus, \(f(x,b,c,s)\leq 0\). Then we obtain, using Lemma 15 (i) and \(\tau^{\prime}\nearrow\),
\[\tau(a)-\tau(c)-\tau(a-b)+\tau_{\surd}(c^{2}-2scb+b^{2})\leq 2b\tau^{\prime} \bigg{(}\frac{a-sc}{2}\bigg{)}\.\]
**Lemma 45**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(x,b,c\in[0,\infty)\). Assume \(b\leq 2x\), \(x+b\geq c\), \(x\leq c\). Then
\[\tau(x+b)+\tau_{\surd}(c^{2}-2xb+b^{2})\leq\tau(c)+\tau(x)+2\tau(b)\,.\]
Proof.: Define
\[f(x,b,c) :=\tau(x+b)+\tau_{\surd}(c^{2}-2xb+b^{2})-\tau(c)-\tau(x)-2\tau(b)\,,\] \[g(x,b,c) :=\partial_{x}f(x,b,c) =\tau^{\prime}(x+b)-\tau^{\prime}(x)-2b\tau_{\surd}^{\prime}(c^{2 }-2xb+b^{2})\,.\]
By Lemma 65, \(\partial_{x}g(x,b,c)\leq 0\). Thus, as \(x\geq x_{\sf min}:=\max(\frac{b}{2},c-b)\), we have \(g(x,b,c)\leq g(x_{\sf min},b,c)\). Case 1, \(x_{\sf min}=c-b\):
In this case \(c-b\geq b/2\), i.e., \(2c\geq 3b\). By Lemma 61, \(g(c-b,b,c)\leq 0\). Thus, as \(x\geq x_{\sf min}=c-b\), we have \(f(x,b,c)\leq f(c-b,b,c)\). By Lemma 62, \(f(c-b,b,c)\leq 0\). Thus, \(f(x,b,c)\leq 0\).
Case 2, \(x_{\sf min}=\frac{b}{2}\):
In this case \(c-b\leq b/2\), i.e., \(2c\leq 3b\). By Lemma 63, \(g(b/2,b,c)\leq 0\). Thus, as \(x\geq x_{\sf min}=b/2\), we have \(f(x,b,c)\leq f(b/2,b,c)\). By Lemma 64, \(f(b/2,b,c)\leq 0\). Thus, \(f(x,b,c)\leq 0\).
## Appendix C Auxiliary Results
To make the proofs presented in Appendices A and B more readable, we have extracted some of the calculations into this section.
### Merging Terms
**Lemma 46**.:
* Let \(f\colon[0,\infty)\to\mathbb{R}\). Assume \(f\) is concave. Let \(a,b\in[0,\infty)\) with \(a\geq b\). Then \(x\mapsto f(a+x)+f(b-x)\) is nonincreasing. If additionally \(f(0)\geq 0\), then \(f\) is subadditive.
* Let \(f\colon[0,\infty)\to\mathbb{R}\). Assume \(f\) is convex. Let \(a,b\in[0,\infty)\) with \(a\geq b\). Then \(x\mapsto f(a+x)+f(b-x)\) is nondecreasing.
Proof.: We prove the first part; the second part is analogous. As \(f\) is concave, we have
\[f(a) \geq\frac{a-b+x}{a-b+2x}f(a+x)+\frac{x}{a-b+2x}f(b-x)\,,\] \[f(b) \geq\frac{x}{a-b+2x}f(a+x)+\frac{a-b+x}{a-b+2x}f(b-x)\]
for \(x\in[0,b]\). Adding the two inequalities yields
\[f(a)+f(b)\geq f(a+x)+f(b-x)\,. \tag{86}\]
As this inequality also applies to \(\tilde{a}=a+x\), \(\tilde{b}=b-x\), we have that \(x\mapsto f(a+x)+f(b-x)\) is nonincreasing. Subadditivity follows by setting \(x=b\).
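As a concrete illustration of Lemma 46 (i) — our addition — take \(f(x)=\sqrt{x}\), which is concave, nondecreasing, and has \(f(0)=0\geq 0\):

```python
import math

f = math.sqrt

a, b = 5.0, 2.0
vals = [f(a + x) + f(b - x) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
assert all(u >= v for u, v in zip(vals, vals[1:]))  # x -> f(a+x) + f(b-x) nonincreasing
assert f(a + b) <= f(a) + f(b)                      # subadditivity (the x = b case)
print([round(v, 4) for v in vals])
```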
**Lemma 47**.: Let \(f\colon[0,\infty)\to\mathbb{R}\). Assume \(f(0)\geq 0\), \(f\) is nondecreasing, and \(f\) is concave. Let \(x_{1},\ldots,x_{6}\in[0,\infty)\). Assume \(\max(x_{1},x_{2},x_{3},x_{4})\leq\max(x_{5},x_{6})\) and \(x_{1}+x_{2}+x_{3}+x_{4}\geq x_{5}+x_{6}\). Then
\[f(x_{1})+f(x_{2})+f(x_{3})+f(x_{4})\geq f(x_{5})+f(x_{6})\,. \tag{87}\]
Proof.: Without loss of generality assume \(x_{1}\geq x_{2}\geq x_{3}\geq x_{4}\) and \(x_{5}\geq x_{6}\).
First consider the case \(x_{1}\geq x_{6}\). We decrease \(x_{5}\) and increase \(x_{6}\) while holding \(x_{5}+x_{6}\) constant until one \(x_{\bullet}\) on the right-hand side coincides with one \(x_{\bullet}\) on the left-hand side. By Lemma 46, this can only increase the right-hand side of (87). If \(\{x_{1},x_{2},x_{3},x_{4}\}\cap\{x_{5},x_{6}\}\neq\emptyset\), we can subtract the term with the value in the intersection from (87). The inequality of the form \(f(x_{1})+f(x_{2})+f(x_{3})\geq f(x_{1}+x_{2}+x_{3})\geq f(x_{5})\) for \(x_{5}\leq x_{1}+x_{2}+x_{3}\) is obtained using subadditivity of \(f\), see Lemma 46, and the assumption that \(f\) is nondecreasing.
Now consider the case \(x_{1}<x_{6}\). Set \(s:=(x_{5}+x_{6})/2\). Using Lemma 46, we obtain \(f(x_{5})+f(x_{6})\leq 2f(s)\). Furthermore, \(x_{1}\leq s\) and \(x_{1}+x_{2}+x_{3}+x_{4}\geq 2s\). Thus, again using Lemma 46 and the assumption that \(f\) is nondecreasing, we can increase \(x_{1}\) and \(x_{2}\) while decreasing \(x_{3}\) and \(x_{4}\) to \(0\) to get
\[f(x_{1})+f(x_{2})+f(x_{3})+f(x_{4})\geq 2f(s)+2f(0)\,. \tag{88}\]
As \(f(0)\geq 0\), we arrive at the desired result.
**Lemma 48**.: Let \(\tau\in\mathcal{S}\). Let \(a,b,c,d\in[0,\infty)\). Assume \(a\geq b\geq c\geq d\) and \(a+d\leq b+c\). Then
\(\tau^{\prime}(a)+\tau^{\prime}(d)\leq\tau^{\prime}(b)+\tau^{\prime}(c)\).
Proof.: As \(\tau^{\prime}\) is concave, Lemma 46 applies.
**Lemma 49**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a\geq b\geq 0\), \(d\geq c\geq 0\). Then
\[\tau_{\surd}(a)-\tau_{\surd}(b)-\tau_{\surd}(c)+\tau_{\surd}(d)\leq 2\tau_{ \surd}\bigg{(}\frac{1}{2}(a-b+d-c)\bigg{)}\.\]
Proof.: As \(a\geq b\), \(d\geq c\), subadditivity of \(\tau_{\surd}\) yields
\[\tau_{\surd}(a)-\tau_{\surd}(b)-\tau_{\surd}(c)+\tau_{\surd}(d)\leq\tau_{ \surd}(a-b)+\tau_{\surd}(d-c)\,.\]
Furthermore, by concavity of \(\tau_{\surd}\),
\[\frac{1}{2}\tau_{\surd}(a-b)+\frac{1}{2}\tau_{\surd}(d-c)\leq\tau_{\surd} \bigg{(}\frac{1}{2}(a-b+d-c)\bigg{)}\.\qed\]
**Lemma 50**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a\geq b\geq c\geq d\geq 0\), \(a+d\geq b+c\). Then
\[\tau_{\surd}(a)-\tau_{\surd}(b)-\tau_{\surd}(c)+\tau_{\surd}(d)\leq\tau_{ \surd}(a-b-c+d)\,.\]
Proof.: Define \(f(x,y)=\tau_{\surd}(x)+\tau_{\surd}(y)-\tau_{\surd}(x+y)\) for \(x,y\geq 0\). Then \(\partial_{x}f(x,y)=\tau_{\surd}^{\prime}(x)-\tau_{\surd}^{\prime}(x+y)\geq 0\) and similarly \(\partial_{y}f(x,y)\geq 0\). Set \(\delta:=a-b\) and \(\epsilon:=c-d\). The assumptions ensure \(\delta\geq\epsilon\geq 0\). Then,
\[f(b,\delta)\geq f(b,\epsilon)\geq f(d,\epsilon)\,.\]
Thus,
\[0 \geq f(d,\epsilon)-f(b,\delta)\] \[=\tau_{\surd}(d)+\tau_{\surd}(\epsilon)-\tau_{\surd}(d+\epsilon)- \tau_{\surd}(b)-\tau_{\surd}(\delta)+\tau_{\surd}(b+\delta)\] \[=\tau_{\surd}(d)+\tau_{\surd}(\epsilon)-\tau_{\surd}(c)-\tau_{ \surd}(b)-\tau_{\surd}(\delta)+\tau_{\surd}(a)\,.\]
With this we get
\[\tau_{\surd}(d)-\tau_{\surd}(c)-\tau_{\surd}(b)+\tau_{\surd}(a) \leq\tau_{\surd}(\delta)-\tau_{\surd}(\epsilon)\] \[\leq\tau_{\surd}(\delta-\epsilon)\] \[=\tau_{\surd}(a-b-c+d)\,.\qed\]
**Lemma 51** (Simple Merging Lemma).: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b\geq 0\), \(a,c\in\mathbb{R}\). Then
\[\tau(|a|)-\tau(|c|)-\tau(|a-b|)+\tau(|c-b|)\leq 2\left(\tau\bigg{(}\frac{a-c+ b}{2}\bigg{)}-\tau\bigg{(}\frac{|a-c-b|}{2}\bigg{)}\right)\mathds{1}_{a>c}\,.\]
Proof.: Define \(f(x):=\tau(|x|)-\tau(|x-y|)\) with \(y:=b\geq 0\). Then \(f^{\prime}(x)=\mathsf{sgn}(x)\tau^{\prime}(|x|)-\mathsf{sgn}(x-y)\tau^{\prime}(|x-y|)\). If \(x\geq y\), then \(f^{\prime}(x)=\tau^{\prime}(x)-\tau^{\prime}(x-y)\geq 0\), as \(\tau^{\prime}\nearrow\). If \(0\leq x\leq y\), then \(f^{\prime}(x)=\tau^{\prime}(x)+\tau^{\prime}(y-x)\geq 0\), as \(\tau^{\prime}\geq 0\). If \(x\leq 0\), then \(f^{\prime}(x)=-\tau^{\prime}(-x)+\tau^{\prime}(y-x)\geq 0\), as \(y-x\geq-x\) and \(\tau^{\prime}\nearrow\). Hence, \(f\) is nondecreasing. Thus, if \(a\leq c\), then
\[\tau(|a|)-\tau(|a-b|)\leq\tau(|c|)-\tau(|c-b|)\,.\]
This shows the inequality for the case \(a\leq c\).
Now assume \(a>c\). Set \(q:=a-b\) and define
\[g(b):=\tau(|q+b|)-\tau(|c|)-\tau(|q|)+\tau(|c-b|)-2\left(\tau\bigg{(}\frac{q- c}{2}+b\bigg{)}-\tau\bigg{(}\frac{q-c}{2}\bigg{)}\right)\,.\]
We have
\[g^{\prime}(b)=\mathsf{sgn}(q+b)\tau^{\prime}(|q+b|)-\mathsf{sgn}(c-b)\tau^{ \prime}(|c-b|)-2\tau^{\prime}\bigg{(}\frac{q-c}{2}+b\bigg{)}\,.\]
Case 1: \(\mathsf{sgn}(q+b)=+1\), \(\mathsf{sgn}(c-b)=+1\):
\[g^{\prime}(b)=\tau^{\prime}(q+b)-\tau^{\prime}(c-b)-2\tau^{\prime}\bigg{(} \frac{q-c}{2}+b\bigg{)}\,\,,\]
As \(\tau^{\prime}\) is concave, it is subadditive and \(\tau^{\prime}(2x)\leq 2\tau^{\prime}(x)\). Furthermore, \(q+b=a>c\geq c-b\). Thus,
\[\tau^{\prime}(q+b)-\tau^{\prime}(c-b)\leq\tau^{\prime}(q+b-c+b)\leq 2\tau^{ \prime}\bigg{(}\frac{q-c}{2}+b\bigg{)}\,\,.\]
Case 2: \(\mathsf{sgn}(q+b)=-1\), \(\mathsf{sgn}(c-b)=-1\):
\[g^{\prime}(b)=-\tau^{\prime}(-q-b)+\tau^{\prime}(b-c)-2\tau^{\prime}\bigg{(} \frac{q-c}{2}+b\bigg{)}\,\,,\]
Similarly to the first case, we have \(b-c\geq-c>-a=-q-b\) and
\[\tau^{\prime}(b-c)-\tau^{\prime}(-q-b)\leq\tau^{\prime}(b-c+q+b)\leq 2\tau^{ \prime}\bigg{(}\frac{q-c}{2}+b\bigg{)}\,\,.\]
Case 3: \(\mathsf{sgn}(q+b)=+1\), \(\mathsf{sgn}(c-b)=-1\):
\[g^{\prime}(b)=\tau^{\prime}(q+b)+\tau^{\prime}(b-c)-2\tau^{\prime}\bigg{(}\frac{q-c}{2}+b\bigg{)}\,.\]
\(\tau^{\prime}\) is concave, thus
\[\frac{1}{2}\tau^{\prime}(q+b)+\frac{1}{2}\tau^{\prime}(b-c)\leq\tau^{\prime} \bigg{(}\frac{q-c}{2}+b\bigg{)}\.\]
Case 4: \(\mathsf{sgn}(q+b)=-1\), \(\mathsf{sgn}(c-b)=+1\):
\[g^{\prime}(b)=-\tau^{\prime}(-q-b)-\tau^{\prime}(c-b)-2\tau^{\prime}\bigg{(}\frac{q-c}{2}+b\bigg{)}\,.\]
\[-\tau^{\prime}(-q-b)-\tau^{\prime}(c-b)\leq 0\,.\]
Together: In every case, we have \(g^{\prime}(b)\leq 0\) and \(g(0)=0\). Thus,
\[g(b)\leq 0\,.\]
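A numerical spot-check of the Simple Merging Lemma (our addition), again assuming the illustrative choice \(\tau(x)=x^{3/2}\):

```python
import random

tau = lambda x: x ** 1.5  # illustrative tau (assumed admissible)

random.seed(1)
for _ in range(20000):
    a, c = random.uniform(-10, 10), random.uniform(-10, 10)
    b = random.uniform(0, 10)
    lhs = tau(abs(a)) - tau(abs(c)) - tau(abs(a - b)) + tau(abs(c - b))
    rhs = 2 * (tau((a - c + b) / 2) - tau(abs(a - c - b) / 2)) if a > c else 0.0
    assert lhs <= rhs + 1e-9, (a, b, c)
print("Lemma 51 held on all sampled points")
```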
### Mechanical Proofs
The following auxiliary results consist of simple term transformations; their proofs are therefore given without further comment.
**Lemma 52**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\). Assume \(a\geq c\), \(a-b-2c\geq 0\). Then
\[2(a-b-2c)\tau^{\prime}_{\bigvee}((a-b)^{2}+4cb)+\tau^{\prime}(c+b)-2\tau^{ \prime}(a-c)\leq 0\,. \tag{89}\]
Proof.: \[2(a-b-2c)\tau^{\prime}_{\bigvee}((a-b)^{2}+4cb)+\tau^{\prime}(c+b )-2\tau^{\prime}(a-c)\] \[\tau^{\prime}\geq 0 \leq 2(a-b-2c)\tau^{\prime}_{\bigvee}((a-b)^{2}+4cb)-\tau^{\prime} (a-c)\] \[=2(a-b)\tau^{\prime}_{\bigvee}((a-b)^{2}+4cb)-4c\tau^{\prime}_{ \bigvee}((a-b)^{2}+4cb)-\tau^{\prime}(a-c)\] \[a\geq b,\tau^{\prime}_{\bigvee}\searrow \leq 2(a-b)\tau^{\prime}_{\bigvee}((a-b)^{2})-4c\tau^{\prime}_{ \bigvee}((a-b)^{2}+4cb)-\tau^{\prime}(a-c)\] \[=\tau^{\prime}(a-b)-4c\tau^{\prime}_{\bigvee}(a^{2}-b((a-2c)+(a- b-2c)))-\tau^{\prime}(a-c)\] \[a-2c\geq b\geq 0,\tau^{\prime}_{\bigvee}\searrow \leq\tau^{\prime}(a-b)-4c\tau^{\prime}_{\bigvee}(a^{2})-\tau^{ \prime}(a-c)\] \[=\tau^{\prime}(a-b)-\frac{2c}{a}\tau^{\prime}(a)-\tau^{\prime}(a-c)\] \[\tau^{\prime}\nearrow \leq\tau^{\prime}(a)-\frac{2c}{a}\tau^{\prime}(a)-\tau^{\prime}(a-c)\] \[=\left(1-\frac{2c}{a}\right)\tau^{\prime}(a)-\tau^{\prime}(a-c)\] \[a\geq 2c,\tau^{\prime}\frown \leq\tau^{\prime}\bigg{(}\left(1-\frac{2c}{a}\right)a\bigg{)}-\tau ^{\prime}(a-c)\] \[=\tau^{\prime}(a-2c)-\tau^{\prime}(a-c)\] \[\tau^{\prime}\nearrow \leq 0\,.\]
**Lemma 53**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\). Assume \(a\geq c\), \(a-b-2c\leq 0\). Then
\[-2(b+2c-a)\tau^{\prime}_{\bigvee}((a-b)^{2}+4cb)+\tau^{\prime}(c+b)-2\tau^{\prime }(a-c)\leq 0\,. \tag{90}\]
Proof.: \[-2(b+2c-a)\tau^{\prime}_{\bigvee}((a-b)^{2}+4cb)+\tau^{\prime}(c+b)-2 \tau^{\prime}(a-c)\] \[=-2(b+2c-a)\tau^{\prime}_{\bigvee}(a^{2}+b(-2a+b+4c))+\tau^{ \prime}(c+b)-2\tau^{\prime}(a-c)\] \[\leq-2(b+2c-a)\tau^{\prime}_{\bigvee}(a^{2}+b(-2a+b+4a))+\tau^{ \prime}(c+b)-2\tau^{\prime}(a-c)\] \[=-2(b+2c-a)\tau^{\prime}_{\bigvee}((a+b)^{2})+\tau^{\prime}(c+b)- 2\tau^{\prime}(a-c)\] \[=-\frac{b+2c-a}{a+b}\tau^{\prime}(a+b)+\tau^{\prime}(c+b)-2\tau^{ \prime}(a-c)\] \[a\geq c,\tau^{\prime}\nearrow \leq-\frac{b+2c-a}{a+b}\tau^{\prime}(a+b)+\tau^{\prime}(a+b)-2\tau^ {\prime}(a-c)\] \[=\frac{2(a-c)}{a+b}\tau^{\prime}(a+b)-2\tau^{\prime}(a-c)\] \[0\leq a-c\leq a+b,\tau^{\prime}\frown \leq 2\tau^{\prime}(a-c)-2\tau^{\prime}(a-c)\] \[=0\,.\]
**Lemma 54**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\geq 0\), \(s\in[-1,1]\). Assume \(a\geq b\geq sc\). Then
\[\tau^{\prime}(a-b)+2(b-sc)\tau^{\prime}_{\bigvee}(c^{2}-2scb+b^{2})-2\tau^{\prime}(a-sc)\leq 0\,.\]
Proof.: \[\tau^{\prime}(a-b)+2(b-sc)\tau^{\prime}_{\bigvee}(c^{2}-2scb+b^{2})-2\tau^{\prime}(a-sc)\] \[c\geq sc,\tau^{\prime}_{\bigvee}\searrow \leq\tau^{\prime}(a-b)+2(b-sc)\tau^{\prime}_{\bigvee}(s^{2}c^{2}-2scb+b^{2})-2\tau^{\prime}(a-sc)\] \[=\tau^{\prime}(a-b)+\tau^{\prime}(b-sc)-2\tau^{\prime}(a-sc)\] \[a\geq b\geq sc,\tau^{\prime}\nearrow \leq\tau^{\prime}(a-sc)+\tau^{\prime}(a-sc)-2\tau^{\prime}(a-sc)\] \[=0\,.\]
**Lemma 55**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,d\in[0,\infty)\). Assume \(a\geq d\). Then
\[\tau^{\prime\prime}(a)-2\tau^{\prime\prime}(d)\leq 0\,.\]
Proof.: \[\tau^{\prime\prime}(a)-2\tau^{\prime\prime}(d)\] \[\tau^{\prime\prime}\searrow d\leq a \leq-\tau^{\prime\prime}(d)\] \[\tau^{\prime\prime}\geq 0 \leq 0\,.\]
**Lemma 56**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(u,v\in[0,\infty)\). Assume \(u\geq v\). Then
\[\tau^{\prime}(u)-\tau^{\prime}(u+v)+4v\tau^{\prime}_{\sqrt[]{}}(4uv)-2\tau^{ \prime}(v)\leq 0\,.\]
Proof.: \[\tau^{\prime}(u)-\tau^{\prime}(u+v)+4v\tau^{\prime}_{\sqrt[]{}}(4uv)-2\tau^{\prime}(v)\] \[\tau^{\prime}\nearrow \leq 4v\tau^{\prime}_{\sqrt[]{}}(4uv)-2\tau^{\prime}(v)\] \[u\geq v,\tau^{\prime}_{\sqrt[]{}}\searrow \leq 4v\tau^{\prime}_{\sqrt[]{}}(4vv)-2\tau^{\prime}(v)\] \[=\tau^{\prime}(2v)-2\tau^{\prime}(v)\] \[\tau^{\prime}\frown \leq 0\,.\]
**Lemma 57**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\geq 0\), \(s\in[-1,1]\). Assume \((s+1)c\leq 2a\), \(a\leq b\). Then
\[-\tau^{\prime}(b-a)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})-2\tau^{ \prime}(a-sc)\leq 0\,.\]
Proof.: \[-\tau^{\prime}(b-a)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})-2\tau^{\prime}(a-sc)\] \[\tau^{\prime}\frown(\text{subadditivity}) \leq-\tau^{\prime}(b-a+2(a-sc))+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})\] \[=-\tau^{\prime}(b+a-2sc)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})\] \[a\geq\frac{1+s}{2}c,\tau^{\prime}\nearrow \leq-\tau^{\prime}(b+\frac{1+s}{2}c-2sc)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})\] \[=-\tau^{\prime}(b+\frac{1-s}{2}c-sc)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})\] \[1-s\geq 0,\tau^{\prime}\nearrow \leq-\tau^{\prime}(b-sc)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2scb+b^{2})\] \[sc\leq c,\tau^{\prime}_{\sqrt[]{}}\searrow \leq-\tau^{\prime}(b-sc)+2(b-sc)\tau^{\prime}_{\sqrt[]{}}((sc)^{2}-2scb+b^{2})\] \[=-\tau^{\prime}(b-sc)+\tau^{\prime}(b-sc)\] \[=0\,.\]
**Lemma 58**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,c\in[0,\infty)\), \(s\in[-1,1]\). Assume \(a\geq sc\). Then
\[\tau^{\prime}(a)+2(a-sc)\tau^{\prime}_{\sqrt[]{}}(c^{2}-2sca+a^{2})-2\tau^{ \prime}(a-sc)-2a\tau^{\prime\prime}(a-sc)\leq 0\,.\]
Proof.: \[\tau^{\prime}(a)+2(a-sc)\tau^{\prime}_{\sqrt{}}(c^{2}-2sca+a^{2})-2 \tau^{\prime}(a-sc)-2a\tau^{\prime\prime}(a-sc)\] \[a\geq sc,sc\leq c,\tau^{\prime}_{\sqrt{}}\searrow \leq\tau^{\prime}(a)+2(a-sc)\tau^{\prime}_{\sqrt{}}((sc)^{2}-2sca+a^{2})-2 \tau^{\prime}(a-sc)-2a\tau^{\prime\prime}(a-sc)\] \[=\tau^{\prime}(a)+\tau^{\prime}(a-sc)-2\tau^{\prime}(a-sc)-2a\tau^ {\prime\prime}(a-sc)\] \[=\tau^{\prime}(a)-\tau^{\prime}(a-sc)-2a\tau^{\prime\prime}(a-sc)\] \[sc\leq 2a,\tau^{\prime\prime}\geq 0 \leq\tau^{\prime}(a)-\tau^{\prime}(a-sc)-sc\tau^{\prime\prime}(a-sc)\] Lemma 14 (ii) \[\leq 0\,.\]
**Lemma 59**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(u,v\in[0,\infty)\). Then
\[\tau^{\prime}(u)-\tau^{\prime}(u+v)+4v\tau^{\prime}_{\sqrt{}}(4uv+v^{2})-2 \tau^{\prime}(v)\leq 0\,.\]
Proof.: \[\tau^{\prime}(u)-\tau^{\prime}(u+v)+4v\tau^{\prime}_{\sqrt{}}(4uv+v^{2})-2\tau^{\prime}(v)\] \[=\tau^{\prime}(u)-\tau^{\prime}(u+v)+\frac{1}{\sqrt{u/v+1/4}}\tau^{\prime}(2v\sqrt{u/v+1/4})-2\tau^{\prime}(v)\] \[2\sqrt{u/v+1/4}\geq 1,\tau^{\prime}\frown \leq\tau^{\prime}(u)-\tau^{\prime}(u+v)+\frac{2\sqrt{u/v+1/4}}{\sqrt{u/v+1/4}}\tau^{\prime}(v)-2\tau^{\prime}(v)\] \[=\tau^{\prime}(u)-\tau^{\prime}(u+v)\] \[v\geq 0,\tau^{\prime}\nearrow \leq 0\,.\]
**Lemma 60**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(a,b,c\in[0,\infty)\) and \(s\in[-1,1]\). Assume \(a\geq sc\geq b\). Then
\[\tau^{\prime}(a)-\tau^{\prime}(a-b)-2b\tau^{\prime\prime}(a-sc)\leq 0\,.\]
Proof.: \[\tau^{\prime}(a)-\tau^{\prime}(a-b)-2b\tau^{\prime\prime}(a-sc)\] \[b\leq sc,\tau^{\prime\prime}\searrow \leq\tau^{\prime}(a)-\tau^{\prime}(a-b)-2b\tau^{\prime\prime}(a-b)\] Lemma 14 (ii) \[\leq b\tau^{\prime\prime}(a-b)-2b\tau^{\prime\prime}(a-b)\] \[=-b\tau^{\prime\prime}(a-b)\] \[\tau^{\prime\prime}\geq 0 \leq 0\,.\]
**Lemma 61**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b,c\in[0,\infty)\). Assume \(2c\geq 3b\). Then
\[\tau^{\prime}(c)-\tau^{\prime}(c-b)-2b\tau^{\prime}_{\sqrt{}}(c^{2}-2(c-b)b+b ^{2})\leq 0\,.\]
Proof.: \[\tau^{\prime}(c)-\tau^{\prime}(c-b)-2b\tau^{\prime}_{\sqrt{}}(c^{2}-2(c -b)b+b^{2})\] \[=\tau^{\prime}(c)-\tau^{\prime}(c-b)-2b\tau^{\prime}_{\sqrt{}}(c^{2} -b(2c-3b))\] \[2c\geq 3b,\tau^{\prime}_{\sqrt{}}\searrow \leq\tau^{\prime}(c)-\tau^{\prime}(c-b)-2b\tau^{\prime}_{\sqrt{}}( c^{2})\] \[=\tau^{\prime}(c)-\tau^{\prime}(c-b)-\frac{b}{c}\tau^{\prime}(c)\] \[=\frac{c-b}{c}\tau^{\prime}(c)-\tau^{\prime}(c-b)\] \[\frac{c-b}{c}\in[0,1],\tau^{\prime}\frown \leq\tau^{\prime}(c-b)-\tau^{\prime}(c-b)\] \[=0\,.\]
**Lemma 62**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b,c\in[0,\infty)\). Assume \(c\geq b\). Then
\[\tau_{\sqrt{}}((c-b)^{2}+2b^{2})-\tau(c-b)-2\tau(b)\leq 0\,.\]
Proof.: \[\tau_{\sqrt{}}((c-b)^{2}+2b^{2})-\tau(c-b)-2\tau(b)\] \[\tau_{\sqrt{}}\frown\ (\text{subadditive}) \leq\tau_{\sqrt{}}((c-b)^{2})+\tau_{\sqrt{}}(2b^{2})-\tau(c-b)-2 \tau(b)\] \[=\tau_{\sqrt{}}(2b^{2})-2\tau(b)\] \[\tau_{\sqrt{}}\frown\ (\text{subadditive}) \leq\tau_{\sqrt{}}(b^{2})+\tau_{\sqrt{}}(b^{2})-2\tau(b)\] \[=0\,.\]
**Lemma 63**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b,c\in[0,\infty)\). Assume \(2c\leq 3b\). Then
\[\tau^{\prime}\bigg{(}\frac{3}{2}b\bigg{)}-\tau^{\prime}\bigg{(}\frac{1}{2}b \bigg{)}-2b\tau^{\prime}_{\sqrt{}}(c^{2})\leq 0\,.\]
Proof.: \[\tau^{\prime}\bigg{(}\frac{3}{2}b\bigg{)}-\tau^{\prime}\bigg{(}\frac{1}{2}b\bigg{)}-2b\tau^{\prime}_{\sqrt{}}(c^{2})\] \[c\leq\frac{3}{2}b,\tau^{\prime}_{\sqrt{}}\searrow \leq\tau^{\prime}\bigg{(}\frac{3}{2}b\bigg{)}-\tau^{\prime}\bigg{(}\frac{1}{2}b\bigg{)}-2b\tau^{\prime}_{\sqrt{}}\bigg{(}\bigg{(}\frac{3}{2}b\bigg{)}^{2}\bigg{)}\] \[=\tau^{\prime}\bigg{(}\frac{3}{2}b\bigg{)}-\tau^{\prime}\bigg{(}\frac{1}{2}b\bigg{)}-\frac{2}{3}\tau^{\prime}\bigg{(}\frac{3}{2}b\bigg{)}\] \[=\frac{1}{3}\tau^{\prime}\bigg{(}\frac{3}{2}b\bigg{)}-\tau^{\prime}\bigg{(}\frac{1}{2}b\bigg{)}\] \[\frac{1}{3}\leq 1,\tau^{\prime}\frown \leq\tau^{\prime}\bigg{(}\frac{1}{2}b\bigg{)}-\tau^{\prime}\bigg{(}\frac{1}{2}b\bigg{)}\] \[=0\,.\]
**Lemma 64**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b\in[0,\infty)\). Then
\[\tau(\frac{3}{2}b)-\tau(\frac{1}{2}b)-2\tau(b)\leq 0\,.\]
Proof.: \[\tau(\frac{3}{2}b)-\tau(\frac{1}{2}b)-2\tau(b)\] \[=\tau_{\surd}(\frac{9}{4}b^{2})-\tau_{\surd}(\frac{1}{4}b^{2})-2\tau(b)\] \[\tau_{\surd}\frown\ (\text{subadditive}) \leq\tau_{\surd}(\frac{8}{4}b^{2})-2\tau(b)\] \[=\tau_{\surd}(2b^{2})-2\tau(b)\] \[\tau_{\surd}\frown\ (\text{subadditive}) \leq\tau_{\surd}(b^{2})+\tau_{\surd}(b^{2})-2\tau(b)\] \[=0\,.\]
**Lemma 65**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(x,b,c\in[0,\infty)\). Assume \(x\leq c\). Then
\[\tau^{\prime\prime}(x+b)-\tau^{\prime\prime}(x)+4b^{2}\tau_{\bigvee}^{\prime \prime}(c^{2}-2xb+b^{2})\leq 0\,.\]
Proof.: \[\tau^{\prime\prime}(x+b)-\tau^{\prime\prime}(x)+4b^{2}\tau_{ \bigvee}^{\prime\prime}(c^{2}-2xb+b^{2})\] \[\tau_{\bigvee}^{\prime\prime}\leq 0 \leq\tau^{\prime\prime}(x+b)-\tau^{\prime\prime}(x)\] \[\tau^{\prime\prime}\searrow \leq 0\,.\]
**Lemma 66**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(u,c\in[0,\infty)\). Then
\[2\tau^{\prime}(u)-2\tau^{\prime}(u/2)-\frac{u}{2}\tau^{\prime\prime}(c)-u\tau^ {\prime\prime}(u/2)\leq 0\,.\]
Proof.: \[2\tau^{\prime}(u)-2\tau^{\prime}(u/2)-\frac{u}{2}\tau^{\prime \prime}(c)-u\tau^{\prime\prime}(u/2)\] \[\tau^{\prime\prime}\geq 0 \leq 2\tau^{\prime}(u)-2\tau^{\prime}(u/2)-u\tau^{\prime\prime}(u/2)\] Lemma 14 (ii) \[\leq u\tau^{\prime\prime}(u/2)-u\tau^{\prime\prime}(u/2)\] \[=0\,.\]
**Lemma 67**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(u\in[0,\infty)\). Then
\[u\tau^{\prime}(u)-2u\tau^{\prime}(u/2)\leq 0\,.\]
Proof.: \[\begin{array}{ll}&u\tau^{\prime}(u)-2u\tau^{\prime}(u/2)\\ \tau^{\prime}\frown&\leq 2u\tau^{\prime}(u/2)-2u\tau^{\prime}(u/2)\\ &=0\,.\end{array}\]
**Lemma 68**.: Let \(\tau\in\mathcal{S}_{0}^{3}\). Let \(b,c\in[0,\infty)\), \(s\in[-1,1]\). Assume \(b\leq sc\). Then
\[\frac{(1-s)c}{2}\tau^{\prime}\bigg{(}\frac{(1+s)c}{2}-b\bigg{)}+2(1-s)(b+c)\tau^{\prime}_{\sqrt{}}(c^{2}-2scb+b^{2})-2\tau^{\prime}\bigg{(}\frac{(1-s)c}{2}\bigg{)}+\] \[\quad\frac{(1+s)c}{2}\tau^{\prime}\bigg{(}\frac{(1+s)c}{2}\bigg{)}-c\tau^{\prime}(c)-b(1-s)\tau^{\prime\prime}\bigg{(}\frac{(1-s)c}{2}\bigg{)}\] \[\leq\frac{(1-s)c}{2}\tau^{\prime}(c-b)+2(1-s)(b+c)\tau^{\prime}_{\sqrt{}}((c-b)^{2})-2\tau^{\prime}\bigg{(}\frac{(1-s)c}{2}\bigg{)}\] \[\quad-\frac{(1-s)c}{2}\tau^{\prime}(c)-b(1-s)\tau^{\prime\prime}\bigg{(}\frac{c-b}{2}\bigg{)}\,.\]
Proof.: Apply
\[\tau^{\prime}((s+1)c/2-b) \leq\tau^{\prime}(c-b)\,, s\leq 1,\tau^{\prime}\nearrow,\] \[\tau^{\prime}_{\sqrt{}}(c^{2}-2scb+b^{2}) \leq\tau^{\prime}_{\sqrt{}}((c-b)^{2})\,, s\leq 1,\tau^{\prime}_{\sqrt{}}\searrow,\] \[\tau^{\prime}((s+1)c/2) \leq\tau^{\prime}(c)\,, s\leq 1,\tau^{\prime}\nearrow,\] \[-\tau^{\prime\prime}((1-s)c/2) \leq-\tau^{\prime\prime}((c-b)/2)\,, b\leq sc,\tau^{\prime\prime}\searrow.\]
## Acknowledgments
I want to thank Christophe Leuridan for his very helpful answer on MathOverflow1. His quick response to my post allowed me to obtain the proof of Lemma 18 very efficiently.
Footnote 1: [https://mathoverflow.net/questions/447718/smooth-approximation-of-nonnegative-nondecreasing-concave-functions/447722](https://mathoverflow.net/questions/447718/smooth-approximation-of-nonnegative-nondecreasing-concave-functions/447722)
|
2305.10415 | PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering | Medical Visual Question Answering (MedVQA) presents a significant opportunity to enhance diagnostic accuracy and healthcare delivery by leveraging artificial intelligence to interpret and answer questions based on medical images. In this study, we reframe the problem of MedVQA as a generation task that naturally follows the human-machine interaction and propose a generative-based model for medical visual understanding by aligning visual information from a pre-trained vision encoder with a large language model. We establish a scalable pipeline to construct a large-scale medical visual question-answering dataset, named PMC-VQA, which contains 227k VQA pairs of 149k images that cover various modalities or diseases. We train the proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, e.g., VQA-RAD, SLAKE, and Image-Clef-2019, significantly outperforming existing MedVQA models in generating relevant, accurate free-form answers. In addition, we propose a test set that has undergone manual verification, which is significantly more challenging, serving to better monitor the development of generative MedVQA methods. To facilitate comprehensive evaluation and comparison, we have maintained a leaderboard at https://paperswithcode.com/paper/pmc-vqa-visual-instruction-tuning-for-medical, offering a centralized resource for tracking progress and benchmarking state-of-the-art approaches. The PMC-VQA dataset emerges as a vital resource for the field of research, and the MedVInT presents a significant breakthrough in the area of MedVQA. | Xiaoman Zhang, Chaoyi Wu, Ziheng Zhao, Weixiong Lin, Ya Zhang, Yanfeng Wang, Weidi Xie | 2023-05-17T17:50:16Z | http://arxiv.org/abs/2305.10415v6 |

# PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering
###### Abstract
In this paper, we focus on the problem of Medical Visual Question Answering (MedVQA), which is crucial in efficiently interpreting medical images with vital clinic-relevant information. Firstly, we reframe the problem of MedVQA as a generation task that naturally follows the human-machine interaction, we propose a generative-based model for medical visual understanding by aligning visual information from a pre-trained vision encoder with a large language model. Secondly, we establish a scalable pipeline to construct a large-scale medical visual question-answering dataset, named PMC-VQA, which contains 227k VQA pairs of 149k images that cover various modalities or diseases. Thirdly, we pre-train our proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, _e.g.,_ VQA-RAD and SLAKE, outperforming existing work by a large margin. Additionally, we propose a test set that has undergone manual verification, which is significantly more challenging, even the best models struggle to solve.
Footnote †: These authors contributed equally to this work.

Footnote †: Corresponding author.
## 1 Introduction
Large language models (LLMs), such as ChatGPT [1], PaLM [8], LLaMA [35], have recently demonstrated remarkable progress in a wide range of natural language processing (NLP) tasks, such as question answering, text classification, and interactive dialog. Notably, even in domains where expert knowledge is supposed to play a critical role, like medical diagnosis, these language models have also achieved impressive success, passing the United States Medical Licensing Examination (USMLE) [13; 17; 26; 32]. While recent LLMs excel in language understanding in the medical domain, they are essentially "blind" to visual modalities such as images and videos, hindering the utilization of visual content as a means of communication with these models.
In this paper, we focus on the problem of Medical Visual Question Answering (MedVQA), which aims to develop models that can comprehend text-based queries and produce accurate answers by leveraging medical visual content [21]. Existing MedVQA methods [25; 22; 6; 20] typically treat the problem as a retrieval task with a limited answer base and train multi-modal vision-language models with contrastive or classification objectives. Consequently, they are only useful for limited use cases where a finite set of outcomes is provided beforehand. We propose to develop the _first_ open-ended MedVQA system with a generative model as the backend, capable of handling diverse questions that arise in clinical practice, generating answers in free form without being constrained by the vocabulary. While there has been promising research in visual-language representation learning,
such as Flamingo [2] and BLIP [19], these models have primarily been trained on natural language and images, with very limited application in medical domain, due to the complex and nuanced visual concepts often found in medical scenarios.
To this end, we introduce a novel paradigm for MedVQA that harnesses the power of generative learning. Specifically, our proposed models start from the foundation models in medical domain, and train a bridge to align the pre-trained vision encoder and large language model via visual instruction tuning, we term the model as **MedVInT** (**Med**ical **V**isual **I**nstruction **T**uning). To accommodate different architectures, we offer two variants, named as MedVInT-TE and MedVInT-TD, that are tailored for encoder-based and decoder-based language models, respectively.
In order to effectively train the generative-based MedVQA models, our study reveals that existing datasets are limited in size, making them insufficient for training high-performing models. To overcome this challenge, we leverage well-established medical visual-language datasets [20] and initiate a scalable, automatic pipeline for constructing a new large-scale medical visual question-answering dataset. This new dataset, termed as **PMC-VQA**, contains 227k VQA pairs of 149k images, covering various modalities or diseases (Fig. 1), surpassing existing datasets in terms of both amount and diversity, as illustrated in Tab. 1. In our experiments, we pre-trained MedVInT on the collected PMC-VQA dataset and fine-tuned it on the existing MedVQA datasets, _e.g._, VQA-RAD [18] and SLAKE [23], outperforming existing models by a large margin, achieving over 80% accuracy on multi-choice selection. However, while evaluating on our proposed challenging benchmark, even the state-of-the-art models struggle, showing that there is still ample room for development in this field.
In summary, our contributions are as follows: **(i)** We reframe the problem of MedVQA as a generative learning task and propose MedVInT, a model obtained by aligning a pre-trained vision encoder with large language model through visual instruction tuning; **(ii)** We introduce a scalable pipeline and construct a large-scale MedVQA dataset, PMC-VQA, which far exceeds the size and diversity of existing datasets, covering various modalities and diseases; **(iii)** We pre-train MedVInT on PMC-VQA and fine-tune it on VQA-RAD [18] and SLAKE [23], achieving state-of-the-art performance and significantly outperforming existing models; **(iv)** We propose a new test set and present a more challenging benchmark for MedVQA, to evaluate the performance of VQA methods thoroughly.
## 2 Method
Here, we start with an introduction to the problem of generative medical visual question answering in Sec. 2.1; then we present the architecture detail in Sec. 2.2.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Dataset & Modality & Source & Images & QA pairs \\ \hline VQA-RAD [18] & Radiology & MedPix\({}^{\text{\textregistered}}\) database & 0.3k & 3.5k \\ PathVQA [12] & Pathology & PEIR Digit\({}^{\text{\textregistered}}\) Library [14] & 5k & 32.8k \\ SLAKE [23] & Radiology & MSD [3], ChestX-ray8 [36], CHAOS [15] & 0.7k & 14k \\ VQA-Med-2021 [5] & Radiology & MedPix\({}^{\text{\textregistered}}\) database & 5k & 5k \\ \hline PMC-VQA & Mixture\({}^{*}\) & PubMed Central\({}^{\text{\textregistered}}\) & 149k & 227k \\ \hline \hline \end{tabular}
* Mixture: Radiology, Pathology, Microscopy, Signals, Generic biomedical illustrations, _etc_.
\end{table}
Table 1: Comparison of existing medical VQA datasets with PMC-VQA, demonstrating the significant increase in size and diversity achieved by our dataset.
Figure 1: The top 20 figure types in PMC-VQA cover a wide range of diagnostic procedures.
### 2.1 Problem Formulation
MedVQA is a task of answering natural language questions about medical visual content, typically images or videos obtained from medical devices like X-ray, CT, MRI, or microscopy, _etc_. Specifically, our goal is to train a model that can output the corresponding answer for a given question, which can be expressed as:
\[\hat{a}_{i}=\Phi_{\text{MedVQA}}(\mathcal{I}_{i},q_{i};\Theta)=\Phi_{\text{dec} }(\Phi_{\text{vis}}(\mathcal{I}_{i};\theta_{\text{vis}}),\Phi_{\text{text}}(q_ {i};\theta_{\text{text}});\theta_{\text{dec}}) \tag{1}\]
Here, \(\hat{a}_{i}\) refers to the predicted answer, \(\mathcal{I}_{i}\in\mathbb{R}^{H\times W\times C}\) refers to the visual image, \(H,W,C\) are height, width, channel respectively. The posed question and corresponding ground-truth answer in the form of natural language are denoted as \(q_{i}\) and \(a_{i}\), respectively. \(\Theta=\{\theta_{\text{vis}},\theta_{\text{text}},\theta_{\text{dec}}\}\) denote the trainable parameters.
Existing approaches have primarily treated medical VQA as a classification problem, with the goal of selecting the correct answer from a candidate set, _i.e_., \(a_{i}\in\Omega=\{a_{1},a_{2},\dots,a_{N}\}\), where \(N\) represents the total number of answers within the dataset. Consequently, this approach limits the system's utility to predefined outcomes, hampering its free-form user-machine interaction potential.
In this paper, we take an alternative approach, with the goal of generating an open-ended answer in natural language. Specifically, we train the system by maximizing the probability of generating the ground-truth answer given the input image and question. The loss function used to train the model is typically the negative log-likelihood of the correct next token in the sequence, summed over all time steps, which can be expressed as:
\[\mathcal{L}(\Theta)=-\sum_{t=1}^{T}\log p(a^{t}|\mathcal{I},q^{1:T},a^{1:t-1};\Theta) \tag{2}\]
where \(T\) is the length of the ground-truth answer, and \(p(a^{t}|\mathcal{I},q^{1:T},a^{1:t-1};\Theta)\) is the probability of generating the \(t\)-th token in the answer sequence given the input image \(\mathcal{I}\), the question sequence \(q^{1:T}\), and the previous tokens in the answer sequence \(a^{1:t-1}\). This formulation allows the model to generate diverse and informative answers, which can be useful in a wider range of scenarios than traditional classification-based methods.
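For concreteness, Eq. (2) is the standard next-token cross-entropy restricted to the answer tokens. Below is a minimal PyTorch sketch of this objective (our illustration, not the authors' released code; the tensor layout and the `answer_mask` convention are assumptions, and we normalize by the token count, which does not change the optimum of the summed loss):

```python
import torch
import torch.nn.functional as F

def answer_nll(logits: torch.Tensor, input_ids: torch.Tensor,
               answer_mask: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of Eq. (2), taken over answer tokens only.

    logits:      (B, L, V) next-token logits from the multimodal decoder
    input_ids:   (B, L)    token ids of the concatenated question + answer
    answer_mask: (B, L)    1 where the token belongs to the answer a^{1:T}
    """
    logp = F.log_softmax(logits[:, :-1], dim=-1)   # position t predicts token t+1
    targets = input_ids[:, 1:]
    mask = answer_mask[:, 1:].float()
    token_nll = -logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return (token_nll * mask).sum() / mask.sum().clamp(min=1.0)
```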
### 2.2 Architecture
In this section, we introduce our proposed architecture for generative MedVQA (Fig. 2(a)). Specifically, we offer two model variants, which are tailored to encoder-based and decoder-based language models, respectively, denoted as MedVInT-TE (Sec. 2.2.1) and MedVInT-TD (Sec. 2.2.2).
#### 2.2.1 MedVInT-TE
**Visual Encoder.** Given one specific image \(\mathcal{I}\), we can obtain the image embedding, _i.e_., \(\mathbf{v}=\Phi_{\text{vis}}(\mathcal{I})\in\mathbb{R}^{n\times d}\), where \(d\) denotes the embedding dimension, \(n\) denotes the patch number. The vision encoder
Figure 2: (a) The proposed architecture of MedVInT, which mainly consists of three components: a visual encoder to extract visual features, a language encoder to encode textual context, and a multimodal decoder to generate the answer; (b) the proposed question-answer pair generation pipeline.
is based on a pre-trained ResNet-50 adopted from PMC-CLIP [20]. To produce a visual output of fixed shape, we add a trainable projection module on top of the ResNet-50, with the aim of bridging the gap between the pre-trained visual and language embeddings. We propose two distinct variants for this projection module, as sketched below: the first, MLP-based, employs a two-layer Multilayer Perceptron (MLP), while the second, transformer-based, employs a 12-layer transformer decoder supplemented with several learnable vectors as query input.
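The two projection variants can be sketched as follows (a non-authoritative sketch: hidden sizes, the number of queries, and the head count are illustrative choices of ours; the paper specifies only a two-layer MLP and a 12-layer transformer decoder with learnable queries):

```python
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Variant 1: two-layer MLP from visual feature size to LLM embedding size."""
    def __init__(self, d_vis: int, d_txt: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_vis, d_txt), nn.GELU(), nn.Linear(d_txt, d_txt))

    def forward(self, v):          # v: (B, n, d_vis)
        return self.net(v)         # (B, n, d_txt)

class QueryProjector(nn.Module):
    """Variant 2: transformer decoder with learnable query vectors, yielding a
    fixed number of visual tokens regardless of the input patch count."""
    def __init__(self, d_vis: int, d_txt: int, n_query: int = 32,
                 n_layer: int = 12, n_head: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_query, d_txt))
        self.in_proj = nn.Linear(d_vis, d_txt)
        layer = nn.TransformerDecoderLayer(d_txt, n_head, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layer)

    def forward(self, v):          # v: (B, n, d_vis)
        memory = self.in_proj(v)
        q = self.queries.unsqueeze(0).expand(v.size(0), -1, -1)
        return self.decoder(q, memory)   # (B, n_query, d_txt)
```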
**Language Encoder.** Given one question on the image, to guide the language model with desirable output, we append a fixed prompt with the question, _i.e._, "Question: \(q\), the answer is:", and encode it with the language encoder: \(\mathbf{q}=\Phi_{\text{text}}(q)\in\mathbb{R}^{l\times d}\), where \(\mathbf{q}\) refers to the text embedding, \(l\) represents the sequential length for the question, and \(q\) is the prompted question. \(\Phi_{\text{text}}\) is initialized with the pre-trained language model. Note that our model can also be applied to multiple-choice tasks, by providing options and training it to output the right choice as "A/B/C/D". The prompt is then modified as "Question: \(q\), the options are: \(a_{1},a_{2},a_{3},a_{4}\), the answer is:", where \(a_{i}\) refers to the \(i\)-th option.
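The two prompt formats can be assembled mechanically (a trivial sketch; the exact punctuation in the released implementation may differ):

```python
def build_prompt(question: str, options=None) -> str:
    if options is None:                      # "blank": free-form answer generation
        return f"Question: {question}, the answer is:"
    opts = ", ".join(options)                # multiple choice: model emits A/B/C/D
    return f"Question: {question}, the options are: {opts}, the answer is:"

print(build_prompt("Which organ is shown?",
                   ["A:liver", "B:kidney", "C:lung", "D:spleen"]))
```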
**Multimodal Decoder.** With encoded visual embeddings (\(\mathbf{v}\)) and question embeddings (\(\mathbf{q}\)), we concatenate them as the input to the multimodal decoder (\(\Phi_{\text{dec}}\)). The multimodal decoder is initialized from scratch with a 4-layer transformer structure. Additionally, acknowledging that encoder-based language models lack causal masking, we reformulate the generation task as a masked language modeling task, _i.e._, the question input is padded with several '[MASK]' tokens and the decoder module learns to generate predictions for the masked tokens.
#### 2.2.2 MedVInT-TD
**Visual Encoder.** The visual encoder is the same as MedVInT-TE.
**Text Encoder.** We design \(\Phi_{\text{text}}\) as a simple embedding layer, similar to primary GPT-like LLMs, and initialize it with their parameters. As in MedVInT-TE, it encodes the question input into embedding features \(\mathbf{q}\) and can perform multiple-choice or blank-style answering through different prompts.
**Multimodal Decoder.** For the Transformer decoder-based language model, whose output format is already free-form text, we directly use its architecture as the multimodal decoder, initialized with the pre-trained weights. Specifically, we concatenate the image and text features as the input. However, directly using the text decoder as a multimodal decoder may lead to a significant mismatch between the image encoding space and the decoder input space. Therefore, to further close this gap, we pre-train the whole network on the PMC-OA [20] dataset in a caption-based manner, similar to BLIP-2 [19].
## 3 The PMC-VQA Dataset
Our study has identified the lack of large-scale, multi-modal MedVQA datasets as a significant obstacle to the development of effective generative MedVQA models. To address this issue, we present a scalable and automatic pipeline for creating a new large MedVQA dataset. In this section, we provide a detailed description of our dataset collection process, starting with the source data and continuing with the question-answer generation and data filtering procedures. Finally, we analyze the collected data from various perspectives to gain insights into its properties and potential applications.
**Source Data.** We start from PMC-OA [20], which is a comprehensive biomedical dataset comprising 1.6 million image-text pairs collected from PubMedCentral (PMC)'s OpenAccess subset [31], which covers 2.4 million papers. In order to maintain the diversity and complexity of PMC-VQA, we have used a version of **381K image-caption pairs** obtained from the first stage of the medical figure collection process without subfigure auto-separation. We have opted not to use the final released version of the dataset, which only includes subfigure separation, subcaption separation, and alignment, in order to maintain a certain level of complexity and avoid oversimplifying the dataset.
**Question-Answer Generation.** To automatically generate high-quality question-answer pairs within the constraints of an academic budget, we leverage the power of ChatGPT by inputting the image captions of PMC-OA as the content to the model. We use the following prompt to generate 5 question-answer pairs for each caption:

Ask 5 questions about the content and generate four options for each question. The questions should be answerable with the information provided in the caption, and the four options should include one correct and three incorrect options, with the position of the correct option randomized. The output should use the following template: ‘the question index’ question: ‘the generate question’ choice: ‘A:option content’ B:option content ‘C:option content’ D:option content ‘answer’: The correct option(A\(\backslash\)B\(\backslash\)C\(\backslash\)D).
This approach allows us to generate a large volume of diverse and high-quality questions that cover a wide range of medical topics. After generating the question-answer pairs using ChatGPT, we applied a rigorous filtering process to ensure that the pairs met our formatting requirements. As a result, we obtained 1,497,808 question-answer pairs; since the original captions are linked to images, each pair is naturally associated with its corresponding image, resulting in an average of 3.93 pairs per image.
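Such a format check could look as follows (the regular expression is our own rough approximation of the template requested in the prompt above, not the authors' code):

```python
import re

PAIR_RE = re.compile(
    r"question:\s*(?P<question>.+?)\s*"
    r"choice:\s*A:(?P<A>.+?)\s*B:(?P<B>.+?)\s*C:(?P<C>.+?)\s*D:(?P<D>.+?)\s*"
    r"answer:\s*(?P<answer>[ABCD])\b",
    re.IGNORECASE | re.DOTALL,
)

def parse_pairs(model_output: str):
    """Keep only generated QA pairs that match the requested template."""
    return [m.groupdict() for m in PAIR_RE.finditer(model_output)]
```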
**Data Filtering.** As the questions are sourced from image captions, some questions can be answered correctly using biomedical knowledge alone without the need for a specific image, for example, question: "which type of MRI sequence shows high signal in the marrow edema?". To address this issue, we trained a question-answer model using LLaMA-7B [35] with text data only and eliminated all questions that could be potentially answerable by the language model. This filtering process resulted in 848,433 high-quality question-answer pairs.
Furthermore, some questions in our data rely on additional information in the caption that cannot be answered using only the corresponding image, such as "How many patients were classified into the middle stage?" To identify these questions, we trained a question classification model to determine whether a question is answerable given the image alone. Specifically, we manually annotated 2192 question-answer pairs and randomly split them into a training set of 1752 pairs and a testing set of 440 pairs. We fine-tuned LLaMA-7B [35] on this training set, and our model achieved an accuracy of 81.77% on the test set. We then used this model for data cleaning, resulting in a total of 226,946 question-answer pairs corresponding to 149,075 images. From this cleaned dataset, we randomly selected 50,000 image-question pairs to create our test set, namely, PMC-VQA-test. Additionally, we also provided a small **clean** test set of 2,000 samples, which were manually verified for quality, termed as PMC-VQA-test-clean. During this manual verification procedure, we have estimated that over 80% of PMC-VQA-test can be retained.
**Data Analysis.** This section provides an analysis of images, questions, and answers in the PMC-VQA dataset. In detail, the dataset comprises 227k image-question pairs; some examples are presented in Fig. 3, which demonstrates the wide diversity of images within our dataset. As indicated in Table 1, PMC-VQA outperforms existing MedVQA datasets in terms of data size and modality diversity. The questions in our dataset cover a range of difficulties, from simple questions such as identifying image modalities, perspectives, and organs to challenging questions that require specialized knowledge and judgment. Additionally, our dataset includes difficult questions that demand the ability to identify the specific target sub-figure from a compound figure.
Our analysis of the PMC-VQA dataset can be summarized in three aspects: (i) **Images**: We show the top 20 figure types in the PMC-VQA in Fig. 1. The images in the PMC-VQA are extremely diverse, ranging from Radiology to Signals. (ii) **Questions**: We clustered the questions into different
Figure 3: Several examples of challenging questions and answers along with their respective images. To answer questions related to these images, the network must acquire sufficient medical knowledge, for example, for the first two images, it is essential to recognize the anatomy structure and modalities; for the third image, recognizing the X-ray image pattern of pathologies is necessary; for the final two images, apart from the basic biomedical knowledge, the model is also required to discern colors, differentiate subfigures, and perform Optical Character Recognition (OCR).
types based on the words that start the question, as shown in Fig. 4. We found a surprising variety of question types, including "What is the difference...", "What type of imaging...", and "Which image shows...". Most questions range from 5 to 15 words, and detailed information about the distribution of question lengths is shown in the supplementary materials. (iii) **Answers**: The words in answers primarily encompass positional descriptions, image modalities, and specific anatomical regions. Detailed information about the top 50 words that appeared in the answers is provided in the supplementary materials. Most answers are around 5 words, which is much shorter than the questions. The correct options were distributed as follows: A (24.07\(\%\)), B (30.87\(\%\)), C (29.09\(\%\)), D (15.97 \(\%\)).
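Statistics such as the answer-option balance quoted above reduce to a few lines over the released annotations (sketch; `labels` is a placeholder for the list of ground-truth options):

```python
from collections import Counter

labels = ["A", "B", "C", "B", "D", "C"]   # placeholder ground-truth options
counts = Counter(labels)
dist = {k: round(100 * v / len(labels), 2) for k, v in sorted(counts.items())}
print(dist)   # on PMC-VQA: A 24.07%, B 30.87%, C 29.09%, D 15.97%
```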
## 4 Experiments
In this section, we first introduce two existing primary MedVQA datasets, namely VQA-RAD and SLAKE (Sec. 4.1). We then provide a detailed description of our proposed dataset, PMC-VQA, which can be used for both multiple-choice and open-ended answering tasks (Sec. 4.2). Finally, we discuss the primary pre-trained models we use for ablation in Sec. 4.3. The implementation details are provided in the supplementary materials.
### Existing MedVQA Datasets
**VQA-RAD**[18] is a VQA dataset specifically designed for radiology, consisting of 315 images and 3,515 questions with 517 possible answers. The questions in VQA-RAD are categorized as either close-ended or open-ended, depending on whether answer choices are limited or not. We follow the official dataset split for our evaluation.
**SLAKE**[23] is an English-Chinese bilingual VQA dataset composed of 642 images and 14k questions. The questions are categorized as close-ended if answer choices are limited, otherwise open-ended. There are \(224\) possible answers in total. We only use the "English" part, and follow the official split.
**Baselines and Metrics.** We compare with various baselines on these two MedVQA datasets, namely, MEVF-BAN [25], CPRD-BAN [22], M3AE [6], PMC-CLIP [20]. PMC-CLIP [20] is the existing SOTA method on these two datasets. For evaluation, ACC scores are used. **Note that**, since our model is generative-based, we calculate ACC by matching the generative output with the options using difflib.SequenceMatcher and choosing the most similar one as the choice of the model, which is more difficult than the evaluation for retrieval-based methods due to the larger output space.
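Concretely, the matching step described above can be written with Python's standard library (a sketch consistent with the description; no threshold is needed, the most similar option wins):

```python
from difflib import SequenceMatcher

def pick_option(generated: str, options: list[str]) -> str:
    """Map a free-form generation onto the most similar candidate answer."""
    def sim(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return max(options, key=lambda opt: sim(generated, opt))

options = ["axial CT", "sagittal MRI", "chest X-ray", "ultrasound"]
assert pick_option("an axial ct slice", options) == "axial CT"
```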
### PMC-VQA Dataset
The PMC-VQA dataset consists of a train set with 177K samples and a test set with 50K samples, which are respectively denoted as PMC-VQA-train and PMC-VQA-test. Additionally, the smaller clean test set with 2K samples that have been manually verified, is referred to as PMC-VQA-test-clean. The dataset can be used for both open-ended and multiple-choice tasks.
Figure 4: Question distribution of the training set by their first four words. From left to right are all questions, questions started with “What” and questions started with “Which”. The ordering of the words starts towards the center and radiates outwards.
**Multi-choice MedVQA.** Four candidate answers are provided for each question as the prompt. The model is then trained to **select the correct option** among them. The accuracy (ACC) score can be used to evaluate the performance of the model on this task.
**Open-ended MedVQA.** The total number of possible answers for PMC-VQA exceeds \(100\)K, which makes the traditional retrieval-based approach of limited use for an answer set of this size. Therefore, we provide another training style, called "blank", where the network is not provided with options in the input and is required to **directly generate answers** based on the questions. For evaluation, we adopt two metrics. The first is the BLEU score, which is widely used to evaluate the quality of generated text against a set of references. The second is the ACC score, which is computed by comparing the generated answer with the ground-truth answer using sentence similarity, as introduced in Sec. 4.1.
### Pre-trained Backbones
In this section, we introduce the pre-trained models used in our experiments. We separate them into language and vision backbones. Notably, while all the following models can be used in our architecture, by default, we use the "PMC-LLaMA" (or "PMC-LLaMA-ENC") and "PMC-CLIP" as backbones, since they are known to be more suitable for medical data according to previous works.
#### 4.3.1 Language Backbone
**LLaMA [35]** is a state-of-the-art large-scale language model, pre-trained on trillions of tokens and widely used in the research community. We adopt the 7B version, which consists of 32 transformer layers, as our language backbone.
**PMC-LLaMA [37]** is an open-source language model that is acquired by fine-tuning LLaMA-7B on a total of 4.8 million biomedical academic papers with auto-regressive loss. Compared to LLaMA, PMC-LLaMA demonstrates stronger fitting capabilities and better performance on medical tasks.
**PubMedBERT [11]** is an encoder-based BERT-like model that is trained from scratch using abstracts from PubMed and full-text articles from PubMedCentral in the corpus "The Pile" [10]. It has 12 transformer layers and 100 million parameters. Such domain-specific models proved to yield excellent text embedding capability before the era of large language models.
**LLaMA-ENC and PMC-LLaMA-ENC.** While LLaMA and PMC-LLaMA are known for their performance in text generation tasks, we also experiment with them as encoder models by passing a full attention mask and sampling the embedding from the last token. This allows for a direct comparison to be made with the aforementioned BERT-like models, which are also encoder-based.
#### 4.3.2 Vision Backbone
**CLIP [30]** is a model trained from scratch on a dataset of 400 million image-text pairs collected from the internet with contrastive loss. We use its "ViT-base-patch32" version as our visual encoder with 12 transformer layers, which has been pre-trained on natural images.
**PMC-CLIP [20]** is a medical-specific visual model based on CLIP architecture, which was trained on a dataset of \(1.6\) million biomedical image-text pairs collected from PubMed open-access papers using cross-modality contrastive loss. Compared to the pre-trained visual model on natural images, PMC-CLIP is specifically designed to handle medical images and text.
## 5 Results
In this section, we begin by evaluating our model on two publicly-available datasets, VQA-RAD and SLAKE, and compare it with existing MedVQA models, showing state-of-the-art performance. However, these datasets have limited diversity and scope, which led us to propose a more challenging MedVQA benchmark in Sec. 5.2. Our benchmark covers significantly more diverse modalities and diseases, and we demonstrate that even state-of-the-art methods struggle to perform well on it.
### Comparison on Existing Datasets
As shown in Tab. 2, comparing our model to existing ones, we can draw the following observations:
**State-of-the-art Performance of Generative MedVQA.** As shown in Tab. 2, our MedVInT model outperforms the previous state-of-the-art (SOTA) methods on both the VQA-RAD and SLAKE datasets, regardless of whether the "MedVInT-TE" or "MedVInT-TD" variant is used. We improved the overall accuracy (ACC) scores from 77.6% to 81.6% on VQA-RAD and from 84.3% to 88.0% on SLAKE. Notably, since our model generates answers rather than retrieving one from a pre-defined answer basis, the evaluation metric is more challenging, further demonstrating our superiority.
**Pre-training on PMC-VQA is Essential for Generative MedVQA.** Comparing results using the same architecture, with and without PMC-VQA, it is clear that pre-training with PMC-VQA significantly outperforms. Specifically, "MedVInT-TE" boosts the final results by approximately \(11\%\) on VQA-RAD and \(4\%\) on SLAKE compared to "MedVInT-TE-S" that refers to training the model from scratch without pre-trained on PMC-VQA. Similar improvements are observed with 'MedVInT-TD'. These results highlight the critical role that our PMC-VQA plays in addressing the major challenges that hinder the development of a generative MedVQA system.
**Both MedVInT-TE and MedVInT-TD Perform Well.** The gap between the two training styles mainly exists in open-ended questions, with "MedVInT-TD" performing better on VQA-RAD and "MedVInT-TE" being more effective on SLAKE. This difference can be attributed to the fact that the VQA-RAD answers are typically longer than those in SLAKE, making the "MedVInT-TD" model more suitable for generating such answers. Conversely, SLAKE questions often require short and concise responses, making the MedVInT-TE" model more appropriate for such retrieve-like tasks.
### Benchmark on PMC-VQA
In this section, we introduce our new MedVQA benchmark on PMC-VQA. We evaluate different methods for both open-ended and multiple-choice tasks. The results are summarized in Tab. 3 (see supplementary for more qualitative comparisons). We can draw the following observations:
**Multimodal Understanding is Essential.** As shown in Tab. 3, when using only language, the model struggles to provide accurate answers and produces nearly random outcomes, with accuracies of only 26.1% in Blanking and 30.6% in Choice. It is worth noting that around 30% of the questions have "B" answers, making the 30.6% score nearly equivalent to the highest possible score attainable through guessing. These observations highlight the crucial role of multimodal understanding in our dataset and emphasize the strong relationship between the images and the questions posed.
**General Visual-language Models Struggle on MedVQA.** We evaluated the zero-shot performance of existing SOTA multimodal models, BLIP-2 [19] and an open-source version of Flamingo [4]. As shown, even the best-performing models on natural images struggle to answer our MedVQA questions, demonstrating the challenging nature of our dataset and its strong biomedical relevance.
**PMC-VQA-test Presents a Significantly More Challenging Benchmark.** Notably, the previous SOTA multimodal model for MedVQA, PMC-CLIP [20], struggles on our dataset. Not only does it fail to solve the blanking task, but it also significantly underperforms on multi-choice questions, with accuracy close to random. These findings underline the difficulty of our dataset and its potential to serve as a more robust benchmark for evaluating VQA models.
\begin{table}
\begin{tabular}{l l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Pre-training Data} & \multicolumn{3}{c|}{VQA-RAD} & \multicolumn{3}{c}{SLAKE} \\ & & Open & Close & Overall & Open & Close & Overall \\ \hline MEVF-BAN [25] & – & 49.2 & 77.2 & 66.1 & 77.8 & 79.8 & 78.6 \\ CPRD-BAN [22] & – & 52.5 & 77.9 & 67.8 & 79.5 & 83.4 & 81.1 \\ M3AE [6] & ROCO [29], MedICaT [33] & 67.2 & 83.5 & 77.0 & 80.3 & **87.8** & 83.3 \\ PMC-CLIP [20] & PMC-OA [20] & 67.0 & 84.0 & 77.6 & 81.9 & **85.0** & 84.3 \\ \hline MedVInT-TE-S & – & 53.6 & 76.5 & 67.4 & 84.0 & 85.1 & 84.4 \\ MedVInT-TD-S & – & 55.3 & 80.5 & 70.5 & 79.7 & 85.1 & 81.8 \\ MedVInT-TE & PMC-VQA & **69.3** & **84.2** & **78.2** & **88.2** & 87.7 & **88.0** \\ MedVInT-TD & PMC-VQA & **73.7** & **86.8** & **81.6** & **84.5** & 86.3 & **85.2** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of ACC to SOTA approaches on VQA-RAD and SLAKE. We use the blank model for evaluation. Pre-training data indicates whether the model is pre-trained on the medical multi-modal dataset before training on the target dataset. The best result is in red, the second-best result is in blue. “Overall” refers to the micro-average ACC of all the Open and Close questions.
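For clarity, the "Overall" score above is a micro-average over all Open and Close questions rather than a mean of the two column scores. A minimal sketch of how such a score is computed, with made-up counts that are not the actual VQA-RAD or SLAKE statistics:

```python
# Micro-averaged "Overall" ACC as defined in the Tab. 2 caption:
# pool all Open and Close questions and count the correct answers.
# The counts below are illustrative placeholders only.

def micro_average_acc(correct_open, n_open, correct_close, n_close):
    """Micro-average: total correct answers over total questions."""
    return 100.0 * (correct_open + correct_close) / (n_open + n_close)

# e.g. 60/100 open and 85/100 close questions answered correctly
print(micro_average_acc(60, 100, 85, 100))  # -> 72.5
```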
**Comparing Generative Model Backbones on PMC-VQA-test.** To further assess the effectiveness of our proposed method, we compared it against various baselines that use different generative model backbones. Our results show that replacing the general visual backbone with a specialized medical one leads to improved performance, highlighting the importance of visual understanding in MedVQA. Additionally, we observed that replacing the language backbone with a domain-specific model also leads to some improvements, although not as significant as those achieved in the visual domain.
**Different Projection Modules Demonstrate Comparable Performance.** We provide a comparison of baseline models using different projection modules (MLP or Transformer) on both open-ended and multiple-choice tasks. As shown, the different projection modules demonstrate comparable performance across the various evaluation tasks.
\begin{table}
\begin{tabular}{l|l|l|c c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Language Backbone} & \multirow{2}{*}{Vision Backbone} & \multicolumn{2}{c|}{Blanking} & Choice \\ & & & ACC & Bleu-1 & ACC \\ \hline \multicolumn{6}{l}{**Zero-shot**} \\ \hline PMC-CLIP [20] & PMC-CLIP [20] & PMC-CLIP [20] & \(-\) & \(-\) & 24.0 / 24.7 \\ BLIP-2 [19] & OPT-2.7B [38] & CLIP [30] & 22.5 / 21.8 & 5.2 / 7.6 & 24.6 / 24.3 \\ Open-Flamingo [4] & LLaMA [35] & CLIP [30] & 26.1 / 26.5 & 4.1 / 4.1 & 25.0 / 26.4 \\ \hline \multicolumn{6}{l}{**Trained on PMC-VQA**} \\ \hline LLaMA [35] & LLaMA [35] & \(-\) & 26.1 / 27.2 & 14.2 / 14.6 & 30.6 / 30.8 \\ \hline \multirow{9}{*}{MedVInT-TE-MLP} & \multirow{3}{*}{PubMedBERT [11]} & Scratch & 33.7 / 34.2 & 20.4 / 20.9 & 34.4 / 34.9 \\ & & CLIP [30] & 33.7 / 34.4 & 20.4 / 20.8 & 34.5 / 34.3 \\ & & PMC-CLIP [20] & 35.2 / 36.4 & **22.0 / 23.2** & 37.1 / 37.6 \\ \cline{2-6} & \multirow{3}{*}{LLaMA-ENC [35]} & Scratch & 32.5 / 32.5 & 15.3 / 15.9 & 35.2 / 35.1 \\ & & CLIP [30] & 32.3 / 33.4 & 15.6 / 15.1 & 35.3 / 36.1 \\ & & PMC-CLIP [20] & 35.4 / **36.8** & 18.2 / 18.4 & 36.9 / 37.1 \\ \cline{2-6} & \multirow{3}{*}{PMC-LLaMA-ENC [37]} & Scratch & 32.6 / 35.0 & 16.2 / 17.0 & 37.0 / 38.0 \\ & & CLIP [30] & 33.0 / 34.4 & 16.6 / 16.5 & 37.1 / 38.5 \\ & & PMC-CLIP [20] & 34.8 / 35.3 & 18.1 / 18.6 & 38.2 / 39.2 \\ \hline \multirow{9}{*}{MedVInT-TE-Transformer} & \multirow{3}{*}{PubMedBERT [11]} & Scratch & 34.1 / 36.2 & **21.0 / 21.9** & 39.8 / 40.6 \\ & & CLIP [30] & 33.9 / 34.6 & 20.6 / 21.8 & **39.9 / 40.9** \\ & & PMC-CLIP [20] & 33.7 / 35.4 & 20.3 / 21.2 & **40.2 / 40.9** \\ \cline{2-6} & \multirow{3}{*}{LLaMA-ENC [35]} & Scratch & 32.0 / 33.5 & 15.1 / 15.3 & 38.4 / 39.7 \\ & & CLIP [30] & 32.3 / 34.3 & 15.5 / 15.7 & 38.4 / 38.7 \\ & & PMC-CLIP [20] & **35.9** / **37.1** & 19.0 / 19.3 & 38.9 / 39.4 \\ \cline{2-6} & \multirow{3}{*}{PMC-LLaMA-ENC [37]} & Scratch & 33.2 / 34.7 & 16.6 / 16.5 & 38.1 / 39.8 \\ & & CLIP [30] & 33.6 / 35.1 & 16.7 / 17.2 & 38.7 / 38.9 \\ & & PMC-CLIP [20] & **35.5** / 36.0 & 18.4 / 18.6 & 38.2 / 37.7 \\ \hline \multirow{6}{*}{MedVInT-TD-MLP} & \multirow{3}{*}{LLaMA [35]} & Scratch & 28.1 / 30.6 & 16.5 / 16.9 & 35.8 / 37.4 \\ & & CLIP [30] & 30.2 / 32.7 & 18.6 / 18.5 & 35.8 / 37.1 \\ & & PMC-CLIP [20] & 31.3 / 32.6 & 19.5 / 19.8 & 38.4 / **41.0** \\ \cline{2-6} & \multirow{3}{*}{PMC-LLaMA [37]} & Scratch & 28.3 / 30.6 & 16.4 / 17.3 & 35.8 / 37.0 \\ & & CLIP [30] & 31.4 / 31.8 & 19.2 / 19.5 & 36.2 / 37.9 \\ & & PMC-CLIP [20] & 32.1 / 31.7 & 19.7 / 20.2 & 38.4 / **42.3** \\ \hline \multirow{6}{*}{MedVInT-TD-Transformer} & \multirow{3}{*}{LLaMA [35]} & Scratch & 29.1 / 30.2 & 17.4 / 18.0 & 31.1 / 37.9 \\ & & CLIP [30] & 31.3 / 32.2 & 19.5 / 20.0 & 38.2 / 38.3 \\ & & PMC-CLIP [20] & 31.9 / 33.4 & 20.0 / 21.3 & 37.3 / 39.5 \\ \cline{2-6} & \multirow{3}{*}{PMC-LLaMA [37]} & Scratch & 28.6 / 29.8 & 16.8 / 17.4 & 36.8 / 36.9 \\ & & CLIP [30] & 31.4 / 32.6 & 19.5 / 20.4 & 36.8 / 36.9 \\ & & PMC-CLIP [20] & 32.7 / 33.6 & 20.3 / 21.5 & 39.4 / 40.3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of baseline models using different pre-trained models on both open-ended and multiple-choice tasks. We report results on PMC-VQA-test / PMC-VQA-test-clean. “Scratch” means training the vision model from scratch with the same architecture as “PMC-CLIP”.
Both architectures can effectively reconcile the diversity in embedding dimensions arising from different pre-trained visual models, making our architecture adaptable to various visual model designs, regardless of whether they are based on ViT or ResNet.
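As a rough illustration of the two variants compared above, the following PyTorch-style sketch contrasts an MLP projector with a Transformer projector mapping visual tokens into the language embedding space; the module names, layer counts and dimensions are our own illustrative choices, not the exact implementation.

```python
import torch
import torch.nn as nn

# Sketch of two projection-module variants: map visual features of
# dimension d_vis to the language embedding dimension d_lang.
# Dimensions and depths here are illustrative assumptions.

class MLPProjector(nn.Module):
    def __init__(self, d_vis=768, d_lang=4096):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(d_vis, d_lang), nn.GELU(),
                                  nn.Linear(d_lang, d_lang))

    def forward(self, vis_tokens):            # (batch, n_tokens, d_vis)
        return self.proj(vis_tokens)          # (batch, n_tokens, d_lang)

class TransformerProjector(nn.Module):
    def __init__(self, d_vis=768, d_lang=4096, n_heads=8):
        super().__init__()
        self.in_proj = nn.Linear(d_vis, d_lang)
        layer = nn.TransformerEncoderLayer(d_model=d_lang, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, vis_tokens):
        return self.encoder(self.in_proj(vis_tokens))

x = torch.randn(2, 49, 768)                   # e.g. a 7x7 grid of patch features
print(MLPProjector()(x).shape, TransformerProjector()(x).shape)
```

Either module produces token embeddings of the language-model dimension, which is what makes the design agnostic to the choice of vision backbone.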
## 6 Related Works
**Instruction Tuning with Large-language Models.** Large Language Models (LLMs) have recently achieved tremendous success [28; 27; 1] in generating high-quality text for various tasks such as language translation, summarization, and question answering. Open-source models, _e.g._, Alpaca [34], have adopted instruction tuning to train models using examples generated from ChatGPT [1], effectively improving the performance of language models. In the visual-language domain, concurrent work to ours, Mini-GPT4 [39], generates a high-quality image-text dataset by prompting ChatGPT with well-designed inputs. In this paper, we focus on visual instruction tuning for MedVQA, which poses unique challenges due to the complexity of medical texts and the variability of medical images.
**Medical Visual Question Answering.** The field of MedVQA has gained significant interest in recent years, with a growing number of studies [21]. Despite the increasing attention, building a robust and reliable MedVQA system remains challenging due to the complexity and variability of medical images, as well as the lack of large-scale and diverse MedVQA datasets. Existing publicly available MedVQA datasets are limited in diversity or scale: for example, RadVisDial [16] only contains samples on chest X-ray images, while VQA-Med [5], VQA-RAD [18], and SLAKE [23] each contain fewer than 10K images. To address these limitations, we propose the PMC-VQA dataset, which includes 227k image-question pairs covering various image modalities and question types.
## 7 Conclusion
In conclusion, this paper addresses the challenge of MedVQA, where even the strongest VQA models trained on natural images yield results that closely resemble random guesses. To overcome this, we propose MedVInT, a generative model tailored to advance this crucial medical task. MedVInT is trained by aligning visual data from a pre-trained vision encoder with language models. Additionally, we present a scalable pipeline for constructing PMC-VQA, a comprehensive MedVQA dataset comprising 227k VQA pairs across 149k images, spanning diverse modalities and diseases. Our proposed model delivers state-of-the-art performance on existing MedVQA datasets, providing a new and reliable benchmark for evaluating different methods in this field.
|
2306.07439 | Probing DDM and ML quantum concepts in shape phase transitions of
$γ$-unstable nuclei | In a recent paper (S. Ait El Korchi et al. 2020 EPL 132 52001), we explored, inside the context of the Critical Point Symmetries (CPSs) X(3) and Z(4), a correlation between two well-known quantum concepts, the Minimal Length (ML) and the Deformation-Dependent Mass (DDM), which are commonly applied in various areas of physics. Such a correlation has been strongly identified in transitional nuclei by calculating physical observables of the quantum system, such as energy spectra, moments of inertia and transition probabilities. In this paper we extend that study to the E(5) dynamical symmetry corresponding to the shape phase transition U(5)$\leftrightarrow$O(6). The experimental realization of the models was found to occur in several nuclei, using the existing phenomenological potentials: Infinite Square Well, Davidson and Kratzer, whose fits provide the best agreement. Importantly, the calculations performed in this work using these potentials corroborate the fact that the revealed correlation between both quantum concepts is not destructively affected by the presence of other model parameters, and hence its existence is independent of the form or type of the potential used. Undoubtedly, the present work will open the way for further investigations of this correlation within other critical point symmetries in nuclear shape phase transitions, which today play a major role in nuclear structure research from both the theoretical and experimental points of view. | S. Ait El Korchi, M. Chabab, A. El Batoul, A. Lahbas, M. Oulne | 2023-06-12T21:52:21Z | http://arxiv.org/abs/2306.07439v1 | # Probing DDM and ML quantum concepts in shape phase transitions of \(\gamma\)-unstable nuclei
###### Abstract
In a recent paper (S. Ait El Korchi et al. 2020 EPL 132 52001), we explored, inside the context of the Critical Point Symmetries (CPSs) X(3) and Z(4), a correlation between two well-known quantum concepts, the Minimal Length (ML) and the Deformation-Dependent Mass (DDM), which are commonly applied in various areas of physics. Such a correlation has been strongly identified in transitional nuclei by calculating physical observables of the quantum system, such as energy spectra, moments of inertia and transition probabilities. In this paper we extend that study to the E(5) dynamical symmetry corresponding to the shape phase transition U(5)\(\leftrightarrow\)O(6). The experimental realization of the models was found to occur in several nuclei, using the existing phenomenological potentials: Infinite Square Well, Davidson and Kratzer, whose fits provide the best agreement. Importantly, the calculations performed in this work using these potentials corroborate the fact that the revealed correlation between both quantum concepts is not destructively affected by the presence of other model parameters, and hence its existence is independent of the form or type of the potential used. Undoubtedly, the present work will open the way for further investigations of this correlation within other critical point symmetries in nuclear shape phase transitions, which today play a major role in nuclear structure research from both the theoretical and experimental points of view.
keywords: Bohr-Mottelson model, Critical point symmetries, Shape phase transitions, Collective models. PACS: 21.60.Ev, 21.60.Fw, 21.10.Re.
## 1 Introduction
To understand and correctly describe the properties of the atomic nucleus, we need a consistent theory of its structure. One phenomenological example is the Bohr-Mottelson model (BMM) [1] and its extensions. Over the past two decades, the attention paid to its solutions has increased tremendously, for a variety of reasons. One of these is the appearance of its cornerstone solutions, called E(5) [2], X(5) [3], Y(5) [4] and Z(5) [5]. In the same perspective, research involving the Bohr-Mottelson Hamiltonian has concentrated mostly on understanding shape phase transitions and the associated critical point symmetries (CPSs), but in the last two years its use has also been extended to shape coexistence, fluctuation and mixing phenomena [6; 7; 8; 9; 10; 11]. Furthermore, critical point symmetries have typically paved the way for the development of
new models by combining various phenomenological potentials, resulting in new exactly or quasi-exactly separable models that can describe nuclei close to or far from the existing critical point symmetries.
The BMM [1], including its theoretical underpinnings, provides a geometrical theory of nuclear collective excitations, as opposed to the algebraic models [12; 13]. In essence, the geometric Bohr-Mottelson model depicts the collective excitations of nuclei in terms of surface oscillations of a liquid drop. In the large majority of cases, the resolution of its Hamiltonian requires a potential \(V(\beta,\gamma)\) depending on the parameters \(\beta\) and \(\gamma\), where \(\beta\) indicates the ellipsoidal deformation and \(\gamma\) measures the axial asymmetry. This realistic theoretical model has successfully described the low-energy collective states and the electromagnetic transitions of a large number of even-even nuclei [14; 15].
In the framework of BMM, the nuclear surface of the deformed nucleus is an ellipsoid arbitrarily oriented in space and described, for our purposes, by a second-order deformation such as [16]:
\[R(\theta,\phi)=R_{0}\left[1+\sum_{\mu=-2}^{+2}\alpha_{2,\mu}Y_{2,\mu}(\theta, \phi)\right], \tag{1}\]
where \(\alpha_{2,\mu}\) are tensors describing the deformations of the nucleus. They are expressed in terms of the radius of the spherical nucleus \(R_{0}\) and the spherical harmonics \(Y_{\lambda,\mu}\) as follows
\[\alpha_{2,\mu}=(-1)^{\mu}\alpha_{2,-\mu}^{*}=\frac{1}{R_{0}}\int R(\theta, \phi)Y_{2,\mu}(\theta,\phi)d\Omega. \tag{2}\]
However, it is indispensable to transform into a coordinate system which is "fixed" in the oscillating body. The collective coordinates in the body-fixed system are then connected to the space-fixed system by the following transformation [16]:
\[a_{2,\nu}=\sum_{\mu}\alpha_{2,\mu}\mathcal{D}_{\mu,\nu}\left(\theta_{i}\right), \tag{3}\]
where the \(\mathcal{D}_{\mu,\nu}\left(\theta_{i}\right)\) are the Wigner D-functions of second order, and the triad of Eulerian angles (\(\theta\), \(\varphi\), \(\psi\)) describing the relative orientation of the axes is symbolized by \(\theta_{i}\). Aligning the body-fixed frame with the principal axes of the ellipsoid imposes \(a_{2,1}=a_{2,-1}=0\) and \(a_{2,2}=a_{2,-2}\). Thus the five variables \(\alpha_{2,\mu}\) are replaced by the three Eulerian angles \(\theta_{i}\) and the two real internal coordinates \(a_{2,0}\) and \(a_{2,2}\). In place of \(a_{2,0}\) and \(a_{2,2}\) it is beneficial to introduce the variables \(\beta\) and \(\gamma\) through the relations \(a_{2,0}=\beta\cos\gamma\) and \(a_{2,2}=a_{2,-2}=(\beta/\sqrt{2})\sin\gamma\), which respect the conventions of D. L. Hill and J. A. Wheeler [17]. In view of these conventions, the collective Bohr Hamiltonian for quadrupole shapes in the five-dimensional form is written as [1; 16]:
\[H=T+V(\beta,\gamma), \tag{4}\]
with,
\[T=\frac{-\hbar^{2}}{2B_{m}}\Bigg{[}\frac{1}{\beta^{4}}\frac{\partial}{ \partial\beta}\beta^{4}\frac{\partial}{\partial\beta}+\frac{1}{\beta^{2}\sin 3 \gamma}\frac{\partial}{\partial\gamma}\sin 3\gamma\frac{\partial}{\partial \gamma}-\frac{1}{4\beta^{2}}\sum_{k}\frac{Q_{k}^{2}}{\sin^{2}(\gamma-\frac{2}{ 3}\pi k)}\Bigg{]}, \tag{5}\]
where \(B_{m}\) is the mass parameter, while the \(Q_{k}\) represent the components of the angular momentum in the variables \(\theta_{i}\). According to the form selected for the potential \(V(\beta,\gamma)\), there are three basic situations in which one can accomplish a perfect separation of the variables; here one of them is considered: potentials independent [18] of the collective variable \(\gamma\), called \(\gamma\)-unstable potentials, appropriate for describing vibrational and near-vibrational nuclei. In terms of its form, it might be argued that the differential equation to be solved is, in some ways, nothing more than the Schrödinger equation, and that the catalogue of examples known in the quantum mechanical setting also applies to the present situation. This is partly correct; however, there are a number of significant distinctions that must be highlighted and for which the collective Hamiltonian requires particular consideration: because the Bohr equation is expressed in the collective variables introduced above, its natural space is 5-dimensional rather than 3-dimensional, and this has an impact not only on the asymptotic behavior and boundary conditions, but also on the group structure that we associate with the Bohr Hamiltonian.
Shape phase transitions result in abrupt changes in nuclear characteristics, so critical point symmetries aiming at defining the transition point must correspond to reliable predictions of spectra and B(E2) transition rates. In the present work we study a \(\gamma\)-unstable Bohr Hamiltonian at the critical point symmetry E(5), which is designed to describe the shape phase transition from spherical U(5) to \(\gamma\)-unstable O(6) shapes. We consider three phenomenological potentials, namely the infinite square well, Davidson and Kratzer, in the presence of two formalisms: the deformation-dependent mass (DDM) and the minimal length (ML), both of which are widely used and tested in solutions of the Bohr Hamiltonian [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Moreover, in a recent work [32] we revealed the existence of a strong correlation between both formalisms, which we also check in the present study. The solutions obtained will be dubbed E(5)-ML-IFSW, E(5)-ML-D and E(5)-ML-K for the ML formalism with the infinite square well, Davidson and Kratzer potentials, respectively, and in the same way E(5)-DDM-IFSW, E(5)-DDM-D and E(5)-DDM-K for the DDM formalism. The expressions for the energy spectrum as well as the wave functions are obtained by means of the asymptotic iteration method (AIM) [34] in the DDM case, and by using jointly the AIM and the quantum perturbation method (QPM) [35] for the ML.
The present paper is outlined as follows: after the introduction, which provides the background on critical point symmetries (CPSs) and the physics content of the Bohr Hamiltonian, we present in Section 2 the Bohr Hamiltonian with the deformation-dependent effective mass formalism, while in Section 3 we give a brief description of it with the minimal length. The basic equations of E(5) involving the deformation-dependent mass term with the different phenomenological potentials and their solutions are given in Section 4. The E(5) models including the concept of minimal length with the same potentials and their solutions are presented in Section 5. The B(E2) transition probabilities are considered in Section 6. Finally, Section 7 is devoted to the numerical computations and the discussion of the theoretical results in comparison with experimental data for energy spectra, B(E2) transition probabilities and moments of inertia, while Section 8 contains the conclusions.
## 2 Bohr Hamiltonian with deformation-dependent effective mass
The deformation-dependent effective mass [36] is widely used in quantum physics; it is equivalent to a deformation of the canonical commutation relations:
\[[x_{i},x_{j}]=0,\ [x_{i},p_{j}]=\mathrm{i}\hbar\delta_{i,j},\ [p_{i},p_{j}]=0\, \tag{6}\]
where \(i=1,2,3\). By replacing the momentum components \(p_{i}=-\mathrm{i}\hbar\nabla_{i}=-\mathrm{i}\hbar\partial/\partial x_{i}\) by some deformed hermitian operators:
\[\pi_{i}=\sqrt{f(x)}\,p_{i}\sqrt{f(x)}\, \tag{7}\]
where the positive real deforming function \(f(x)\) depends on the coordinates \(x=(x_{1},x_{2},x_{3})\), the last two commutators in Eq. (6) transform into:
\[[x_{i},\pi_{j}]=\mathrm{i}\hbar f(x)\delta_{i,j},\ [\pi_{i},\pi_{j}]= \mathrm{i}\hbar[f_{j}(x)\pi_{i}-f_{i}(x)\pi_{j}], \tag{8}\]
with \(f_{i}(x)\equiv\nabla_{i}f(x)\). The Bohr equation with a mass depending on the deformation coordinate \(\beta\), describing the quadrupole vibrations and rotations in a unified manner, can easily be derived by using the classical expression of the Hamiltonian and the Pauli-Podolsky prescription [37] for the canonical quantization in curvilinear coordinates,
\[(\nabla\Phi)^{i}=g^{ij}\frac{\partial\Phi}{\partial x^{j}},\qquad\nabla^{2} \Phi=\frac{1}{\sqrt{g}}\partial_{i}\sqrt{g}g^{ij}\partial_{j}\Phi, \tag{9}\]
equipped with the collective Bohr-Wheeler coordinates \(\beta\), \(\gamma\) and the three Euler angles \(\Omega\), which are connected with the rotational angles \(\omega_{k}\) by a linear transformation. Because the deformation function \(f\) depends solely on the radial coordinate \(\beta\), only the \(\beta\) part of the arising equation will be altered. The final result reads [20]
\[\Big{[}-\frac{1}{2}\frac{\sqrt{f}}{\beta^{4}}\frac{\partial}{\partial\beta} \beta^{4}f\frac{\partial}{\partial\beta}\sqrt{f}-\frac{f^{2}}{2\beta^{2}\sin 3 \gamma}\frac{\partial}{\partial\gamma}\sin 3\gamma\frac{\partial}{ \partial\gamma}+\frac{f^{2}}{8\beta^{2}}\sum_{k=1,2,3}\frac{Q_{k}^{2}}{\sin^{ 2}\left(\gamma-\frac{2}{3}\pi k\right)}+v_{eff}(\beta,\gamma)\Big{]}\Psi= \epsilon\Psi, \tag{10}\]
where reduced energies \(\epsilon=B_{m}E/\hbar^{2}\) and reduced potentials \(v(\beta,\gamma)=B_{m}V/\hbar^{2}\) have been used, with
\[v_{eff}(\beta,\gamma)=v(\beta,\gamma)+\frac{1}{4}(1-\delta-\lambda)f\nabla^{2}f+ \frac{1}{2}\left(\frac{1}{2}-\delta\right)\left(\frac{1}{2}-\lambda\right)( \nabla f)^{2}. \tag{11}\]
The model's free parameters \(\lambda\) and \(\delta\) were originally introduced by O. von Roos [38] in order to overcome the ambiguity in the ordering of the kinetic energy operator; the effective potential (11) collects all possible ordering choices, which unambiguously share the same solutions.
## 3 Bohr Hamiltonian with minimal length
The concept of ML can be implemented on the study of the physical systems by taking into account the deformed canonical commutation relation [39; 40; 41]:
\[[\hat{X},\hat{P}]=\mathrm{i}\hbar\left(1+\alpha\hat{P}^{2}\right), \tag{12}\]
where \(\alpha\), the ML parameter, is a very small positive quantity. This commutation relation yields the following uncertainty relation:
\[(\Delta\hat{X})(\Delta\hat{P})\geq\frac{\hbar}{2}\left(1+\alpha(\Delta\hat{P} )^{2}+\hat{\tau}\right),\quad\hat{\tau}=\alpha\langle\hat{P}\rangle^{2}. \tag{13}\]
The above conception has been employed in numerous quantum physical theories that suggest the existence of a ML with a magnitude ranging from the Planck length (\(10^{-35}\) m) [42; 43] to the order of \(10^{-15}\) m [44; 45; 46]. In this light, the ML concept was introduced into this context in the pioneering study [27]. Therefore, the collective quadrupole Hamiltonian of Bohr-Mottelson with minimal length is as follows:
\[H=-\frac{\hbar^{2}}{2B_{m}}\Delta+\frac{\alpha\hbar^{4}}{B_{m}}\Delta^{2}+V( \beta,\gamma), \tag{14}\]
with \(\Delta\) formulated in the following way:
\[\Delta=\frac{1}{\sqrt{g}}\partial_{i}\sqrt{g}g^{ij}\partial_{j}, \tag{15}\]
where \(g\) is the determinant of the symmetric matrix \(g_{ij}\) having the form:
\[(g_{ij})=\begin{pmatrix}g_{11}&g_{12}&g_{13}&0&0\\ g_{21}&g_{22}&0&0&0\\ g_{31}&0&g_{33}&0&0\\ 0&0&0&g_{44}&0\\ 0&0&0&0&g_{55}\end{pmatrix}, \tag{16}\]
with [16]
\[g_{11} =\frac{\mathcal{J}_{1}}{B_{m}}\sin^{2}\Theta\cos^{2}\psi+\frac{ \mathcal{J}_{2}}{B_{m}}\sin^{2}\Theta\sin^{2}\psi+\frac{\mathcal{J}_{3}}{B_{m}} \cos^{2}\Theta,\] \[g_{12} =\frac{1}{B_{m}}(\mathcal{J}_{2}-\mathcal{J}_{1})\sin\Theta\sin \psi\cos\psi,\] \[g_{13} =\frac{\mathcal{J}_{3}}{B_{m}}\cos\Theta,\] \[g_{22} =\frac{\mathcal{J}_{1}}{B_{m}}\sin^{2}\psi+\frac{\mathcal{J}_{2}} {B_{m}}\cos^{2}\psi, \tag{17}\] \[g_{33} =\frac{\mathcal{J}_{3}}{B_{m}},\] \[g_{44} =1,\] \[g_{55} =\beta^{2},\]
where the moments of inertia are:
\[\mathcal{J}_{k}=4B_{m}\beta^{2}\sin^{2}\left(\gamma-k\frac{2\pi}{3}\right). \tag{18}\]
The Laplacian operator (15) in terms of the collective variables \((\beta,\gamma)\) can be written as:
\[\Delta=\frac{1}{\beta^{4}}\frac{\partial}{\partial\beta}\beta^{4}\frac{ \partial}{\partial\beta}+\frac{1}{\beta^{2}\sin 3\gamma}\frac{\partial}{ \partial\gamma}\sin 3\gamma\frac{\partial}{\partial\gamma}-\frac{1}{4\beta^{2} }\sum_{k=1}^{3}\frac{Q_{k}^{2}}{\sin^{2}(\gamma-\frac{2\pi}{3}k)}. \tag{19}\]
To first order in \(\alpha\), the collective Schrödinger equation corresponding to the Hamiltonian (14) reads
\[\left[-\frac{\hbar^{2}}{2B_{m}}\Delta+\frac{\alpha\hbar^{4}}{B_{m}}\Delta^{2} +V\left(\beta,\gamma\right)-E_{n,L}\right]\Psi(\beta,\gamma,\Omega)=0. \tag{20}\]
Using the definitions of the reduced energies and potentials, together with the transformation
\[\Psi(\beta,\gamma,\theta_{i})=(1+2\alpha\hbar^{2}\Delta)\zeta(\beta,\gamma, \theta_{i}), \tag{21}\]
Eq. (20) in the \(\gamma\)-unstable case simplifies to
\[\left[(1+4B_{m}\alpha(E-\nu(\beta)))\Delta+(\epsilon_{n,L}-\nu(\beta))\right] \zeta(\beta,\gamma,\Omega)=0, \tag{22}\]
and the separation of variables is achieved by taking the wave function in the form
\[\zeta(\beta,\gamma,\theta_{i})=\xi(\beta)\,\Phi(\gamma,\theta_{i}). \tag{23}\]
The radial part \(\xi(\beta)\) obeys
\[\left[\frac{1}{\beta^{4}}\frac{\partial}{\partial\beta}\beta^{4}\frac{ \partial}{\partial\beta}-\frac{\Lambda}{\beta^{2}}+\frac{2B_{m}}{\hbar^{2}} \tilde{K}(E,\beta)\right]\xi(\beta)=0, \tag{24}\]
where \(\tilde{K}(E,\beta)\) is given by
\[\tilde{K}(E,\beta)=\frac{E-\nu(\beta)}{1+4B_{m}\alpha(E-\nu(\beta))}. \tag{25}\]
## 4 E(5)-DDM models
### E(5)-DDM with Infinite Square Well potential
It is important to mention here that the signatures of the E(5) symmetry are derived theoretically from parameter-free calculations using the five-dimensional infinite well potential in the \(\beta\) shape variable, simulating a second-order phase transition between the spherical and \(\gamma\)-unstable phases. Therefore, since our goal is to study the E(5) model in the presence of the two concepts of DDM and ML, the first potential model to consider is the one known as IFSW, which has an anharmonic behaviour and is defined by:
\[u(\beta)=\left\{\begin{array}{ll}0,&\quad\mbox{if }0\leq\beta\leq\beta_{ \omega}\\ \infty,&\quad\mbox{if }\beta>\beta_{\omega}\end{array}\right., \tag{26}\]
where \(\beta_{\omega}\) specifies the width of the well. For this potential, the suitable deformation function \(f(\beta)\) is given by [32]:
\[f(\beta)=\frac{1}{\beta^{a}},\quad a<<1. \tag{27}\]
The eigenvalues and eigenfunctions of the Hamiltonian of equation (10) with the IFSW potential are obtained by considering the transformation \(\xi(\beta)=\beta^{a-3/2}F(\beta)\),
\[E_{s,L}=\frac{\hbar^{2}}{2B_{m}}\bar{k}_{s,u}^{2},\ \ \bar{k}_{s,u}=\frac{(1+a)}{\beta_{\omega}^{(1+a)}}\chi_{s,u}\, \tag{28}\]
and
\[\xi_{s,L}(\beta)=N_{s,L}\,\beta^{a-3/2}J_{u}\left(\frac{\bar{k}_{s,u}}{a+1} \beta^{a+1}\right),\ \ \ u=\frac{\bar{\eta}}{a+1}, \tag{29}\]
where \(N_{s,L}\) is a normalization constant of the wave function.
The energy spectrum is characterized by the principal quantum number \(s\) together with the total angular momentum \(L\). The parameter \(\bar{\eta}\) is given by
\[\bar{\eta}^{2}=\Lambda+a^{2}-3a+\frac{9}{4}, \tag{30}\]
where \(\chi_{s,u}\) is the \(s\)-th zero of the Bessel function of the first kind \(J_{u}(\bar{k}_{s,u}\beta_{\omega})\).
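As an illustration of Eqs. (28)-(30), the following minimal Python sketch evaluates the ground-state-band ratio \(R_{4/2}\) (energies in units of \(\hbar^{2}/2B_{m}\)); the values of \(a\) and \(\beta_{\omega}\) are illustrative, and the zeros of the Bessel function of (generally non-integer) order are located numerically.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zero(order, s):
    """s-th positive zero of J_order, found by scanning for sign changes."""
    zeros, x = [], 0.1
    while len(zeros) < s:
        if jv(order, x) * jv(order, x + 0.1) < 0:
            zeros.append(brentq(lambda t: jv(order, t), x, x + 0.1))
        x += 0.1
    return zeros[-1]

def energy(tau, a=0.05, beta_w=1.0, s=1):
    lam = tau * (tau + 3)                        # SO(5) Casimir, Eq. (34)
    eta = np.sqrt(lam + a**2 - 3*a + 9/4)        # Eq. (30)
    chi = bessel_zero(eta / (a + 1), s)          # zero of J_u, u = eta/(a+1)
    k = (1 + a) / beta_w**(1 + a) * chi          # Eq. (28)
    return k**2                                  # up to hbar^2/(2 B_m)

e0, e2, e4 = (energy(t) for t in (0, 1, 2))      # gsb levels, L = 2*tau
print("R_4/2 =", (e4 - e0) / (e2 - e0))          # close to the E(5) value 2.20
```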
### E(5)-DDM with Davidson potential
In this part we are going to consider the Davidson potential [47] with two parameters,
\[u(\beta)=c\beta^{2}+\frac{\beta_{0}}{\beta^{2}}, \tag{31}\]
this form is a special case of the Davidson potential, where \(c\) and \(\beta_{0}\) are two free scaling parameters, and \(\beta_{m}=\left(\frac{\beta_{0}}{c}\right)^{1/4}\) represents the position of the minimum of the potential. For \(\beta_{0}=0\) and \(c=1\), one recovers the original solution of Bohr [1], without any formalism, corresponding to a 5-dimensional (5-D) harmonic oscillator characterized by the symmetry chain U(5) \(\supset\) SO(5) \(\supset\) SO(3) \(\supset\) SO(2) [48]. Using the AIM [34] we get the generalized formula of the energy spectrum
\[E_{n,\tau}=\frac{\hbar^{2}}{2B_{m}}\Big{[}k_{0}+\frac{a}{2}(3+2\sigma_{1}+2 \sigma_{2}+\sigma_{1}\sigma_{2})+2a(2+\sigma_{1}+\sigma_{2})n+4an^{2}\Big{]}, \tag{32}\]
where \(n\) is the principal quantum number of the \(\beta\) vibrations, and \(k_{0}\), \(k_{2}\), \(k_{-2}\), \(\sigma_{1}\) and \(\sigma_{2}\) are given by
\[k_{0} =2a\Big{[}6+\Lambda\Big{]},\] \[k_{2} =a^{2}\Big{[}\Lambda+10\Big{]}+2c,\] \[k_{-2} =\Lambda+2+2\beta_{0}^{4}, \tag{33}\] \[\sigma_{1} =\sqrt{1+4k_{-2}},\] \[\sigma_{2} =\sqrt{1+4\frac{k_{2}}{a^{2}}},\]
while \(\Lambda\) is the eigenvalue of the second-order Casimir operator of SO(5) given by
\[\Lambda=\tau(\tau+3), \tag{34}\]
where \(\tau\) is the seniority quantum number [49], characterizing the irreducible representations of SO(5). The values of the angular momentum \(L\) occurring for each \(\tau\) are provided by a well-known algorithm and are listed in [18; 50]. Within the ground state band (gsb) one has \(L=2\tau\). It should be noted that the appropriate deformation function used for the Davidson potential has the form [20; 23]:
\[f(\beta)=1+a\beta^{2},\ \ \ a<<1. \tag{35}\]
This choice is taken so as to arrive at an exact solution to the problem.
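A minimal numerical sketch of the spectrum (32)-(34), implementing the formulas exactly as written, with illustrative parameter values (energies in units of \(\hbar^{2}/2B_{m}\)):

```python
import numpy as np

def energy(n, tau, a=0.01, c=2.0, beta0=0.3):
    lam = tau * (tau + 3)                          # Eq. (34)
    k0 = 2 * a * (6 + lam)                         # Eq. (33)
    k2 = a**2 * (lam + 10) + 2 * c
    km2 = lam + 2 + 2 * beta0**4
    s1 = np.sqrt(1 + 4 * km2)
    s2 = np.sqrt(1 + 4 * k2 / a**2)
    return (k0 + a / 2 * (3 + 2*s1 + 2*s2 + s1*s2)  # Eq. (32)
            + 2 * a * (2 + s1 + s2) * n + 4 * a * n**2)

e = {tau: energy(0, tau) for tau in range(3)}       # gsb levels, L = 2*tau
print("R_4/2 =", (e[2] - e[0]) / (e[1] - e[0]))     # vibrational-like here
print("R_0/2 =", (energy(1, 0) - e[0]) / (e[1] - e[0]))  # beta band head
```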
### E(5)-DDM with Kratzer potential
Here we are going to consider the Kratzer potential [51]
\[u(\beta)=\frac{-1}{\beta}+\frac{\tilde{B}}{\beta^{2}}, \tag{36}\]
for this potential the deformation function is given by [21]
\[f(\beta)=1+a\beta,\quad a<<1. \tag{37}\]
So, in \(\gamma\)-unstable nuclei, the energy spectrum of Kratzer reads [21]
\[E_{n,\tau}=-\frac{\hbar^{2}}{2B_{m}\left(n+\frac{1}{2}+\sqrt{\left(\tau+\frac {3}{2}\right)^{2}+2\tilde{B}}\right)^{2}}, \tag{38}\]
and the corresponding excited-state wave functions have the following form:
\[\xi_{n}(\beta)=N_{n}\beta^{-\mu_{n}}f^{\frac{1}{2}\left(\mu_{n}+\frac{\tilde{ K}}{\mu_{n}}-1\right)}P_{n}^{\left(\mu_{n}-\frac{\tilde{K}}{\mu_{n}},\,\mu_{n}+ \frac{\tilde{K}}{\mu_{n}}\right)}\left(\frac{2+a\beta}{a\beta}\right), \tag{39}\]
where \(\tilde{K}=k_{-2}-\frac{k_{-1}}{a}\) and \(\mu_{n}=\mu-n\), while \(P_{n}^{(a,b)}(t)\) denotes the Jacobi polynomials [52]; the normalization coefficient \(N_{n}\) is taken from Ref. [21].
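A short numerical sketch of the spectrum (38), with an illustrative value of \(\tilde{B}\); the levels themselves are negative (bound states), so ratios are formed from excitation energies measured from the ground state:

```python
import numpy as np

def energy(n, tau, B=20.0):
    # Eq. (38) in units of hbar^2/(2 B_m); B stands for B_tilde of Eq. (36)
    return -1.0 / (n + 0.5 + np.sqrt((tau + 1.5)**2 + 2 * B))**2

ex = lambda n, tau: energy(n, tau) - energy(0, 0)   # excitation energies
print("R_4/2 =", ex(0, 2) / ex(0, 1))
print("R_0/2 =", ex(1, 0) / ex(0, 1))
```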
## 5 E(5)-ML models
### E(5)-ML with IFSW
In the case of the IFSW potential (26), the radial equation (24) becomes
\[\left[\frac{d^{2}}{d\beta^{2}}+\frac{4}{\beta}\frac{d}{d\beta}+\left(\frac{2B _{m}}{\hbar^{2}}\tilde{K}-\frac{\Lambda}{\beta^{2}}\right)\right]\xi(\beta)=0. \tag{40}\]
This equation is solved by using a wave function of the form \(\xi(\beta)=\beta^{-\frac{3}{2}}f(\beta)\); it then becomes
\[\left[\frac{d^{2}}{d\beta^{2}}+\frac{1}{\beta}\frac{d}{d\beta}+\left(\bar{k}^ {2}-\frac{\eta^{2}}{\beta^{2}}\right)\right]f(\beta)=0, \tag{41}\]
with
\[\eta^{2}=\frac{9}{4}+\Lambda\text{ and }\bar{k}^{2}=\frac{2B_{m}}{\hbar^{2}} \tilde{K}. \tag{42}\]
The eigenfunction of the differential equation (40), which is finite at \(\beta=0\), is given by
\[\xi(\beta)=N_{n}\,\beta^{-\frac{3}{2}}J_{\eta}(\bar{k}\beta), \tag{43}\]
and the corresponding energy spectra are therefore obtained:
\[E_{s,L}=\frac{\hbar^{2}}{2B_{m}}\frac{\left(\frac{\chi_{s,\eta}}{\beta_{ \omega}}\right)^{2}}{1-2\alpha\hbar^{2}\left(\frac{\chi_{s,\eta}}{\beta_{ \omega}}\right)^{2}}, \tag{44}\]
where \(\chi_{s,\eta}\) is the s-th zero of the Bessel function of the first kind \(J_{\eta}(\bar{k}\beta)\).
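A minimal sketch of Eq. (44) in reduced units \(\hbar=B_{m}=1\), with illustrative values of \(\alpha\) and \(\beta_{\omega}\), showing how the ML parameter pushes \(R_{4/2}\) above its \(\alpha=0\) value of about 2.20:

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zero(order, s=1):
    zeros, x = [], 0.1
    while len(zeros) < s:
        if jv(order, x) * jv(order, x + 0.1) < 0:
            zeros.append(brentq(lambda t: jv(order, t), x, x + 0.1))
        x += 0.1
    return zeros[-1]

def energy(tau, alpha=0.0005, beta_w=1.0, s=1):
    eta = np.sqrt(9/4 + tau * (tau + 3))     # Eq. (42)
    x2 = (bessel_zero(eta, s) / beta_w)**2
    return 0.5 * x2 / (1 - 2 * alpha * x2)   # Eq. (44) with hbar = B_m = 1

e = [energy(t) for t in range(3)]
print("R_4/2 =", (e[2] - e[0]) / (e[1] - e[0]))  # slightly above 2.20 for alpha > 0
```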
### E(5)-ML with Davidson potential
In this section we jointly employ the AIM and quantum perturbation theory as methods of resolution in order to solve the collective equation (20), since this equation is not exactly solvable for the Davidson potential (31). In this context, we emphasize that we have already obtained good results using these methods in previous works [30; 32; 33]. An analytical formula for the energies of the ground, \(\beta\) and \(\gamma\) bands is then derived within the framework of the E(5) symmetry with the Davidson potential in the \(\beta\) shape variable. The energy spectrum takes the form:
\[E_{n,L}=E_{n,L}^{(0)}+\Delta E_{n,L}, \tag{45}\]
where \(E_{n,L}^{(0)}\) is the unperturbed energy
\[E_{n,L}^{0}=\sqrt{\frac{\hbar^{2}}{2B_{m}}}\rho(4\mu+2+4n), \tag{46}\]
where
\[\mu=\frac{1}{2}+\frac{1}{2}\sqrt{4\Lambda+8\beta_{0}^{2}+9}\;,\quad\rho=\sqrt{ \frac{c}{2}}. \tag{47}\]
It is also possible to get unperturbed wave functions:
\[\xi(\beta)=N_{n,L}\beta^{\mu-4}e^{-\rho\beta^{2}}{}_{1}F_{1}(-n;\mu+\frac{1}{2 };2\rho\beta^{2})=N_{n,L}\beta^{\mu-4}e^{-\rho\beta^{2}}\mathcal{L}_{n}^{(\mu -\frac{1}{2})}(2\rho\beta^{2}), \tag{48}\]
where \(\mathcal{L}_{n}^{(\mu-\frac{1}{2})}(2\rho\beta^{2})\) are the associated Laguerre polynomials and \(N_{n,L}\) is the normalization constant. The quantity \(\Delta E_{n,L}\) is the correction to the energy spectrum induced by the minimal length, obtained with first-order perturbation theory. It is given by:
\[\Delta E_{n,L}=\frac{\alpha\hbar^{4}}{B_{m}}<\psi^{0}(\beta,\theta_{i})\mid \Delta^{2}\mid\psi^{0}(\beta,\theta_{i})>=4\alpha B_{m}[(E_{n,L}^{0})^{2}+2c \beta_{0}-2E_{n,L}^{0}(c\overline{\beta^{2}}+\beta_{0}\overline{\beta^{-2}})+ (c^{2}\overline{\beta^{4}}+\beta_{0}^{2}\overline{\beta^{-4}})], \tag{49}\]
where \(\psi^{0}(\beta,\theta_{i})\) are the eigenfunctions, solutions of the Schrödinger equation for \(\alpha=0\). The quantities \(\overline{\beta^{i}}\) \((i=2,-2,4,-4)\) are expressed as follows:
\[\begin{cases}\overline{\beta^{2}}=\frac{4n+1+2\mu}{4\rho},\\ \overline{\beta^{-2}}=\frac{4\rho}{2\mu-1},\\ \overline{\beta^{4}}=\frac{4\mu^{2}+24n^{2}+8\mu+36n+19}{16\rho^{2}},\\ \overline{\beta^{-4}}=\frac{16\rho^{2}(4n+1+2\mu)}{(2\mu+1)(4\mu^{2}-8\mu+3)},\end{cases} \tag{50}\]
The corrected wave function can also be determined by means of the quantum perturbation method as
\[F_{n}^{Corr}(\beta)=F_{n}(\beta)+\sum_{k\neq n}\left[\frac{\int_{0}^{\infty} \beta^{2}F_{k}(\beta)\vartheta(n,\alpha,c,\beta_{0},E_{n,L}^{(0)})F_{n}(\beta )d\beta}{E_{n,L}^{(0)}-E_{k,L}^{(0)}}\right]F_{k}(\beta), \tag{51}\]
with,
\[\vartheta(n,\alpha,c,\beta_{0},E_{n,L}^{(0)})=4B_{m}\alpha\bigg{[}(E_{n,L}^{0 })^{2}+2c\beta_{0}-2E_{n,L}^{0}(c\overline{\beta^{2}}+\beta_{0}\overline{ \beta^{-2}})+(c^{2}\overline{\beta^{4}}+\beta_{0}^{2}\overline{\beta^{-4}}) \bigg{]}. \tag{52}\]
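A minimal sketch assembling Eqs. (45)-(50) in reduced units \(\hbar=B_{m}=1\); the parameter values are illustrative, and \(\rho=\sqrt{c/2}\) is assumed as in Eq. (47):

```python
import numpy as np

def davidson_ml_energy(n, tau, alpha=0.001, c=2.0, beta0=0.3):
    lam = tau * (tau + 3)
    mu = 0.5 + 0.5 * np.sqrt(4 * lam + 8 * beta0**2 + 9)   # Eq. (47)
    rho = np.sqrt(c / 2)                                   # assumed, Eq. (47)
    e0 = np.sqrt(0.5) * rho * (4 * mu + 2 + 4 * n)         # Eq. (46)
    # expectation values of powers of beta, Eq. (50)
    b2 = (4*n + 1 + 2*mu) / (4*rho)
    bm2 = 4 * rho / (2*mu - 1)
    b4 = (4*mu**2 + 24*n**2 + 8*mu + 36*n + 19) / (16*rho**2)
    bm4 = 16 * rho**2 * (4*n + 1 + 2*mu) / ((2*mu + 1) * (4*mu**2 - 8*mu + 3))
    de = 4 * alpha * (e0**2 + 2*c*beta0 - 2*e0*(c*b2 + beta0*bm2)
                      + c**2*b4 + beta0**2*bm4)            # Eq. (49)
    return e0 + de                                         # Eq. (45)

ex = lambda n, tau: davidson_ml_energy(n, tau) - davidson_ml_energy(0, 0)
print("R_4/2 =", ex(0, 2) / ex(0, 1))
```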
### E(5)-ML with Kratzer potential
In the case of the ML with the Kratzer potential in the form (36), the collective equation (20) has no analytical solution, so to solve it we proceed as for the Davidson potential. For the unperturbed energy one obtains:
\[E_{n,\tau}^{0}=\frac{\hbar^{2}}{2B_{m}}\left[\frac{1}{2(n+\mu)}\right]^{2}\, \tag{53}\]
with
\[\mu=\frac{1}{2}+\sqrt{\tilde{B}+\left(\tau+\frac{3}{2}\right)^{2}}, \tag{54}\]
and the unperturbed wave function is given by
\[F(\beta)=N_{n,L}\,\beta^{\mu-2}e^{-\rho\beta}\mathcal{L}_{n}^{(2\mu-1)}(2\rho \beta), \tag{55}\]
where \(\mathcal{L}_{n}^{(2\mu-1)}\) denotes the associated Laguerre polynomials and \(N_{n,L}\) is a normalization constant given by
\[N_{n,L}=\left[\frac{(2\rho)^{(2\mu+1)}\Gamma(n+1)}{\Gamma(n+2\mu)(2n+2\mu)} \right]^{\frac{1}{2}}. \tag{56}\]
The total energy spectrum in this case has the form:
\[E_{n,\tau}=E_{n,\tau}^{0}+\Delta E_{n,\tau}, \tag{57}\]
where \(\Delta E\) is given by
\[\Delta E_{n,\tau}=4\alpha B_{m}\bigg{[}(E_{n,L}^{0})^{2}+2E_{n,L}^{0}\overline {\beta^{-1}}+\overline{\beta^{-2}}(1-2AE_{n,L}^{0})-2A\overline{\beta^{-3}}+A ^{2}\overline{\beta^{-4}}\bigg{]}, \tag{58}\]
where the quantities \(\overline{\beta^{i}}\) \((i=-1,-2,-3,-4)\) are expressed as follows
\[\begin{cases}\overline{\beta^{-1}}=\frac{1}{2}\cdot\frac{k}{n+\mu},\\ \overline{\beta^{-2}}=\frac{1}{2}\cdot\frac{k^{2}}{(2\mu+1)(\mu+n)},\\ \overline{\beta^{-3}}=\frac{1}{4}\cdot\frac{k^{3}}{\mu(\mu-1)(2\mu-1)},\\ \overline{\beta^{-4}}=\frac{1}{4}\cdot\frac{k^{4}(\mu+3n)(2\mu+1)+3n(n-1)}{\mu(4 \mu^{2}-1)(\mu-1)(2\mu-3)(\mu+n)},\end{cases} \tag{59}\]
The corrected wave function can be calculated using Eq. (51) with
\[\vartheta(n,\alpha,c,\beta_{0},E_{n,L}^{(0)})=4\alpha B_{m}\bigg{[}(E_{n,L}^{ 0})^{2}+2E_{n,L}^{0}\overline{\beta^{-1}}+\overline{\beta^{-2}}(1-2AE_{n,L}^{ 0})-2A\overline{\beta^{-3}}+A^{2}\overline{\beta^{-4}}\bigg{]}. \tag{60}\]
## 6 B(E2) Transition rates
In general, the quadrupole operator is given by [2]
\[T_{M}^{(E2)}=t\beta\bigg{[}D_{M,0}^{(2)}(\theta_{i})\cos(\gamma)+\frac{1}{ \sqrt{2}}\bigg{(}D_{M,2}^{(2)}(\theta_{i})+D_{M,-2}^{(2)}(\theta_{i})\bigg{)} \sin(\gamma)\bigg{]}, \tag{61}\]
where \(t\) is a scaling factor, and \(D_{M,\alpha}^{(2)}(\theta_{i})\) (\(\alpha=0,2,-2\)) denote the Wigner functions of the Euler angles. For \(\gamma\)-unstable nuclei, the \(B(E2)\) transition rates from an initial to a final
state are given by:
\[B(E2;L_{i}\alpha_{i}\to L_{f}\alpha_{f})=\frac{1}{2L_{i}+1}\left|\langle L_{f }\alpha_{f}\|T^{(E2)}\|L_{i}\alpha_{i}\rangle\right|^{2}. \tag{62}\]
The reduced matrix element \(\langle L_{f}\alpha_{f}\|T^{(E2)}\|L_{i}\alpha_{i}\rangle\) is obtained through the Wigner-Eckart theorem [53]. Then the B(E2) is given by:
\[B(E2;L_{n,\tau}\to(L+2)_{n,\tau+1})=\frac{(\tau+1)(4\tau+5)}{(2\tau+5)(4\tau+ 1)}t^{2}\times I_{\beta}\left(n_{i},L_{i},\alpha_{i},n_{f},L_{f},\alpha_{f} \right)^{2}, \tag{63}\]
where \(I_{\beta}(n_{i},L_{i},\alpha_{i},n_{f},L_{f},\alpha_{f})\) is the integral over \(\beta\), having the following form
\[I_{\beta}(n_{i},L_{i},\alpha_{i},n_{f},L_{f},\alpha_{f})=\int_{0}^{\infty} \beta\;\xi_{n_{i},L_{i},\alpha_{i}}(\beta)\;\xi_{n_{f},L_{f},\alpha_{f}}( \beta)\;\beta^{4}\;d\beta, \tag{64}\]
the factor \(\beta^{4}\) comes from the volume element.
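The \(\tau\)-dependent prefactor of Eq. (63) can be tabulated separately; the sketch below evaluates it exactly as written. The radial integrals \(I_{\beta}\) of Eq. (64) carry the remaining \(\tau\) dependence, so these numbers are not yet the full ratios reported in Tables 3 and 4.

```python
from fractions import Fraction

def so5_factor(tau):
    # prefactor of Eq. (63) for the gsb transition (L+2) -> L with L = 2*tau
    return Fraction((tau + 1) * (4 * tau + 5), (2 * tau + 5) * (4 * tau + 1))

r = lambda tau: so5_factor(tau) / so5_factor(0)   # normalize to 2 -> 0
for tau in range(1, 4):                           # 4->2, 6->4, 8->6 over 2->0
    print(f"prefactor ratio (tau={tau}): {float(r(tau)):.3f}")
```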
## 7 Results and Discussion
In order to see how the models presented in this paper deal with concrete nuclei, we have used the nuclide chart as well as recent works, searching for nuclei with a collective spectrum populated in most cases with at least eight states and whose observed experimental ratios \(R_{L/2}\) (\(L=0,2,4\)) fall in the existence interval of our models. Three phenomenological potentials, namely the infinite square well, Davidson and Kratzer, are used to compare and examine the characteristic structure of these nuclei in the vicinity of the critical point symmetry E(5). The comparison has been done within two formalisms, the minimal length (ML) and the deformation-dependent mass (DDM), for 52 even-even nuclei, namely: \({}^{98-104}\)Ru, \({}^{102-116}\)Pd, \({}^{106-120}\)Cd, \({}^{118-134}\)Xe, \({}^{130-136}\)Ba, \({}^{142}\)Ba, \({}^{134-138}\)Ce, \({}^{140}\)Nd, \({}^{148}\)Gd, \({}^{140-142}\)Sm, \({}^{142-144}\)Gd, \({}^{152}\)Gd, \({}^{186-200}\)Pt. It should be noted that the same set of experimental data for the spectra has been chosen in earlier works [20; 21], so these nuclei are good candidates for \(\gamma\)-unstable behavior. This section presents all the results obtained with the E(5)-ML and E(5)-DDM models in the following subsections.
### Comparison of energy spectra ratios \(R_{L/2}\) of excited collective states between ML and DDM formalisms for the three potentials
The interesting low-lying bands within the E(5) symmetry are basically distinguished by the principal quantum number \(n\) and the angular momentum \(L\). These lowest bands are as follows: the ground state band (g.s.) is characterized by \(n=0\), the \(\beta\)-band by \(n=1\), and the \(\gamma\)-band is obtained from degeneracies of the g.s. levels as follows: the \(L=2\) member of the \(\gamma\) band is degenerate with the \(L=4\) member of the g.s. band, the \(L=3,4\) members of the same band are degenerate with the \(L=6\) member, the \(L=5,6\) members with the \(L=8\) member, and so on.
The theoretical energy ratios \(R_{4/2}=E(4_{1}^{+})/E(2_{1}^{+})\) of the ground state band, as well as the \(\beta\) and \(\gamma\) bandheads normalized to the \(2_{1}^{+}\) state, denoted by \(R_{0/2}=E(0_{\beta}^{+})/E(2_{1}^{+})\) and \(R_{2/2}=E(2_{\gamma}^{+})/E(2_{1}^{+})\) respectively, are presented in Tables (1) and (2). The numerical results of our elaborated models given in Tables (1) and (2) are obtained with Eqs. (28), (32) and (38) for the DDM formalism and Eqs. (44), (45) and (57) for the ML one. It is worth noting that the results presented for the DDM have been obtained for \(\delta=\lambda=0\). In fact, different choices for \(\lambda\) and \(\delta\) in the effective potential (11) lead to a renormalization of the optimal parameter values, so that the predicted energy levels remain exactly the same.
An overall impression of the comparison between the theoretical and experimental spectra presented in this work is that the theoretical predictions of our models keep up with the corresponding experimental values for the few lowest collective states, after which the agreement degrades, especially with the Davidson potential. The root mean squared error (r.m.s.) is used to evaluate this statement:
\[\sigma=\sqrt{\frac{\sum_{i=1}^{n}(E_{i}(exp)-E_{i}(th))^{2}}{(n-1)E(2_{1}^{+} )^{2}}}, \tag{65}\]
where \(E_{i}(th)\) and \(E_{i}(exp)\) are the theoretical and experimental energies of the \(i_{th}\) level, respectively, whereas \(n\) indicates the number of states. \(E(2_{1}^{+})\) is the energy of the first excited level of the g.s band.
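A minimal sketch of Eq. (65), with illustrative level energies rather than values taken from the tables:

```python
import numpy as np

def sigma_rms(e_exp, e_th):
    """Eq. (65): e_exp[0] is taken to be the 2_1+ level energy."""
    e_exp, e_th = np.asarray(e_exp, float), np.asarray(e_th, float)
    n = len(e_exp)
    return np.sqrt(np.sum((e_exp - e_th)**2) / ((n - 1) * e_exp[0]**2))

e_exp = [500.0, 1100.0, 1800.0, 2600.0]   # 2_1, 4_1, 6_1, 8_1 (illustrative, keV)
e_th  = [500.0, 1150.0, 1750.0, 2700.0]
print(sigma_rms(e_exp, e_th))
```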
Now we point out the results that we have obtained in this work by involving the ML and DDM quantum concepts in E(5), recalling that E(5) provides a CPS description of the second-order phase transition along the trajectory from a harmonic vibrator to a \(\gamma\)-soft rotor. Historically, E(5) was revealed by Iachello even before X(5), the name being derived from the Euclidean algebra in five dimensions. Obviously, the first signature in nuclear structure one would look for is yrast energies consistent with E(5). The \(R_{4/2}\) value is 2.19, intermediate, as would be expected, between the harmonic vibrator (\(R_{4/2}=2.0\)) and the \(\gamma\)-soft rotor (\(R_{4/2}=2.5\)). It should be noted, in this context, that the numerical values of the signature ratio \(R_{4/2}\), within the framework of each proposed model, depend on the used potential, as can be seen in Tables 1 and 2. The evolution of the \(R_{4/2}\) ratio as a function of the neutron number for some isotopic chains is depicted in figure (1) for the Davidson and Kratzer potentials, which thus offer a more flexible description of the nuclei.
The results are quite interesting. Furthermore, for all considered nuclei, one can observe from the same tables that the values of both structural parameters \(a\) and \(\alpha\) do not exceed 0.2, which is consistent with the models' assumption of small deformations as well as with the methodologies utilized. Another observation that can be drawn is that the effect of the DDM and ML formalisms is greater for the Davidson potential than for the others, bearing in mind that, for example, in our model dubbed E(5)-DDM-D the Davidson potential involves two free parameters, while in Ref.
Figure 1: Evolution of \(R_{4/2}\) as a function of the neutron number for Pt (upper panel (a)), Pd (upper panel (b)), Xe (lower panel (c)) and Cd (lower panel (d)) isotopic chains. The O(6) and U(5) limits are also shown for comparison. The results presented here are obtained by fitting the data for every considered nucleus using the model parameters.
[20] the used potential contains only one. Our results are therefore much better: the energy adjustment based on two parameters gives results closer to the experimental ones and allows us to treat more nuclei.
Despite the fact that the Davidson and Kratzer potentials have very different shapes, especially for high values of \(\beta\), the numerical results indicate that a good overall agreement with the experimental data is achieved in both cases, especially for \({}^{106}\)Pd, \({}^{106}\)Cd, \({}^{108}\)Cd, \({}^{128}\)Xe and \({}^{134}\)Ba. Those nuclei have already been cited in earlier works [15; 21] as good candidates for E(5). Depending on the nature of the nuclei (in particular their shapes), as shown in figure (1), each potential has its own ability to describe them, especially when the concepts are included. However, a global comparison of the results given in both tables, together with the values of \(\sigma\), shows that the ML formalism is somewhat better for the description of most nuclei. In addition to the comparison made between the three potentials in each of the tables, one can also compare the two formalisms ML and DDM. Here, one can see that the parameters \(a\) and \(\alpha\) are very close to each other for many nuclei. These remarks lead us to wonder about the relation existing between the parameters of each model and potential. Those parameters are plotted and studied in the following paragraph.
### Correlation between ML and DDM formalisms for the three potentials
The values of the parameters \(a\) and \(\alpha\) that were naturally acquired from the fit on all available experimental data are depicted in Fig. 2.
Indeed, this figure reveals a strong correlation between the ML and DDM concepts: the cross-correlation coefficient for Kratzer and IFSW is close to 90%, whereas for Davidson it is greater than 99%. In the case of the IFSW and Davidson potentials, the related parameters \(a\) and \(\alpha\) have values ranging from 0.001 to 0.2, while in the case of the Kratzer potential they are between 0.001 and 0.17. It is worth noting that several of the derived values of the parameters \(a\) and \(\alpha\) are extremely near to one another, so the corresponding points on the correlation figures are superimposed. Exactly as in the case of the \(\gamma\)-rigid regime, the occurrence of such a correlation suggests that the two parameters \(a\) and \(\alpha\) should not be seen as optional parameters that might take arbitrary values to fit the experimental data, but rather as essential parameters related to the model's structure.
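The quoted cross-correlation is the Pearson coefficient between the fitted values of \(a\) and \(\alpha\) across nuclei; a minimal sketch with illustrative stand-in arrays (not the actual fit results of Tables 1 and 2):

```python
import numpy as np

# Pearson cross-correlation between the DDM parameters a and the ML
# parameters alpha, one pair per nucleus (illustrative values only).
a     = np.array([0.0024, 0.0030, 0.0099, 0.0100, 0.0050, 0.0010])
alpha = np.array([0.0020, 0.0050, 0.0054, 0.0115, 0.0044, 0.0010])
print("r =", np.corrcoef(a, alpha)[0, 1])
```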
### B(E2) transition rates
The quantum observable associated with collectivity is the electromagnetic transition probability, which is most directly related to the shape of the nuclear charge distribution. It indicates the presence of coherence in nuclear motion
Figure 2: Correlation between the parameters \(a\) and \(\alpha\) in the E(5) model for the Davidson (upper panel (a)), Kratzer (upper panel (b)) and IFSW (lower panel (c)) potentials.
of nuclei. The electromagnetic transition probabilities thus provide a very sensitive test of theoretical models, since they are correlated with the wave functions of the excited states. It is worth noting that the electric quadrupole (E2) excitations are more prevalent than the other types of excitations; it is for this reason that the electric quadrupole transition probability B(E2) represents the most important measurement in the study of nuclear collectivity. So, in the current study, through Tables (3) and (4) we provide diverse representative B(E2) transitions, normalized to the transition from the first excited level in the ground state band (gsb) and calculated with E(5)-DDM and E(5)-ML for 52 nuclei, employing the same optimal values of the free parameters derived from fitting the energy spectra of each nucleus. Through the obtained theoretical results, one can easily notice an overall agreement with experiment for the B(E2) transition rates within the gsb, for which experimental data are available.
### Moment of inertia
In this section we investigate in detail the influence of the two scenarios (DDM and ML) on the variation of the moment of inertia. There is little question that such a study will help us to deepen and clarify our current research on the relationship between the two concepts. To begin, it is well understood that the kinematics of nuclei in the rotational mode is often represented by the moment of inertia, which is traditionally defined as the ratio of the angular momentum \(L\) to the angular velocity. However, for quantum systems it is very difficult, if not impossible, to offer a universal definition of the angular velocity as an observable, although there are certain circumstances in which this notion is distinctly defined, as in the cranking procedure. The related Inglis model [54] posits that the particles are surrounded by a distorted rotating self-consistent mean field, which resembles an externally cranked potential. In this regard, the relationship that enables us to determine the moment of inertia of the ground state band as a function of the total angular momentum has the form [55]:
\[\theta(L)\approx\frac{2L-1}{E(L)-E(L-2)}, \tag{66}\]
which is widely used to clarify the backbending effect, an apparent irregularity in the evolution of the angular momentum as a function of the frequency, caused by band crossing.
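A minimal sketch of Eq. (66) evaluated on an illustrative yrast sequence (not data for a specific nucleus):

```python
# Kinematic moment of inertia, Eq. (66), from yrast level energies.
levels = {0: 0.0, 2: 0.44, 4: 1.04, 6: 1.75, 8: 2.56}   # L: E(L) in MeV, illustrative

for L in (2, 4, 6, 8):
    theta = (2 * L - 1) / (levels[L] - levels[L - 2])
    print(f"theta(L={L}) = {theta:.1f} MeV^-1")
```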
In order to investigate the correlation between both concepts, as previously done with other models, we plot the moments of inertia corresponding to both concepts for two candidates (\({}^{128}\)Xe and \({}^{134}\)Ba) of our elaborated models E(5)-ML and E(5)-DDM for the Davidson and Kratzer potentials, as can be seen in Fig. 3. From this figure we observe a strong correlation between both concepts, justified by the high values of the cross-correlation coefficient.
To end this part, an important note related to the moment of inertia should be mentioned. We observed that the impacts of both concepts (DDM and ML) on the moment of inertia show the same behavior for the different potential models in the setting of E(5). Indeed, they have a damping effect on the increase of the moment of inertia with angular momentum, keeping it within the range of the experimental data and thereby removing a drawback of the model. This finding was already revealed in our earlier work [32], in which we examined the critical point symmetries Z(4) and X(3) under both concepts.
### Discussion of realization of E(5) CPS in nuclei
The conclusions presented in the final sections of this article are supported by empirical evidence put forward by several authors, which indicates that the nuclei located on or close to the bisectrix depicted in figure (2) are the most promising candidates. As previously stated, the nuclei that we are specifically referring to are: \({}^{106}\)Pd, \({}^{106}\)Cd, \({}^{108}\)Cd, \({}^{128}\)Xe, and \({}^{134}\)Ba. To exemplify the experimental evidence, Casten and Zamfir [56] proposed \({}^{134}\)Ba as a possible experimental candidate for an E(5) nucleus. Although there were certain limitations in comparing the absolute transition probabilities with theoretical calculations, they highlighted that \({}^{134}\)Ba comes close to exemplifying the E(5) symmetry [56]. In the same context, Zamfir et al. [57] proposed \({}^{102}\)Pd as another possible candidate for an E(5) nucleus, but the lifetimes reported by Konstantinopoulos et al. [58] do not seem to support this suggestion. To conduct a more systematic search for possible candidates exhibiting E(5) characteristics, Clark et al. [59] scoured the ENSDF database [60] for nuclei that met certain criteria. The first step was to look for transitional nuclei in the mass regions \(30\leqslant Z\leqslant 82\) and \(A\geqslant 60\), with \(2.00<E\left(4^{+}_{1}\right)/E\left(2^{+}_{1}\right)\leqslant 2.40\), which yielded over 70 possible candidates. The second requirement was the existence of two excited \(0^{+}\) states
Figure 3: Correlation between the moments of inertia for the ML and DDM formalisms with the Davidson (upper panels (a) and (b)) and Kratzer (lower panels (c) and (d)) potentials for \({}^{128}\)Xe and \({}^{134}\)Ba.
within 2.5 and 4.5 times \(E\left(2_{1}^{+}\right)\). Only six nuclei passed both criteria: \({}^{102}\)Pd, \({}^{106,108}\)Cd, \({}^{124}\)Te, \({}^{128}\)Xe, and \({}^{134}\)Ba. After examining the available data against the remaining criteria regarding the decays of the excited \(0^{+}\) states, only \({}^{128}\)Xe and \({}^{134}\)Ba were identified as viable candidates. Since \({}^{134}\)Ba had already been suggested as a candidate by Casten and Zamfir [56], \({}^{128}\)Xe was the sole newly identified E(5) candidate. Another study, conducted by Coquard et al. [61] in 2009, used Coulomb excitation in inverse kinematics to analyze \({}^{128}\)Xe and obtain detailed spectroscopic information, including \(B(E2)\) values for numerous transitions. Based on their analysis of the relative energies of the excited \(0^{+}\) states and the absolute \(B(E2)\) values for their decays, the researchers concluded that \({}^{128}\)Xe is not an E(5) nucleus and proposed that \({}^{130}\)Xe might be a more suitable candidate. However, in subsequent Coulomb excitation measurements on \({}^{130,132}\)Xe by the same group, no evaluation was made of the E(5) character of these isotopes, likely because the excited \(0^{+}\) states were not significantly populated in either nucleus. Peters et al. [62] also recently suggested that the same nuclei, i.e., \({}^{130,132}\)Xe, have similar nuclear structures that exhibit the E(5) critical point symmetry. They investigated the level structures of these nuclei using the inelastic neutron scattering reaction, followed by \(\gamma\)-ray detection. The researchers measured the level lifetimes with the Doppler-shift attenuation method and characterized the low-lying excited states. It is worth mentioning that Liu and Zhang [63] also proposed the use of \({}^{130}\)Xe. Additionally, Frank et al. [64] and Long et al. [65] independently suggested the \({}^{104}\)Ru and \({}^{114}\)Cd nuclei, respectively, which are listed in Tables 1 and 2.
In certain instances, when comparing the results of a theoretical model to experimental data, a given level of precision may not be attainable. Nonetheless, this outcome should not be interpreted as a failure, but rather as an indication that other factors, not included in the model, may be involved.
## 8 Conclusion
Through this study, we have tested the robustness of the E(5) solution under the ML and DDM concepts for some concrete nuclei, by addressing the role of the IFSW, Davidson and Kratzer potentials in quantities such as energy spectra, electric quadrupole transitions and moments of inertia. After a successful reproduction of the experimental energy spectra and of the electromagnetic B(E2) transition probabilities, the correlation between the two quantum concepts (ML and DDM), previously observed for some nuclei in the vicinity of the critical point symmetries X(3) and Z(4) [32], has also been observed for E(5). The present study thus corroborates the fact that the revealed correlation between both quantum concepts (ML and DDM) is universal and does not depend on the used model.
\begin{table}
\begin{tabular}{|c c c c c c c c c c|} \hline Nucleus & & \(R_{4/2}\) & \(R_{0/2}\) & \(R_{2/2}\) & \(\alpha\) & \(c\) & \(\beta_{0}\) & \(B\) & \(\sigma_{r.m.s.}\) \\ \hline \hline \({}^{116}\)Pd & Exp & 2.58 & 3.3 & 2.2 & & & & & \\ & D & 2.34 & 3.45 & 2.3 & 0.0024 & 14.38 & 0.57 & - & 0.57 \\ & K & 2.4 & 5.59 & 2.4 & 0.0050 & - & - & 146 & 0.93 \\ & IFSW & 2.24 & 3.14 & 2.2 & 0.0843 & - & - & - & 0.66 \\ \({}^{106}\)Cd & Exp & 2.36 & 2.8 & 2.7 & & & & & \\ & D & 2.23 & 2.97 & 2.2 & 0.0030 & 0.003 & 0.51 & - & 0.24 \\ & K & 2.31 & 3.65 & 2.3 & 0.0040 & - & - & 64 & 0.35 \\ & IFSW & 2.19 & 3.03 & 2.1 & 0.0010 & - & - & - & 0.32 \\ \({}^{108}\)Cd & Exp & 2.38 & 2.7 & 2.52 & & & & & \\ & D & 2.13 & 2.37 & 2.1 & 0.0099 & 1.96 & 0.12 & - & 0.48 \\ & K & 2.32 & 3.8 & 2.3 & 0.0050 & - & - & 69 & 0.69 \\ & IFSW & 2.19 & 3.03 & 2.2 & 0.0010 & - & - & - & 0.46 \\ \({}^{110}\)Cd & Exp & 2.35 & 2.2 & 2.2 & & & & & \\ & D & 2.07 & 2.13 & 2.0 & 0.0100 & 2.13 & 0 & - & 0.54 \\ & K & 2.26 & 3.18 & 2.2 & 0.0100 & - & - & 48 & 0.74 \\ & IFSW & 2.19 & 3.03 & 2.1 & 0.0010 & - & - & - & 0.76 \\ \({}^{114}\)Cd & Exp & 2.30 & 2.0 & 2.2 & & & & & \\ & D & 2.04 & 2.07 & 2.0 & 0.0050 & 2.29 & 0 & - & 0.46 \\ & K & 2.21 & 2.8 & 2.2 & 0.0090 & - & - & 37 & 0.59 \\ & IFSW & 2.19 & 3.03 & 2.1 & 0.0010 & - & - & - & 0.92 \\ \({}^{116}\)Cd & Exp & 2.38 & 2.5 & 2.4 & & & & & \\ & D & 2.19 & 2.68 & 2.1 & 0.0010 & 9.07 & 0.33 & - & 0.32 \\ & K & 2.26 & 3.18 & 2.2 & 0.0030 & - & - & 48 & 0.32 \\ & IFSW & 2.19 & 3.03 & 2. & 0.0020 & - & - & - & 0.79 \\ \hline \end{tabular}
\end{table}
Table 1: (continued)
\begin{table}
\begin{tabular}{|c c c c c c c c c c|} \hline Nucleus & & \(R_{4,2}\) & \(R_{0,2}\) & \(R_{2,2}\) & \(\alpha\) & c & \(\beta_{0}\) & \(B\) & \(\sigma_{r.m.s.}\) \\ \hline \hline \({}^{130}\)Ba & Exp & 2.52 & 3.3 & 2.5 & & & & & \\ & d & 2.40 & 3.31 & 2.4 & 0.0099 & 39.99 & 0.52 & - & 0.31 \\ & k & 2.4 & 5.4 & 2.4 & 0.0050 & - & - & 137 & 0.88 \\ & IFSW & 2.21 & 3.06 & 2.2 & 0.000 & - & - & - & 0.37 \\ \({}^{132}\)Ba & Exp & 2.43 & 3.2 & 2.2 & & & & & \\ & d & 2.40 & 3.31 & 2.4 & 0.0099 & 4.14 & 0.48 & - & 0.38 \\ & k & 2.36 & 4.51 & 2.3 & 0.005 & - & - & 96 & 0.70 \\ & IFSW & 2.22 & 3.09 & 2.2 & 0.0099 & - & - & - & 0.37 \\ \({}^{134}\)Ba & Exp & 2.32 & 2.9 & 1.9 & & & & & \\ & d & 2.15 & 2.69 & 2.1 & 0.0019 & 0.20 & 0.38 & - & 0.30 \\ & k & 2.19 & 2.73 & 2.1 & 0.0006 & - & - & 35 & 0.29 \\ & IFSW & 2.19 & 3.03 & 2.2 & 0.001 & - & - & - & 0.38 \\ \({}^{136}\)Ba & Exp & 2.28 & 1.9 & 1.9 & & & & & \\ & d & 2.00 & 2.07 & 2.0 & 0.0010 & 0.25 & 0 & - & 0.19 \\ & k & 1.98 & 1.89 & 1.9 & 0.0008 & - & - & 15 & 0.14 \\ & IFSW & 2.19 & 3.03 & 2.2 & 0.0010 & - & - & - & 0.65 \\ \({}^{142}\)Ba & Exp & 2.32 & 4.27 & 3.96 & & & & & \\ & d & 2.37 & 4.34 & 2.37 & 0.0023 & 5.72 & 1.66 & - & 0.54 \\ & k & 2.40 & 5.38 & 2.4 & 0.0020 & - & - & 136 & 0.67 \\ & IFSW & 2.24 & 3.13 & 2.2 & 0.0199 & - & - & - & 0.76 \\ \({}^{134}\)Ce & Exp & 2.56 & 3.7 & 2.4 & & & & & \\ & d & 2.34 & 4.21 & 2.34 & 0.0010 & 4.72 & 1.62 & - & 0.57 \\ & k & 2.40 & 5.57 & 2.4 & 0.0030 & - & - & 145 & 0.83 \\ & IFSW & 2.21 & 3.06 & 2.2 & 0.0199 & - & - & - & 0.85 \\ \hline \end{tabular}
\end{table}
Table 1: (continued)
\begin{table}
\begin{tabular}{|c c c c c c c c c c|} \hline Nucleus & & \(R_{4,2}\) & \(R_{0,2}\) & \(R_{2,2}\) & \(\alpha\) & c & \(\beta_{0}\) & \(B\) & \(\sigma_{r.m.s.}\) \\ \hline \hline \({}^{116}\)Pd & Exp & 2.58 & 3.3 & 2.2 & & & & & \\ & d & 2.39 & 3.15 & 2.3 & 0.0020 & 40.00 & 3.76 & - & 0.56 \\ & k & 2.42 & 3.3 & 2.4 & 0.0044 & - & - & 83 & 0.63 \\ & IFSW & 2.19 & 3.04 & 2.2 & 0.0100 & - & - & - & 0.80 \\ \({}^{106}\)Cd & Exp & 2.36 & 2.8 & 2.7 & & & & & \\ & d & 2.27 & 2.83 & 2.3 & 0.0050 & 4.63 & 2.08 & - & 0.20 \\ & k & 2.33 & 2.8 & 2.3 & 0.0044 & - & - & 36 & 0.17 \\ & IFSW & 2.18 & 3.1 & 2.2 & 0.0010 & - & - & - & 0.36 \\ \({}^{108}\)Cd & Exp & 2.38 & 2.7 & 2.52 & & & & & \\ & d & 2.18 & 2.7 & 2.2 & 0.0010 & 1.33 & 1.49 & - & 0.40 \\ & k & 2.34 & 2.7 & 2.3 & 0.0054 & - & - & 39 & 0.90 \\ & IFSW & 2.08 & 3.51 & 2.1 & 0.0099 & - & - & - & 0.56 \\ \({}^{110}\)Cd & Exp & 2.35 & 2.2 & 2.2 & & & & & \\ & d & 2.18 & 1.22 & 2.2 & 0.0158 & 11.32 & 1.41 & - & 0.35 \\ & k & 2.29 & 1.9 & 2.3 & 0.0115 & - & - & 28 & 0.34 \\ & IFSW & 2.19 & 3.03 & 2.2 & 0.0010 & - & - & - & 0.76 \\ \({}^{114}\)Cd & Exp & 2.30 & 2.0 & 2.2 & & & & & \\ & d & 2.12 & 0.9 & 2.1 & 0.0100 & 10.79 & 1.09 & - & 0.40 \\ & k & 2.25 & 1.7 & 2.2 & 0.0127 & - & - & 22 & 0.24 \\ & IFSW & 2.19 & 3.03 & 2.2 & 0.0010 & - & - & - & 0.92 \\ \({}^{116}\)Cd & Exp & 2.38 & 2.5 & 2.4 & & & & & \\ & d & 2.22 & 2.37 & 2.2 & 0.0010 & 3.87 & 0.33 & - & 0.29 \\ & k & 2.27 & 2.8 & 2.3 & 0.0028 & - & - & 25 & 0.30 \\ & IFSW & 2.06 & 3.54 & 2.1 & 0.0010 & - & - & - & 0.54 \\ \hline \end{tabular}
\end{table}
Table 2: (continued)
[Two table fragments, presumably parts of Table 3, could not be recovered from the source; only truncated column headers survived extraction.]
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline nucl. & & \(\frac{4_{i}-2_{2}}{2_{i}\to 0_{1}}\) & \(\frac{6_{i}-4_{1}}{2_{i}\to 0_{1}}\) & \(\frac{8_{i}-6_{i}}{2_{i}\to 0_{1}}\) & \(\frac{10_{i}-8_{i}}{2_{i}\to 0_{1}}\) & \(\frac{2_{i}-2_{i}}{2_{i}\to 0_{1}}\) & \(\frac{2_{i}-2_{i}}{2_{i}\to 0_{1}}\) & \(\frac{0_{i}-2_{i}}{2_{i}\to 0_{1}}\) & \(\frac{2_{i}-0_{1}}{2_{i}\to 0_{1}}\) \\ & & & & & & \(\chi\) 10 & & \(\chi\) 10 & \\ \hline \({}^{128}\)Xe & Exp. & 1.47(20) & 1.94(26) & 2.39(40) & 2.74(114) & 1.19(19) & 15.9(23) & & \\ & D. & 1.64 & 2.09 & 2.53 & 2.96 & 1.64 & 0.0 & 0.77 & 10.79 & \\ & K. & 1.83 & 2.95 & 4.73 & 7.64 & 1.83 & 0.0 & 0.75 & 12.57 & \\ \({}^{132}\)Xe & Exp. & 1.24(18) & & & & 1.77(29) & 3.4(7) & & \\ & D. & 1.61 & 1.91 & 2.08 & 2.19 & 1.61 & 0.0 & 1.17 & 81.82 & \\ & K. & 2.78 & 7.13 & 17.89 & 43.35 & 2.78 & 0.0 & 2.49 & 0.07 & \\ \({}^{130}\)Ba & Exp. & 1.36(6) & 1.62(15) & 1.55(56) & 0.93(15) & & & & \\ & D. & 1.48 & 1.69 & 1.81 & 1.88 & 1.48 & 0.0 & 0.69 & 93.53 & \\ & K. & 1.54 & 2.01 & 2.54 & 3.22 & 1.54 & 0.0 & 0.56 & 39.43 & \\ \({}^{132}\)Ba & Exp. & & & & & & 3.35(64) & 90.7(177) & & \\ & D. & 1.52 & 1.74 & 1.86 & 1.94 & 1.52 & 0.0 & 0.88 & 99.49 &. \\ & K. & 1.61 & 2.20 & 2.94 & 3.95 & 1.61 & 0.0 & 0.66 & 20.59 & \\ \({}^{134}\)Ba & Exp. & 1.55(21) & & & & 2.17(69) & 12.5(41) & & & \\ & D. & 1.52 & 1.87 & 2.03 & 2.13 & 1.59 & 0.0 & 1.1 & 85.96 &. \\ & K. & 2.13 & 4.10 & 7.88 & 15.19 & 2.13 & 0.0 & 1.26 & 6.22 & \\ \({}^{142}\)Ba & Exp. & 1.40(17) & 0.56(14) & & & & & & \\ & D. & 1.97 & 2.77 & 3.54 & 4.29 & 1.97 & 0.0 & 1.89 & 0.15 & \\ & K. & 1.54 & 1.99 & 2.46 & 3.04 & 1.54 & 0.0 & 0.45 & 21.34 & \\ \({}^{148}\)Nd & Exp. & 1.61(13) & 1.76(19) & & & 0.25(4) & 9.3(17) & 0.54(6) & 32.82(816) & \\ & D. & 1.51 & 1.80 & 2.05 & 10.73 & 1.78 & 0.0 & 1.46 & 58.09 & \\ & K. & 1.57 & 2.08 & 2.67 & 3.47 & 1.57 & 0.0 & 0.59 & 30.88 & \\ \({}^{152}\)Gd & Exp. & 1.84(29) & 2.74(81) & & & 0.23(4) & 4.2(8) & 2.47(78) & & \\ & D. & 1.63 & 2.08 & 2.51 & 2.94 & 1.63 & 0.0 & 0.75 & 9.58 & \\ & K. & 1.80 & 2.96 & 5.14 & 10.30 & 1.80 & 0.0 & 1.41 & 32.70 & \\ \({}^{154}\)Dy & Exp. & 1.62(35) & 2.05(42) & 2.27(62) & 1.86(69) & & & & \\ & D. & 1.63 & 2.08 & 2.51 & 2.93 & 1.63 & 0.0 & 0.77 & 13.40 & \\ & K. & 1.78 & 2.89 & 5.06 & 10.73 & 1.78 & 0.0 & 1.46 & 58.09 & \\ \({}^{192}\)Pt & Exp. & 1.56(12) & 1.23(55) & & & 1.91(16) & 9.5(9) & & \\ & D. & 1.48 & 1.67 & 1.78 & 1.84 & 0.0 & 0.77 & 106.4 &. \\ & K. & 1.57 & 2.09 & 2.68 & 3.44 & 1.57 & 0.0 & 0.54 & 17.79 & \\ \({}^{194}\)Pt & Exp. & 1.73(13) & 1.36(45) & 1.02(30) & 0.69(19) & 1.81(25) & 5.9(9) & 0.01 & \\ & D. & 1.48 & 1.68 & 1.78 & 1.85 & 1.48 & 0.0 & 0.77 & 106.14 &. \\ & K. & 1.56 & 2.07 & 2.63 & 3.34 & 1.56 & 0.0 & 0.52 & 19.45 & \\ \({}^{196}\)Pt & Exp. & 1.48(3) & 1.80(23) & 1.92(23) & & & 0.4 & 0.07(4) & 0.06(6) & \\ & D. & 1.51 & 1.72 & 1.85 & 1.92 & 1.51 & 0.0 & 0.85 & 101.04 & \\ & K. & 1.61 & 2.21 & 2.97 & 4.04 & 1.61 & 0.0 & 0.69 & 23.11 & \\ \({}^{198}\)Pt & Exp. & 1.19(13) & \(>\)1.78 & & & 1.16(23) & 1.2(4) & 0.81(22) & 1.56(126) & \\ & D. & 1.57 & 1.84 & 1.99 & 2.09 & 1.57 & 0.0 & 1.04 & 88.96 & \\ & K. & 1.76 & 2.73 & 4.24 & 6.76 & 1.76 & 0.0 & 1.16 & 11.09 & \\ \hline \end{tabular}
\end{table}
Table 4: (continued) |
2306.10145 | Analysis of natural ventilation of refuge floor in High Rise Buildings | The refuge floors have been introduced into high-rise buildings with the aim
of ensuring a safe place to stay for the occupants in case of an emergency:
primarily during a fire breakout. Provisions for natural ventilation of the
refuge floor have been made mandatory in all fire codes as it is not wise to
rely only on mechanical methods to remove the smoke logged inside the refuge
floor. However, the effectiveness of such provisions has been questioned in
many studies as it highly depends on factors such as wind direction and
building geometry. In this study, using three-dimensional computational fluid
dynamics modelling, it is shown that careful analysis of smoke distribution can
provide the opportunity to integrate natural ventilation in an effective manner
to maintain prolonged habitable conditions on a refuge floor, even without
mechanical ventilation. Consequently, the performance of a refuge floor
natural ventilation strategy of a proposed 70-storey building is evaluated in
this study by evaluating the distribution of temperature, visibility, smoke,
and Carbon Monoxide concentration during a fire breakout, with a special
emphasis on window placement and the size of the opening area. Initial window
placement was as per the code requirements and later the window layout and the
opening size were altered step by step based on the simulation findings until a
safe configuration is achieved. Results showed that adequate cross-ventilation
is a prime factor in establishing a safe refuge floor area. Also, fire safety
strategies in a building need to be evaluated to confirm their effectiveness
since the dominant parameters such as wind direction, building geometry, and
building orientation are not identical for each building. | Hasarinda Kariyawasam, Dileepa Withanage, Harith Konara, Chathura Ranasinghe | 2023-06-16T19:25:21Z | http://arxiv.org/abs/2306.10145v1 | ## Analysis of natural ventilation of the refuge floor in a High Rise Building
## Abstract
The refuge floors have been introduced into high-rise buildings with the aim of ensuring a safe place to stay for the occupants in case of an emergency: primarily during a fire breakout. Provisions for natural ventilation of the refuge floor have been made mandatory in all fire codes, as it is not wise to rely only on mechanical methods to remove the smoke logged inside the refuge floor. However, the effectiveness of such provisions has been questioned in many studies, as it highly depends on factors such as wind direction and building geometry. In this study, using three-dimensional computational fluid dynamics modelling, it is shown that careful analysis of smoke distribution can provide the opportunity to integrate natural ventilation in an effective manner to maintain prolonged habitable conditions on a refuge floor, even without mechanical ventilation. Consequently, the performance of the refuge floor natural ventilation strategy of a proposed 70-storey building is evaluated in this study by assessing the distribution of temperature, visibility, smoke, and carbon monoxide concentration during a fire breakout, with a special emphasis on window placement and the size of the opening area. Initial window placement was as per the code requirements, and later the window layout and the opening size were altered step by step
based on the simulation findings until a safe configuration was achieved. Results showed that adequate cross-ventilation is a prime factor in establishing a safe refuge floor area. Also, fire safety strategies in a building need to be evaluated to confirm their effectiveness, since the dominant parameters such as wind direction, building geometry, and building orientation are not identical for each building.
**Keywords-- high-rise buildings, refuge floor, natural ventilation, CFD**
## Nomenclature
Letters and symbols
\(\mathbf{u}\): Velocity vector
\(\omega\): Vorticity
\(H\): Stagnation energy per unit mass
\(\mathbf{f}_{b}\): Drag force exerted by the subgrid-scale particles and droplets
\(\tau\): Viscous stress
\(\tau_{ij}\)_dev_: Deviatoric part of viscous stress
\(\delta_{ij}\): Kronecker delta
\(\rho\): Density
\(p\): Pressure
\(K\): Kinetic energy
\(\varepsilon\): Rate of viscous dissipation
\(\mu\): Viscosity
\(S_{ij}\): Strain tensor
\(\mu_{t}\): Turbulent viscosity
\(\Delta\): Filter width
\(k_{SGS}\): Subgrid kinetic energy
\(\bar{u}\): Average value of \(u\) at the grid cell center
\(\widehat{\bar{u}}\): Weighted average of \(u\) over the adjacent cells
\(l_{mix}\): Mixing length
\(y^{+}\): Non-dimensional wall-normal distance
\(u_{*}\): Friction velocity
\(\kappa\): Von Karman constant
\(z_{0}\): Aerodynamic roughness
\(L\): Obukhov length
\(\theta\): Potential temperature
\(\theta_{0}\): Ground level potential temperature
\(\theta_{*}\): Scaling potential temperature
\(\psi_{m}\): Similarity function of wind model
\(\psi_{h}\): Similarity function of wind model
CFD: Computational Fluid Dynamics
FDS: Fire Dynamics Simulator
HVAC: Heating, Ventilation, and Air Conditioning
LES: Large Eddy Simulation
HRR: Heat Release Rate
## Introduction
High-rise buildings are the most effective solution for utilising limited space in densely populated cities. A building is defined as high-rise if the height between the lowest level that a fire vehicle can access and the highest habitable floor exceeds 30 m; it is called a super high-rise building if that height exceeds 60 m [1]. The most pressing question for high-rise buildings is how to evacuate occupants in case of an emergency. A fire that broke out at a Bronx apartment building in New York, USA in January 2022 caused 17 deaths [2]. A fire at the Kaohsiung building in Taiwan in 2021 killed 64 people [3]. These high-rise fires, in addition to many listed in [4], expose the extent of destruction that could occur and raise concerns about the effectiveness of fire and smoke control and rescue methods in skyscrapers.
### The Refugee Floor
The "refuge floor" is one such mandatory safety measure in high-rise buildings. The refuge floor is designed to be a safe place for occupants to rest during an evacuation or stay until rescue teams arrive, especially for elders, children, and people with disabilities. In addition, it is a sub-base for firefighting operations and a command point for rescue teams. Also, refuge floors could provide access to an alternative staircase through the refuge floor if one of the proceeding staircases is blocked by smoke, fire, or obstruction and it is an assembly point for occupants in case the escape routes are completely obstructed [5].
As per many standards [1, 6, 7], the design of a refuge floor is governed by a set of common guidelines. A refuge floor should be provided for every 10 floors, commencing from the highest habitable floor, for a building exceeding 60 m in height. The holding area should be sufficient for 50% of the occupant load of the 10 floors above, reserving 0.5 m\({}^{2}\) of floor area per person. This area should be separated from the rest of the building by compartment walls having a minimum fire-resistance rating of 2 hours. There should be an external corridor or a smoke-stop lobby connecting the holding area to the building. Emergency lighting should be provided on this floor and connected to a secondary power supply such as a generator or battery. Emergency lighting should be energized within 15 seconds if a failure occurs in the main power supply [1]. Furthermore, the minimum height of refuge floors should be 2.3 m [6].
Further, there are specific guidelines for refuge floor ventilation. Permanent openings should be provided in the refuge floor to allow natural cross-ventilation, which prevents smoke from logging inside the floor. Permanent openings are generally located on opposite sides of the refuge floor to induce natural ventilation, and their size should be at least 25% of the refuge floor area. The minimum height of the openings should be 1.2 m, and the openings should be at least 1.5 m horizontally and 3 m vertically away from any adjoining unprotected opening. All parts of the holding area should be placed within 9 m of a ventilation opening [1].
In addition, some supplementary recommendations are provided in local standards. According to the fire code in Hong Kong, the open side of a refuge floor should have at least a direct or diagonal distance of 6 m from an opposite side of a street, the boundary of another site, any other external wall having a lower fire-resistance rating, or any other building on the same site. Also, a spandrel feature with a height of 0.9 m should be introduced to separate the refuge floor from the floor below, to reduce the risk of external fire or smoke spread, and there should be a horizontal projection of 0.5 m between these two floors [6]. In Singapore, refuge floors are located every 20 storeys. The area assigned to one person is 0.3 m\({}^{2}\). Also, sprinkler systems should be installed if there is any non-residential room on the refuge floor [7].
It should be noted here that, in most fire hazards, people lose their lives from inhaling toxic gases rather than from direct contact with fire [8]. Therefore, smoke control within the refuge floor is a paramount design criterion. Wind direction, building architecture, and the location of the fire are the main factors determining the smoke spread pattern [9]. Such information is important for building designers to provide reliable ventilation strategies for any emergency.
Accordingly, the present study analyses the internal spread of fire and smoke in a 376 m tall, 70-storey high-rise building located in Colombo, Sri Lanka. The window arrangement of the refuge floor as designed was assessed, and further modifications to the window arrangement were suggested. Survivable conditions on the refuge floor were enhanced with the aim of increasing the available evacuation time from the building.
## Literature Review
Practical experiments could be performed to analyse the functionality of fire safety methods used in high-rise buildings. Since such experiments are costly and time-consuming, an alternative solution is required. Although reduced-scale experiments could be used, they produce a limited number of results. Hence, numerical simulations are the apparent solution [10]. However, constructing an accurate mathematical model is a complicated task. Fire Dynamics Simulator (FDS) is a widely used CFD code to model fire situations [10, 13, 14]. The mathematical models provided in FDS have been validated in many studies [15] for different fire scenarios such as building fires, compartment fires, and tunnel fires.
Ohba et al. [16] experimentally investigated the airflow patterns generated inside a room by cross-ventilation and showed how the airflow bends downward at the front by the front eddy and moves upward at the leeward side due to the recirculating eddy at the rear. The results also indicated that the behaviour of wind is three-dimensional, consisting of severe pressure gradients, flow separations, re-attachments, and vortices, and that it becomes highly turbulent with increasing height [16]. Many studies have revealed that the presence of a refuge floor at mid-height does not noticeably interfere with the surrounding wind patterns of the building [18, 19, 20]. A detailed analysis of the effect of the refuge floor openings was performed by Cheng [19], and the results showed that using only one opening side could severely impair ventilation, whereas an increase in the opening size of the sidewalls would enhance ventilation inside the refuge floor. Kindangen et al. [21] investigated the impact of the roof shape and its orientation on the airflow pattern inside a building. The results showed that the velocity magnitude was directly affected by the roof shape, and that the building overhang and roof height also controlled the airflow pattern inside the building.
Several studies have been performed by Cheng et al. [11, 12] to analyse the effect of wind direction on the airflow inside the refuge floor. The results indicated that cross-ventilation can be achieved for most wind angles. When the wind blows normal to the building, smoke can log inside the refuge floor [12]. Therefore, such wind angles should be avoided, or mechanical ventilation should be introduced [11, 18, 22]. Cheng et al. [11] investigated the relation of the geometry inside the refuge floor, especially the shape and number of service cores, to the airflow behaviour inside the refuge floor. It was shown that multiple service cores introduce more complications to the airflow than a centered service core. Also, rectangular service cores have been encouraged because circular cores may increase the reverse flow of wind behind the service core at some wind angles [11].
The positions of the service core and columns greatly influence the airflow patterns around the refuge floor [20]. Channel-like flow inside the refuge floor is an advantage for ensuring a smoke-free environment because it enhances direct ventilation and induces flow circulation in the windward and leeward parts of the refuge floor [23]. The door entrance to the refuge floor is safer if it is located at the sidewalls, because a rectangular service core generates channel flow at the sidewalls [11, 19]. The magnitude and the gradient of the wind flow are influential parameters for inducing cross-ventilation inside the refuge floor [9]. The height of the refuge floor should be larger than 2.3 m because, at lower heights, the wind flow strength and the pressure gradient between the windward and leeward sides are weaker. An increase in refuge floor height could enhance natural ventilation, but it would not be feasible from a design perspective [22].
Lo et al. [9] showed that if the fire originates on the floor immediately below the refuge floor, the smoke exerted by the fire floor migrates to the refuge floor through the openings of the floor under the influence of the wind. Several strategies have been implemented to avoid this phenomenon. Lau et al. [13] studied the use of drencher systems at the openings of refuge floors as a key solution to prevent smoke from entering from outside the building. The drencher system creates a water curtain using water heads fed by the firewater tanks in the building. However, Chow et al. [24] suggested that drencher systems are not necessary, since the existing methods can avoid smoke logging even for large fire loads [6]. In addition, the spandrel at the window opening can act as a fire-rated partition, since it limits the movement of fire into the refuge floor [3]. The propagation of fire along the external wall was analysed by Satoh et al. [25]. The results showed that the oscillatory motion of fire is governed by the heat release rate of the fire and is weakly affected by window configurations such as window height, soffit height, and balcony height. Sugawa et al. [26] derived empirical equations for the upward velocity, temperature, and flame angle of a fire plume ejected into an external side wind, based on experimental results. Soltanzadeh et al. [27] evaluated the performance of the refuge floor in combination with other evacuation methods. The results showed that refuge floors increase the evacuation rate and reduce the queue of elevator evacuees. Fire codes, especially in the USA, UK, and China, specify elevators as an occupant evacuation method, because an increase in building height would increase the evacuation time if stairs were the only means of evacuation [28]. Cai et al. [28] suggested that a multiple-level pressurization system accounting for the fire floor is more effective than regular smoke extraction and staircase pressurization. Occupant behaviour was analysed in studies using eye-tracking in experimental evacuations and Virtual Reality (VR) experiments [28, 29]. The results of the studies by Mossberg et al. [28] revealed that exit signs pointing to the fire lifts could be an influential factor in promoting elevator evacuation. Ding et al. [30] studied the combination of stairs and elevators for evacuation and revealed that there is an ideal percentage of evacuation by elevators that is not related to the number of occupants removed by the elevators.
Although many studies have analysed the airflow patterns on the refuge floor, only very limited research is available on fire and smoke movement on the refuge floor. In fact, most studies have been limited to scaled models, and analyses of full-scale actual buildings are very rare [16, 17, 18, 19, 20]. Furthermore, no work has been reported on the effect of the window arrangement on smoke and fire spread on a refuge floor. Consequently, the present work aims to address these gaps by analysing the effect of the window arrangement of the refuge floor on the smoke spread in a proposed 70-storey commercial building.
## Numerical Approach
For the work presented here, the FDS simulation software was used. FDS solves the governing equations on a uniform rectilinear grid. Large Eddy Simulations (LES) were carried out with the reaction progress variable approach for the combustion calculations. The Eddy Dissipation Concept (EDC) was selected to model the reaction rate, and infinitely fast chemistry was assumed. Radiation heat transfer effects were also considered in the calculations.
The momentum equation solved in three dimensions is given by,
\[\frac{\partial\mathbf{u}}{\partial t}-\mathbf{u}\times\omega+\nabla H-\bar{p}\nabla\left( \frac{1}{\rho}\right)=\frac{1}{\rho}\left[(\rho-\rho_{0})\mathrm{g}+\mathbf{f}_{b} +\nabla.\tau\right] \tag{1}\]
Where \((\mathbf{u}.\nabla)\mathbf{u}=\nabla|\mathbf{u}|^{2}/2-\mathbf{u}\times\omega\) and \(H\equiv|\mathbf{u}|^{2}/2+\bar{p}/\rho\)
Here, \(f_{b}\)- drag force exerted by the subgrid-scale particles and droplets, \(\tau\)- viscous stress
The transport equation of the resolved kinetic energy is given by,
\[\bar{\rho}\frac{DK}{Dt}+\frac{\partial}{\partial x_{j}}\left(\left[\bar{p}\delta_{ij}+\tau_{ij}^{dev}\right]\widetilde{u}_{i}\right)=\bar{p}\frac{\partial\widetilde{u}_{i}}{\partial x_{i}}+\tau_{ij}^{dev}\frac{\partial\widetilde{u}_{i}}{\partial x_{j}}+(\bar{\rho}\mathrm{g}_{i}+\overline{f_{b,i}})\widetilde{u}_{i} \tag{2}\]
Here, \(\tau_{ij}\,^{dev}\)- deviatoric part of viscous stress, \(\delta_{ij}\)- Kronecker delta
The LES sub-grid closures assume production of sub-grid kinetic energy is equal to the dissipation of total kinetic energy which leads to the equation for \(\varepsilon\),
\[\varepsilon\equiv-2\mu\left(S_{ij}S_{ij}-\frac{1}{3}(\nabla.\mathbf{u})^{2}\right) \tag{3}\]
Here, \(\mu\)-viscosity, \(S_{ij}\)- strain tensor
The Deardorff model was used as the SGS turbulence model. The following equations were solved to obtain turbulent viscosity (\(\mu_{t}\)).
\[\mu_{t}=\rho C_{v}\Delta\sqrt{k_{SGS}} \tag{4}\]
\[k_{SGS}=\frac{1}{2}\left[(\bar{u}-\widehat{\bar{u}})^{2}+(\bar{v}-\widehat{\bar{v}})^{2}+(\bar{w}-\widehat{\bar{w}})^{2}\right] \tag{5}\]
Here, \(\rho\)- air density, \(C_{v}\)- model constant, \(\Delta\)- filter width, \(k_{SGS}\)- subgrid kinetic energy, \(\bar{u}\)- average value of \(u\) at the grid cell center, \(\widehat{\bar{u}}\)- weighted average of \(u\) over the adjacent cells. The value of \(C_{v}\) is taken as 0.1.
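As an illustration of equations (4) and (5), the following Python sketch evaluates the Deardorff turbulent viscosity on a uniform grid. The nearest-neighbour average used for \(\widehat{\bar{u}}\) is a simplification assumed here; FDS uses a specific weighted average over the adjacent cells.

```python
import numpy as np

C_V = 0.1  # Deardorff model constant, as given above

def deardorff_viscosity(u, v, w, rho, dx):
    """Turbulent viscosity mu_t = rho * C_v * Delta * sqrt(k_SGS) on a uniform grid.

    u, v, w : 3-D arrays of cell-centered velocity components
    rho     : density (scalar or array); dx : grid spacing (filter width Delta)
    The weighted average over adjacent cells is approximated here by a simple
    mean of the two neighbours along each axis.
    """
    def fluctuation_sq(f, axis):
        f_hat = 0.5 * (np.roll(f, 1, axis) + np.roll(f, -1, axis))
        return (f - f_hat) ** 2

    k_sgs = 0.5 * (fluctuation_sq(u, 0) + fluctuation_sq(v, 1) + fluctuation_sq(w, 2))
    return rho * C_V * dx * np.sqrt(k_sgs)
```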
Wall function was provided to achieve accurate results for velocity near walls. Mixing length (\(l_{mix}\)), and non-dimensional wall-normal distance (\(y^{+}\)) was combined in the equation as follows.
\[l_{mix}=C_{s}\Delta\big{[}1-e^{-y^{+}/A}\big{]} \tag{6}\]
Here \(C_{s}\) and \(A\) represents model constant and dimensionless empirical constant respectively. Values for the constants in the model are \(C_{s}=0.2\) and \(A=26\).
The oncoming wind needs to be accurately represented for better results. Monin-Obukhov similarity method [31] was used to model the wind profile suitable for a city area.
\[u(z)=\frac{u_{*}}{\kappa}\left[\ln\Big{(}\frac{z}{z_{0}}\Big{)}-\psi_{m}\left( \frac{z}{L}\right)\right] \tag{7}\]
\[\theta(z)=\theta_{0}+\frac{\theta_{*}}{\kappa}\left[\ln\Big{(}\frac{z}{z_{0}} \Big{)}-\psi_{h}\left(\frac{z}{L}\right)\right] \tag{8}\]
Where, \(u\) - wind speed profile, \(u_{*}\)- friction velocity, \(\kappa\)- Von Karman constant, \(z_{0}\) - aerodynamic roughness, \(L\)- Obukhov length, \(\theta\)-potential temperature, \(\theta_{0}\)- ground level potential temperature, \(\theta_{*}\)- scaling potential temperature, \(\psi_{m}\) and \(\psi_{h}\)- similarity functions. Model constants are taken as \(L=-100\), \(z_{0}=2\) and \(\kappa=0.41\).
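For illustration, the following Python sketch evaluates the wind profile of equation (7) with the constants above, calibrated to the site reference wind of 4.5 m/s at 10 m height used later in this study. The paper does not spell out \(\psi_{m}\); the classical Dyer/Businger form for unstable stratification (\(L<0\)) is assumed here.

```python
import math

KAPPA, Z0, L_OB = 0.41, 2.0, -100.0  # constants given above

def psi_m(zeta):
    """Similarity function for momentum, unstable case (L < 0).

    The Dyer/Businger form is an assumption; the paper does not specify psi_m.
    """
    x = (1.0 - 16.0 * zeta) ** 0.25
    return (2.0 * math.log((1.0 + x) / 2.0) + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def wind_speed(z, u_ref=4.5, z_ref=10.0):
    """Monin-Obukhov profile u(z), calibrated so that u(z_ref) = u_ref."""
    def shape(zz):
        return (math.log(zz / Z0) - psi_m(zz / L_OB)) / KAPPA
    u_star = u_ref / shape(z_ref)  # friction velocity from the reference wind
    return u_star * shape(z)

print(wind_speed(100.0))  # wind speed at 100 m for the 4.5 m/s @ 10 m reference
```

With the assumed \(\psi_{m}\), this gives roughly 9.5 m/s at 100 m, illustrating the growth of the oncoming wind with height.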
## Validation of the Computational Model for a Fire in a 10-Storey Building
The experimental fire reported by Hadjisophocleous et al. [33] in a 10-storey building located at the fire research laboratory of the National Research Council of Canada was used as the validation case. The building was set up such that the generated smoke moves through the stair shaft of the building and spreads to the other floors from the service core.
The open/closed conditions of the doors in the building are shown in table 1. This door arrangement ensures that the fire originates on the 2\({}^{\text{nd}}\) floor and is fed continuously with fresh air. Propane was used as the fuel. The fire compartment has a volume of (9 x 3.8 x 3.3) m\({}^{3}\), and the fire tray, located at the center of the fire compartment, has a cross-sectional area of (5 x 0.4) m\({}^{2}\). The variation of the heat release rate during the fire was measured and included in the mathematical model as a user-defined function (figure 1).
The thermodynamic properties of the propane fuel were added to the FDS model as shown in table 2. The properties of other materials such as concrete, wood, steel, and the ceramic fiber coating were included as listed in table 3. The temperature at different locations and the gas concentration in front of the stairs were selected as the parameters measured in the experimental setup and validated using FDS. The locations of the thermocouples and gas analysers are shown in figure 2. The domain of this model is (15 x 9 x 28.8) m\({}^{3}\), and it is divided into nearly 4 million elements; the volume of one cell is therefore (0.1 x 0.1 x 0.1) m\({}^{3}\). More information related to this experiment can be found in [34].
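The user-defined heat release rate ramp is evaluated by FDS as a piecewise-linear interpolation of time-value pairs. The following Python sketch reproduces that behaviour; the tabulated pairs are placeholders, not the measured curve of figure 1.

```python
import bisect

# Time-HRR pairs (placeholder values; the actual ramp is the measured curve).
HRR_TABLE = [(0.0, 0.0), (120.0, 500.0), (600.0, 2000.0), (1200.0, 0.0)]

def hrr(t):
    """Piecewise-linear heat release rate (kW), as an FDS RAMP would evaluate it."""
    times = [p[0] for p in HRR_TABLE]
    if t <= times[0]:
        return HRR_TABLE[0][1]
    if t >= times[-1]:
        return HRR_TABLE[-1][1]
    i = bisect.bisect_right(times, t)
    (t0, q0), (t1, q1) = HRR_TABLE[i - 1], HRR_TABLE[i]
    return q0 + (q1 - q0) * (t - t0) / (t1 - t0)

print(hrr(300.0))  # HRR at t = 300 s for the placeholder table
```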
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Floor No & DR4 & DR5 & Notes \\ \hline
10 & Open & Partly Open & \\ \hline
9 & Closed & Open & \\ \hline
8 & Open & Partly Open & \\ \hline
7 & Closed & Open & \\ \hline
6 & Open & Partly Open & \\ \hline
5 & Closed & Open & \\ \hline
4 & Open & Partly Open & \\ \hline
3 & Closed & Open & \\ \hline
2 & Open & Open & \\ \hline
1 (Ground floor) & Open & Open & DR2 Open DR6 Closed \\ \hline \end{tabular}
\end{table}
Table 1: Open/Partly Open/Closed conditions of doors on each floor [32]
\begin{table}
[The body of Table 2 was not recovered from the source; the extracted text here merely repeated the door-condition entries of Table 1.]
\end{table}
Table 2: Combustion properties of Propane [34]
[A page is missing from the extracted source here; it presumably contained Table 3 (material properties) and Figure 1 (the measured heat release rate curve) referenced above.]
### Comparison of Results
Figure 3 shows the comparison between the experimental and simulated temperatures on the fire floor. Figure 3 (a) shows the results at 0.62 m height and 3 (b) at 2.57 m height at location A. The temperature rises with time in proportion to the fire intensity. Comparing the graphs, it can also be seen that the temperature at greater heights above the floor level is higher, due to the accumulation of hot burnt gases. The computational model predicts these trends quite satisfactorily, though some deviation in the absolute values is seen.
Figure 2: Layout of thermocouple probes and gas analysers inside the building (measured in millimetres)
Figure 4 shows the results in the stair shaft on the second and fifth floors at location B. It is clear from figure 4 (a) that the simulation could not capture the initial spike; however, the results agree towards the end of the graph. In figure 4 (b), both results can be considered in good agreement, although towards the end the simulation results deviate from the experimental results. There are several possible reasons for this deviation. In the experimental building, the fire compartment was covered with ceramic fiber, which was not considered in this model. Also, the square-shaped openings that create paths to the surroundings have dimensions of 0.24 x 0.24 m\({}^{2}\) [34], but in this model only 0.2 x 0.2 m\({}^{2}\) openings could be created due to the resolution of the domain; a finer grid could not be resolved with the available computational power, as the domain already contains 3.8 million mesh elements.
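The quoted cell count follows directly from the domain and cell dimensions; a one-line check:

```python
# Quick check of the mesh size quoted above for the validation domain.
nx, ny, nz = round(15 / 0.1), round(9 / 0.1), round(28.8 / 0.1)
print(nx * ny * nz)  # 3,888,000 cells, consistent with the ~3.8-4 million quoted
```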
Figure 3: Variation of temperature with time on the fire floor (2\({}^{\text{nd}}\) floor) at southeast corner; (a) readings at 0.62 m height (b) readings at 2.57 m height
### Comparison of Gas Concentrations
The oxygen and carbon dioxide gas concentrations in the stair shaft on the fire floor are shown in figure 5. It can be seen that, in the experiment, the CO\({}_{2}\) and O\({}_{2}\) gas concentrations on the fire floor recovered about 1000 seconds after the initiation of the fire, whereas the results of this simulation indicate that they are not fully recovered. The difference in the cross-sectional areas of the openings on the 4th and 8th floors mentioned above might be the reason for this deviation of the simulation results.
Figure 4: Variation of temperature with time at the middle of the stair shaft, 0.6 m below the ceiling (a) readings on Second Floor (b) readings on Fifth Floor
## Smoke Spread on the Refuge Floor
### The building geometry and wind conditions
The building selected for the study has a height of 376 m and will be one of the tallest buildings in South Asia. This mixed-occupancy building accommodates hotel, office, residential, and retail facilities. Two refuge floors are located on the 33\({}^{\mathrm{rd}}\) and 52\({}^{\mathrm{nd}}\) floors of this building, and the smoke distribution on the 33\({}^{\mathrm{rd}}\) floor was analysed in this study. A population of 1149 occupants is expected between the 33\({}^{\mathrm{rd}}\) and 51\({}^{\mathrm{st}}\) floors. The refuge floor should then provide room for 50% of these occupants, allowing a space of 0.5 m\({}^{2}\) per person. The building design reserves an area of 666.03 m\({}^{2}\) for the refuge floor, which is well above the requirement
Figure 5: Variation of gas concentrations with time in the stair shaft at the fire floor (a) O\({}_{2}\) gas concentration (b) CO\({}_{2}\) gas concentration
(287.5 m\({}^{2}\)). The building up to 10 floors above the 33\({}^{\rm rd}\)-floor refuge area, measured from ground level, was considered as the problem domain (figure 6). The remaining floors were not included in the simulation domain, as they are too far from the area of interest (the 33\({}^{\rm rd}\) floor) to induce a noticeable effect on the flow inside the refuge area.
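The holding-area requirement quoted above can be verified directly from the stated occupant load and per-person allowance:

```python
# Check of the holding-area requirement quoted above.
occupants = 1149                  # expected occupant load of floors 33-51
required = 0.5 * occupants * 0.5  # 50% of occupants at 0.5 m^2 per person
provided = 666.03                 # area reserved in the building design (m^2)
print(required, provided >= required)  # 287.25 m^2 (rounded to 287.5 in the text), True
```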
Annual weather reports over several years were consulted to assign 30 \({}^{\rm o}\)C for the atmospheric temperature and 4.5 m s\({}^{-1}\) for the wind velocity at a height of 10 m [35]. The soot and CO yields were assigned as 0.011 and 0.038, respectively, based on [36]. The experimentally estimated fire ramp of Rinne et al. [32] was chosen to model the heat and exhaust gas emission rates at the fire location.
Based on the wind data, the wind directions could be narrowed down to West and North-East [35]. The fire was assumed to originate in the staircase on the floor below the refuge floor. The fire propagation path was from the staircase, through the service core at the refuge floor, to the refuge space. The worst-case scenario was considered for the simulations. It was assumed that
Figure 6: The selected building: (a) Location of the refuge floor is marked in ash colour (b) refuge floor layout
the electrical system had failed; therefore, the pressurization of the staircase was stopped. All fire doors were assumed to have failed at the beginning, and these doors acted as openings.
Figure 7: Measuring points of the Refuge Floor – Window configuration - 0
Figure 8: Window configuration - 01 of the refuge floor
Figure 9: Window configuration – 02 of the refuge floor
The window configuration proposed by the building designers (configuration - 0) was evaluated first and contrasted with two other modified window configurations (1 and 2) that could enhance occupant safety inside the refuge floor. As mentioned above, the area of the permanent openings is 13% of the floor area in window configuration - 0, as suggested by the building designer. Configurations 1 and 2 are proposed for the analysis by increasing this ratio up to 25%. Both configurations possess more windows than configuration - 0, while the windows in configuration - 2 are taller (2.5 m) than in the other two setups. The window configurations are shown in figures 7-9.
Smoke layer concentration, Carbon Monoxide (CO) concentration, temperature, airflow velocity, and visibility were analysed during the fire and smoke spread. The refuge floor was divided into two sections for the investigation as shown in figure 6. Three locations were selected for the analysis as shown in figure 7 considering the geometry of the floor (refuge area 1 and 2) and the criticality of the air quality.
Modifications were proposed by analysing the following weaknesses of the current window configuration. There was an accumulation of smoke in refuge area 1 with the designed window setup. Configuration - 1 was able to keep smoke logging in refuge area 1 to a minimum and improved the survivable conditions. The concept of configuration - 1 was based on the results of the designed setup. The number of windows was increased, as necessary to comply with the newest fire regulations, and window area was added to increase cross-ventilation. Furthermore, extreme velocities were detected in refuge area 1 with the current window setup, so three windows were introduced in wall 01 to reduce the wind velocity. Also, a vortex was created in refuge area 2 when the wind blew from the West direction, so windows were introduced in wall 06 for configuration - 1. However, this vortex kept occurring in configuration - 1 as well. Therefore, the windows in wall 06 were removed and their opening area was added to the existing windows in configuration - 2.
## Results and Discussion
### Discussion using data on temperature
Temperature is a key parameter for assessing the safety of the refuge floor. Figure 10 (a) depicts the temperature distribution at 1 m height on the refuge floor when the wind was blowing from the North-East direction, obtained 600 seconds after ignition. The three distributions in figure 10 (a)(i)-(iii) correspond to the three window configurations for the North-East wind direction. It is clear that the modifications (1 and 2) perform better than the designed configuration (0), especially in refuge area 1. Figure 10 (b) presents the same results for the West direction. Overall, configuration 2 performed better than the others. However, the temperature in refuge area 2 was well above the limit (40 \({}^{\circ}\)C) [38], which means that refuge area 2 was not survivable when the wind was from the West direction.
Figure 11 shows the temperature at 3 m height; it is clear that the temperature rises with height. From figure 11 (a), in which the wind blew from the North-East, configuration 1 performed better than the others. The major difference of configuration 1 is that it has four windows in wall 06 while the others have none. However, configuration 2 performed better when the wind was from the West, owing to the same difference in wall 06.
Figure 10: Temperature distribution on the refuge floor at 1 m height after 600 seconds of fire; (a)- wind is from North-East (b)- wind is from West; i-current setup, ii-1st modification, iii-2nd modification; p, q, r are the fire locations
Figures 12 and 13 show the 20-second moving average of temperature versus time for the different window arrangements and heights. Overall, it could be seen that the proposed
Figure 11: Temperature distribution on the refuge floor at 3 m height after 600 seconds of fire; (a)- wind is from North-East (b)- wind is from West; i-current setup, ii-1st modification, iii-2nd modification; p, q, r are the fire locations
modifications worked better than the designed window setup. In figure 12, the designed window configuration failed to keep the temperature under 40 \({}^{\circ}\)C when the wind was from the North-East, while configuration - 1 showed the best performance. Figure 13 presents the variation of temperature with height, and it is an interesting observation that the minimum temperature occurred at 2 m height. The maximum wind velocity inside the refuge floor was at 2 m height because the windows were positioned from 1 m to 3.5 m height; therefore, the impact of the cross-ventilation was maximized in the middle (2 m). This behaviour is beneficial for stopping the logging of smoke at the ceiling, since the rising smoke is caught by the moving wind.
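The 20-second moving averages reported in these figures can be computed from the raw FDS point measurements; a minimal sketch is shown below, where the use of a centered window is an assumption on our part.

```python
def moving_average(times, values, window=20.0):
    """Centered moving average over a fixed time window (seconds).

    A simple sketch of the 20 s smoothing applied to the FDS output;
    times and values are equal-length sequences.
    """
    out = []
    for t in times:
        sample = [v for tj, v in zip(times, values) if abs(tj - t) <= window / 2]
        out.append(sum(sample) / len(sample))
    return out
```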
Figure 13: Variation of temperature with time at different heights at P1; (a) wind is from West (b) wind is from North-East; i-current setup, ii- 1st modification, iii- 2nd modification
### Discussion using data on concentration of Carbon Monoxide
CO is a toxic gas, and its concentration needs to be kept below 9 ppm according to the ASHRAE standards [38]. The 20-second moving average of the CO concentration is plotted against time in figure 14. Overall, it is clear that the modifications performed better than the designed window configuration. The CO concentration was well below 9 ppm in both refuge areas 1 and 2 when the wind was from the North-East. All window configurations failed to maintain survivable conditions in refuge area 2 when the wind was from the West. It can be concluded that window configuration - 2 showed better results than the others.
Figure 14: Variation of CO concentration with time of different window setups at 2m height; (a) readings from P2 in refuge area 1 (b) readings from P3 in refuge area 2; i-wind is from West, ii-wind is from North-East
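For reference, FDS reports CO as a mass fraction, while the ASHRAE limit above is volumetric; in the dilute limit the conversion is a simple molar-mass rescaling, sketched below (the example input value is a hypothetical placeholder).

```python
M_AIR, M_CO = 28.97, 28.01  # molar masses (g/mol)

def co_ppm(mass_fraction):
    """Convert a dilute CO mass fraction to a volumetric concentration in ppm."""
    return mass_fraction * (M_AIR / M_CO) * 1e6

print(co_ppm(8.7e-6))  # roughly 9 ppm, the ASHRAE threshold used above
```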
### Discussion using data on smoke and visibility
The smoke spread throughout the refuge floor after 600 s of fire is shown in figure 15. Smoke entered the refuge area from the service core. Figure 15 (a) represents the smoke movement when the wind was from the North-East. There was no smoke accumulation in refuge area 2 in any window setup. However, refuge area 1 suffered from smoke logging, and configuration - 1 showed the best results compared to the others; the windows in wall 06 in configuration - 1 may increase the airflow through the floor. When the wind blew from the West, smoke accumulated densely in refuge area 2, and refuge area 1 was also affected by smoke. However, the dense smoke was less extensive in configuration - 1 than in the others. Cross-ventilation is the key factor in extracting smoke from the refuge floor. When the wind blew from the North-East, the window setup in every configuration was capable of inducing cross-ventilation. However, when the wind was from the West, the window arrangements failed to induce cross-ventilation, which led to the logging of smoke inside the refuge floor.
Visibility was derived considering three main parameters, namely the soot yield, the mass extinction coefficient (K\({}_{m}\)), and the visibility factor. The soot yield is the fraction of fuel mass converted to soot when the simple chemistry approach is used. K\({}_{m}\) depends on the light-absorbing gas species; its value is about 8700 m\({}^{2}\)/kg for flaming combustion. The visibility factor is a non-dimensional constant (C) with a default value of 3. C varies with the type of object that is to be seen through the smoke: for a light-emitting sign, C = 8, and for a light-reflecting sign, C = 3. The maximum value for visibility is 30 m [31].
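With these parameters, visibility follows directly from the local soot density; the following sketch applies \(S=C/K\) with \(K=K_{m}\) times the soot mass per unit volume, where the soot density in the example is a hypothetical value.

```python
C_SIGN = 3.0   # visibility factor for light-reflecting signs
K_M = 8700.0   # mass extinction coefficient (m^2/kg) for flaming combustion

def visibility(soot_density, c=C_SIGN):
    """Visibility S = C / K with K = K_m * soot density (kg/m^3), capped at 30 m."""
    k = K_M * soot_density
    return 30.0 if k == 0.0 else min(c / k, 30.0)

print(visibility(2.3e-5))  # roughly 15 m for 23 mg/m^3 of soot (hypothetical value)
```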
Figure 16 shows the variation of the 20-second moving average visibility with time. Overall, it is clear that the refuge floor experiences better visibility when the wind blows from the North-East. Refuge area 2 was perfectly visible in every window configuration. While the designed window configuration showed some fluctuation of the visibility in refuge area 1, configuration - 1 kept the visibility at the maximum during the considered time. Configuration - 1 also achieved higher
visibility than the other two configurations, at around 20 m, in refuge area 1 when the wind was from the West. However, the visibility in refuge area 2 gradually decreased in each setup with the increase of the heat release rate, reaching 5 m when the fire was fully developed.
Figure 15: Smoke dispersion on refuge floor after 600 seconds of fire; (a)- wind is from North-East (b)- wind is from West; i-current setup, ii-1st modification, iii-2nd modification; p, q, r are the fire locations
### Velocity distribution within the refuge floor
The behaviour of the wind inside the refuge floor and around the building was analysed using the velocity vector profiles shown in figure 17. It is clear that the wind flow was affected by the shape of the building, and a void was created at the leeward side; wind near the void was drawn into it. The velocity distribution when the wind is from the North-East is shown in figure 17 (a). Air circulation inside refuge area 2 can be seen in configuration - 1 due to the windows in wall 06. The wind flow from wall 05 to wall 07 was stronger in configuration - 2 because of the increased opening area and the absence of windows in wall 06. Both modifications had higher wind velocities inside refuge area 1 than the designed window setup. The generated cross-ventilation is a key attribute for removing accumulated smoke.
Figure 17 (b) illustrates the results for the West wind direction. Windows in wall 01 were introduced in the modifications to minimize the excessive wind velocity in refuge area 1, and the results
Figure 16: Variation of visibility with time of different window setups at 2m height; (a) readings from P2 in refuge area 1 (b) readings from P3 in refuge area 2; i-wind is from West, ii- wind is from North-East
show that the solution proposed in the modifications is effective. Air was circulating inside refuge area 2, since the windows failed to induce an airflow through the floor; this could lead to severe smoke accumulation in refuge area 2. Modifications were applied to create an airflow by introducing windows in wall 06 and increasing the area of the windows in wall 05 and wall 07. Although there were some improvements, they were not enough to extract the smoke from the refuge floor. It is clear that the orientation of this building would have to be changed to obtain full benefits from natural ventilation; otherwise, this building must use mechanical ventilation to create an airflow inside refuge area 2.
## Conclusion
This paper presented the application of CFD to assessing the habitable conditions inside the refuge floor of a high-rise building during an emergency. The mathematical models, such as the turbulence and combustion models, were validated to ensure that the correct models were used for the simulation of the case building. The validation was conducted for a 10-storey building, and the results showed that the mathematical model was able to reproduce an actual scenario. The intention of this research was to assess the ability of natural ventilation to remove the smoke that could log inside the refuge floor; therefore, the model was set up to produce the maximum amount of smoke on the refuge floor. Temperature, CO concentration, and visibility were selected as the parameters for discussing the survivability of the refuge floor. Apart from the designed window configuration, two modifications were suggested for the assessment. A nearly stagnant flow could be seen inside refuge area 2 when the wind blew from the West direction; the openings failed to generate cross-ventilation since the windward side is blocked. This reveals that the orientation of the window openings and the wind direction should be taken into account when the ventilation strategy is designed. The results for refuge area 1 when the wind
Figure 17: Velocity vector distribution on the refuge floor at 2 m height after 600 seconds of fire; (a)- wind is from North-East (b)- wind is from West; i-current setup, ii-1st modification, iii-2nd modification; p, q, r are the fire locations
was from the West revealed the effectiveness of cross-ventilation in removing the smoke from the refuge floor. The window arrangement of each configuration was capable of providing cross-ventilation when the wind blew from the North-East direction. Overall, considering the profiles of temperature, CO concentration, visibility, and smoke behaviour, configuration - 1 was better than configuration - 2 and the designed setup.
Fire propagation through the service core was analysed in this study; fire propagation from the outside of the building should also be analysed. The propagation of fire through different types of facades, the analysis of the insulation materials of the facade system, and the impact of a drencher system as a fire provision are suggested for further research.
The authors appreciate the support of Prof. R.U. Halwathura, Eng. Shiromal Fernando, Eng. U. Virajini, Mr. H.W. Kularathna, Eng. S. Amarasekara, Mr. E.M. Jayathilaka Banda, Mr. Panduka Wijeywardena, and Mr. Lakmal Kandanage for the research. The authors also acknowledge the financial support provided by the University of Moratuwa.
|
2303.04936 | Degeneracy loci in the universal family of abelian varieties | Recent developments on the uniformity of the number of rational points on
curves and subvarieties in a moving abelian variety rely on the geometric
concept of the degeneracy locus. The first-named author investigated the
degeneracy locus in certain mixed Shimura varieties. In this expository note we
revisit some of these results while minimizing the use of mixed Shimura
varieties while working in a family of principally polarized abelian varieties.
We also explain their relevance for applications in diophantine geometry. | Ziyang Gao, Philipp Habegger | 2023-03-08T23:10:35Z | http://arxiv.org/abs/2303.04936v1 | # Degeneracy loci in the universal family of abelian varieties
###### Abstract.
Recent developments on the uniformity of the number of rational points on curves and subvarieties in a moving abelian variety rely on the geometric concept of the degeneracy locus. The first-named author investigated the degeneracy locus in certain mixed Shimura varieties. In this expository note we revisit some of these results while minimizing the use of mixed Shimura varieties while working in a family of principally polarized abelian varieties. We also explain their relevance for applications in diophantine geometry.
###### Contents
* 1 Introduction
* 2 Preliminaries on Abelian Schemes
* 3 Bi-algebraic Subvarieties and the Universal Family of Abelian Varieties
* 3.1 The Mumford-Tate Group
* 3.2 Bi-algebraic Subvarieties
* 4 A Criterion for Non-degeneracy
* 5 The Zeroth Degeneracy Locus in a Fiber Power
* 6 The First Degeneracy Locus and the Relative Manin-Mumford Conjecture
## 1. Introduction
The goal of this expository note is to reprove some arguments in [1, 1], especially regarding the degeneracy loci, _with a minimal use of the language of mixed Shimura varieties_.
With a view towards application, we will work in the following setup. Let \(\mathfrak{A}_{g}\to\mathbb{A}_{g}\) be the universal family of principally polarized \(g\)-dimensional abelian varieties with level-\(\ell\)-structure for some \(\ell\geq 3\). Then \(\mathfrak{A}_{g}\) carries the structure of a geometrically irreducible quasi-projective variety defined over a number field.
Let \(X\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}\). In [1], the first-named author defined the _\(t\)-th degeneracy locus_\(X^{\deg}(t)\) for each \(t\in\mathbb{Z}\); we refer to SS4 for a definition in our setting. By definition, \(X^{\deg}(t)\) is an at most countably infinite union of Zariski closed subsets of \(X\). Yet \(X^{\deg}(t)\) is Zariski closed in \(X\), see [1, Theorem 1.8].
The definition of \(X^{\deg}(t)\) involves _bi-algebraic subvarieties_ of \(\mathfrak{A}_{g}\) and \(\mathbb{A}_{g}\); bi-algebraic subvarieties are explained in the beginning of §3.2. Ullmo and Yafaev characterized [11] bi-algebraic subvarieties of \(\mathbb{A}_{g}\) as the _weakly special subvarieties_ of \(\mathbb{A}_{g}\), when we view \(\mathbb{A}_{g}\) as a pure Shimura variety. In the mixed setting, the bi-algebraic subvarieties of \(\mathfrak{A}_{g}\) are precisely the weakly special subvarieties, when we view \(\mathfrak{A}_{g}\) as a mixed Shimura variety, see [10, Definition 4.1(b)]. Then by some computation involving mixed Shimura varieties, a geometric characterization of bi-algebraic subvarieties of \(\mathfrak{A}_{g}\) is given by [1, Proposition 1.1].
[The remainder of the introduction could not be recovered; at this point the extracted source degenerates into repeated fragments. The text resumes mid-sentence below.]
and so ultimately Corollary 6.3, requires the Zariski closedness of \(X^{\deg}(1)\) in \(X\), which was proved using the theory of mixed Shimura varieties [1, Theorem 1.8]. We do not reprove Zariski closedness in the current paper.
Section 2 below contains some preliminaries on abelian schemes in characteristic \(0\).
### Acknowledgements
The authors would like to thank Gabriel Dill for discussions on abelian schemes and Lemma 2.1. ZG has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n\({}^{\circ}\) 945714). PH has received funding from the Swiss National Science Foundation project n\({}^{\circ}\) 200020_184623.
## 2. Preliminaries on Abelian Schemes
Let \(S\) be a smooth irreducible quasi-projective variety defined over an algebraically closed subfield of \(\mathbb{C}\). By abuse of notation we often consider our varieties as defined over \(\mathbb{C}\). Let \(\pi\colon\mathcal{A}\to S\) be an abelian scheme of relative dimension \(g\geq 1\). For \(s\in S\) we write \(\mathcal{A}_{s}\) for the abelian variety over \(\mathbb{C}(s)\). More generally, for a morphism \(T\to S\) of schemes we let \(\mathcal{A}_{T}\) denote the fiber product \(\mathcal{A}\times_{S}T\). Let \(\eta\in S\) denote the generic point.
Let \(\operatorname{End}(\mathcal{A}/S)\) denote the group of endomorphisms of the abelian scheme \(\mathcal{A}\to S\). It is a finitely generated free abelian group. For all \(s\in S\) let \(\varphi_{s}\) denote the restriction of \(\varphi\in\operatorname{End}(\mathcal{A}/S)\) to \(\mathcal{A}_{s}\). The associated group homomorphism
\[\operatorname{End}(\mathcal{A}/S) \to\operatorname{End}(\mathcal{A}_{s}/\mathbb{C}(s)),\] \[\varphi \mapsto\varphi_{s} \tag{2.1}\]
is injective. As \(S\) is smooth, an endomorphism of the generic fiber \(\mathcal{A}_{\eta}\) extends to an endomorphism of \(\mathcal{A}\) over \(S\) by [1, Proposition I.2.7]. Therefore, (2.1) is bijective for \(s=\eta\).
Observe that any \(\varphi\in\operatorname{End}(\mathcal{A}/S)\) is a proper morphism. So the image \(\varphi(\mathcal{A})\) is Zariski closed in \(\mathcal{A}\). We will consider \(\varphi(\mathcal{A})\) as a closed subscheme of \(\mathcal{A}\) with the reduced induced structure. As \(\mathcal{A}\) is reduced, \(\varphi(\mathcal{A})\) is the schematic image of \(\varphi\).
The following lemma on endomorphisms of \(\mathcal{A}/S\) relies on a result of Barroero-Dill and ultimately on the theory of group schemes.
**Lemma 2.1**.: _Let \(\varphi\in\operatorname{End}(\mathcal{A}/S)\) and set \(\mathcal{B}=\varphi(\mathcal{A})\). Then \(\mathcal{B}\) is an abelian subscheme of \(\mathcal{A}\). For all \(s\in S(\mathbb{C})\) the restriction \(\varphi_{s}\colon\mathcal{A}_{s}\to\mathcal{B}_{s}\) is surjective and its kernel has dimension \(g-\dim\mathcal{B}+\dim S\)._
Proof.: Let \(B=\varphi(\mathcal{A}_{\eta})\); this is an abelian subvariety of \(\mathcal{A}_{\eta}\) defined over the function field \(\mathbb{C}(\eta)\). By [1, Lemma 2.9] the abelian variety \(B\) is the generic fiber of an abelian subscheme \(\mathcal{B}\subseteq\mathcal{A}\).
Then \(\mathcal{A}_{\eta}\) is contained in the closed subset \(\varphi^{-1}(\mathcal{B})\) of \(\mathcal{A}\). As \(\mathcal{A}_{\eta}\) is dense in \(\mathcal{A}\) we have \(\varphi(\mathcal{A})\subseteq\mathcal{B}\), set-theoretically. Furthermore, \(\varphi\) is proper and its image contains the dense subset \(B\) of \(\mathcal{B}\). So \(\varphi(\mathcal{A})=\mathcal{B}\) as sets. But \(\mathcal{A}\) and \(\mathcal{B}\) are reduced, so \(\mathcal{B}\) is the schematic image of \(\varphi\). In particular, \(\varphi(\mathcal{A})\) is an abelian subscheme of \(\mathcal{A}\).
For all \(s\in S(\mathbb{C})\) we have \(\varphi(\mathcal{A}_{s})=\mathcal{B}_{s}\) and this image has dimension \(\dim\mathcal{B}-\dim S\) since \(\mathcal{B}\to S\) is smooth. The lemma follows as the kernel of \(\varphi_{s}\) has dimension \(\dim\mathcal{A}_{s}-\dim\mathcal{B}_{s}\).
We will often treat \(\varphi(\mathcal{A})\) as an abelian scheme over \(S\) and \(\varphi\) as the homomorphism \(\mathcal{A}\to\varphi(\mathcal{A})\).
Let \(V\) be an irreducible variety defined over \(\mathbb{C}\). A subset of \(V(\mathbb{C})\) is called _meager_ in \(V\) if it is contained in an at most countably infinite union of Zariski closed proper subsets of \(V\).
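For orientation (this example is ours and not needed later): if \(V=\mathbb{A}^{1}\), then the proper Zariski closed subsets of \(V\) are the finite sets, so a subset of \(V(\mathbb{C})\) is meager in \(V\) if and only if it is at most countable. In particular, a meager subset can still be dense in the Euclidean topology.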
We assume that all geometric endomorphisms of the generic fiber \(\mathcal{A}_{\eta}\) are defined over the function field \(\mathbb{C}(\eta)\) of \(S\). This condition is met, for example, if there is an integer \(\ell\geq 3\) such that all \(\ell\)-torsion points of \(\mathcal{A}_{\eta}\) are \(\mathbb{C}(S)\)-rational [20, Theorem 2.4].
A point \(s\in S(\mathbb{C})\) is called _endomorphism generic_ for \(\mathcal{A}/S\) if the homomorphism
\[\operatorname{End}(\mathcal{A}/S)\otimes\mathbb{Q}\to\operatorname{End}( \mathcal{A}_{s}/\mathbb{C})\otimes\mathbb{Q} \tag{2.2}\]
induced by (2.1) is surjective. Note that (2.2) is always injective. We define
\[S^{\mathrm{exc}}:=\big{\{}s\in S(\mathbb{C}):s\text{ is not endomorphism generic for }\mathcal{A}/S\big{\}}. \tag{2.3}\]
**Proposition 2.2**.: _The set \(S^{\mathrm{exc}}\) is meager in \(S\)._
Proof.: This proposition can be proved using Hodge theory. Masser [14, Proposition] gave a proof using an effective Nullstellensatz. In this reference one must assume, as we do above, that all geometric endomorphisms of \(\mathcal{A}_{\eta}\) are already defined over \(\mathbb{C}(S)\). As a consequence any endomorphism of \(\mathcal{A}_{s}\) for \(s\) outside a meager subset is the specialization of an endomorphism of the generic fiber. Then we use that (2.1) is surjective for \(s=\eta\).
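The following standard example illustrates Proposition 2.2; we include it only for orientation. Take \(g=1\) and let \(\mathcal{E}\to S=\mathbb{A}^{1}\setminus\{0,1\}\) be the Legendre family with fibers
\[\mathcal{E}_{\lambda}\colon y^{2}=x(x-1)(x-\lambda).\]
Then \(\operatorname{End}(\mathcal{E}/S)=\mathbb{Z}\), while \(\operatorname{End}(\mathcal{E}_{\lambda})\) is an order in an imaginary quadratic field precisely when \(\mathcal{E}_{\lambda}\) has complex multiplication. So \(S^{\mathrm{exc}}\) is the set of CM parameters \(\lambda\); it is countable, hence meager in \(S\), yet dense in \(S^{\mathrm{an}}\).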
A _coset_ in an abelian variety is the translate of an abelian subvariety by an arbitrary point.
**Lemma 2.3**.: _Let \(Y\) be an irreducible closed subvariety of \(\mathcal{A}\) with \(\pi(Y)=S\). Assume that there is a Zariski open and dense subset \(U\subseteq\pi(Y)\) such that for all \(s\in U(\mathbb{C})\), some irreducible component of \(Y_{s}\) is a coset in \(\mathcal{A}_{s}\). There exists \(\varphi\in\operatorname{End}(\mathcal{A}/S)\) with the following properties:_
1. _We have_ \(\dim\varphi(Y)=\dim\pi(Y)\)_._
2. _For all_ \(s\in S(\mathbb{C})\) _we have_ \(\dim\ker\varphi_{s}=\dim Y-\dim\pi(Y)\)_._
_Moreover, if \(\dim Y=\dim\pi(Y)\), then \(\varphi\) is the identity._
Proof.: If \(\dim Y=\dim\pi(Y)\) we take \(\varphi\) to be the identity; the conclusions are all true. Otherwise, by generic flatness, we may and do replace \(U\) by a Zariski open and dense subset such that \(Y_{s}\) is equidimensional of dimension \(\dim Y-\dim\pi(Y)\) for all \(s\in U(\mathbb{C})\).
Let \(s\in U(\mathbb{C})\setminus S^{\mathrm{exc}}\). We fix an irreducible component \(Z_{s}\) of \(Y_{s}\) that is a coset in \(\mathcal{A}_{s}\), necessarily of dimension \(\dim Y-\dim\pi(Y)\). Next we pick \(\varphi_{s}\in\operatorname{End}(\mathcal{A}_{s}/\mathbb{C})\) whose kernel contains a translate of the said coset as an irreducible component. After replacing \(\varphi_{s}\) by a positive integer multiple, it extends to an endomorphism \(\varphi\) of \(\mathcal{A}\), as (2.2) is bijective.
For each \(\varphi\in\operatorname{End}(\mathcal{A}/S)\) we define \(\Sigma_{\varphi}\) to be the set of points \(s\in U(\mathbb{C})\setminus S^{\mathrm{exc}}\) with \(\varphi_{s}=\varphi\). We have \(S(\mathbb{C})=(S\setminus U)(\mathbb{C})\cup S^{\mathrm{exc}}\cup\bigcup_{\varphi\in\operatorname{End}(\mathcal{A}/S)}\Sigma_{\varphi}\).
The set \(S^{\mathrm{exc}}\) is meager in \(S\) by Proposition 2.2 and so is \((S\setminus U)(\mathbb{C})\cup S^{\mathrm{exc}}\). As \(\operatorname{End}(\mathcal{A}/S)\) is at most countably infinite, the Baire Category Theorem implies that there exists \(\varphi\in\operatorname{End}(\mathcal{A}/S)\) such that the closure of \(\Sigma_{\varphi}\) in \(S^{\mathrm{an}}\) has non-empty interior. In particular, \(\Sigma_{\varphi}\) is Zariski dense in \(S\).
For all \(s\in\Sigma_{\varphi}\) we have \(\dim Z_{s}=\dim Y-\dim\pi(Y)\). The Zariski closure of \(Z=\bigcup_{s\in\Sigma_{\varphi}}Z_{s}\) in \(Y\) dominates \(S\) and its fibers above \(\Sigma_{\varphi}\) have dimension \(\dim Y-\dim\pi(Y)\); so this closure has dimension \(\dim Y\) and, as \(Y\) is irreducible, \(Z\) lies Zariski dense in \(Y\).
For all \(s\in\Sigma_{\varphi}\), each \(Z_{s}\) is contained in a fiber of \(\varphi|_{Y}\) by our choice of \(\varphi_{s}\). So \(Z_{s}\), being an irreducible component of \(Y_{s}\), is an irreducible component of a fiber of \(\varphi|_{Y}\).
Generically, fibers of \(\varphi|_{Y}\) are equidimensional of dimension \(\dim Y-\dim\varphi(Y)\). So there exists \(s_{0}\in\Sigma_{\varphi}\) such that \(Z_{s_{0}}\) meets such a (generic) fiber. Then \(\dim Y-\dim\varphi(Y)=\dim Z_{s_{0}}=\dim\ker\varphi_{s_{0}}\). Recall that \(Y_{s_{0}}\) is equidimensional of dimension \(\dim Y-\dim\pi(Y)\) and has \(Z_{s_{0}}\) as an irreducible component. We conclude \(\dim Y-\dim\varphi(Y)=\dim Y-\dim\pi(Y)\) and so \(\dim\varphi(Y)=\dim\pi(Y)\). This implies (i). By Lemma 2.1 all \(\ker\varphi_{s}\) have the same dimension, here equal to \(\dim Y-\dim\pi(Y)\). This concludes (ii).
The exceptional set of an irreducible closed subvariety \(Y\) of \(\mathcal{A}\) is defined to be
\[Y^{\operatorname{exc}}:=\{P\in Y(\mathbb{C}):P\text{ is contained in a proper algebraic subgroup of }\mathcal{A}_{\pi(P)}\}. \tag{2.4}\]
If \(N\in\mathbb{Z}\) then \([N]\) denotes the multiplication-by-\(N\) morphism \(\mathcal{A}\to\mathcal{A}\).
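To illustrate (2.4) (an observation we do not use later): if \(\mathcal{A}_{\pi(P)}\) is a simple abelian variety, then its proper algebraic subgroups are finite, and so \(P\in Y^{\operatorname{exc}}\) if and only if \(P\) is a torsion point of \(\mathcal{A}_{\pi(P)}\).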
**Lemma 2.4**.: _Let \(Y\) be an irreducible closed subvariety of \(\mathcal{A}\) and let \(S^{\prime}=\pi(Y)^{\operatorname{reg}}\) denote the regular locus of \(\pi(Y)\). We have one of the following two alternatives._
1. _Either_ \(Y^{\operatorname{exc}}\) _is meager in_ \(Y\)_,_
2. _or every_ \(P\in\pi|_{Y}^{-1}(S^{\prime})(\mathbb{C})\) _lies in a proper algebraic subgroup of_ \(\mathcal{A}_{\pi(P)}\)_. In this case,_ \(\bigcup_{N\in\mathbb{N}}[N](Y)\) _is not Zariski dense in_ \(\pi^{-1}(\pi(Y))\) _and if_ \(\eta\) _is the generic point of_ \(\pi(Y)\)_, then_ \(Y_{\eta}\) _lies in a proper algebraic subgroup of_ \(\mathcal{A}_{\eta}\)_._
Proof.: Let \(Y^{\prime}=Y\cap\mathcal{A}_{S^{\prime}}\). Suppose \(P\in Y^{\prime}(\mathbb{C})\) is in a proper algebraic subgroup of \(\mathcal{A}_{s}\) with \(s=\pi(P)\). Then there exists \(\varphi_{s}\in\operatorname{End}(\mathcal{A}_{s}/\mathbb{C})\setminus\{0\}\) with \(\varphi_{s}(P)=0\). If \(s\not\in S^{\prime\operatorname{exc}}\), then by definition some positive multiple of \(\varphi_{s}\) extends to an element of \(\operatorname{End}(\mathcal{A}_{S^{\prime}}/S^{\prime})\setminus\{0\}\). Therefore,
\[Y^{\operatorname{exc}}\subseteq\pi|_{Y}^{-1}(\pi(Y)\setminus S^{\prime})\cup \pi^{-1}(S^{\prime\operatorname{exc}})\cup\bigcup_{\varphi\in\operatorname{ End}(\mathcal{A}_{S^{\prime}}/S^{\prime})\setminus\{0\}}\ker\varphi. \tag{2.5}\]
By Proposition 2.2 the set \(\pi|_{Y}^{-1}(S^{\prime\operatorname{exc}})\) is meager in \(Y\). Moreover, \(\pi|_{Y}^{-1}(\pi(Y)\setminus S^{\prime})\) is Zariski closed and proper in \(Y\), and hence its complex points form a meager subset of \(Y\). Finally, the last union in (2.5) is an at most countably infinite union of proper algebraic subsets of \(\mathcal{A}_{S^{\prime}}\).
So if we are not in alternative (i), then there exists \(\varphi\in\operatorname{End}(\mathcal{A}_{S^{\prime}}/S^{\prime})\setminus\{0\}\) with \(Y\subseteq\overline{\ker\varphi}\), the Zariski closure of \(\ker\varphi\) in \(\mathcal{A}\). Note that \(Y^{\prime}=Y\cap\mathcal{A}_{S^{\prime}}\subseteq\overline{\ker\varphi}\cap\mathcal{A}_{S^{\prime}}=\ker\varphi\). If \(P\in Y^{\prime}(\mathbb{C})\), then \(P\in\ker\varphi_{\pi(P)}\). By Lemma 2.1, \(\ker\varphi_{\pi(P)}\) is a proper algebraic subgroup of \(\mathcal{A}_{\pi(P)}\). Moreover, \([N](Y^{\prime})\subseteq\ker\varphi\) for all \(N\in\mathbb{N}\). So \(\bigcup_{N\in\mathbb{N}}[N](Y)\) lies in \(\pi|_{Y}^{-1}(\pi(Y)\setminus S^{\prime})\cup\ker\varphi\) and is thus not Zariski dense in \(\pi^{-1}(\pi(Y))\). Finally, the generic fiber of \(Y\to S\) lies in the generic fiber of \(\ker\varphi\to S\); the latter is a proper algebraic subgroup of \(\mathcal{A}_{\eta}\).
Here is a useful consequence of the previous lemma.
**Lemma 2.5**.: _Let \(Y\subseteq\mathcal{A}\) and \(X\subseteq\mathcal{A}\) be irreducible closed subvarieties with \(Y\subseteq X\) and \(\pi(Y)\cap\pi(X)^{\operatorname{reg}}\neq\emptyset\). If \(Y^{\operatorname{exc}}\) is meager in \(Y\), then \(X^{\operatorname{exc}}\) is meager in \(X\)._
Proof.: If \(X^{\operatorname{exc}}\) is not meager in \(X\), then by Lemma 2.4 every point \(P\in X(\mathbb{C})\) with \(\pi(P)\in\pi(X)^{\operatorname{reg}}\) lies in a proper algebraic subgroup of \(\mathcal{A}_{\pi(P)}\). In particular, the complex
points of \(Y\cap\pi^{-1}(\pi(X)^{\rm reg})\) lie in \(Y^{\rm exc}\). The hypothesis implies that \(Y^{\rm exc}\) contains a non-empty open subset of \(Y^{\rm an}\). So \(Y^{\rm exc}\) is not meager in \(Y\) by the Baire Category Theorem.
## 3. Bi-algebraic Subvarieties and the Universal Family of Abelian Varieties
Ullmo and Yafaev [11] characterized the bi-algebraic subvarieties of (pure) Shimura varieties: they are precisely the weakly special subvarieties, _i.e._, the geodesic subvarieties studied by Moonen [10]. For a definition of bi-algebraic subvariety we refer to §3.2 below. The first-named author [1, §3] gave a complete characterization of the bi-algebraic subvarieties of \(\mathfrak{A}_{g}\), based on [1, §8]. Below in Proposition 3.2 we follow the approach presented in these references but minimize the language of mixed Shimura varieties. Our main tool is Andre's normality theorem [1] for variations of mixed Hodge structures.
### 3.1. The Mumford-Tate Group
Let \(g\geq 1\) and let \(\pi\colon\mathfrak{A}_{g}\to\mathbb{A}_{g}\) be the universal family of principally polarized \(g\)-dimensional abelian varieties with level-\(\ell\)-structure for some \(\ell\geq 3\). Then \(\mathfrak{A}_{g}\) and \(\mathbb{A}_{g}\) are geometrically irreducible, smooth quasi-projective varieties defined over a number field which we assume is a subfield of \(\mathbb{C}\). We consider all varieties as defined over a subfield of \(\mathbb{C}\), sometimes executing a base change to \(\mathbb{C}\) without mention.
Let \(\mathfrak{H}_{g}\) denote Siegel's upper half space, _i.e._, the symmetric matrices in \(\operatorname{Mat}_{g\times g}(\mathbb{C})\) with positive definite imaginary part. By abuse of notation we write
\[\operatorname{unif}\colon\mathfrak{H}_{g}\to\mathbb{A}_{g}^{\rm an}\quad \text{and}\quad\operatorname{unif}\colon\mathbb{C}^{g}\times\mathfrak{H}_{g} \to\mathfrak{A}_{g}^{\rm an}\]
for both holomorphic uniformizing maps. Recall that \(\operatorname{Sp}_{2g}(\mathbb{R})\), the group of real points of the symplectic group, acts on \(\mathfrak{H}_{g}\).
We identify \(\mathbb{R}^{g}\times\mathbb{R}^{g}\times\mathfrak{H}_{g}\) with \(\mathbb{C}^{g}\times\mathfrak{H}_{g}\) via the natural semi-algebraic bijection
\[(\tau,u,v)\leftrightarrow(\tau,z)\quad\text{where }z=\tau u+v. \tag{3.1}\]
In the former coordinates, the corresponding uniformizing map \(\mathfrak{H}_{g}\times\mathbb{R}^{2g}\to\mathfrak{A}_{g}^{\rm an}\) is real analytic.
Let \(s\in\mathbb{A}_{g}(\mathbb{C})\) and fix \(\tau\in\mathfrak{H}_{g}\) in its preimage under the uniformizing map, _i.e._, \(s=\operatorname{unif}(\tau)\). Let \(1_{g}\) denote the \(g\times g\) unit matrix; then the columns of \((\tau,1_{g})\) are an \(\mathbb{R}\)-basis of \(\mathbb{C}^{g}\) and \(\mathfrak{A}_{g,s}^{\rm an}\cong\mathbb{C}^{g}/(\tau\mathbb{Z}^{g}+\mathbb{Z}^{g})\). The period lattice basis \((\tau,1_{g})\) allows us to identify \(H_{1}(\mathfrak{A}_{g,s}^{\rm an},\mathbb{Z})\) with \(\mathbb{Z}^{g}\times\mathbb{Z}^{g}\) and \(H_{1}(\mathfrak{A}_{g,s}^{\rm an},\mathbb{R})\) with \(\mathbb{R}^{g}\times\mathbb{R}^{g}\).
We briefly recall the monodromy action of \(\pi_{1}(\mathbb{A}_{g}^{\rm an},s)\), the (topological) fundamental group of \(\mathbb{A}_{g}^{\rm an}\) based at \(s\), on singular homology \(H_{1}(\mathfrak{A}_{g,s}^{\rm an},\mathbb{Z})\).
Suppose \([\gamma]\in\pi_{1}(\mathbb{A}_{g}^{\rm an},s)\) is represented by a loop \(\gamma\) in \(\mathbb{A}_{g}^{\rm an}\) based at \(s\). Then a lift \(\tilde{\gamma}\) of \(\gamma\) to \(\mathfrak{H}_{g}\) starting at \(\tau\) ends at \(M\tau\in\mathfrak{H}_{g}\) for some \(M=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\operatorname{Sp}_{2g}(\mathbb{Z})\). Then \(M\tau\) is the period matrix of the abelian variety \(\mathbb{C}^{g}/(M\tau\mathbb{Z}^{g}+\mathbb{Z}^{g})\) which is isomorphic to \(\mathbb{C}^{g}/(\tau\mathbb{Z}^{g}+\mathbb{Z}^{g})\). To describe this isomorphism we need the identity
\[I(M,\tau)^{\top}(M\tau,1_{g})=(\tau,1_{g})M^{\top}, \tag{3.2}\]
where \(I(M,\tau)=c\tau+d\), note the transpose and see [1, SS8.1 and Remark 8.1.4] for a discussion. We rearrange this equation. The map
\[\tau u+v\mapsto(I(M,\tau)^{\top})^{-1}(\tau u+v)=(M\tau,1_{g})(M^{\top})^{-1} \left(\begin{array}{c}u\\ v\end{array}\right), \tag{3.3}\]
here \(u,v\in\mathbb{R}^{g}\) are column vectors, induces the isomorphism \(\mathbb{C}^{g}/(\tau\mathbb{Z}^{g}+\mathbb{Z}^{g})\to\mathbb{C}^{g}/(M\tau \mathbb{Z}^{g}+\mathbb{Z}^{g})\).
By (3.3), the monodromy representation expressed in these coordinates is given by
\[\begin{split}\rho\colon\pi_{1}(\mathbb{A}^{\text{an}}_{g},s)& \to\operatorname{Sp}_{2g}(\mathbb{Z})\\ &[\gamma]\mapsto(M^{\top})^{-1}.\end{split} \tag{3.4}\]
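As a consistency check, which we include for the reader's convenience, consider \(g=1\), so \(\operatorname{Sp}_{2}=\operatorname{SL}_{2}\). For \(M=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\) identity (3.2) reads
\[(c\tau+d)(M\tau,1)=(\tau,1)\left(\begin{array}{cc}a&c\\ b&d\end{array}\right)=(a\tau+b,c\tau+d),\]
which recovers the classical action \(M\tau=(a\tau+b)(c\tau+d)^{-1}\) on the upper half plane; and (3.4) sends \([\gamma]\) to \((M^{\top})^{-1}=\left(\begin{array}{cc}d&-c\\ -b&a\end{array}\right)\).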
Next we recall the definition of the Mumford-Tate group in our context.
We continue to assume \(\tau\in\mathfrak{H}_{g}\). Choose any \(M\in\operatorname{Sp}_{2g}(\mathbb{R})\) with \(\tau=M(\sqrt{-1}\cdot 1_{g})\); such an \(M\) exists as \(\operatorname{Sp}_{2g}(\mathbb{R})\) acts transitively on \(\mathfrak{H}_{g}\). We set
\[J_{\tau}=(M^{\top})^{-1}\Omega M^{\top}\quad\text{where}\quad\Omega=\left( \begin{array}{cc}0&1_{g}\\ -1_{g}&0\end{array}\right). \tag{3.5}\]
We claim that \(J_{\tau}\) is independent of the choice of \(M\); it depends only on \(\tau\). Indeed, if \(M^{\prime}\) is a further element of \(\operatorname{Sp}_{2g}(\mathbb{R})\) with \(\tau=M^{\prime}(\sqrt{-1}\cdot 1_{g})\), then \(M=M^{\prime}N\) where \(N\in\operatorname{Sp}_{2g}(\mathbb{R})\) stabilizes \(\sqrt{-1}\cdot 1_{g}\). So \(N\) is of the form \(\left(\begin{array}{cc}a&b\\ -b&a\end{array}\right)\) where \(a,b\in\operatorname{Mat}_{g}(\mathbb{R})\). This implies \((N^{\top})^{-1}\Omega N^{\top}=\Omega\) and so \((M^{\prime\top})^{-1}\Omega M^{\prime\top}=J_{\tau}\) on substituting \(M^{\prime}=MN^{-1}\).
Say \(x,y\in\mathbb{R}\), then
\[(x1_{2g}+yJ_{\tau})^{\top}\Omega(x1_{2g}+yJ_{\tau})=x^{2}\Omega+y^{2}J_{\tau}^ {\top}\Omega J_{\tau}+xy(\Omega J_{\tau}+J_{\tau}^{\top}\Omega).\]
The group \(\operatorname{Sp}_{2g}(\mathbb{R})\) contains \(\Omega\) and is mapped to itself by matrix transposition. Hence \(J_{\tau}\in\operatorname{Sp}_{2g}(\mathbb{R})\). Moreover, \(J_{\tau}^{2}=-1_{2g}\). So \(J_{\tau}^{\top}\Omega J_{\tau}=\Omega\) and \(J_{\tau}^{\top}\Omega=\Omega J_{\tau}^{-1}=-\Omega J_{\tau}\). We conclude \(h_{\tau}(z)=x1_{2g}+yJ_{\tau}\in\operatorname{GSp}_{2g}(\mathbb{R})\) for all \(z=x+\sqrt{-1}y\in\mathbb{C}^{\times}=\mathbb{C}\setminus\{0\}\) where \(x,y\in\mathbb{R}\). Moreover,
\[h_{\tau}\colon\mathbb{C}^{\times}\to\operatorname{GSp}_{2g}(\mathbb{R})\]
is a group homomorphism.
By (3.5) we have \(J_{\tau}^{\top}\tau=\tau\). Below we use the well-known identity \(I(MM^{\prime},\tau)=I(M,M^{\prime}\tau)I(M^{\prime},\tau)\) for all \(M,M^{\prime}\in\operatorname{Sp}_{2g}(\mathbb{R})\) and all \(\tau\in\mathfrak{H}_{g}\). We apply (3.2) to \(J_{\tau}^{\top}\) where \(J_{\tau}=(M^{\top})^{-1}\Omega M^{\top}\) and \(\tau=M(\sqrt{-1}\cdot 1_{g})\) and compute
\[(\tau,1_{g})J_{\tau} =I(J_{\tau}^{\top},\tau)^{\top}(\tau,1_{g})\] \[=I(-M\Omega M^{-1},\tau)^{\top}(\tau,1_{g})\] \[=\left(I(-M\Omega,M^{-1}\tau)I(M^{-1},\tau)\right)^{\top}(\tau,1_{ g})\] \[=-\left(I(M\Omega,\sqrt{-1}\cdot 1_{g})I(M^{-1},\tau)\right)^{\top}( \tau,1_{g})\] \[=-\left(I(M,\Omega(\sqrt{-1}\cdot 1_{g}))I(\Omega,\sqrt{-1}\cdot 1_{g}) I(M^{-1},\tau)\right)^{\top}(\tau,1_{g}).\]
Next we use \(\Omega(\sqrt{-1}\cdot 1_{g})=\sqrt{-1}\cdot 1_{g}\). Hence
\[\begin{split}(\tau,1_{g})J_{\tau}&=\sqrt{-1}\left(I(M, \sqrt{-1}\cdot 1_{g})I(M^{-1},\tau)\right)^{\top}(\tau,1_{g})\\ &=\sqrt{-1}\left(I(M,M^{-1}\tau)I(M^{-1},\tau)\right)^{\top}( \tau,1_{g})\\ &=\sqrt{-1}I(1_{2g},\tau)^{\top}(\tau,1_{g})\\ &=\sqrt{-1}(\tau,1_{g}).\end{split} \tag{3.6}\]
So \(J_{\tau}\) represents multiplication by \(\sqrt{-1}\) in the real coordinates determined by the \(\mathbb{R}\)-basis \((\tau,1_{g})\) of \(\mathbb{C}^{g}\).
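As a sanity check (ours, and not needed later): for \(\tau=\sqrt{-1}\cdot 1_{g}\) we may take \(M=1_{2g}\) in (3.5), so \(J_{\tau}=\Omega\), and indeed
\[(\sqrt{-1}\cdot 1_{g},1_{g})\Omega=(-1_{g},\sqrt{-1}\cdot 1_{g})=\sqrt{-1}(\sqrt{-1}\cdot 1_{g},1_{g}),\]
in agreement with (3.6).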
Let \(s\in\mathbb{A}_{g}(\mathbb{C})\) lie below \(\tau\in\mathfrak{H}_{g}\). The _Mumford-Tate group_\(\operatorname{MT}(\mathfrak{A}_{g,s})\) of \(\mathfrak{A}_{g,s}\) is the smallest algebraic subgroup of \(\operatorname{GSp}_{2g,\mathbb{Q}}\) whose group of \(\mathbb{R}\)-points contains \(h_{\tau}(\mathbb{C}^{\times})\). As \(J_{\tau}=h_{\tau}(\sqrt{-1})\) we certainly have \(J_{\tau}\in\operatorname{MT}(\mathfrak{A}_{g,s})(\mathbb{R})\).
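Two standard examples may serve as orientation; we state them without proof and do not use them later. For \(s\) outside a meager subset of \(\mathbb{A}_{g}(\mathbb{C})\) one has \(\operatorname{MT}(\mathfrak{A}_{g,s})=\operatorname{GSp}_{2g,\mathbb{Q}}\). At the other extreme, \(\mathfrak{A}_{g,s}\) has complex multiplication if and only if \(\operatorname{MT}(\mathfrak{A}_{g,s})\) is an algebraic torus.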
### 3.2. Bi-algebraic Subvarieties
We keep the conventions introduced in the beginning of §3.1. An irreducible closed subvariety \(Y\subseteq\mathfrak{A}_{g}\) is called _bi-algebraic_ if some (or equivalently any) complex analytic irreducible component of \(\operatorname{unif}^{-1}(Y^{\operatorname{an}})\) equals an irreducible component of \(\tilde{Y}(\mathbb{C})\cap(\mathbb{C}^{g}\times\mathfrak{H}_{g})\) for an algebraic subset \(\tilde{Y}\subseteq\mathbb{G}_{\operatorname{a},\mathbb{C}}^{g}\times\operatorname{Mat}_{g\times g,\mathbb{C}}\). All irreducible components of the intersection of two bi-algebraic subvarieties of \(\mathfrak{A}_{g}\) are bi-algebraic. So any irreducible closed subvariety \(Y\) of \(\mathfrak{A}_{g}\) is contained in a bi-algebraic subvariety \(Y^{\operatorname{biZar}}\) of \(\mathfrak{A}_{g}\) that is minimal with respect to inclusion.
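A standard example, included only as an illustration: the zero section of \(\pi\), and more generally the image of any torsion section, is bi-algebraic. Indeed, the preimage of the zero section under \(\operatorname{unif}\) is \(\bigcup_{m,n\in\mathbb{Z}^{g}}\{(\tau m+n,\tau):\tau\in\mathfrak{H}_{g}\}\), and each member of this union is the intersection of \(\mathbb{C}^{g}\times\mathfrak{H}_{g}\) with an algebraic subset of \(\mathbb{G}_{\operatorname{a},\mathbb{C}}^{g}\times\operatorname{Mat}_{g\times g,\mathbb{C}}\), as \(\tau\mapsto\tau m+n\) is given by polynomials in the entries of \(\tau\).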
Bi-algebraic subvarieties of \(\mathbb{A}_{g}\) are defined in a similar manner. By a theorem of Ullmo-Yafaev [11, Theorem 1.2], the bi-algebraic subvarieties of \(\mathbb{A}_{g}\) are precisely the weakly special subvarieties of \(\mathbb{A}_{g}\); here we consider \(\mathbb{A}_{g}\) as a Shimura variety. For any irreducible closed subvariety \(Y\) of \(\mathbb{A}_{g}\), we use \(Y^{\operatorname{biZar}}\) to denote the minimal bi-algebraic subvariety containing \(Y\).
**Lemma 3.1**.: _Let \(Y\subseteq\mathfrak{A}_{g}\) be an irreducible closed subvariety that is bi-algebraic. For all \(P\in Y(\mathbb{C})\), each irreducible component of \(Y_{\pi(P)}\) is a coset in \(\mathfrak{A}_{g,\pi(P)}\) of dimension at least \(\dim Y-\dim\pi(Y)\)._
Proof.: Let \(P\in Y(\mathbb{C})\) and let \(C\) be an irreducible component of \(Y_{\pi(P)}\). By [10, Corollary 14.116 and Remark 14.117] we have \(\dim C\geq\dim Y-\dim\pi(Y)\).
The irreducible component \(C\) of \(Y_{\pi(P)}\) is a bi-algebraic subset of the abelian variety \(\mathfrak{A}_{g,\pi(P)}\). The lemma follows as, by [11, Proposition 5.1], \(C\) is a coset in the ambient abelian variety.
We now come to a structural result of bi-algebraic subsets. We refer to the first-named author's more comprehensive result in [10] (the statement of [10, Proposition 3.3] contains a mistake; for a correct version see [10, Proposition 5.3]) using the language of mixed Shimura varieties.
**Proposition 3.2**.: _Let \(Y\subseteq\mathfrak{A}_{g}\) be a bi-algebraic subvariety with \(Y^{\operatorname{exc}}\) meager in \(Y\)._
1. _There is a vector space_ \(W\subseteq\mathbb{R}^{2g}\) _defined over_ \(\mathbb{Q}\) _with_ \(\dim W=2(\dim Y-\dim\pi(Y))\) _and with the following property. For all_ \(s=\operatorname{unif}(\tau)\in\pi(Y)(\mathbb{C})\)_, with_ \(\tau\in\mathfrak{H}_{g}\)_, the fiber_ \(Y_{s}\) _is a finite union of translates of_ \(\operatorname{unif}(\{\tau\}\times W)\subseteq\mathfrak{A}_{g,s}^{\operatorname{an}}\)_, which is an abelian subvariety_ \(C_{s}\)_._
2. _The quotient abelian varieties_ \(\mathfrak{A}_{g,s}/C_{s}\) _are pairwise isomorphic for all_ \(s\in\pi(Y)(\mathbb{C})\)_._
Each \(\tau\in\mathfrak{H}_{g}\) endows \(\mathbb{R}^{2g}\) with the structure of a \(\mathbb{C}\)-vector space; multiplication by \(\sqrt{-1}\) is represented by \(J_{\tau}\) from (3.5). The subspace \(W\subseteq\mathbb{R}^{2g}\) from part (i) is a \(\mathbb{C}\)-vector subspace for all \(\tau\) in question. The image \(\operatorname{unif}(\{\tau\}\times W)\) is an abelian subvariety of \(\mathfrak{A}_{g,s}\) of dimension \(\dim Y-\dim\pi(Y)\). In particular, \(Y\to\pi(Y)\) is equidimensional.
Proof.: If \(\pi(Y)\) is a point, say \(s\in\mathbb{A}_{g}(\mathbb{C})\), then \(Y_{s}\) is a coset in \(\mathfrak{A}_{g,s}\) by Lemma 3.1. The proposition holds in this case.
We will now assume \(\dim\pi(Y)\geq 1\). We identify \(\mathbb{R}^{2g}\times\mathfrak{H}_{g}\) with the universal covering of \(\mathfrak{A}_{g}^{\operatorname{an}}\), sometimes appealing to the complex structure induced by (3.1). The fundamental group of \(\mathfrak{A}_{g}^{\operatorname{an}}\) based at some point \(P\) is a subgroup of \(\mathbb{Z}^{2g}\rtimes\operatorname{Sp}_{2g}(\mathbb{Z})\). The element \((M,\omega)\) acts by
\[(M\tau,M*u+\omega)\]
on \((\tau,u)\), where \(M*u=(M^{\top})^{-1}u\).
Recall that the ambient variety \(\mathfrak{A}_{g}\) is quasi-projective and so is \(Y\). By Bertini's Theorem a general linear space of codimension \(\dim Y-1\) intersected with \(Y^{\operatorname{reg}}\) is a smooth, irreducible curve \(\mathbf{x}\) that is quasi-finite over \(\pi(\mathbf{x})\). By a suitable version of Lefschetz's Theorem for the topological fundamental group we may also assume that the homomorphism
\[\pi_{1}(\mathbf{x}^{\operatorname{an}},P)\to\pi_{1}(Y^{\operatorname{reg,an}},P) \tag{3.7}\]
induced by the inclusion \(\mathbf{x}^{\operatorname{an}}\to Y^{\operatorname{reg,an}}\) is surjective for all \(P\in\mathbf{x}(\mathbb{C})\); see [1, Lemme 1.4]. We may fix \(P\) in very general position. For example, \(P\) is not contained in a proper algebraic subgroup of \(\mathfrak{A}_{g,s}\) for \(s=\pi(P)\). If we replace \(\mathbf{x}\) by a Zariski open and dense subset, the image of the induced homomorphism has finite index in \(\pi_{1}(Y^{\operatorname{reg,an}},P)\); see [1, Lemme 4.4.17]. This suffices for us. So we may assume that \(\pi|_{\mathbf{x}}\colon\mathbf{x}\to\pi(\mathbf{x})\) is finite and etale.
Let \(\Gamma\) denote the image of \(\pi_{1}(\mathbf{x}^{\operatorname{an}},P)\) in \(\mathbb{Z}^{2g}\rtimes\operatorname{Sp}_{2g}(\mathbb{Z})\). Let \(\operatorname{Mon}(\mathbf{x})\) be the neutral component of the Zariski closure of \(\Gamma\) in \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\rtimes\operatorname{Sp}_{2g, \mathbb{Q}}\) and let \(\operatorname{Mon}(Y^{\operatorname{reg}})\) be the neutral component of the Zariski closure of the image of \(\pi_{1}(Y^{\operatorname{reg,an}},P)\). We call \(\operatorname{Mon}(\mathbf{x})\) the _connected algebraic monodromy group_ of \(\mathbf{x}\). By the surjectivity of (3.7) and the discussion below we have
\[\operatorname{Mon}(\mathbf{x})=\operatorname{Mon}(Y^{\operatorname{reg}}). \tag{3.8}\]
By Lemma 3.1 we have \(P\in C(\mathbb{C})\) where \(C\) is an irreducible component of \(Y_{s}\) and a coset in \(\mathfrak{A}_{g,s}\) with \(\dim C\geq\dim Y-\dim\pi(Y)\). Now \(C\cap Y^{\operatorname{reg}}\) is Zariski dense and open in \(C\). So the image of \(\pi_{1}(C^{\operatorname{an}}\cap Y^{\operatorname{reg,an}},P)\) in \(\pi_{1}(C^{\operatorname{an}},P)\), induced by inclusion, has finite index. But \(\pi_{1}(C^{\operatorname{an}},P)\) can be identified with a subgroup of \(\mathbb{Z}^{2g}\cong H_{1}(\mathfrak{A}_{g,s}^{\operatorname{an}},\mathbb{Z})\) of rank \(2\dim C\geq 2(\dim Y-\dim\pi(Y))\).
The kernel of the projection \(\operatorname{pr}\colon\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\rtimes\operatorname{Sp}_{2g,\mathbb{Q}}\to\operatorname{Sp}_{2g,\mathbb{Q}}\) restricted to \(\operatorname{Mon}(Y^{\operatorname{reg}})\) is an algebraic subgroup of \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\times\{1_{2g}\}\). So it is \(W\times\{1_{2g}\}\) with \(W\) a linear subspace of \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\). By the previous paragraph and by (3.8) we have
\[\dim W\geq 2(\dim Y-\dim\pi(Y)). \tag{3.9}\]
Let \(Z=\pi(Y)^{\rm reg}\). Then there is a natural representation \(\pi_{1}(Z^{\rm an},s)\to{\rm Sp}_{2g}(\mathbb{Z})\). The connected algebraic monodromy group \({\rm Mon}(Z)\subseteq{\rm Sp}_{2g,\mathbb{Q}}\) is the neutral component of the Zariski closure of the image of \(\pi_{1}(Z^{\rm an},s)\).
Note that \({\rm pr}({\rm Mon}(Y^{\rm reg}))={\rm Mon}(Z)\) by [12, Lemme 4.4.17].
Let \(M\in\operatorname{Mon}(Z)(\mathbb{C})\). The preimage \(\operatorname{pr}|_{\operatorname{Mon}(Y^{\operatorname{reg}})}^{-1}(M)\) is \((M,\psi(M)+W)\) where \(\psi(M)\) is a unique complex point of \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}/W\). For all \(M,M^{\prime}\in\operatorname{Mon}(Z)(\mathbb{C})\) we have
\[(M,\psi(M)+W)(M^{\prime},\psi(M^{\prime})+W)=(MM^{\prime},\psi(M)+M*\psi(M^{ \prime})+W)=(MM^{\prime},\psi(MM^{\prime})+W).\]
So \(\psi\colon{\rm Mon}(Z)\to\mathbb{C}^{2g}/W(\mathbb{C})\) is a cocycle. It must be a coboundary as \({\rm Mon}(Z)\) is semi-simple or trivial, by work of Deligne [12, Corollaire 4.2.9(a)]. Hence there exists \(v_{0}\in\mathbb{C}^{2g}\) with
\[\psi(M)=M*v_{0}-v_{0}+W\quad\text{for all}\quad M\in{\rm Mon}(Z)(\mathbb{C}).\]
Let \(\tilde{Y}\) be an algebraic subset of \(\mathbb{G}_{\operatorname{a},\mathbb{C}}^{g}\times\operatorname{Mat}_{g\times g,\mathbb{C}}\) such that the preimage of \(Y(\mathbb{C})\) in \(\mathbb{C}^{g}\times\mathfrak{H}_{g}\) and \(\tilde{Y}(\mathbb{C})\cap(\mathbb{C}^{g}\times\mathfrak{H}_{g})\) have a common complex analytic irreducible component, say \(\tilde{Y}_{0}\). We have \(\dim\tilde{Y}_{0}=\dim Y\).
Suppose \(\tau_{0}\in\mathfrak{H}_{g}\) lies above \(s\) and \((\tau_{0},u_{0})\in\tilde{Y}_{0}\) lies above \(P\). We return to real coordinates, so \(u_{0}\in\mathbb{R}^{2g}\).
An element \([\gamma]\in\Gamma\) is represented by a loop \(\gamma\) in \({\bf x}^{\rm an}\) based at \(P\). We lift \(\gamma\) to an arc in \(\mathbb{R}^{2g}\times\mathfrak{H}_{g}\) starting at \((\tau_{0},u_{0})\in\tilde{Y}(\mathbb{C})\). In particular, the end point of the lift lies in \(\tilde{Y}_{0}\) and equals \((M,u)(\tau_{0},u_{0})\) with \((M,u)=[\gamma]\in\Gamma\). For the orbit of \((\tau_{0},u_{0})\) under \(\Gamma\) we have
\[\Gamma\cdot(\tau_{0},u_{0})\subseteq\tilde{Y}_{0}\subseteq\tilde{Y}(\mathbb{C}).\]
By definition \(\tilde{Y}(\mathbb{C})\) is an algebraic subset of \({\rm Mat}_{g\times g}(\mathbb{C})\times\mathbb{C}^{g}\). As \(P\) is a regular point of \(Y\) we have that \((\tau_{0},u_{0})\) is a regular point of \(\tilde{Y}_{0}\). So
\[{\rm Mon}(Y^{\rm reg})(\mathbb{R})^{+}\cdot(\tau_{0},u_{0})={\rm Mon}({\bf x} )(\mathbb{R})^{+}\cdot(\tau_{0},u_{0})=\{(M,u)(\tau_{0},u_{0}):(M,u)\in{\rm Mon }({\bf x})(\mathbb{R})^{+}\}\subseteq\tilde{Y}_{0}; \tag{3.10}\]
the superscript \(+\) signals taking the neutral component in the Euclidean topology.
In addition to the connected algebraic monodromy group, we have the corresponding Mumford-Tate group.
First, recall that \(s\in\pi(Y)(\mathbb{C})\) determines a principally polarized abelian variety \(\mathfrak{A}_{g,s}\) defined over \(\mathbb{C}\). We consider its Mumford-Tate group \({\rm MT}(\mathfrak{A}_{g,s})\) coming from the corresponding weight \(-1\) pure Hodge structure; it is a reductive algebraic group. Moreover, \({\rm MT}(\mathfrak{A}_{g,s})\) is naturally an algebraic subgroup of \({\rm GSp}_{2g,\mathbb{Q}}\).
Second, after a finite and etale base change, which is harmless for our investigations, \(\mathbf{x}\) becomes a section of an abelian scheme. It is a good, smooth one-motive (of rank \(\leq 1\)) in the sense of Deligne; see [1, §4 and Lemma 5]. Attached to \(\mathbf{x}\) is a variation of mixed Hodge structures. Restricted to each point of \(\mathbf{x}\) we obtain a mixed Hodge structure. By [1, §4 and Lemma 5] the mixed Hodge structure thus obtained has the same Mumford-Tate group for all sufficiently general points in \(\mathbf{x}\). We denote this group by \(\operatorname{MT}(\mathbf{x})\). We may assume \(P\) to be such a very general point. Then \(\operatorname{MT}(\mathbf{x})\) is naturally an algebraic subgroup of \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\rtimes\operatorname{GSp}_{2g,\mathbb{Q}}\).
We also write \(\operatorname{pr}\) for the projection \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\rtimes\operatorname{GSp}_{2g,\mathbb{Q}}\to\operatorname{GSp}_{2g,\mathbb{Q}}\). By [1, Lemma 2(c)] we have the surjectivity \(\operatorname{pr}(\operatorname{MT}(\mathbf{x}))=\operatorname{MT}(\mathfrak{A}_{g,s})\).
Andre [1, Theorem 1] proves that \(\operatorname{Mon}(\mathbf{x})\) is a normal subgroup of \(\operatorname{MT}(\mathbf{x})\) as \(P\) is in very general position. We do not require the statement that \(\operatorname{Mon}(\mathbf{x})\) is in fact a normal subgroup of the derived Mumford-Tate group.
Before moving on, we make the following remark. The second remark on [1, page 11] suggests that we could work directly with \(Y^{\operatorname{reg}}\), _i.e._, without passing to \(\mathbf{x}\).
Let \(M\in\operatorname{Mon}(Z)(\mathbb{C})\) be arbitrary. Then \((M,M*v_{0}-v_{0}+W)\subseteq\operatorname{Mon}(\mathbf{x})(\mathbb{C})\). Suppose \((h,v)\in\operatorname{MT}(\mathbf{x})(\mathbb{C})\). By Andre's Theorem, we have
\[(h,v)(M,M*v_{0}-v_{0}+W)(h,v)^{-1}\subseteq\operatorname{Mon}(\mathbf{x})( \mathbb{C}),\]
so
\[(hMh^{-1},v+hM*v_{0}-h*v_{0}-hMh^{-1}*v+h*W)\subseteq\operatorname{Mon}( \mathbf{x})(\mathbb{C}).\]
Recall (3.8). The second coordinate lies in \(\psi(hMh^{-1})=hMh^{-1}*v_{0}-v_{0}+W\), so
\[v+hM*v_{0}-h*v_{0}-hMh^{-1}*v+h*W=hMh^{-1}*v_{0}-v_{0}+W.\]
We draw two conclusions.
First, we have \(h*W=W\). As the projection \(\operatorname{MT}(\mathbf{x})\to\operatorname{MT}(\mathfrak{A}_{g,s})\) is surjective, it follows that \(\operatorname{MT}(\mathfrak{A}_{g,s})\) acts on \(W\). The reductive group \(\operatorname{MT}(\mathfrak{A}_{g,s})\) also acts on a linear subspace \(W^{\perp}\subseteq\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\) with \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}=W\oplus W^{\perp}\).
Second, and putting \(h=1\), we get
\[M*v-v\in W(\mathbb{C})\quad\text{for all}\quad M\in\operatorname{Mon}(Z)( \mathbb{C})\text{ and all }(1,v)\in\operatorname{MT}(\mathbf{x})(\mathbb{C}). \tag{3.11}\]
Now let us compute the kernel of \(\operatorname{MT}(\mathbf{x})\to\operatorname{MT}(\mathfrak{A}_{g,s})\) using [1, Proposition 1]. In its notation we set \(H=G=\operatorname{MT}(\mathbf{x})\) and claim \(E^{\prime}=\mathfrak{A}_{g,s}\). Indeed, \(P\) is not contained in a proper algebraic subgroup of \(\mathfrak{A}_{g,s}\) by hypothesis. So the cyclic subgroup it generates is Zariski dense in \(\mathfrak{A}_{g,s}\). The said proposition then implies that the kernel equals \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\). So (3.11) holds for all \(v\in\mathbb{C}^{2g}\).
In particular,
\(M*v-v\in W(\mathbb{C})\) for all \(M\in\operatorname{Mon}(Z)(\mathbb{C})\) and all \(v\in W^{\perp}(\mathbb{C})\). As \(\operatorname{Mon}(Z)\) is an algebraic subgroup of \(\operatorname{MT}(\mathfrak{A}_{g,s})\) it also acts on \(W^{\perp}\). As \(W(\mathbb{C})\cap W^{\perp}(\mathbb{C})=0\) we conclude that \(\operatorname{Mon}(Z)\) acts trivially on \(W^{\perp}\). So \(W^{\perp}(\mathbb{Q})\) is contained in the fixed part of the monodromy action on \(H_{1}(\mathfrak{A}_{g,s},\mathbb{Q})\).
Moreover, we write \(v_{0}=v_{0}^{\prime}+v_{0}^{\prime\prime}\) such that \(v_{0}^{\prime}\in W(\mathbb{R})\) and \(v_{0}^{\prime\prime}\in W^{\perp}(\mathbb{R})\). Then \(M*v_{0}=M*v_{0}^{\prime}+M*v_{0}^{\prime\prime}\in v_{0}^{\prime\prime}+W=v_{ 0}+W\) and
\[\psi(M)=M*v_{0}^{\prime}+M*v_{0}^{\prime\prime}-v_{0}^{\prime}-v_{0}^{\prime \prime}+W=M*v_{0}^{\prime}-v_{0}^{\prime}+W=W\]
for all \(M\in\operatorname{Mon}(Z)(\mathbb{C})\).
We summarize these last arguments and (3.8) by stating that the connected algebraic monodromy group satisfies
\[\operatorname{Mon}(Y^{\operatorname{reg}})=\operatorname{Mon}(\mathbf{x})=W \rtimes\operatorname{Mon}(Z).\]
By (3.10) we have
\[(\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0})\times(v_{0}+W(\mathbb{R}))=\{(M\tau_{0},v_{0}+w):M\in\operatorname{Mon}(Z)(\mathbb{R})^{+},w\in W(\mathbb{R})\}\subseteq\tilde{Y}_{0}.\]
By Moonen's work on weakly special subvarieties the orbit \(\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0}\) maps onto \(\pi(Y)\) under the uniformizing map \(\mathfrak{H}_{g}\to\mathbb{A}_{g}^{\operatorname{an}}\); see [11, §3 and Proposition 3.7]. Generically, the fiber of \(Y\to\pi(Y)\) has dimension \(\dim Y-\dim\pi(Y)\), which is \(\leq\frac{1}{2}\dim_{\mathbb{R}}W(\mathbb{R})\) by (3.9). Hence \(\dim_{\mathbb{R}}\tilde{Y}_{0}\leq\dim_{\mathbb{R}}\bigl((\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0})\times(v_{0}+W(\mathbb{R}))\bigr)\).
We now show \((\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0})\times(v_{0}+W(\mathbb{R}))= \tilde{Y}_{0}\). Indeed, note that the left-hand side is closed in \(\tilde{Y}_{0}\). Let \(T\) denote the singular points of the complex analytic space \(\tilde{Y}_{0}\). As \(\tilde{Y}_{0}\) is irreducible, \(\tilde{Y}_{0}\setminus T\) is a connected complex manifold. Moreover, \((\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0})\times(v_{0}+W(\mathbb{R} ))\setminus T\) is a topological (real) manifold of dimension \(2\dim\tilde{Y}_{0}\) contained in \(\tilde{Y}_{0}\setminus T\). So it is open in \(\tilde{Y}_{0}\setminus T\) by invariance of dimension. But it is also closed in \(\tilde{Y}_{0}\setminus T\). So \((\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0})\times(v_{0}+W(\mathbb{R }))\setminus T=\tilde{Y}_{0}\setminus T\). The claim follows as \(\tilde{Y}_{0}\setminus T\) is dense in \(\tilde{Y}_{0}\).
In particular, \((\operatorname{Mon}(Z)(\mathbb{R})^{+}\cdot\tau_{0})\times(v_{0}+W(\mathbb{R }))\) is complex analytic. Thus for all \(\tau\in\mathfrak{H}_{g}\) with \(\operatorname{unif}(\tau)\in\pi(Y)(\mathbb{C})\), \(W(\mathbb{R})\) is a complex subspace for the complex structure on \(\mathbb{R}^{2g}\) endowed by \(\tau\). Moreover, (3.9) is an equality.
This concludes (i) since \(W\) is an algebraic subgroup of \(\mathbb{G}_{\operatorname{a},\mathbb{Q}}^{2g}\). Part (ii) follows from [1, Corollaire 4.1.2] because \(W^{\perp}(\mathbb{Q})\) is contained in the fixed part of the monodromy action on \(H_{1}(\mathfrak{A}_{g,s},\mathbb{Q})\).
We end this section with a sufficient criterion for the meagerness of the bi-algebraic closure of a variety.
**Lemma 3.3**.: _Let \(Z\) be an irreducible closed subvariety of \(\mathbb{A}_{g}\), then \(Z\cap Z^{\operatorname{biZar,reg}}\neq\emptyset\). Let \(Y\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}\). Then \(\pi(Y)\cap\pi(Y^{\operatorname{biZar}})^{\operatorname{reg}}\neq\emptyset\). If \(Y^{\operatorname{exc}}\) is meager in \(Y\), then \(Y^{\operatorname{biZar,exc}}\) is meager in \(Y^{\operatorname{biZar}}\)._
Proof.: First we show that \(Z\) is not contained in the singular locus of \(Z^{\operatorname{biZar}}\). Indeed, being a singular point of \(Z^{\operatorname{biZar}}\) is an algebraic condition in \(\mathbb{A}_{g}\). A component of the preimage of \(Z^{\operatorname{biZar}}(\mathbb{C})\) under \(\mathfrak{H}_{g}\to\mathbb{A}_{g}^{\operatorname{an}}\) is algebraic. So being a singular point is also an algebraic condition in \(\mathfrak{H}_{g}\). Therefore, each irreducible component of \(Z\setminus Z^{\operatorname{biZar,reg}}\) is bi-algebraic. As \(Z^{\operatorname{biZar}}\) is the minimal bi-algebraic subvariety containing \(Z\) we have \(Z\cap Z^{\operatorname{biZar,reg}}\neq\emptyset\). The first claim of the lemma follows.
The second claim follows from the first one together with the identity \(\pi(Y^{\operatorname{biZar}})=\pi(Y)^{\operatorname{biZar}}\).
The third claim follows from the second one and from Lemma 2.5 with \(X=Y^{\operatorname{biZar}}\).
## 4. A Criterion for Non-degeneracy
Recall that \(\mathfrak{A}_{g}\) is a geometrically irreducible quasi-projective variety defined over a number field. Again we take this number field to be a subfield of \(\mathbb{C}\). For the rest of this section we consider all subvarieties as defined over \(\mathbb{C}\).
Let \(X\subseteq\mathfrak{A}_{g}\) be an irreducible closed subvariety. We set
\[\delta(X)=\dim X^{\operatorname{biZar}}-\dim\pi(X^{\operatorname{biZar}})\geq 0,\]
and with \(t\in\mathbb{Z}\), also
\[X^{\deg}(t)=\bigcup_{\begin{subarray}{c}Y\subseteq X\\ \delta(Y)<\dim Y+t\\ \dim Y>0\end{subarray}}Y \tag{4.1}\]
where \(Y\) ranges over positive dimensional _irreducible_ closed subvarieties of \(X\). Thus
\[X^{\deg}(t)\subseteq X^{\deg}(t+1).\]
By [1, Theorem 7.1], \(X^{\deg}(t)\) is Zariski closed in \(X\). Moreover if \(X\) is defined over some algebraically closed field \(L\subseteq\mathbb{C}\) of characteristic \(0\), then \(X^{\deg}(t)\) is also defined over \(L\); see [1, Proposition 4.2.6].
**Remark 4.1**.: _Before moving on, let us take a look at \(X^{\deg}(t)\) when \(\pi(X)\) is a point. In this case, \(X\) is contained in a fiber of \(\pi\colon\mathfrak{A}_{g}\to\mathbb{A}_{g}\), which is an abelian variety. Call this abelian variety \(A\). For each irreducible subvariety \(Y\) of \(X\), we have \(\delta(Y)=\dim Y^{\operatorname{biZar}}\geq\dim Y\). In particular, \(X^{\deg}(t)=\emptyset\) if \(t\leq 0\)._
_By [1, Proposition 5.1], any bi-algebraic subvariety of \(A\) is a coset in \(A\), i.e., a translate of an abelian subvariety of \(A\). Conversely, any coset in \(A\) is bi-algebraic. Thus \(Y^{\operatorname{biZar}}\) is the smallest coset of \(A\) containing \(Y\). Now if \(\delta(Y)<\dim Y+1\), then \(Y^{\operatorname{biZar}}=Y\). Thus \(X^{\deg}(1)\) is the union of all positive-dimensional cosets in \(A\) that are contained in \(X\). This is precisely the Ueno locus or Kawamata locus._
_For general \(t\) and \(X\) still in \(A\), the union \(X^{\deg}(t)\) was studied by Remond [1, §3] and by Bombieri, Masser, and Zannier in the multiplicative case [1, 1] under the name (\(b\)-)anomalous._
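To make Remark 4.1 concrete, here are two illustrations of our own. If \(X\subseteq A\) is a smooth curve of genus at least \(2\), then \(X\) contains no positive-dimensional coset, so \(X^{\deg}(1)=\emptyset\). If instead \(X=C\times E\subseteq A=A^{\prime}\times E\) for a curve \(C\subseteq A^{\prime}\) and an elliptic curve \(E\), then every point of \(X\) lies on a coset \(\{c\}\times E\subseteq X\), so \(X^{\deg}(1)=X\).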
We investigate necessary conditions for when \(X=X^{\deg}(t)\). As a general result we mention [1, Theorem 8.1]; the exposition here is heavily motivated by this reference. Our approach works under the assumption that \(X^{\deg}(t)\) contains a non-empty open subset of \(X^{\operatorname{an}}\).
We keep the same setup as introduced in the beginning of §3.1.
**Lemma 4.2**.: _Let \(Y\subseteq\mathfrak{A}_{g}\) be an irreducible closed subvariety such that \(Y^{\operatorname{exc}}\) is meager in \(Y\). Let \(S\) denote the regular locus of \(\pi(Y^{\operatorname{biZar}})\). Then \(\pi(Y)\cap S\neq\emptyset\) and there exists \(\varphi\in\operatorname{End}(\mathfrak{A}_{g,S}/S)\) with the following properties:_
1. _We have_ \(\dim\ker\varphi_{s}=\delta(Y)\) _for all_ \(s\in S(\mathbb{C})\)_._
2. _The fiber_ \(Y^{\operatorname{biZar}}_{s}\) _is a finite union of translates of_ \((\ker\varphi_{s})^{0}\) _for all_ \(s\in S(\mathbb{C})\)_._
3. _The abelian varieties_ \(\varphi(\mathfrak{A}_{g})_{s}\) _are pairwise isomorphic for all_ \(s\in S(\mathbb{C})\)_._
4. _If_ \(\delta(Y)=0\)_, then_ \(Y\) _is a point._
Proof.: Recall that \(\pi(Y)^{\operatorname{biZar}}\) is the smallest bi-algebraic subvariety of \(\mathbb{A}_{g}\) that contains \(\pi(Y)\) and that it equals \(\pi(Y^{\operatorname{biZar}})\).
By Lemma 3.3, \(Y^{\operatorname{biZar,exc}}\) is meager in \(Y^{\operatorname{biZar}}\) and \(\pi(Y)\cap S\neq\emptyset\).
By Proposition 3.2 applied to \(Y^{\operatorname{biZar}}\) each fiber of \(Y^{\operatorname{biZar}}\) above a complex point of \(\pi(Y^{\operatorname{biZar}})\) is a finite union of cosets of dimension \(\delta(Y)\).
We abbreviate \(\mathcal{A}=\pi^{-1}(S)\). We apply Lemma 2.3 to the abelian scheme \(\mathcal{A}/S\) and the subvariety \(Y^{\operatorname{biZar}}\cap\mathcal{A}\). Let \(\varphi\) be the endomorphism in the said lemma.
By the conclusion of Lemma 2.3(ii) we have \(\dim\ker\varphi_{s}=\dim Y^{\operatorname{biZar}}\cap\mathcal{A}-\dim\pi(Y^ {\operatorname{biZar}}\cap\mathcal{A})=\delta(Y)\) for all \(s\in S(\mathbb{C})\). Part (i) now follows. For later reference we remark that \(\varphi\) is the identity map if \(\delta(Y)=0\); see Lemma 2.3.
By Lemma 2.3(i) we have \(\dim\varphi(Y^{\operatorname{biZar}}\cap\mathcal{A})=\dim\pi(Y^{\operatorname{biZar}}\cap\mathcal{A})\). By the fiber dimension theorem, the general fiber of \(\pi|_{\varphi(Y^{\operatorname{biZar}}\cap\mathcal{A})}\colon\varphi(Y^{\operatorname{biZar}}\cap\mathcal{A})\to\pi(Y^{\operatorname{biZar}}\cap\mathcal{A})=S\) is finite. For \(s\) in a Zariski open and non-empty subset of \(S\) we have that \(\varphi(Y^{\operatorname{biZar}}\cap\mathcal{A})_{s}\) is finite. Therefore, \(Y^{\operatorname{biZar}}_{s}\) is contained in a finite union of translates of \((\ker\varphi_{s})^{0}\) for such \(s\). By dimension reasons, these \(Y^{\operatorname{biZar}}_{s}\) are a finite union of translates of \((\ker\varphi_{s})^{0}\) and \((\ker\varphi_{s})^{0}+Y^{\operatorname{biZar}}_{s}=Y^{\operatorname{biZar}}_{s}\).
Note that \((\ker\varphi)^{0}\) is smooth over \(S\) with geometrically irreducible generic fiber, as it is an abelian scheme. Moreover, \(Y^{\operatorname{biZar}}\cap\mathcal{A}\) is Zariski open in \(Y^{\operatorname{biZar}}\) and thus irreducible. It
follows from a purely topological consideration that \((\ker\varphi)^{0}\times_{S}(Y^{\operatorname{biZar}}\cap\mathcal{A})\) is irreducible. A Zariski open and non-empty subset is mapped into \(Y^{\operatorname{biZar}}\cap\mathcal{A}\) under addition. This continues to hold on all of \((\ker\varphi)^{0}\times_{S}(Y^{\operatorname{biZar}}\cap\mathcal{A})\). Thus \((\ker\varphi_{s})^{0}+Y^{\operatorname{biZar}}_{s}=Y^{\operatorname{biZar}}_{s}\). By dimension reasons \(Y^{\operatorname{biZar}}_{s}\) is a finite union of translates of \((\ker\varphi_{s})^{0}\) for all \(s\in S(\mathbb{C})\). Part (ii) follows.
For all \(s\in S(\mathbb{C})\), the image \(\varphi(\mathfrak{A}_{g,s})=\varphi(\mathfrak{A}_{g})_{s}\) is isogenous to \(\mathfrak{A}_{g,s}/(\ker\varphi_{s})^{0}\). The latter are pairwise isomorphic abelian varieties for all \(s\) by Proposition 3.2. By considering the morphism to a suitable moduli space we conclude that the \(\varphi(\mathfrak{A}_{g})_{s}\) are indeed pairwise isomorphic. We conclude (iii).
For the proof of part (iv) we assume that \(\delta(Y)=0\). As remarked above, \(\varphi\) is the identity. Therefore, \(\mathfrak{A}_{g,s}\) are pairwise isomorphic abelian varieties for \(s\in S(\mathbb{C})\). This implies that \(S\) is a point and so is \(\pi(Y)\). But then \(\pi(Y^{\operatorname{biZar}})\) is a point. As \(0=\delta(Y)=\dim Y^{\operatorname{biZar}}-\dim\pi(Y^{\operatorname{biZar}})\) we have that \(Y^{\operatorname{biZar}}\) is a point. The same holds for \(Y\) and this completes the proof of (iv).
Now we are ready to prove a necessary condition for \(X^{\deg}(t)\) being sufficiently large. The next proposition relies on the previous lemma and the Baire Category Theorem; recall that the group of endomorphisms of an abelian scheme is at most countably infinite.
**Proposition 4.3**.: _Let \(t\in\mathbb{Z}\) and let \(X\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}\). Let \(\eta\) denote the generic point of \(\pi(X)\) and let \(S\) denote the regular locus of \(\pi(X)\). We suppose_
* (a) \(X^{\deg}(t)\) _contains an open and non-empty subset of_ \(X^{\operatorname{an}}\)_,_
* (b) _and_ \(X_{\eta}\) _is not contained in a proper algebraic subgroup of_ \(\mathfrak{A}_{g,\eta}\)_._
_There exists a set \(\mathcal{Y}\) of irreducible closed positive dimensional subvarieties of \(X\) and \(\varphi\in\mathrm{End}(\mathfrak{A}_{g,S}/S)\) with the following properties for all \(Y\in\mathcal{Y}\)._
1. _We have_ \(\dim\ker\varphi_{s}=\delta(Y)\) _for all_ \(s\in S(\mathbb{C})\)_._
2. _The fiber_ \(Y^{\operatorname{biZar}}_{s}\) _is a finite union of translates of_ \((\ker\varphi_{s})^{0}\) _for all complex points_ \(s\) _of a Zariski open and dense subset of_ \(\pi(Y)\)_._
3. _The abelian varieties_ \(\varphi(\mathfrak{A}_{g})_{s}\) _are pairwise isomorphic for all complex points_ \(s\) _of a Zariski open and dense subset of_ \(\pi(Y)\)_._
4. _We have_ \(\delta(Y)<\dim Y+t\) _and_ \(\pi(Y)\cap S\neq\emptyset\)_._
5. _The set_ \(Y^{\mathrm{exc}}\) _is meager in_ \(Y\)_._
_Finally, the closure of \(\bigcup_{Y\in\mathcal{Y}}Y(\mathbb{C})\) in \(X^{\mathrm{an}}\) has non-empty interior._
Proof.: By hypothesis (b) and Lemma 2.4 applied to \(X\subseteq\mathfrak{A}_{g}\), we have that \(X^{\operatorname{exc}}\) is meager in \(X\). Thus \(X^{\operatorname{exc}}\subseteq\bigcup_{i=1}^{\infty}X_{i}(\mathbb{C})\) such that all \(X_{i}\subsetneq X\) are Zariski closed. For a similar reason and using Proposition 2.2 there exist Zariski closed \(S_{1},S_{2},\ldots\subsetneq\pi(X)\), among them \(\pi(X)\setminus S\), with \(S^{\operatorname{exc}}\subseteq\bigcup_{i=1}^{\infty}S_{i}(\mathbb{C})\).
By hypothesis the union of all \(Y\subseteq X\) with \(\dim Y>0\) and
\[\delta(Y)=\dim Y^{\operatorname{biZar}}-\dim\pi(Y^{\operatorname{biZar}})<\dim Y+t \tag{4.2}\]
contains a non-empty open subset of \(X^{\mathrm{an}}\). Let \(\mathcal{Y}\) be the collection of those \(Y\) with \(Y\not\subseteq\pi^{-1}(S_{i})\) and \(Y\not\subseteq X_{i}\) for all \(i\). There is a set \(N\subseteq X(\mathbb{C})\), meager in \(X\), such that \(N\cup\bigcup_{Y\in\mathcal{Y}}Y(\mathbb{C})\) contains a non-empty open subset of \(X^{\mathrm{an}}\).
Let \(Y\in\mathcal{Y}\) be arbitrary. In particular, \(\pi(Y)\cap S\neq\emptyset\). Set \(U_{Y}=\pi(Y)\cap\pi(Y^{\operatorname{biZar}})^{\operatorname{reg}}\cap S\); it is a Zariski open and dense subset of \(\pi(Y)\) by Lemma 3.3. Therefore, \(U_{Y}\not\subseteq S_{i}\) for all \(i\) by the choice of \(\mathcal{Y}\). The Baire Category Theorem implies \(U_{Y}(\mathbb{C})\not\subseteq\bigcup_{i=1}^{\infty}S_{i}(\mathbb{C})\), so \(U_{Y}(\mathbb{C})\not\subseteq S^{\operatorname{exc}}\).
By definition we have \(Y^{\rm exc}\subseteq X^{\rm exc}\) and so \(Y^{\rm exc}\subseteq\bigcup_{i=1}^{\infty}(Y\cap X_{i})(\mathbb{C})\). By the choice of \(\mathcal{Y}\) we conclude that \(Y^{\rm exc}\) is meager in \(Y\).
Apply Lemma 4.2 to \(Y\) and obtain \(\varphi_{Y}\), and restrict \(\varphi_{Y}\) to an endomorphism of the abelian scheme \(\pi^{-1}(U_{Y})\). Choose \(s\in U_{Y}(\mathbb{C})\setminus S^{\operatorname{exc}}\). Then \(\varphi_{Y}|_{\pi^{-1}(s)}\in\operatorname{End}(\mathfrak{A}_{g,s})\) extends to an endomorphism of \(\mathfrak{A}_{g,S}/S\). This extension is unique and it coincides with \(\varphi_{Y}\) on \(\pi^{-1}(U_{Y})\). We use \(\varphi_{Y}\) to denote this endomorphism of \(\mathfrak{A}_{g,S}/S\). Note that \(\delta(Y)=\dim\ker(\varphi_{Y})_{s}\) for all \(s\in S(\mathbb{C})\).
Recall that \(N\cup\bigcup_{Y\in\mathcal{Y}}Y^{\rm an}\) contains a non-empty open subset of \(X^{\rm an}\). We rearrange this union and conclude that the said open subset lies in \(N\cup\bigcup_{\varphi\in\operatorname{End}(\mathfrak{A}_{g,S}/S)}\overline{D _{\varphi}}\) where \(D_{\varphi}=\bigcup_{Y\in\mathcal{Y}:\varphi_{Y}=\varphi}Y(\mathbb{C})\) and \(\overline{D_{\varphi}}\) denotes the topological closure in \(X^{\rm an}\).
By the Baire Category Theorem there is \(\varphi\in\operatorname{End}(\mathfrak{A}_{g,S}/S)\) such that \(\overline{D_{\varphi}}\) has non-empty interior in \(X^{\rm an}\). In particular, \(D_{\varphi}\) is Zariski dense in \(X\).
We claim that the proposition follows with \(\mathcal{Y}\) replaced by \(\{Y\in\mathcal{Y}:\varphi_{Y}=\varphi\}\). Indeed, properties (i), (ii), and (iii) follow from the corresponding properties of Lemma 4.2, and (iv) and (v) follow from the choice of \(\mathcal{Y}\).
**Remark 4.4**.: _The case \(t=0\) is closely linked to large fibers of the Betti map; see [1, §3] for a definition of the Betti map. The Betti map is real analytic and defined locally on \(X^{\operatorname{reg,an}}\). Suppose that the generic rank of the differential is strictly less than \(2\dim X\). This is the case if \(X\) fails to be non-degenerate in the sense of [1, Definition 1.5]. Then there is a non-empty open subset of \(X^{\operatorname{an}}\) on which the rank is pointwise strictly less than \(2\dim X\). Using the first-named author's Ax-Schanuel Theorem [1] for \(\mathfrak{A}_{g}\) one can recover that \(X^{\deg}(0)\) contains a non-empty open subset of \(X^{\operatorname{an}}\). So the hypothesis (a) for \(t=0\) in Proposition 4.3 is satisfied. See also [1, Theorem 1.7] for an equivalence._
## 5. The Zeroth Degeneracy Locus in a Fiber Power
We keep the notation from §4 and consider all subvarieties as defined over \(\mathbb{C}\). We study ramifications of Proposition 4.3 in the case \(t=0\) for the \(m\)-fold fiber power \(\mathfrak{A}_{g}^{[m]}\) of \(\pi\colon\mathfrak{A}_{g}\to\mathbb{A}_{g}\), here \(m\in\mathbb{N}\). There is a natural morphism \(\mathfrak{A}_{g}^{[m]}\to\mathfrak{A}_{mg}\) which is the base change of the modular map \(\mathbb{A}_{g}\to\mathbb{A}_{mg}\) that attaches to an abelian variety its \(m\)-th power, compatible with the principal polarization and level structure. It can be shown that \(\mathbb{A}_{g}\to\mathbb{A}_{mg}\) is a closed immersion. So \(\mathfrak{A}_{g}^{[m]}\to\mathfrak{A}_{mg}\) is a closed immersion. We will treat \(\mathfrak{A}_{g}^{[m]}\) as a closed subvariety of \(\mathfrak{A}_{mg}\).
By abuse of notation let \(\pi\colon\mathfrak{A}_{mg}\to\mathbb{A}_{mg}\) denote the structure morphism.
Let \(X\) be a Zariski closed subset of an abelian variety \(A\) defined over \(\mathbb{C}\). The stabilizer \(\operatorname{Stab}(X)\) of \(X\) is the algebraic group determined by \(\{P\in A(\mathbb{C}):P+X=X\}\).
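For instance (a routine verification, recorded for orientation): if \(X=P+B\) is the translate of an abelian subvariety \(B\subseteq A\) by a point \(P\in A(\mathbb{C})\), then \(Q+X=X\) if and only if \(Q+B=B\), i.e., \(\operatorname{Stab}(X)=B\). In particular, for such \(X\) we have \(\operatorname{Stab}(X)=A\) if and only if \(X=A\).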
**Theorem 5.1**.: _Let \(X\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}^{[m]}\). Consider \(X\subseteq\mathfrak{A}_{mg}\) and let \(\eta\) denote the generic point of \(\pi(X)\subseteq\mathbb{A}_{mg}\). We suppose_
* \(X^{\rm deg}(0)\) _contains an open and non-empty subset of_ \(X^{\rm an}\)_,_
* \(X_{\eta}\) _is not contained in a proper algebraic subgroup of_ \(\mathfrak{A}_{mg,\eta}\)_,_
* _and_ \[\dim X\leq 2m. \tag{5.1}\]
_Then the following hold true._
1. _There exists a Zariski open and dense subset_ \(U\subseteq\pi(X)\) _such that for all_ \(s\in U(\mathbb{C})\) _the stabilizer_ \(\operatorname{Stab}(X_{s})\) _has dimension at least_ \(m\)_._
2. _There is a Zariski dense subset_ \(D\subseteq\pi(X)(\mathbb{C})\) _such that for all_ \(s\in D\) _the stabilizer_ \(\operatorname{Stab}(X_{s})\) _contains_ \(E^{m}\) _where_ \(E\subseteq\mathfrak{A}_{g,s}\) _is an elliptic curve._
Proof.: We apply Proposition 4.3 to \(X\subseteq\mathfrak{A}_{mg}\) in the case \(t=0\) and obtain \(\mathcal{Y}\) and \(\varphi\). We write \(S\) for the regular locus of \(\pi(X)\subseteq\mathbb{A}_{mg}\) and \(\mathcal{B}\) for the abelian scheme \(\varphi(\mathfrak{A}_{g,S}^{[m]})\) over \(S\), see Lemma 2.1.
Let \(Y\in\mathcal{Y}\). Note that \(\delta=\delta(Y)\geq 0\) is independent of \(Y\) by Proposition 4.3(i).
The generic fiber of \(\mathcal{B}_{\pi^{-1}(Y)}\to\pi(Y)\) is an abelian variety \(B\) defined over the function field of \(\pi(Y)\). By Proposition 4.3(iii) there is a finite extension \(L\) of the function field of \(\pi(Y)\), such that the base change \(B_{L}\) is a constant abelian variety over \(L\). We have \(\dim B_{L}=mg-\delta\). Let \(A_{L}\) denote the base change of the generic fiber of \(\mathfrak{A}_{g,\pi^{-1}(Y)}\to\pi(Y)\). Then \(B_{L}\) is a quotient of \(A_{L}^{m}\). Thus \(A_{L}^{m}\to B_{L}\) factors through \(\operatorname{Im}_{L/\mathbb{C}}(A_{L}^{m})_{L}\) where \(\operatorname{Im}_{L/\mathbb{C}}(\cdot)\) denotes the \(L/\mathbb{C}\)-image of an abelian variety defined over \(L\), see [11] for a definition and properties. Since \(A_{L}^{m}\to B_{L}\) is surjective we have \(\dim B_{L}\leq\dim\operatorname{Im}_{L/\mathbb{C}}(A_{L}^{m})=m\dim \operatorname{Im}_{L/\mathbb{C}}(A_{L})\). By (4.2) with \(t=0\) we find
\[mg-\dim Y<mg-\delta=\dim B_{L}\leq m\dim\operatorname{Im}_{L/\mathbb{C}}(A_{L}). \tag{5.2}\]
As \(Y\subseteq X\) we have \(\dim Y\leq\dim X\). The hypothesis \(\dim X\leq 2m\) combined with (5.2) yields \(m(g-2)<m\dim\operatorname{Im}_{L/\mathbb{C}}(A_{L})\). We cancel \(m\) and obtain
\[\dim\operatorname{Im}_{L/\mathbb{C}}(A_{L})\geq g-1.\]
If \(\pi(Y)\) is a point, then so is \(\pi(Y^{\operatorname{biZar}})=\pi(Y)^{\operatorname{biZar}}\). Again (4.2) with \(t=0\) implies \(\dim Y^{\operatorname{biZar}}<\dim Y\) which contradicts \(Y\subseteq Y^{\operatorname{biZar}}\). So
\[\dim\pi(Y)\geq 1.\]
From this we conclude \(\dim\operatorname{Im}_{L/\mathbb{C}}(A_{L})<g\) as otherwise general fibers of \(\mathfrak{A}_{g}\) above \(\pi(Y)\) would be pairwise isomorphic abelian varieties. Thus
\[\dim\operatorname{Im}_{L/\mathbb{C}}(A_{L})=g-1.\]
The canonical morphism \(A_{L}\to\operatorname{Im}_{L/\mathbb{C}}(A_{L})_{L}\) is surjective with connected kernel \(E\) as we are in characteristic \(0\). Here \(E\) is an elliptic curve and \(\operatorname{Im}_{L/\mathbb{C}}(E)=0\). Recall that \(\varphi\) induces a homomorphism \(\varphi_{L}\colon A_{L}^{m}\to B_{L}\) and \(B_{L}\) is a constant abelian variety. The composition \(E^{m}\to A_{L}^{m}\xrightarrow{\varphi_{L}}B_{L}\) factors through \(E^{m}\to\operatorname{Im}_{L/\mathbb{C}}(E^{m})_{L}=\operatorname{Im}_{L/ \mathbb{C}}(E)_{L}^{m}=0\). Therefore, \(E^{m}\) lies in the kernel of \(\varphi_{L}\). In particular,
\[\delta=\dim\ker\varphi_{L}\geq m. \tag{5.3}\]
We fix an irreducible variety \(W\) with function field \(L\) and a quasi-finite dominant morphism \(W\to\pi(Y)\). After replacing \(W\) by a Zariski open subset we can spread \(A_{L}\) and \(E\) out to abelian schemes \(\mathcal{A}\) and \(\mathcal{E}\) over \(W\), respectively. The \(j\)-invariant of \(\mathcal{E}/W\) is a morphism \(W\to\mathbb{A}^{1}\). If \(\dim W>1\), then there is an irreducible curve \(W^{\prime}\subseteq W\) on which \(j\) is constant. All elliptic curves above \(W^{\prime}(\mathbb{C})\) are isomorphic over \(\mathbb{C}\). But then infinitely many fibers of \(\mathfrak{A}_{g}\) above points of \(\pi(Y)(\mathbb{C})\) are pairwise isomorphic. This is impossible and so we have \(\dim W\leq 1\). But \(\dim W=\dim\pi(Y)\geq 1\), hence
\[\dim\pi(Y)=1. \tag{5.4}\]
Recall that \(\varphi\) is defined above all but finitely many points of the curve \(\pi(Y)\). Recall also that \(\ker\varphi\) contains the \(m\)-th power of an elliptic curve on the generic fiber. So \(\ker\varphi_{s}\) contains the \(m\)-th power of an elliptic curve in \(\mathfrak{A}_{g,s}\) for all but finitely many \(s\in\pi(Y)(\mathbb{C})\).
We draw the following conclusion from Proposition 4.3(i) and (ii) for a Zariski open and dense \(U_{Y}\subseteq\pi(Y)\). If \(s\in U_{Y}(\mathbb{C})\), then \(Y_{s}\) is contained in a finite union of translates of \((\ker\varphi_{s})^{0}\). The latter is an algebraic group of dimension \(\delta\). Let \(P\in\pi|_{Y}^{-1}(U_{Y})(\mathbb{C})\) with \(\pi(P)=s\). Any irreducible component \(C\) of \(Y_{s}\) containing \(P\) has dimension at least \(\dim Y-\dim\pi(Y)\). So \(\dim C\geq\dim Y-1\geq\delta\) by (5.4) and (4.2) with \(t=0\). But \(C\subseteq Y_{s}\), so \(C\) is contained in a translate of \((\ker\varphi_{s})^{0}\). Thus \(C=P+(\ker\varphi)_{s}^{0}\subseteq X\). We conclude
\[\dim_{P}\varphi|_{X\cap\mathfrak{A}_{g,S}^{[m]}}^{-1}(\varphi(P))\geq\delta \quad\text{for all}\quad P\in\pi|_{Y}^{-1}(U_{Y})(\mathbb{C})\text{ and all }Y\in\mathcal{Y}. \tag{5.5}\]
By possibly removing finitely many points from \(U_{Y}\) we may arrange that \((\ker\varphi_{\pi(P)})^{0}\) contains the \(m\)-th power of an elliptic curve in \(\mathfrak{A}_{g,\pi(P)}\) for all \(P\in\pi|_{Y}^{-1}(U_{Y})(\mathbb{C})\).
We write \(D=\bigcup_{Y\in\mathcal{Y}}\pi|_{Y}^{-1}(U_{Y}(\mathbb{C}))\). The closure of \(D\) in \(X^{\mathrm{an}}\) equals the closure of \(\bigcup_{Y\in\mathcal{Y}}Y(\mathbb{C})\) in \(X^{\mathrm{an}}\). Indeed, this requires some point-set topology and the fact that \(\pi|_{Y}^{-1}(U_{Y}(\mathbb{C}))\) lies dense in \(Y^{\mathrm{an}}\). In particular, \(D\) is Zariski dense in \(X\) by Proposition 4.3.
So (5.5) holds on the Zariski dense subset \(D\) of \(X\). Therefore, the dimension inequality holds for all \(P\in(X\cap\mathfrak{A}_{g,S}^{[m]})(\mathbb{C})\) by the semi-continuity theorem on fiber dimensions.
Each fiber of \(\varphi\colon\mathfrak{A}_{g,S}^{[m]}\to\varphi(\mathfrak{A}_{g,S}^{[m]})\) is the translate of some \((\ker\varphi_{\pi(P)})^{0}\), which has dimension \(\delta\). We conclude that if \(P\in(X\cap\mathfrak{A}_{g,S}^{[m]})(\mathbb{C})\), then \(\varphi|_{X}^{-1}(\varphi(P))\) contains \(P+(\ker\varphi_{\pi(P)})^{0}\) as an irreducible component. So \((\ker\varphi_{\pi(P)})^{0}\) lies in the stabilizer of \(X_{\pi(P)}\).
The first claim of the theorem follows from (5.3) with \(U=\pi(X\cap\mathfrak{A}_{g,S}^{[m]})=S\).
The second claim follows as \(\ker\varphi_{s}\) contains the \(m\)-th power of an elliptic curve for all \(s\) in \(\pi(D)\), which is Zariski dense in \(\pi(X)\).
For an abelian variety \(A\) and \(m\in\mathbb{N}\) we define \(D_{m}\colon A^{m+1}\to A^{m}\) to be the Faltings-Zhang morphism determined by \(D_{m}(P_{0},\ldots,P_{m})=(P_{1}-P_{0},\ldots,P_{m}-P_{0})\).
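The only structural property of \(D_{m}\) used below is that it collapses simultaneous translations, i.e., it factors through the quotient of \(A^{m+1}\) by the diagonal. A toy numerical sketch of ours illustrates this invariance, modeling an elliptic curve over \(\mathbb{C}\) as the complex torus \(\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})\) with an arbitrary \(\tau\); all names and parameter values here are purely illustrative.

```python
import numpy as np

# Toy model of an elliptic curve over C: the complex torus C/(Z + tau*Z).
# Points are complex numbers and the group law is addition modulo the lattice.
tau = 0.3 + 1.1j  # arbitrary lattice parameter with Im(tau) > 0

def reduce_mod_lattice(z):
    """Map z in C to the fundamental domain of the lattice Z + tau*Z."""
    b = z.imag / tau.imag
    a = z.real - b * tau.real
    return (a % 1.0) + (b % 1.0) * tau

def lattice_close(z, w, tol=1e-9):
    """True if z and w differ by a lattice vector, i.e. are the same point."""
    d = z - w
    b = d.imag / tau.imag
    a = d.real - b * tau.real
    return abs(a - round(a)) < tol and abs(b - round(b)) < tol

def D(points):
    """Faltings-Zhang morphism D_m(P0,...,Pm) = (P1 - P0,...,Pm - P0)."""
    return [reduce_mod_lattice(p - points[0]) for p in points[1:]]

rng = np.random.default_rng(0)
pts = [reduce_mod_lattice(rng.random() + rng.random() * tau) for _ in range(4)]
shift = reduce_mod_lattice(0.7 + 0.2 * tau)

# D_m collapses simultaneous translations: it factors through the quotient of
# A^{m+1} by the diagonal, so only differences of points on the curve matter.
shifted = [reduce_mod_lattice(p + shift) for p in pts]
print(all(lattice_close(x, y) for x, y in zip(D(pts), D(shifted))))  # True
```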
**Lemma 5.2**.: _Let \(A\) be an abelian variety defined over \(\mathbb{C}\) and let \(C\subseteq A\) be an irreducible closed subvariety of dimension \(1\). Suppose \(m\geq 2\) and let \(X=D_{m}(C^{m+1})\subseteq A^{m}\). If \(B\) is an abelian subvariety of \(A\) with \(B^{m}\subseteq\operatorname{Stab}(X)\), then \(B=0\) or \(C\) is a translate of \(B\)._
Proof.: Let \(\varphi\colon A\to A/B\) denote the quotient homomorphism and \(\varphi^{m}\colon A^{m}\to A^{m}/B^{m}=(A/B)^{m}\) its \(m\)-th power. We set \(Z=\varphi^{m}(X)\). Then \(\dim Z\leq\dim X-m\dim B\) as \(B^{m}\) is in the stabilizer of \(X\). We have \(\dim X\leq\dim C^{m+1}=m+1\). So
\[0\leq\dim Z\leq\dim X-m\dim B\leq m+1-m\dim B. \tag{5.6}\]
This implies \(\dim B\leq 1+1/m<2\) as \(m\geq 2\).
Let us suppose \(B\neq 0\), then \(\dim B=1\). Hence \(\dim Z\leq 1\) by (5.6). We have
\[(\varphi(P_{1}-P_{0}),\ldots,\varphi(P_{m}-P_{0}))=\varphi^{m}(P_{1}-P_{0}, \ldots,P_{m}-P_{0})\in Z(\mathbb{C})\]
for all \(P_{0},\ldots,P_{m}\in C(\mathbb{C})\). We fix \(P_{0}\) and let \(P_{1},\ldots,P_{m}\) vary. As \(\dim Z\leq 1\) and \(m\geq 2\) it follows that \(\varphi\) is constant on the curve \(C\). For dimension reasons we conclude that \(C\) equals a translate of \(B\).
**Corollary 5.3**.: _Let \(g\geq 2,m\geq 2,\) and let \(X\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}^{[m]}\) with \(\dim\pi(X)\leq m-1\). We suppose that for all complex points \(s\) of a Zariski open
_and dense subset of \(\pi(X)\), the fiber \(X_{s}\) is of the form \(D_{m}(C^{m+1})\) where \(C\subseteq\mathfrak{A}_{g,s}\) is not contained in the translate of a proper algebraic subgroup of \(\mathfrak{A}_{g,s}\). Then \(X^{\mathrm{deg}}(0)\) does not contain a non-empty open subset of \(X^{\mathrm{an}}\). Moreover, the generic Betti rank on \(X\) is \(2\dim X\) and \(X\) is non-degenerate in the sense of [1, Definition 1.5]._
Proof.: Let \(X_{s}=D_{m}(C^{m+1})\) with \(C\) as in the hypothesis. In particular, \(C\) is not equal to the translate of an abelian subvariety of \(\mathfrak{A}_{g,s}\). By Lemma 5.2, \(\mathrm{Stab}(X_{s})\) does not contain the \(m\)-th power of a non-zero abelian subvariety of \(\mathfrak{A}_{g,s}\). So conclusion (ii) of Theorem 5.1 cannot hold.
Moreover, \(X_{s}\) is not contained in a proper algebraic subgroup of \(\mathfrak{A}_{g,s}^{m}\) and this remains true for the generic point of \(\pi(X)\). Moreover, \(\dim X\leq\dim\pi(X)+m+1\leq(m-1)+m+1=2m\). So hypotheses (b) and (c) of Theorem 5.1 hold. Therefore, hypothesis (a) cannot hold. This is the first claim of the corollary. The second claim follows from Remark 4.4.
## 6. The First Degeneracy Locus and the Relative Manin-Mumford Conjecture
In this section we provide an exposition of the proof of Proposition 11.2 of the first-named author's work [1]. We proceed slightly differently and concentrate our efforts on subvarieties of the universal family of principally polarized abelian varieties with suitable level structure.
We keep the notation of §3.1 with an important additional restriction. Let \(g\geq 1\) be an integer and equip \(\mathbb{A}_{g}\) with suitable level structure. Let \(\pi\colon\mathfrak{A}_{g}\to\mathbb{A}_{g}\) denote the universal family. In this section we consider \(\mathfrak{A}_{g}\) and \(\mathbb{A}_{g}\) as irreducible quasi-projective varieties defined over \(\overline{\mathbb{Q}}\), the algebraic closure of \(\mathbb{Q}\) in \(\mathbb{C}\).
The set of torsion points \(\mathfrak{A}_{g,\mathrm{tors}}\) is \(\bigcup_{s\in\mathbb{A}_{g}(\mathbb{C})}\{P\in\mathfrak{A}_{g,s}(\mathbb{C}):P\text{ has finite order}\}\).
We consider here a variant of the Relative Manin-Mumford Conjecture, inspired by S. Zhang [10] and formulated in work of Pink [19] as well as Bombieri-Masser-Zannier [1]. We also refer to Zannier's book [10] for a formulation. In contrast to the general case, we retain \(\mathfrak{A}_{g}\) as an ambient group scheme and work only with varieties defined over \(\overline{\mathbb{Q}}\).
The following conjecture depends on the dimension parameter \(g\in\mathbb{N}\).
**Conjecture RelMM(\(g\))**.: _Let \(X\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}\) defined over \(\overline{\mathbb{Q}}\) and let \(\eta\in\pi(X)\) denote the generic point. We assume that \(\dim X<g\) and that \(X_{\eta}\) is not contained in a proper algebraic subgroup of \(\mathfrak{A}_{g,\eta}\). Then \(X(\overline{\mathbb{Q}})\cap\mathfrak{A}_{g,\mathrm{tors}}\) is not Zariski dense in \(X\)._
For curves, this conjecture is known, even over \(\mathbb{C}\), thanks to work of Masser-Zannier and Corvaja-Masser-Zannier [10, 11, 12, 13, 14, 15, 16]. Stoll [15] proved an explicit case. For surfaces some results are due to the first-named author [1] and Corvaja-Tsimerman-Zannier [11].
The goal of this section is to prove Conjecture RelMM(\(g\)) for all \(g\) conditional on the following conjecture. Below, a subscript \(\mathbb{C}\) indicates base change to \(\mathbb{C}\).
**Conjecture 6.1**.: _Let \(g\in\mathbb{N}\), let \(X\) be an irreducible closed subvariety of \(\mathfrak{A}_{g}\) defined over \(\overline{\mathbb{Q}}\). If \(\dim X>0\) and \(X(\overline{\mathbb{Q}})\cap\mathfrak{A}_{g,\mathrm{tors}}\) is Zariski dense in \(X\), then \(X_{\mathbb{C}}^{\mathrm{deg}}(1)\) is Zariski dense in \(X\)._
The goal will be achieved by induction on \(g\). The induction step, which is conditional, is the following theorem.
**Theorem 6.2**.: _Suppose Conjecture 6.1 holds. Let \(g\geq 2\) be an integer and suppose Conjecture RelMM(\(g^{\prime}\)) holds for all \(g^{\prime}\in\{1,\ldots,g-1\}\). Then Conjecture RelMM(\(g\)) holds._
Proof.: Let \(X\subseteq\mathfrak{A}_{g}\) be an irreducible closed subvariety defined over \(\overline{\mathbb{Q}}\) satisfying the hypothesis of Conjecture RelMM(\(g\)). Let \(\eta\) denote the generic point of \(\pi(X)\). We assume \(X(\overline{\mathbb{Q}})\cap\mathfrak{A}_{g,\text{tors}}\) is Zariski dense in \(X\) and will derive a contradiction.
We observe \(\dim X>0\). So Conjecture 6.1 implies that \(X^{\deg}_{\mathbb{C}}(1)\) is Zariski dense in \(X\). (Note that \(X\) satisfies \(\dim X<g\) and a condition on \(X_{\eta}\), but the two are not required to invoke Conjecture 6.1). By [1, Theorem 7.1], \(X^{\deg}_{\mathbb{C}}(1)\) is Zariski closed in \(X\). Thus \(X_{\mathbb{C}}=X^{\deg}_{\mathbb{C}}(1)\).
By Lemma 2.4 and since \(X_{\eta}\) is not contained in a proper algebraic subgroup of \(\mathfrak{A}_{g,\eta}\) we conclude that \(X^{\text{exc}}\) is meager in \(X\).
We may apply Proposition 4.3 to \(X_{\mathbb{C}}\subseteq\mathfrak{A}_{g,\mathbb{C}}\) in the case \(t=1\), both hypotheses (a) and (b) are met as \(X\) is as in Conjecture RelMM(\(g\)). We obtain \(\mathcal{Y}\) and \(\varphi\in\operatorname{End}(\mathfrak{A}_{g,S}/S)\) as in the proposition with \(S\) the regular locus of \(\pi(X)\). (We note that \(\varphi\) is defined over \(\overline{\mathbb{Q}}\).) Let \(\delta\) denote the common value of \(\delta(Y)\) for \(Y\in\mathcal{Y}\).
Observe that \(\bigcup_{Y\in\mathcal{Y}}Y\cap\mathfrak{A}_{g,S,\mathbb{C}}\) lies Zariski dense in \(X_{\mathbb{C}}\). It is harmless to assume that the dimensions \(\dim Y\) are equal for all \(Y\in\mathcal{Y}\); the same can be assumed for \(\dim\pi(Y)\).
Let \(\mathcal{B}=\varphi(\mathfrak{A}_{g,S})\), which is an abelian scheme over \(S\) of relative dimension \(g^{\prime}\), say.
For each \(Y\in\mathcal{Y}\) the exceptional locus \(Y^{\text{exc}}\) is meager in \(Y\) by Proposition 4.3(v). So \(Y^{\text{biZar,exc}}\) is meager in \(Y^{\text{biZar}}\) by Lemma 3.3. Proposition 3.2(i) applied to \(Y^{\text{biZar}}\) implies that each \(Y^{\text{biZar}}_{s}\) is a finite union of \(\operatorname{unif}(\{\tau\}\times W)\), for \(s=\operatorname{unif}(\tau)\); here \(W\subseteq\mathbb{R}^{2g}\) is a linear subspace defined over \(\mathbb{Q}\) with \(\dim W=2\delta\) and independent of \(s\). Recall that if \(s\in S(\mathbb{C})\), then \(Y^{\text{biZar}}_{s}\) is a finite union of translates of \((\ker\varphi_{s})^{0}\), see Proposition 4.3(ii).
For each \(s\in S(\mathbb{C})\), the endomorphism \(\varphi_{s}\) lifts to a linear map \(\mathbb{R}^{2g}\to\mathbb{R}^{2g}\) mapping \(\mathbb{Z}^{2g}\) to itself. The lift is independent of \(s\) and necessarily vanishes on \(W\). Therefore, \(\varphi(Y^{\text{an}}\cap\mathfrak{A}^{\text{an}}_{g,S})\) is the image of \(\operatorname{unif}(\tilde{Y}\times\{\text{finite set}\})\) where \(\operatorname{unif}(\tilde{Y})=\pi(Y)\cap S(\mathbb{C})\).
The abelian scheme \(\mathcal{B}/S\) may not be principally polarized. But its geometric generic fiber is isogenous to a principally polarized abelian variety. We fix an étale morphism \(S_{\text{pp}}\to S\) with base change \(\mathcal{B}^{\prime}=\mathcal{B}\times_{S}S_{\text{pp}}\) as in the diagram (6.1) below. After spreading out and possibly replacing \(S_{\text{pp}}\) by a Zariski open and dense subset we may arrange that \(\mathcal{B}^{\prime}\to\mathcal{B}_{\text{pp}}\) is fiberwise an isogeny over \(S_{\text{pp}}\) and \(\mathcal{B}_{\text{pp}}\) is principally polarized. But now the level structure from \(\mathfrak{A}_{g,S}\) may have been lost under the isogeny. To remedy this we fix yet another étale morphism \(S_{\text{pp,ls}}\to S_{\text{pp}}\) and do a base change to add suitable torsion sections to \(\mathcal{B}_{\text{pp}}\) and ultimately obtain suitable level structure. This does not affect the principal polarization. Thus we get a principally polarized abelian scheme \(\mathcal{B}_{\text{pp,ls}}/S_{\text{pp,ls}}\) with suitable level structure. Its relative dimension equals the relative dimension of \(\mathcal{B}/S\). We obtain a Cartesian diagram into the corresponding fine moduli space as in the right
of the following commutative diagram
(6.1) [commutative diagram: base changes \(S\leftarrow S_{\mathrm{pp}}\leftarrow S_{\mathrm{pp,ls}}\) with the induced correspondences \(\mathcal{B}\leftarrow\mathcal{B}^{\prime}\rightarrow\mathcal{B}_{\mathrm{pp}}\leftarrow\mathcal{B}_{\mathrm{pp,ls}}\rightarrow\mathfrak{A}_{g^{\prime}}\)]
We chase \(\varphi(X\cap\mathfrak{A}_{g,S})\subseteq\mathcal{B}\) through the correspondences \(\mathcal{B}\leftarrow\mathcal{B}^{\prime}\rightarrow\mathcal{B}_{\mathrm{pp}}\) and \(\mathcal{B}_{\mathrm{pp}}\leftarrow\mathcal{B}_{\mathrm{pp,ls}}\rightarrow\mathfrak{A}_{g^{\prime}}\) by taking preimages and images and fix an irreducible component \(X^{\prime}\) of the Zariski closure of the image inside \(\mathfrak{A}_{g^{\prime}}\). Consider \(Y\in\mathcal{Y}\) and chase \(\varphi(Y\cap\mathfrak{A}_{g,S,\mathbb{C}})\subseteq\mathcal{B}\) through the diagram as just described. Recall that \(\varphi(Y(\mathbb{C})\cap\mathfrak{A}_{g,S}(\mathbb{C}))\) is the image of \(\mathrm{unif}(\tilde{Y}\times\{\text{finite set}\})\). Locally in the Euclidean topology on the base, our abelian schemes are trivializable in the real analytic category. Moreover, all abelian varieties of \(\mathcal{B}\) above \(S(\mathbb{C})\cap\pi(Y(\mathbb{C}))\) are isomorphic by Proposition 3.2(ii). So \(\varphi(Y(\mathbb{C})\cap\mathfrak{A}_{g,S}(\mathbb{C}))\) ends up as a finite set in \(\mathfrak{A}_{g^{\prime}}\). Thus applying both correspondences produces fibers of dimension at least \(\dim\varphi(Y\cap\mathfrak{A}_{g,S,\mathbb{C}})=\dim\pi(Y)\). Hence \(\dim X^{\prime}\leq\dim\varphi(X\cap\mathfrak{A}_{g,S,\mathbb{C}})-\dim\pi(Y)\) by analysis of the fibers of the two correspondences.
Note \(\dim\varphi(X\cap\mathfrak{A}_{g,S})\leq\dim X-(\dim Y-\dim\pi(Y))\) because all fibers of \(Y\rightarrow\pi(Y)\) have finite image under \(\varphi\). We find \(\dim X^{\prime}\leq\dim X-\dim Y<g-\dim Y\), having used \(\dim X<g\).
The relative dimension of \(\mathcal{B}/S\) is \(g^{\prime}=g-\delta\). As all elements in \(\mathcal{Y}\) have positive dimension, Lemma 4.2(iv) applied to an element in \(\mathcal{Y}\) implies \(\delta\geq 1\). Therefore, \(g^{\prime}\leq g-1\). We have further \(\delta\leq\dim Y\) by Proposition 4.3(iv) with \(t=1\). We conclude \(\dim X^{\prime}<g-\dim Y\leq g-\delta=g^{\prime}\). In particular, \(g^{\prime}\geq 1\).
Chasing the Zariski dense set of torsion points in \(X(\overline{\mathbb{Q}})\cap\mathfrak{A}_{g,\mathrm{tors}}\) through the diagram shows that the torsion points in \(X^{\prime}(\overline{\mathbb{Q}})\) are Zariski dense in \(X^{\prime}\). The generic fiber of \(\varphi(X\cap\mathfrak{A}_{g,S})\to S\) is not contained in a proper algebraic subgroup of the generic fiber of \(\mathcal{B}\to S\) by the hypothesis on \(X\) in \(\mathrm{RelMM}(g)\). This implies that the generic fiber of \(X^{\prime}\rightarrow\pi(X^{\prime})\) is not contained in a proper algebraic subgroup of the generic fiber of \(\mathfrak{A}_{g^{\prime},\pi(X^{\prime})}\rightarrow\pi(X^{\prime})\).
Recall that \(g^{\prime}\in\{1,\ldots,g-1\}\) and so \(\mathrm{RelMM}(g^{\prime})\) holds by hypothesis. But then the properties of \(X^{\prime}\) contradict the conclusion of \(\mathrm{RelMM}(g^{\prime})\).
**Corollary 6.3**.: _Conjecture 6.1 implies Conjecture RelMM(\(g\)) for all \(g\in\mathbb{N}\)._
Proof.: By Theorem 6.2 it suffices to prove \(\mathrm{RelMM}(1)\). The condition on \(\dim X\) in Conjecture \(\mathrm{RelMM}(1)\) implies that \(X\) is a point. The condition on \(X_{\eta}\) and \(g=1\) imply that \(X\) is not a torsion point. So Conjecture \(\mathrm{RelMM}(1)\) holds true.
|
2304.01631 | Topological superconductivity in doped magnetic moiré semiconductors | We show that topological superconductivity may emerge upon doping of
transition metal dichalcogenide heterobilayers above an integer-filling
magnetic state of the topmost valence moir\'e band. The effective attraction
between charge carriers is generated by an electric p-wave Feshbach resonance
arising from interlayer excitonic physics and has a tunable strength, which may
be large. Together with the low moir\'e carrier densities reachable by gating,
this robust attraction enables access to the long-sought p-wave BEC-BCS
transition. The topological protection arises from an emergent time reversal
symmetry occurring when the magnetic order and long wavelength magnetic
fluctuations do not couple different valleys. The resulting topological
superconductor features helical Majorana edge modes, leading to half-integer
quantized spin-thermal Hall conductivity and to charge currents induced by
circularly polarized light or other time-reversal symmetry-breaking fields. | Valentin Crépel, Daniele Guerci, Jennifer Cano, J. H. Pixley, Andrew Millis | 2023-04-04T08:43:42Z | http://arxiv.org/abs/2304.01631v1 | # Topological superconductivity in doped magnetic moire semiconductors
###### Abstract
We show that topological superconductivity may emerge upon doping of transition metal dichalcogenide heterobilayers above an integer-filling magnetic state of the topmost valence moire band. The effective attraction between charge carriers is generated by an electric \(p\)-wave Feshbach resonance arising from interlayer excitonic physics and has a tunable strength, which may be large. Together with the low moire carrier densities reachable by gating, this robust attraction enables access to the long-sought \(p\)-wave BEC-BCS transition. The topological protection arises from an emergent time reversal symmetry occurring when the magnetic order and long wavelength magnetic fluctuations do not couple different valleys. The resulting topological superconductor features helical Majorana edge modes, leading to half-integer quantized spin-thermal Hall conductivity and to charge currents induced by circularly polarized light or other time-reversal symmetry-breaking fields.
Footnote †: These authors contributed equally.
Introduction--Topological \(p\)-wave superconductors have been intensively sought after, in part because they host Majorana boundary modes [1; 2; 3; 4], a key ingredient for the realization of topological quantum computations [5]. Despite intensive efforts [6; 7], topological superconductivity (TS) remains elusive, and material candidates are largely limited to fine-tuned non-stoichiometric compounds [8; 9; 10; 11; 12; 13] that inevitably suffer from defects. The recent advent of gate-tunable moire heterostructures allows one to bypass this unfavorable condition, and at the same time offers a unique context for intertwined topology and superconductivity [14; 15; 16; 17; 18; 19; 20; 21].
Here, we propose a clear route towards the realization of helical \(p\)-wave superconductivity in transition metal dichalcogenide (TMD) moire heterobilayers. This TS displays a valley-chirality locked \(p\pm ip\) order protected by an emergent \(\mathbb{Z}_{2}\) time-reversal symmetry. The TS arises in the weakly doped layer-transfer regime, where one layer contains one hole per moire unit cell, forming a Mott insulator with a large gap to in-layer charge excitations, and an additional density \(x\ll 1\) of carriers is added to the other layer, forming a dilute Fermi liquid. This regime has recently been reached in MoTe\({}_{2}\)/WSe\({}_{2}\) heterobilayers [22], where coexistence of local moments and itinerant carriers was reported [23; 24; 25].
For the physics addressed in this letter, the crucial feature of the TMD moire bilayer is its strong interlayer Coulomb interaction, which leads to a remarkable range of (charged) interlayer excitons that strongly couple to mobile carriers. For example, full experimental control over electron-exciton scattering in TMD bilayers was recently demonstrated through interlayer trion dressing [26], whose electric-field-dependent energy enabled scanning across a Feshbach resonance for this composite system [27; 28]. In our theory, the crucial role is played by a low-lying charge-2e interlayer exciton (quaternion), also recently probed by spectroscopy in TMD bilayers [29]. This low-lying virtual state's contribution to electron scattering may be described in terms of an effective \(p\)-wave scattering length \(a_{p}\) between doped charges that changes sign under electrostatic gating (see Fig. 1a). Proximity to this Feshbach resonance provides the strong interaction necessary for the emergence of robust superconductivity. Independent of superconductivity, it is important to note that this \(p\)-wave electric Feshbach resonance is a solid-state realization of a phenomenon that is still intensely looked for in ultracold gases [30; 31; 32; 33; 34; 35].
The physics of the weakly-doped layer-transfer regime is rich, with many different phases and phenomena depending on the stacking, interlayer potential and hybridization configurations. In this paper we focus on the arrangement of most interest for topological superconductivity.
Figure 1: a) An out-of-plane electric field \(E_{z}\) can change the sign of \(a_{p}\), the \(p\)-wave scattering length between doped charge. This solid-state Feshbach resonance yields bound states of energy \(B_{2}<0\) below the band bottom when \(a_{p}>0\)[36; 37]. b) Schematic phase diagram predicted for AA-stacked TMD heterobilayers as a function of chemical potential \(\mu\), which can be varied by electrostatic gating. Below the BKT temperature \(T_{c}\), the system either evolves into a \(\mathbb{Z}_{2}\) topological superconductor (\(\mu\gtrsim 0\)) or a gapped Bose-Einstein condensate of pairs (\(\mu\lesssim 0\)). They are separated by a crossover at finite-temperature (dots), which becomes a phase transition at \(T=0\) (star) [38; 39; 40]. For large \(\mu\), \(T_{c}\) almost matches the pair binding \(T^{*}\) (dashes).
Namely, we consider AA-stacked bilayers with interlayer hybridization comparable to but weaker than the interlayer potential difference, which is in turn weaker than the interaction scales. The AB-stacked case and more general parameter regimes will be presented elsewhere [41]. In the case targeted here, the layer hybridization couples the itinerant carriers in the lightly doped layer to excitons, providing the \(p\)-wave pairing, and also leads to ferromagnetic order in the Mott insulating layer. This ferromagnetic order does not couple the Fermi pockets of the lightly doped layer, implying an emergent time reversal symmetry that promotes the \(p\)-wave superconducting state to the topologically protected DIII class [42], which features helical Majorana edge modes.
Because the pairing depends on parameters independent of the Fermi surface and remains strong even in the very low density limit, we expect that the low carrier densities reachable by gating in moire heterostructures enable access to the full evolution from weakly bound Cooper pairs forming a \(\mathbb{Z}_{2}\) topological superconductor (\(\mathbb{Z}_{2}\) BCS regime) to a Bose-Einstein condensate of tightly bound pairs (BEC regime), as sketched in Fig. 1b.
_Model --_ Lattice parameter mismatch means that stacking two inequivalent TMD layers at zero or non-zero twist angle will create a moire pattern with a unit cell that is large relative to atomic dimensions. Extensive experimental [18; 43] and theoretical [44; 45; 46; 47; 48; 49] studies have established that the low energy physics of this situation may be described by a generalized Hubbard model \(H=H_{\text{int}}+H_{uu}+H_{dd}+H_{ud}\) involving two interpenetrating triangular lattices (one for each layer) featuring in-layer \(H_{uu},H_{dd}\) and interlayer \(H_{ud}\) nearest neighbor hoppings [48], in-layer interactions \(U_{u},U_{d}\) and an interlayer interaction \(V\) (see Fig. 2a). The hopping terms are
\[H_{ab}=-t_{ab}\sum_{\langle i,j\rangle_{ab}}c_{i}^{\dagger}e^{-i\sigma^{z} \nu_{ij}^{ab}\varphi_{ab}}c_{j}, \tag{1}\]
with \((a,b)\in\{u,d\}\) denoting the up or down layer, \((i,j)\) labelling orbitals in these layers, and \(\langle i,j\rangle_{ab}\) denoting nearest neighbor pairs having \(i\in a\) and \(j\in b\) (see Fig. 2a). We have written the fermionic operator for holes as two-component spinors \(c_{j}^{\dagger}=[c_{j,K}^{\dagger},c_{j,K^{\prime}}^{\dagger}]\), labelled by the spin-valley locked degrees of freedom of the two TMD monolayers. The hopping parameters are in general complex [19]; in the 'AA' stacked configuration studied here, we may choose the interlayer hopping parameters to be real (\(\varphi_{ud}=0\)) and set \(\varphi_{uu}=\varphi_{dd}=2\pi/3\) with \(\nu_{ij}=+1\) when the link \(i\to j\) turns right and \(\nu_{ij}=-1\) otherwise. In this convention, the \(t_{ab}\) are real and positive. The interaction terms may be written
\[H_{\text{int}}=\Delta\sum_{i\in u}n_{i}+\sum_{a,i\in a}U_{a}n_{i,\uparrow}n_{ i,\downarrow}+V\sum_{\langle i,j\rangle_{ud}}n_{i}n_{j}. \tag{2}\]
The interlayer potential difference \(\Delta\) is about \(0.1\,\)eV for the MoTe\({}_{2}\)/WSe\({}_{2}\) system of immediate experimental relevance, and may be tuned by an out-of-plane electric field [50; 51].
Representative values estimated from a continuum model for MoTe\({}_{2}\)/WSe\({}_{2}\) bilayers [47] are \(U_{d}\approx 0.30\,\)eV \(\gtrsim\Delta\), \(U_{u}\approx 0.24\,\)eV and \(V\approx 0.14\,\)eV, the large size of the moire unit cell relative to the interplane separation explaining \(V\sim U_{u,d}\). The superconducting state discussed in this work appears when \(V\) exceeds \(\Delta/4\) (see below). The interaction strength increases with increasing \(V\) until \(V\) becomes so large that the charge transfer gap exceeds the Mott gap and the doped holes go into the magnetic layer.
_Magnetic coupling --_ We investigate the physics of the Hamiltonian defined by Eqs. 1&2 in the layer-transfer limit \(t\equiv t_{ud}\ll\Delta\ll\Delta+3V<U_{d}\). In this regime, carriers added up to a density of one hole per moire unit cell go into the lower layer and due to the large \(U_{d}\) form a Mott insulator at the density \(n_{h}=1\), while a small density \(x\) of carriers added beyond \(n_{h}=1\) will go into the upper plane. We now consider the interactions affecting these \(x\) extra carriers.
When \(U_{d}\gg\Delta\), the leading magnetic interaction is a trion-mediated exchange which, combined with the strong single-layer spin orbit coupling, leads to \(xy\)-ferromagnetism in the Mott layer [21]. In-plane magnetism in the lower layer acts as a spin-flip operator for carriers in the upper layer. However, the spin-valley locking in the monolayers, transferred to the moire \(\pm\kappa\) valleys after downfolding (see Fig. 2b), means that low energy spin-flips involve momentum transfers of the order of \(\kappa-\kappa^{\prime}\). For this reason, a small density of carriers doped above the Mott insulating state cannot undergo spin-flip scattering from the ferromagnetic order or its low-lying spin wave excitations at low-energy (see Fig. 2c). As a result, the bottom layer effectively behaves as a featureless charge reservoir.
Figure 2: a) Wannierized model of TMD heterobilayers (Eqs. 1&2) keeping the dominant intra- and inter-layer interactions (\(U_{d}\), \(U_{u}\) and \(V\)) and tunnelings. The phases \(\varphi_{uu}=\varphi_{dd}=2\pi/3\) of the tunnelings are depicted by arrows for the \(K\) spin-valley component. b) Folding of the monolayer \(\pm K_{u/d}\) points onto the mini-Brillouin zone corners \(\kappa\) and \(\kappa^{\prime}\). c) In AA-stacked bilayers, the magnetic state stabilized at filling \(n_{h}=1\) is a ferromagnet that cannot induce low-energy spin-flips due to spin-valley locking of the charge carriers described by parabolic dispersion around \(\kappa\) and \(\kappa^{\prime}\).
The low-energy carriers in AA-stacked bilayers enjoy both an emergent time reversal symmetry (TRS) \(T=i\tilde{\sigma}^{y}K\) and the full U(1) spin-rotation symmetry generated by \(\tilde{\sigma}^{z}\), although both are spontaneously broken by the Mott state. Here, \(\tilde{\sigma}\) denotes the spin-valley Pauli matrices projected to the active modes \([\psi^{\dagger}_{q,\Uparrow},\psi^{\dagger}_{q,\Downarrow}]\) near the top of the valence band, with \(\Uparrow/\Downarrow=(\kappa/\kappa^{\prime},\uparrow/\downarrow)\). The system features two spin-valley locked hole pockets with dispersion \(\varepsilon_{q}=q^{2}/2m\)[52], shown in Fig. 2c, related by the emergent TRS.
_Equal-spin pairing instability --_ Since carriers near the Fermi surface (FS) only couple to the density of the insulating bottom layer, our system is an experimentally viable realization of the setup recently discussed in terms of model systems to describe a repulsive mechanism for superconductivity [53; 54; 55; 56; 57]. This mechanism relies on the existence of a charge-\(2e\) exciton (quaternion) with lower energy than all charge \(e\) and neutral excitations of the system at \(t=0\), which is achieved thanks to the large \(V\) of our model. This quaternion provides a closed scattering channel that can be virtually occupied by pairs of mobile carriers to obtain a non-zero binding energy (see Fig. 3a), in direct analogy to the physics of Feshbach resonance.
The effective attraction is explicitly seen when the interaction term of our lattice model is projected onto the active modes at low doping, retaining pair operators with relative form factor of zeroth or first order in the small momentum deviations away from \(\pm\kappa\)[52]
\[\mathcal{H}^{\text{int}}_{k,k^{\prime}}=g_{s}\mathcal{S}^{\dagger}_{k}\mathcal{S}_{k^{\prime}}+\sum_{\ell s}(g_{t}-\ell sg^{\prime}_{t})\left(\mathcal{T}^{\ell s}_{k}\right)^{\dagger}\mathcal{T}^{\ell s}_{k^{\prime}}, \tag{3}\]
where \(\mathcal{S}_{k}=(\psi_{-k,\tilde{\Uparrow}}\psi_{k,\tilde{\Downarrow}}-\psi_{-k,\tilde{\Downarrow}}\psi_{k,\tilde{\Uparrow}})/\sqrt{2}\) denotes the \(s\)-wave pair operator, while \(\mathcal{T}^{\ell-}_{k}=k_{\ell}\psi_{-k,\tilde{\Downarrow}}\psi_{k,\tilde{\Downarrow}}\), \(\mathcal{T}^{\ell 0}_{k}=k_{\ell}(\psi_{-k,\tilde{\Uparrow}}\psi_{k,\tilde{\Downarrow}}+\psi_{-k,\tilde{\Downarrow}}\psi_{k,\tilde{\Uparrow}})/\sqrt{2}\) and \(\mathcal{T}^{\ell+}_{k}=k_{\ell}\psi_{-k,\tilde{\Uparrow}}\psi_{k,\tilde{\Uparrow}}\) describe \(p\)-wave pairs of spin \(s=-1,0,+1\), respectively. Their orbital angular momentum \(\ell=\pm\) is fixed by their form factors \(k_{\pm}=k_{x}\pm ik_{y}\). As claimed, the \(g\)-coefficients extracted from second order perturbation theory [52], plotted in Fig. 3b, unveil attractive interactions in the \(p\)-wave channel of our model for large enough \(V/\Delta\)[52]. The \(s\)-wave scattering amplitude receives a contribution from the large on-site repulsion and therefore remains positive for our parameter regime, \(g_{s}\sim U_{u}>0\). The largest negative interaction strength is found in the \(\{\mathcal{T}^{-+},\mathcal{T}^{+-}\}\) sector, which describes valley-chirality locked \(p\pm ip\) equal-spin pairing.
The pair binding energy \(T^{*}\) extracted from the log-singularity of the particle-particle susceptibility in these channels is shown in Fig. 3c[52]. It perfectly agrees with the generic BCS-like formula for \(p\)-wave attraction \(k_{B}T_{c}\propto\exp[-1/(\rho E_{F}\tilde{g})]\)[58], where \(\tilde{g}=4\rho|g_{t}+g^{\prime}_{t}|/\pi\) is the dimensionless attraction strength in the dominant pairing channel, and \(\rho=m/(2\pi\hbar^{2})\) the constant density of states near the band bottom [59]. Continuum model calculations give the gap-to-\(T^{*}\) ratio \(2\Delta_{\text{sc}}/(k_{B}T^{*})\approx 3\).
\(\mathbb{Z}_{2}\)_topological superconductor --_ We now show that the emergent low-energy TRS of doped holes grants topological protection to the superconducting state, resulting in pairs of helical Majorana modes on its edges. Introducing the bosonic fields \(\phi_{\pm}\) to describe the superconducting order parameters in the \(\mathcal{T}^{\pm\mp}\) channels, and performing a Hubbard-Stratonovich transformation, we obtain the Bogoliubov-de Gennes (BdG) Hamiltonian
\[\mathcal{H}^{\text{BdG}}_{q}=\frac{1}{2}\begin{bmatrix}h_{q}&\Delta_{q}\\ \Delta^{\dagger}_{q}&-h_{q}\end{bmatrix},\quad\Delta_{q}=\begin{bmatrix}0&\phi_{+}q_{+}\\ \phi_{-}q_{-}&0\end{bmatrix}, \tag{4}\]
expressed in Nambu space \([\psi_{q,\Uparrow},\psi_{q,\Downarrow},\psi^{\dagger}_{-q,\Downarrow},\psi^{ \dagger}_{-q,\Uparrow}]\), with \(h_{q}=q^{2}/2m-\mu\) and \(\mu\) the chemical potential. The block structure of \(\mathcal{H}^{\text{BdG}}\) translates into a decoupled sum of free energies for the \(\Uparrow\) / \(\Downarrow\) sectors \(\mathcal{F}=\sum_{a=\pm}(\alpha|\phi_{a}|^{2}+|\phi_{a}|^{4})\). Below \(T_{c}\), _i.e._ for \(\alpha<0\), the minimization of \(\mathcal{F}\) implies that both species be equally populated \(|\phi_{+}|=|\phi_{-}|\)[52]. Up to an irrelevant gauge choice, we thus have \(\phi_{+}=\phi_{-}^{\star}=\phi e^{i\theta}\).
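As a quick numerical illustration of this minimization (a sketch of ours, with the quartic coefficient normalized to one as in the expression above and \(\alpha=-1\) chosen arbitrarily):

```python
import numpy as np

alpha = -1.0                                    # below T_c, i.e. alpha < 0
x = np.linspace(0.0, 1.5, 1501)                 # trial values of |phi_+|, |phi_-|
A, B = np.meshgrid(x, x, indexing="ij")
F = alpha * (A**2 + B**2) + A**4 + B**4         # decoupled free energy above
i, j = np.unravel_index(np.argmin(F), F.shape)
print(x[i], x[j], np.sqrt(-alpha / 2))          # both equal sqrt(-alpha/2) ~ 0.707
```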
The two spin-valley components of \(\mathcal{H}^{\text{BdG}}\) decouple into time-reversal conjugated \(2\times 2\) blocks that can be written using Nambu Pauli matrices \(\vec{\tau}\) as \(\mathcal{H}^{s}_{q}=E_{q}\,\vec{n}^{s}_{q}\cdot\vec{\tau}\) with \(E_{q}^{2}=h_{q}^{2}+|\phi q|^{2}\). As illustrated in Fig. 4a, the unit vectors \(\vec{n}_{q}^{s}=\left[\phi\left(R_{\theta}q\right)_{x},s\phi\left(R_{\theta}q\right)_{y},h_{q}\right]/E_{q}\), where \(R_{\theta}\) is the rotation matrix by angle \(\theta\) around the \(z\)-axis, fully wrap around the Bloch sphere as momentum is varied provided \(\mu>0\). This ensures a non-zero Chern number for all four Bogoliubov bands.
Figure 3: a) The lowest excitation with which isolated carriers (orange) hybridize has higher energy than the charge \(2e\) excitons that couples to pairs of carriers. This offers a strong energy reduction to pairs and can produce a non-zero binding energy. b) Interaction coefficients in the different \(p\)-wave channels (Eq. 3) with dotted lines marking the value of \(V/\Delta\) above which they become negative – 1/4 for the leading pairing channel. c) The pair binding energy \(T^{*}\) is non-zero above this value, and increases with \(V/\Delta\) from small to large values compared to the Fermi energy \(E_{F}\), describing a BCS to BEC evolution. The solid and dotted lines respectively show numerical estimates and results from a generic \(p\)-wave BCS formula, both obtained for \(x=0.1\), \(t_{uu}=\Delta\) and \(t^{2}/(\Delta t_{uu})=0.25\).
Since the vectors \(\vec{n}_{q}^{s}\) are mirrors of one another with respect to the \((xz)\) plane for opposite spin \(s=\pm\), the Chern numbers for the two hole-like Bogoliubov bands are opposite. They hence carry a non-trivial spin Chern number \(C_{s}=1\).
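The winding statement can be checked directly. The sketch below is our own construction; the mass, gap amplitude and finite momentum cutoff are illustrative assumptions, not parameters of the text. It integrates the skyrmion density of \(\vec{n}_{q}^{s}\) over a discretized momentum plane:

```python
import numpy as np

def chern(mu, s, phi=1.0, mass=0.5, L=20.0, Npts=801):
    """Skyrmion number of n_q^s on a momentum grid (illustrative units)."""
    q = np.linspace(-L, L, Npts)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    hq = (qx**2 + qy**2) / (2 * mass) - mu
    n = np.stack([phi * qx, s * phi * qy, hq])
    n = n / np.linalg.norm(n, axis=0)              # unit Nambu vector on the sphere
    dnx = np.gradient(n, q, axis=1)                # d n / d q_x
    dny = np.gradient(n, q, axis=2)                # d n / d q_y
    omega = np.einsum("iab,iab->ab", n, np.cross(dnx, dny, axis=0))
    return omega.sum() * (q[1] - q[0])**2 / (4 * np.pi)

for mu in (+1.0, -1.0):
    print(mu, [round(chern(mu, s), 2) for s in (+1, -1)])
# mu > 0: opposite unit Chern numbers for s = +/-1 (non-trivial spin Chern number);
# mu < 0: both vanish, anticipating the trivial BEC side discussed below.
```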
The existence of this spin-Chern number follows from the particle-hole operator \(\mathcal{P}=\tilde{\sigma}^{0}\tau^{x}\) acting as \(\mathcal{P}\mathcal{H}_{q}^{\text{BdG}}\mathcal{P}=-\mathcal{H}_{-q}^{\text{BdG}}\) and an additional chiral symmetry \(\mathcal{O}=\tilde{\sigma}^{z}\tau^{z}\) that commutes with \(\mathcal{H}_{q}^{\text{BdG}}\); together these imply that the state belongs to the DIII class of the Altland-Zirnbauer classification [42] and that the obtained triplet superconducting state is topologically protected. The topological phase displays a superposition of \(p\pm ip\) superconducting components [60], easily anticipated given the form of \(\Delta_{q}\) in Eq. 4. More remarkable is the existence of counter-propagating chiral Majorana modes at the edge of the system, described by
\[\mathcal{L}=i\chi_{\Uparrow}(\partial_{t}-\partial_{x})\chi_{\Uparrow}+i\chi_{\Downarrow}(\partial_{t}+\partial_{x})\chi_{\Downarrow}, \tag{5}\]
where \(\chi_{\Uparrow}=u\psi_{\Uparrow}+u^{*}\psi_{\Uparrow}^{\dagger}\) and \(\chi_{\Downarrow}=u^{*}\psi_{\Downarrow}+u\psi_{\Downarrow}^{\dagger}\), with \(u\) the normalized hole component of the \(\Uparrow\) band of \(\mathcal{H}_{q}^{\text{BdG}}\)[5]. This pair of helical edge modes is protected by \(\mathcal{O}\), for which they are eigenmodes with opposite eigenvalues.
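Equation (5) can be visualized with a lattice regularization. The following sketch is not the continuum model of the text: it uses a generic square-lattice \(p\pm ip\) BdG model, with illustrative hopping, pairing and chemical potential, as a stand-in for the two spin sectors on a strip, and extracts the velocity of the in-gap mode localized on one edge:

```python
import numpy as np

t, mu, delta, W = 1.0, -2.0, 0.5, 60   # illustrative lattice parameters
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])

def strip_h(kx, s):
    """BdG Hamiltonian of spin sector s on a cylinder: momentum kx, open in y."""
    H = np.zeros((2 * W, 2 * W), complex)
    onsite = (-2 * t * np.cos(kx) - mu) * sz + delta * np.sin(kx) * sx
    hop = -t * sz - 1j * s * (delta / 2) * sy   # gives s*delta*sin(ky)*sy in bulk
    for y in range(W):
        H[2*y:2*y+2, 2*y:2*y+2] = onsite
        if y + 1 < W:
            H[2*y:2*y+2, 2*y+2:2*y+4] = hop
            H[2*y+2:2*y+4, 2*y:2*y+2] = hop.conj().T
    return H

def edge_velocity(s, kx=0.05, dk=0.01):
    """Slope of the in-gap branch localized near the y = 0 edge."""
    energies = []
    for k in (kx, kx + dk):
        E, V = np.linalg.eigh(strip_h(k, s))
        idx = np.argsort(np.abs(E))[:2]          # the two in-gap edge states
        w = [np.sum(np.abs(V[:10, i])**2) for i in idx]
        energies.append(E[idx[int(np.argmax(w))]])
    return (energies[1] - energies[0]) / dk

for s in (+1, -1):
    print(f"s = {s:+d}: edge-mode velocity ~ {edge_velocity(s):+.3f}")
# opposite signs: the two spin species counter-propagate on the same edge,
# i.e. a helical Majorana pair, as in Eq. (5)
```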
The obtained \(\mathbb{Z}_{2}\) topological superconductor can be revealed by a half-integer value of the spin-thermal Hall conductivity, coming from the Majorana edge modes [61]. While spin-thermal Hall currents have not yet been measured, any TRS breaking perturbation, _e.g._ circularly polarized light, can imbalance the population in the two valleys and confer properties akin to chiral \(p\)-wave superconductors, which can be probed through electric currents [62]. In addition, Majorana zero modes in vortex cores [61] could be observed with scanning tunneling microscopy as zero-bias peaks [63].
Beyond its topological protection, another interesting feature arises from the large momentum transfer \(\kappa-\kappa^{\prime}\) required to coherently scatter holes between the two valleys, which prevents any quartic term besides the residual pair repulsion from appearing in the Ginzburg-Landau free energy [52]. The lowest order term coupling the phases of \(\phi_{+}\) and \(\phi_{-}\) is thus of sixth order, \((\phi_{+}^{*})^{3}\phi_{-}^{3}+\text{h.c.}\) This high-order term induces a noteworthy third-order Josephson effect with distinctive low-energy Leggett modes [64].
\(p\)_-wave BEC-BCS transition --_ \(T^{*}\) in Fig. 3c only matches the BKT transition temperature \(T_{c}\) in the weak-coupling regime \(E_{F}\gg|g_{t}+g^{\prime}_{t}|\)[58]. In the opposite strong-coupling limit, _i.e._ at doping concentrations \(x=2\rho E_{F}<\rho|g_{t}+g^{\prime}_{t}|\), our model still exhibits pairing in some regime of parameters. To see this, we consider the binding energy of two charge carriers doped above the magnetic state, \(B_{2}=E_{2}-2E_{1}+E_{0}\), with \(E_{N}\) denoting the ground state energy for \(N\) doped charges.
We first obtain \(B_{2}\) as a function of the original parameters of the model by solving an effective lattice model containing all second order processes in \(t/\Delta\)[52]. The results of this calculation are presented in Fig. 4b, and show the following trend: as \(\Delta\) is decreased, _e.g._ by application of an out-of-plane electric field, the ratios \(V/\Delta\) and \(t^{2}/t_{uu}\Delta\) increase up to a critical point where bound states emerge \(B_{2}<0\). This can be seen as a condensed-matter analog of a Feshbach resonance, where the non-retarded interaction between two fermions can be tuned from positive to negative using an externally controllable parameter. From a low-energy scattering perspective, this can be understood as tuning the \(p\)-wave scattering length \(a_{p}\) from negative to positive. The universal relation \(B_{2}\sim\hbar^{2}/ma_{p}^{2}\log(r/a_{p})\) holds when \(a_{p}>0\)[36; 37], where \(r\) is the range of \(p\)-wave interactions, comparable to the lattice constant.
The presence of bound pairs at infinitesimal doping offers access to the full evolution from a BEC of pairs to the BCS superconducting state, studied above in the weak coupling limit (Fig. 3). This evolution should be distinguished from the \(s\)-wave case in several ways. For \(p\)-wave interactions, the BCS and BEC regions are separated by a transition, _i.e._ by a gap closing [65], while it is a smooth crossover for \(s\)-wave interactions [66]. This is easily observed in our BdG Hamiltonian, whose eigen-energies vanish at \(\mu=q=0\) even when \(\phi>0\) remains finite. This gap closure, highlighted in Fig. 4c, separates the BCS regime \(\mu>0\) from the BEC regime \(\mu<0\).
Figure 4: a) BdG band structure for the superconducting state. The non-trivial and opposite winding of the Nambu vectors of the lower bands \(n_{q}^{s}\), shown with arrows, signals a non-zero spin Chern number. b) Pairing persists down to the two-particle level for certain choices of parameters, as indicated by a negative binding energy \(B_{2}<0\). The black lines show the separation between the regions with and without bound states below the carrier band edge obtained from the effective continuum theory Eq. 3. c) The evolution from BEC (left) to BCS (right) at zero temperature involves a topological phase transition, illustrated here using the BdG band structure as a function of the chemical potential \(\mu\). \(C_{s}\) denotes the spin Chern number of the negative-energy BdG bands.
Another difference is that the physics in the \(p\)-wave case necessarily involves another length-scale in addition to the scattering length [65; 67].
In our specific model, the topological protection of the superconducting state in the FM case endows the BEC-BCS transition with a topological character. This is understood from the spin-split BdG Hamiltonians \(\mathcal{H}^{s}_{q}\), which exhibit a textbook example of a band-inversion when the "mass" \(\mu\) crosses zero energy [68]. The topological properties of the superconducting state are lost at any finite temperature due to thermal proliferation of topological excitations [38; 39; 40], and as a result the \(T=0\) transition turns into a crossover at finite temperature.
Conclusion --We have exposed physical mechanisms leading to the emergence of a robust attraction and a low-energy time-reversal symmetry in AA-stacked transition metal dichalcogenides moire heterobilayers doped above unit filling, which together produce a topologically protected helical \(p\)-wave superconductor at sufficiently low temperatures. The topological properties are inherited from the strong spin-orbit coupling of the original monolayers, when the latter is preserved by the magnetic Mott state. The attraction relies on the large interlayer interaction \(V\) of the bilayer, and increases when electrostatic gating reduces the valence band offset \(\Delta\) between layers. For our theory to apply, the layer-transfer gap should also remain smaller than the in-layer Mott gap at filling one. All these scales can, in principle, be experimentally probed by scanning-tunneling microscopy or compressibility measurements to provide experimental guidance on how to reach the regime of interest for superconductivity.
Acknowledgments --D.G. thanks Michele Fabrizio for correspondence at the early stage of the work. V.C. is grateful to A. Imamoglu for an insightful discussion shaping some of the ideas presented here. We also acknowledge enlightening discussions with Chetan Nayak. This work was partially supported by the Air Force Office of Scientific Research under Grant No. FA9550-20-1-0260 (J.C.) and Grant No. FA9550-20-1-0136 (J.H.P.) and the Alfred P. Sloan Foundation through a Sloan Research Fellowship (J.C., J.H.P.). A.J. M. acknowledges support from the NSF MRSEC program through the Center for Precision-Assembled Quantum Materials (PAQM) NSF-DMR-2011738. The Flatiron Institute is a division of the Simons Foundation.
|
2309.01815 | Effective Hamiltonian approach to the kinetic infinitely long-range
Ising (the Husimi-Temperley) model | The linear master equation (ME) describing the stochastic kinetics of
Ising-type models has been transformed into a nonlinear ME (NLME) for a
time-dependent effective Hamiltonian (EH). It has been argued that for models
with large number of spins ($N$) NLME is easier to deal with numerically than
ME. The reason is that the non-equilibrium probability distribution entering ME
scales exponentially with the system size which for large $N$ causes numerical
under- and overflow problems. NLME, in contrast, contains quantities scaling
with $ N $ not faster than linearly.
The advantages of NLME in numerical calculations have been illustrated on the
problem of decay of metastable states in the kinetic Husimi-Temperley model
(HTM) previously studied within the ME approach by other authors. It has been shown
that the use of NLME makes it possible to extend by orders of magnitude the ranges
of numerically accessible quantities, such as the system size $ N $ and the
lifetimes of metastable states, as well as the accuracy of the calculations. An
excellent agreement of numerical results with previous studies has been found.
It has been shown that in the thermodynamic limit EH for HTM exactly
satisfies a nonlinear first order differential equation. The system of
characteristic equations for its solution has been derived and it has been
shown that the conventional mean field equation is one of them. | V. I. Tokar | 2023-09-04T21:09:47Z | http://arxiv.org/abs/2309.01815v1 | Effective Hamiltonian approach to the kinetic infinitely long-range Ising (the Husimi-Temperley) model
###### Abstract
The linear master equation (ME) describing the stochastic kinetics of Ising-type models has been transformed into a nonlinear ME (NLME) for a time-dependent effective Hamiltonian (EH). It has been argued that for models with a large number of spins (\(N\)) NLME is easier to deal with numerically than ME. The reason is that the non-equilibrium probability distribution entering ME scales exponentially with the system size which for large \(N\) causes numerical under- and overflow problems. NLME, in contrast, contains quantities scaling with \(N\) not faster than linearly.
The advantages of NLME in numerical calculations have been illustrated on the problem of decay of metastable states in the kinetic Husimi-Temperley model (HTM) previously studied within the ME approach by other authors. It has been shown that the use of NLME makes it possible to extend by orders of magnitude the ranges of numerically accessible quantities, such as the system size \(N\) and the lifetimes of metastable states, as well as the accuracy of the calculations. An excellent agreement of numerical results with previous studies has been found.
It has been shown that in the thermodynamic limit EH for HTM exactly satisfies a nonlinear first order differential equation. The system of characteristic equations for its solution has been derived and it has been shown that the conventional mean field equation is one of them.
## I Introduction
The equilibrium statistics of a classical many-body system at a fixed temperature can be described by the canonical probability distribution (CPD) [1]. In this paper we will deal with the spin-lattice models frequently used to describe statistics of uniaxial magnets, lattice gases, and binary alloys [1; 2; 3]. In these models CPD depends only on the configuration energy which is conventionally called the Hamiltonian. The CPD is proportional to the Boltzmann factor which is the exponential function of the Hamiltonian divided by \(-k_{B}T\)--the absolute temperature in energy units with the minus sign [1]. In models with short-range interactions the energy scales linearly with the system size \(N\). So if exact analytic solution is unknown, the straightforward use of CPD in approximate calculations would necessitate numerical calculation of the exponential function with the argument scaling as \(O(N)\) which can be difficult or even impossible for large \(N\). To deal with this problem sophisticated combinatorial techniques have been developed to make possible the calculation of quantities of interest without resorting to the numerical exponentiation and using only \(O(1)\) quantities, such as the specific magnetization and the energy density per site [1].
From the standpoint of out of equilibrium statistics CPD is a particular case of more general non-equilibrium probability distribution (NPD) which coincides with CPD in thermal equilibrium. This is conveniently formalized in the effective Hamiltonian (EH) approach [4; 5; 6] where the dependence of NPD on EH is posited to be the same as that of CPD on the equilibrium Hamiltonian. This is achieved by defining EH as the logarithm of NPD multiplied by \(-k_{B}T\). This trick allows one to use the approximate techniques of equilibrium statistics [1] also in the non-equilibrium case.
This, however, does not fully solve the problem because unlike the equilibrium case where the Hamiltonian is constant and is supposed to be known, when the system is out of equilibrium EH should be determined separately. In the case of the kinetic Ising models [7; 8; 9], EH evolves with time according to the master equation (ME) for NPD [7]. Because NPD depends on EH and \(N\) exponentially, the numerical solution of ME for large \(N\) encounters the same over- and underflow difficulties as in the statistical averaging.
The aim of the present paper is to suggest a modification of ME along the lines of Ref. [4] such that in the case of homogeneous systems with short range interactions the resulting evolution equation contains only quantities \(O(1)\). It will be shown that this can be achieved at the cost of transforming the linear ME into a nonlinear evolution equation which will be abbreviated as NLME. Its derivation will be given in Sec. II. NLME for the kinetic Husimi-Temperley model (HTM) (also known as the infinitely long-range Ising-, the mean field- (MF), the Curie-Weiss-, and the Weiss-Ising model [10; 11; 12; 13; 14]) evolving under the Glauber dynamics [8] will be derived in Sec. III. The advantages of NLME will be illustrated on the problem of decay of metastable states in HTM. The decay problem was previously investigated in Refs. [10; 11; 13] in the framework of ME, the Fokker-Planck equation, and the Monte Carlo simulations. A scaling law for the lifetime of metastable states was suggested and shown to be valid for large absolute values of a scaling parameter [13]. In Secs. IV and VI it will be shown that the use of NLME allows one to extend the testing range of the scaling law as well as the accuracy of the calculated lifetimes
of the metastable states by orders of magnitude. Finally, in Sec. V it will be shown that NLME for HTM in the thermodynamic limit reduces to a nonlinear differential equation of the first order. The characteristic equations will be derived and it will be shown that the conventional MF kinetic equation of Refs. [3; 12; 15] describes a characteristic passing through the minimum of a free energy function. In the concluding Sec. VII a brief summary will be presented and further arguments given to support the suggestion that NLME is a prospective approach to the solution of stochastic models of the Ising type.
## II Effective Hamiltonian approach to the kinetic Ising models
For brevity, the set of \(N\) Ising spins \(\sigma_{i}=\pm 1\), \(i=1,N\), will be denoted as \(\mathbf{\sigma}=\{\sigma_{i}\}\). In the stochastic approach to non-equilibrium kinetics the statistical properties of an out of equilibrium system can be described by the time-dependent NPD \(P(\mathbf{\sigma},t)\) which satisfies the ME [7]
\[P_{t}(\mathbf{\sigma},t)=\sum_{\mathbf{\sigma}^{\prime}}[r(\mathbf{\sigma}^{\prime}\to\mathbf{\sigma},t)P(\mathbf{\sigma}^{\prime},t)-r(\mathbf{\sigma}\to\mathbf{\sigma}^{\prime},t)P(\mathbf{\sigma},t)]. \tag{1}\]
Here and in the following by subscript \(t\) we will denote the partial time derivative; the transition rates \(r\) in the kinetic Ising models will be chosen according to Refs. [8; 9] as
\[r(\mathbf{\sigma}\to\mathbf{\sigma}^{\prime},t)=\frac{1}{\tau_{0}}\frac{e^{-\mathcal{ H}^{0}(\mathbf{\sigma}^{\prime},t)}}{e^{-\mathcal{H}^{0}(\mathbf{\sigma}^{\prime},t)}+e^{- \mathcal{H}^{0}(\mathbf{\sigma},t)}}. \tag{2}\]
where \(1/\tau_{0}\) is the rate of transition \(\mathbf{\sigma}\to\mathbf{\sigma}^{\prime}\) and the dimensionless Hamiltonian \(\mathcal{H}^{0}\) is assumed to include the Boltzmann factor \(\beta=1/k_{B}T\) as one of the parameters. \(\mathcal{H}^{0}\) may depend on time which is necessary, for example, in modeling hysteretic phenomena [12]. The dependence of the Hamiltonian on \(\mathbf{\sigma}\) can be arbitrary but in this study we will assume that \(\mathcal{H}^{0}\) is an extensive quantity, that is, it scales linearly with the system size \(N\)[1]. In this case the exponential functions in Eq. (2) scale with \(N\) as
\[e^{-\mathcal{H}^{0}(\mathbf{\sigma},t)}=e^{-u^{0}(\mathbf{\sigma},t)N}, \tag{3}\]
where \(u^{0}=O(1)\) is the Hamiltonian density. The exponential behavior in Eq. (3) may hinder numerical solution of ME (1) because the terms on the right hand side (rhs) of Eq. (2) will suffer from the problem of numerical over- and underflow at sufficiently large \(N\). However, when the transition \(\mathbf{\sigma}\to\mathbf{\sigma}^{\prime}\) is local, this difficulty is easily overcome. Multiplying the numerator and denominator in Eq. (2) by \(\exp\{\frac{1}{2}[\mathcal{H}^{0}(\mathbf{\sigma},t)+\mathcal{H}^{0}(\mathbf{\sigma} ^{\prime},t)]\}\) one arrives at
\[r(\mathbf{\sigma}\to\mathbf{\sigma}^{\prime})=\frac{1}{2\tau_{0}}\frac{\exp\left[ \frac{1}{2}\Delta^{\mathcal{H}^{0}}(\mathbf{\sigma},\mathbf{\sigma}^{\prime},t) \right]}{\cosh[\frac{1}{2}\Delta^{\mathcal{H}^{0}}(\mathbf{\sigma},\mathbf{\sigma}^{ \prime},t)]} \tag{4}\]
where
\[\Delta^{\mathcal{H}^{0}}(\mathbf{\sigma},\mathbf{\sigma}^{\prime},t)=\mathcal{H}^{0}( \mathbf{\sigma},t)-\mathcal{H}^{0}(\mathbf{\sigma}^{\prime},t). \tag{5}\]
In lattice models with local spin interactions the \(O(N)\) scaling of \(\mathcal{H}^{0}\) is a consequence of the summation over \(N\) lattice sites. If configurations \(\mathbf{\sigma}\) and \(\mathbf{\sigma}^{\prime}\) differ only locally, the difference in Eq. (5) in such models vanishes at large distances from the region of the flipping spins so the summations over sites will be spatially restricted leading to \(\Delta^{\mathcal{H}^{0}}=O(1)\), hence, all terms in Eq. (4) will be of order unity.
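The numerical advantage of the form (4) over Eq. (2) can be seen in a few lines of Python; the numbers below are illustrative, and any \(O(1)\) Hamiltonian density and flip-induced change would do:

```python
import numpy as np

def rate_naive(H_old, H_new, tau0=1.0):
    """Glauber rate as in Eq. (2): breaks down once H^0 = O(N) is large."""
    return np.exp(-H_new) / (tau0 * (np.exp(-H_new) + np.exp(-H_old)))

def rate_stable(dH0, tau0=1.0):
    """Equivalent form, Eq. (4), with dH0 = H^0(old) - H^0(new) = O(1);
    algebraically this is the logistic function 1/(tau0*(1 + exp(-dH0)))."""
    return np.exp(dH0 / 2) / (2 * tau0 * np.cosh(dH0 / 2))

# a single local spin flip changes an extensive Hamiltonian by an O(1) amount
N, u0, dH0 = 10_000, 0.3, -1.7        # illustrative numbers
H_old, H_new = u0 * N, u0 * N - dH0
print(rate_naive(H_old, H_new))        # nan: exp(-3000) underflows, giving 0/0
print(rate_stable(dH0))                # ~0.1545, well defined for any N
```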
We note that Eq. (5) is antisymmetric under the exchange of spin arguments so the denominator in Eq. (4) is the same for both terms in Eq. (1). Replacing it by a constant can simplify ME while preserving its qualitative features [4; 12]. In the present paper, however, more complex Glauber prescription (2) will be used for compatibility with the majority of literature on the kinetic HTM (see, e.g., Refs. [13; 16] and references therein).
Another source of the undesirable behavior briefly discussed in the Introduction comes from NPD \(P\) which should depend on \(N\) similarly to CPD because it would coincide with it in thermal equilibrium
\[P(\mathbf{\sigma},t)|_{t\to\infty}\to P^{0}(\mathbf{\sigma})=e^{-\mathcal{H}^{0}(\bm {\sigma})}/Z \tag{6}\]
where \(Z\) is the partition function and \(\mathcal{H}^{0}\) in Eq. (6) is assumed to be time-independent because otherwise the equilibrium would not be attainable. It is easily checked that \(P^{0}\) is a stationary solution of Eq. (1) which means that at least for large \(t\) NPD \(P\) in Eq. (1) behaves similarly to \(P^{0}\). This can be formalized in the EH approach [3; 4; 6] where by analogy with Eqs. (6) and (3) NPD depends on EH \(\mathcal{H}\) as
\[P(\mathbf{\sigma},t)=e^{-\mathcal{H}(\mathbf{\sigma},t)}\equiv e^{-Nu(\mathbf{\sigma},t)}, \tag{7}\]
where \(\mathcal{H}|_{t\to\infty}\to\mathcal{H}^{0}+\ln Z\) and \(u\) is the EH density. In Eq. (7) it has been assumed that the normalization constant similar to \(\ln Z\) is included in \(\mathcal{H}\). This is convenient in many cases because ME (1) preserves normalization so whenever possible the initial \(\mathcal{H}(\mathbf{\sigma},t=t_{0})\) is reasonable to choose in such a way that \(P(t_{0})\) in Eq. (7) was easily normalizable. This would eliminate the necessity to calculate an equivalent of the partition function \(Z\) at later stages of evolution because in general this is a nontrivial task.
In Ref. [4] a simple way to eliminate the undesirable exponential behavior from ME is suggested which was implemented as follows. Dividing both sides of Eq. (1) by \(P\) from Eq. (7) one arrives after trivial rearrangement at the evolution equation for EH
\[\mathcal{H}_{t}(\mathbf{\sigma},t)=\sum_{\mathbf{\sigma}^{\prime}}\frac{\exp\left(\frac{1}{2}\Delta^{\mathcal{H}^{0}}\right)-\exp\left(\Delta^{\mathcal{H}}-\frac{1}{2}\Delta^{\mathcal{H}^{0}}\right)}{2\tau_{0}\cosh(\frac{1}{2}\Delta^{\mathcal{H}^{0}})} \tag{8}\]
where \(\Delta^{\mathcal{H}}\) is defined as in Eq. (5) and by similar reasoning it should be an \(O(1)\) quantity. The arguments of deltas in Eq. (8) have been omitted for brevity; they are the same as in Eq. (5). As is seen, all terms on the rhs are \(O(1)\) but because of the summation over \(\mathbf{\sigma}^{\prime}\) the equation scales linearly with \(N\).
However, if the system is spatially homogeneous, Eq. (8) can be further reduced to an \(O(1)\) equation. For example, if the sites are organized in a Bravais lattice, all sites are equivalent and the summations over sites implicit on both sides of the equation can be lifted. Presumably, this can be achieved with the use of the formalism developed in the cluster approach to equilibrium alloy theory [2]. In this approach Eq. (8) should be formally expanded over a complete set of orthonormal polynomial functions of \(\mathbf{\sigma}\) characterized by the clusters of lattice sites of all possible configurations and positions [17]. The total number of the basis functions is exponentially large (\(\sim 2^{N}\)) but in the spatially homogeneous case the coefficients of clusters differing only by their position on the lattice will be the same. Therefore, only clusters in the vicinity of a single lattice site need be taken into account. Further, if the interactions in the system are local, i.e., cluster sizes are bounded by a maximum radius, the size of the system of equations can also be bounded. In practice, in alloy thermodynamics the number of clusters to keep in the expansion was found to be rather moderate in many cases [18; 19]. Hopefully, it will be possible to develop similarly simple approximations in the non-equilibrium statistics as well. This case, however, is more difficult because in the EH long-range interactions may arise due to the kinetics [4].
## III Application to HTM
The HTM is an Ising-type model in which all spin pairs interact with the same strength \(J/N\), so the dimensionless Hamiltonian can be expressed in terms of the total spin [13]
\[M=\sum_{i=1}^{N}\sigma_{i}\equiv Nm \tag{9}\]
as
\[\mathcal{H}^{0} = -\frac{\beta J}{2N}M^{2}-\beta H(t)M \tag{10}\] \[= -N\left[\frac{\beta}{2}m^{2}+h(t)m\right]\equiv Nu^{0}(m,t),\]
where \(m=O(1)\) is the specific magnetization. To simplify notation we have chosen \(J\) as our energy unit, \(J=1\), so that the HTM critical temperature satisfies \(k_{B}T_{c}=\beta_{c}^{-1}=1\); \(h(t)=\beta H(t)\) is the external magnetic field, which can depend on time, though in the present study we will be interested mainly in the constant-\(h\) case. In Eq. (10) \(u^{0}(m,t)\) is the physical Hamiltonian density, which we will distinguish from the equilibrium Hamiltonian density because equilibrium is incompatible with time dependence.
The physical picture is more easily visualized in terms of the function [13]
\[f^{0}(m,t)=u^{0}(m,t)-s(m) \tag{11}\]
where
\[s(m)=-\sum_{\pm}\frac{1\pm m}{2}\ln\frac{1\pm m}{2} \tag{12}\]
is the dimensionless entropy in the Stirling approximation. We will call \(f^{0}\) the fluctuating free energy (FFE) because unlike the true free energy it is not convex. A typical FFE for HTM that will be studied below is shown in Fig. 1.
\(h_{SP}\) in Fig. 1 and magnetization \(m_{SP}\) below in Eq. (45) are the values of \(h\) and \(m\) at the spinodal point defined by conditions
\[f^{0}_{m}=0\text{ and }f^{0}_{mm}=0. \tag{13}\]
The explicit expressions can be found in Ref. [13]. In Eq. (13) and in the following the subscript \(m\) will denote the symmetric discrete derivative defined for an arbitrary function \(g(m)\) as
\[g_{m}(m)=\frac{g(m+\epsilon)-g(m-\epsilon)}{2\epsilon}, \tag{14}\]
where \(\epsilon=1/N\). For example, from Eq. (10) one gets
\[u^{0}_{m}(m,t)=-\beta m-h(t). \tag{15}\]
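To make the above definitions concrete, the extrema A, C, and B of Fig. 1 and the spinodal point of Eq. (13) can be located numerically. The following minimal Python sketch is an illustration only (the grid resolution is an arbitrary choice, and the snippet is not the code used for the data reported below); it relies solely on Eqs. (10)-(13) and (15) with constant \(h\).

```python
import numpy as np

beta, h = 1.25, 0.015          # T = 0.8 T_c (beta = 1/T, T_c = 1), field of Fig. 1

def f0(m):                     # fluctuating free energy, Eqs. (10)-(12), J = 1
    u0 = -0.5 * beta * m**2 - h * m
    s = -((1 + m) / 2 * np.log((1 + m) / 2)
          + (1 - m) / 2 * np.log((1 - m) / 2))
    return u0 - s

# extrema A, C, B: sign changes of f0_m = -beta*m - h + artanh(m), cf. Eq. (15)
m = np.linspace(-0.999, 0.999, 200001)
df = -beta * m - h + np.arctanh(m)
roots = m[:-1][np.sign(df[:-1]) != np.sign(df[1:])]
print("extrema near m =", roots, "with f0 =", f0(roots))

# spinodal point, Eq. (13): f0_mm = -beta + 1/(1 - m^2) = 0, on the branch
# where the metastable minimum A disappears with growing h > 0
m_sp = -np.sqrt(1 - 1 / beta)
h_sp = np.arctanh(m_sp) - beta * m_sp
print("m_SP, h_SP =", m_sp, h_sp)
```

For \(T=0.8T_{c}\) and \(h=0.015\) the sketch finds the minima near \(m\approx-0.69\) and \(m\approx 0.73\), the maximum near \(m\approx-0.06\), and \(h_{SP}\approx 0.078\), so the field in Fig. 1 indeed satisfies \(|h|<|h_{SP}|\).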
### NLME for HTM
The choice of the kinetic HTM for the present study has been motivated mainly by its simplicity but also because it has been extensively studied previously within the linear master- and the Fokker-Planck equation approach in Refs. [10; 11; 13]. Most pertinent to our study will be the numerical data of Ref. [13] so to facilitate comparison we will try to follow the notation of that paper.
Figure 1: Equilibrium FFE Eq. (11) for \(T=0.8T_{c}\) and \(h=0.015\). A, B, and C denote, respectively, the two local minima and a local maximum appearing in \(f^{0}(m)\) when \(T<T_{c}\) and \(|h|<|h_{SP}|\) where \(h_{SP}\) is \(h\) at the spinodal point (further explanations are given in the text).
NLME for HTM can be derived straightforwardly from Eq. (8) using the explicit expression for the physical Hamiltonian Eq. (10). However, in order to facilitate comparison with Ref. [13], below we derive it departing from Eq. (7) of that reference. To this end we first note that in Ref. [13] the authors considered the probability distribution of the magnetization \(M\) as the fluctuating quantity. We will denote it as \(P^{(M)}\). Our \(P\), however, describes the distribution of the fluctuating Ising spins; \(M\) and \(m\) in our formulas should be considered as a shorthand for the sum of spins Eq. (9). The configuration space in the latter case consists of \(2^{N}\) points while \(M\) takes only \(N+1\) values from \(-N\) to \(N\) with step 2 (a spin flip changes \(M\) by 2). This is because many spin configurations correspond to one \(M\) value, which can be accounted for by a combinatorial factor \({}_{N}C_{N_{\pm}}\)
\[P^{(M)}(M,t)=\frac{N!}{N_{+}!N_{-}!}P(M,t), \tag{16}\]
where \(N_{+}=(N+M)/2\) and \(N_{-}=(N-M)/2\) are the numbers of, respectively, spins +1 and -1 in the configuration.
In our notation Eq. (7) of Ref. [13] reads
\[P^{(M)}_{t}(M,t)=-\frac{1}{2\tau_{0}}\sum_{{{{{{{{{{{{{{{{{{{{{ }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\, \} {\}} \}}} \}} \} \} \} \} {\}\}\}\}\ \}\ \}\ \}\ \ \ \}\ \ \ \ \ \\\\\}\ \}\ \ \ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\}\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\}\\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\\\\\\}\\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\\}\\\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\
second stage this state slowly (in comparison with the first stage) decays into the stable equilibrium with the magnetic moments distributed mainly around point B in Fig. 1. In the present study the lifetime \(\tau\) has been determined at this second stage, in contrast to Ref. [13] where the first stage was also included. Because in our definition the properties of both minima and the Glauber rates depend only on the parameters of the HTM, the lifetime \(\tau\) does not depend on the arbitrariness of the initial distribution. Specifically, similarly to Ref. [13], it has been assumed that the probability to find the system at time \(t\) in a state with negative magnetization satisfies the conventional exponential decay law
\[n_{A}(t)=\int_{-1}^{0}P^{(M)}(Nm,t)dm\simeq n_{A}^{0}e^{-t/\tau}, \tag{22}\]
the difference being that in the present study the law has been assumed to hold only after the initial relaxation has been completed, while in Ref. [13] it was considered to be approximately valid throughout the whole decay process. In view of this difference, the comparison of our lifetime \(\tau\) with the calculations of Ref. [13] is legitimate only if the initial fast relaxation time is negligible in comparison with \(\tau\). This has been ensured by restricting consideration to sufficiently deep potential wells near the local minimum A, a condition easily satisfied in large systems \(N\geq 1000\), where the depth of even a shallow well in \(f^{0}\) in Fig. 1 is strongly enhanced by the factor \(N\) in the Arrhenius-type expression satisfied by the lifetime [10; 11; 13; 20]
\[\tau\sim e^{N[f^{0}(C)-f^{0}(A)]}\equiv e^{N\Delta f^{0}} \tag{23}\]
(we recall that \(\beta\) enters the definition of \(f^{0}\)).
The qualitative picture just described is illustrated in Fig. 2 where it can be seen that the decay law Eq. (22) is satisfied at least within twelve orders of magnitude. The influence of the initial relaxation could not be discerned at the time resolution \(100\tau_{0}\). It is to be noted that the lifetime \(\tau\) in Fig. 2 is the second smallest in all of our calculations so the influence of the initial relaxation in most of them has been even weaker.
Though the decay law Eq. (22) is only heuristic, in the calculations it has been satisfied with remarkable accuracy. The specific decay rate
\[\lambda=\tau^{-1}/N \tag{24}\]
has been found to be equal to \(5.550091955(7)\times 10^{-7}\) at \(t=200\tau_{0},300\tau_{0},400\tau_{0},500\tau_{0}\), and \(5\cdot 10^{4}\tau_{0}\), that is, it had the same value to an accuracy of nine to ten significant digits.
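The fitting step itself is elementary and can be illustrated by the following self-contained Python sketch, in which the occupation numbers \(n_{A}(t)\) are synthetic stand-ins for those extracted from the NLME solution (the actual calculations used much higher precision than this illustration):

```python
import numpy as np

N = 1000
lam_true = 5.55e-7                   # per-site rate, illustrative value only
tau = 1.0 / (lam_true * N)           # Eq. (24)
t = np.arange(200.0, 5.0e4, 100.0)   # times after the initial relaxation
nA = 0.9 * np.exp(-t / tau)          # stand-in for the data of Eq. (22)

slope, _ = np.polyfit(t, np.log(nA), 1)  # linear fit of ln n_A(t)
print("lambda =", -slope / N)        # recovers lam_true
```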
The simplicity of the behavior seen in Fig. 2 suggests a simple underlying physics which can be surmised from the behavior of FFE during the decay shown in Fig. 3. As can be seen, at intermediate times (\(t=500\tau_{0}\) has been chosen as an example) \(f(m,t)\) in the vicinity of local minima A and B in Fig. 1 can be accurately approximated as
\[f(m,t)\approx f^{0}(m)+C_{A(B)}(t). \tag{25}\]
In terms of NPD this translates into two quasi-equilibrium distributions peaked near A and B and characterized by filling factors \(n_{A}(t)\) and \(n_{B}(t)\). Their population dynamics can be phenomenologically described by two linear evolution equations, as discussed in Sec. II.C.2 of Ref. [21]. The solution for \(n_{A}(t)\) found in Ref. [21] is the same as our Eq. (22) except that we have neglected the residual equilibrium contribution, which in large-\(N\) models is usually small: for the HTM parameters used in the calculations \(n_{A}^{(eq)}\approx 10^{-38}\). The EH density \(u\) and the FFE \(f\) differ only in the time-independent function \(s(m)\); therefore, large lifetimes of the metastable states mean a slow evolution of both. The behavior of \(f\) in Fig. 3 near
Figure 2: Probability of survival of the metastable states Eq. (22) calculated with the use of Eq. (18) for HTM with \(N=10^{3}\), \(T=0.8T_{c}\), and \(h=0.06\). The points are spaced with time step \(100\tau_{0}\) and their size exceeds the accuracy of calculations. The exponential law has been fulfilled better than the accuracy of the drawing (see the text).
Figure 3: Solid line—FFE \(f\) corresponding to the solution of Eq. (18) at \(t=500\tau_{0}\) when \(n_{A}\approx n_{B}\) with the model parameters \(\beta=1.25\) (\(T=0.8T_{c}\)) and \(h=0.06\) with the initial condition Eqs. (19)–(20); dashed lines correspond to local fits of \(f^{0}\) in Eq. (25).
the minima is easily understood because Eq. (18) has a static solution \(u=u^{0}+C_{0}\), where the constant comes from the fact that only derivatives of \(u\) with respect to \(t\) and \(m\) are present in the equation. When \(u=u^{0}+C_{0}\) holds for all \(m\), \(C_{0}\) is fixed by the normalization condition as \(C_{0}=\ln Z\). But when the equality is only local, as in the figure, both minima contribute to the normalization via \(C_{A}\) and \(C_{B}\) in Eq. (25).
Still, the cause of the quasi-static behavior of the roughly horizontal line that connects the two regions near the minima in Fig. 3 needs clarification. The qualitative analysis simplifies in the thermodynamic limit, where one of the parameters (\(N\)) disappears from NLME and its solutions simplify because the escape over the barrier is completely suppressed.
## V Thermodynamic limit and the MF equation
For our purposes in taking the limit \(N\to\infty\) in NLME it is convenient, besides setting \(\epsilon=0\), to interchange in Eq. (18) the second terms in the sum over the upper and the lower signs as
\[u_{t}(m,t)=\frac{1}{4}\sum_{\binom{up}{lo}}\frac{e^{\pm[u_{m}^{0}(m,t)-u_{m}(m,t)]}}{\cosh u_{m}^{0}(m,t)}\left[e^{\pm u_{m}(m,t)}(1\pm m)-e^{\mp u_{m}(m,t)}(1\mp m)\right] \tag{26}\]
where use has been made of the explicit form of \(u^{0}\) Eq. (10); besides, henceforth we will choose \(\tau_{0}\) as our time unit and will omit it in evolution equations.
Now it is easily seen that terms in the large brackets are nullified by
\[u_{m}=-\frac{1}{2}\ln\frac{1+m}{1-m}=-\operatorname{artanh}m=s_{m}(m)\equiv u_{ m}^{1} \tag{27}\]
where the penultimate equality follows from Eq. (12). Thus, there exists a locally stationary solution independent of the Hamiltonian parameters which in terms of FFE reads
\[f^{1}(m)=C_{1} \tag{28}\]
where \(C_{1}\) is a constant. Further, after some rearrangement Eq. (26) takes the form
\[u_{t}=\cosh^{2}u_{m}\left(m+\tanh u_{m}\right)(\tanh u_{m}^{0}-\tanh u_{m}) \tag{29}\]
where for brevity the arguments \((m,t)\) of all functions have been omitted. As is seen, if \(u^{0}\) is time-independent the locally stationary solution satisfying \(u_{t}(m,t)=0\) is given either by \(u=u^{0}+Const\), due to the last factor on the rhs, or by \(u^{1}\) Eq. (27), because of the second factor.
Now the structure of \(f\) seen in Fig. 3 becomes qualitatively transparent. In the thermodynamic limit the two segments of \(f\) near the minima are given by Eq. (25) with the approximate equality becoming exact and with time-independent \(C_{A(B)}\), the \(n_{A(B)}\) remaining unchanged with time because of the infinitely high barrier separating the minima. At finite but sufficiently large \(N\) the solution becomes weakly time-dependent, with the qualitative picture becoming the more accurate the larger \(N\) is.
In the Kramers approach to the escape over the potential barrier a crucial role is played by the diffusivity, which is proportional to \(\epsilon=1/N\)[10; 11; 13; 20] and thus can be neglected when the system is very large or when the processes studied, such as the high-frequency hysteresis [12; 16; 22], are fast in comparison with the lifetime of metastable states. In such cases the simple Eq. (29) should be easier to use than the system of equations (18), which may become unmanageable at large \(N\). However, the method of solution of Eq. (29) should be chosen wisely. The problem is that many numerical techniques are based on a finite-difference approximation of the derivative over \(m\), consisting in the division of the interval \(m\in[-1,1]\) into, say, \(L\) discretization steps. In the case of large systems (e.g., \(N\sim 10^{23}\)) \(L\) will be much smaller than \(N\). This would effectively reduce the system size to \(\sim L\), which may introduce a nonphysical time evolution, such as the decay of metastable states. The method of characteristics is devoid of such difficulties [23; 24]. The characteristic equations for Eq. (29) have been derived in Appendix A.
### MF equation
In the Stirling approximation Eq. (16) can be written as
\[P^{(M)}(M,t)\simeq e^{-N[u(m,t)-s(m)]}=e^{-Nf(m,t)}. \tag{30}\]
When \(h\neq 0\) the symmetry \(m\rightleftarrows-m\) is broken, so in the general case there is only one global minimum in \(f(m,t)\) at some \(m_{0}(t)\), which can be found from the condition
\[\left.f_{m}(m,t)\right|_{m=m_{0}(t)}=\left(u_{m}+\operatorname{artanh}m\right) \left|{}_{m=m_{0}}=0 \tag{31}\]
where use has been made of Eqs. (21) and (12). For our purposes the condition is convenient to cast in the form
\[\left(m+\tanh u_{m}\right)\left|{}_{m=m_{0}}=0\right. \tag{32}\]
which in terms of the canonical variables of Appendix A can be written as a constraint
\[\chi(m,q)=m+\tanh q=0. \tag{33}\]
Assuming that the FFE minimum in the initial condition is at \(m_{0}(t_{0})\), the equation for the characteristic passing through this point is obtained from Eqs. (A3), (A5), and (32) as [15]
\[\dot{m}_{0}=-m_{0}+\tanh[\beta m_{0}+h(t)]. \tag{34}\]
It is straightforward to verify that the remaining characteristic Eqs. (A6) and (A7) are also satisfied provided constraint Eq. (33) is fulfilled along the characteristic. This is indeed the case because the total time derivative of \(\chi\) according to Eq. (A4) is
\[\dot{\chi}(m,q)=\{\chi,\mathbf{H}\}\propto\chi(m,q). \tag{35}\]
Thus, Eq. (33) will be fulfilled along the characteristic if, as we have assumed, it is satisfied at the initial point \(t=t_{0}\).
MF Eq. (34) is a closed equation for the average magnetization \(m_{0}\) which has been widely used in the studies of the hysteretic behavior (see, e.g., [12; 14; 16; 22]). In the context of the present study a useful observation is that in the case of constant \(h\) its variables can be separated and a closed-form solution obtained.
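As an illustration of this observation, the quadrature resulting from the separation of variables can be evaluated with a few lines of Python. This is a sketch under the assumption of constant \(h\); the integration limits are arbitrary choices placed strictly between the unstable and the stable fixed points of Eq. (34), where the integrand is finite.

```python
import numpy as np

beta, h = 1.25, 0.06

# dm/dt = -m + tanh(beta*m + h), so t(m1 -> m2) = integral dm / rhs(m)
m = np.linspace(-0.25, 0.70, 200001)          # strictly between fixed points
y = 1.0 / (-m + np.tanh(beta * m + h))        # 1 / rhs of Eq. (34)
t_travel = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(m))  # trapezoidal rule
print("t(m = -0.25 -> 0.70) ~", t_travel, "tau_0")
```

The divergence of the integrand at the fixed points reflects the slowing down of the MF evolution near them.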
Despite this, Eq. (29) is not superfluous in the thermodynamic limit. As we saw in the previous section, the initial condition Eq. (19) had only one extremum, but during the evolution two new extrema appeared (see Fig. 3). This behavior cannot be described by MF Eq. (34) but is present in Eq. (29), as the finite-\(N\) example illustrates.
In the thermodynamic limit such a behavior can be illustrated by the problem of coarsening in a binary alloy [25]. In the symmetric (\(h=0\)) supercritical (\(T>T_{c}\)) phase \(f_{>}^{0}\) has one extremum: the minimum at \(m_{0}=0\). If quenched to a subcritical temperature \(T<T_{c}\), the system will evolve toward the equilibrium state with \(f_{<}^{0}\) having a double-well structure with two symmetric minima at A and B in Fig. 1 with \(h=0\). This evolution can be described by Eq. (29), while the MF equation will remain stuck at point C, which turns into a local maximum at the end of the evolution.
## VI Lifetimes of Metastable States
From the derivations of the previous section it can be seen that MF Eq. (34) should be valid for all extrema \(m_{e}\) satisfying \(f_{m}|_{m=m_{e}}=0\), not only for the absolute minimum. Thus, the positions of the minima A and B in Fig. 3 will stay immobile throughout the evolution, and the MF evolution will not contribute to the decay. So only the diffusion mechanism [10; 11; 13] will be operative, which presumably underlies the simplicity of Eq. (22).
Straightforward calculation of the lifetime \(\tau\) via the solution of NLME Eq. (18) and the fit to Eq. (22) has been performed for HTMs of two sizes, \(N=10^{3}\) and \(2\cdot 10^{3}\), two temperatures, \(T=0.5T_{c}\) and \(0.8T_{c}\), and a variety of \(h<h_{SP}\) values. The results are shown in Fig. 4 by open symbols. Larger sizes were not used because, in order to achieve higher accuracy in Eq. (22), the integration over \(m\) has been replaced by the sum over the discrete values
\[m=-1+2n\epsilon,\quad n=0,1,2,\ldots \tag{36}\]
in order to use the exact combinatorial factor in \(P^{(M)}\) Eq. (16). But this necessitated the calculation of \(N!\), which was found to be numerically difficult for the sizes \(N=4\cdot 10^{3}\) and \(10^{4}\) studied in Ref. [13]. The use of the Stirling approximation would be sufficient from a practical standpoint, but high-precision calculations were necessary in order to substantiate by numerical arguments the heuristic technique suggested below. Though the use of NLME has made it possible to extend the range of calculated lifetimes about 30 times in comparison with Ref. [13], the simulation time grew very quickly with \(\tau\), so the determination of much larger values would have required prohibitively long calculations. A much more efficient, though heuristic, technique is described below.
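One standard way around the factorial overflow (whether or not it meets the precision requirements of the calculations discussed above) is to work with the logarithm of the combinatorial factor; a minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.special import gammaln    # SciPy assumed as a dependency

def log_weight(N, M):
    """ln of the combinatorial factor N!/(N_+! N_-!) in Eq. (16)."""
    Np, Nm = (N + M) // 2, (N - M) // 2
    return gammaln(N + 1) - gammaln(Np + 1) - gammaln(Nm + 1)

N = 10**4                            # a size for which N! itself overflows
print(log_weight(N, 0) / np.log(2))  # roughly N bits, i.e., a factor ~ 2^N
```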
### Recurrence relation
To clarify the origin of the exponential law Eq. (22) in NLME solutions, let us consider the evolution of FFE shown in Fig. 3 in more detail. We first note that because the configurational entropy Eq. (12) does not depend on time, the time derivative of EH in Eq. (18) is equal to the time derivative of FFE so using the available solution for \(u\) the derivative could be calculated numerically, as shown in Fig. 5. As is seen, the time derivative is constant
Figure 4: Comparison of the lifetime values as found from the solution of Eqs. (18) (open symbols) and (39) (filled symbols and crosses) with the expression Eq. (45) [13]. Circles correspond to \(T=0.5T_{c}\), squares to \(T=0.8T_{c}\); smaller (larger) symbols correspond to \(N=10^{3}\) (\(N=2\cdot 10^{3}\)). The cross at larger (smaller) \(\tau\) is for \(N=10^{5}\) (\(N=10^{6}\)); the straight line corresponds to \(\tilde{\tau}=\tau\). The symbol sizes do not reflect the accuracy of the data which is higher than the drawing resolution.
in the region \(m\leq 0\) needed in Eq. (22). Furthermore, it is easy to see that in this range it should be equal to the specific rate \(\lambda\) from Eq. (24), which means that the distribution of \(m\) remains the same throughout the evolution and only the total density of metastable states changes with time according to the exponential law Eq. (22). In fact, because the evolution is very slow it is reasonable to expect that the magnetization distribution should be close to the equilibrium one Eq. (6), as illustrated in Fig. 3. In other words, at negative values of \(m\) an accurate solution for the EH density should be feasible with the _ansatz_
\[u(m,t)\simeq\lambda t+v(m,\lambda). \tag{37}\]
After substitution in Eq. (18) one sees that the time variable disappears from the equation. Also, the two terms in the sum contain \(v_{m}\) at two successive points \(m-\epsilon\) and \(m+\epsilon\), so if the leftmost value at \(m=-1\) is known, the rest can be found successively by recursion [26]. But at \(m=-1\) only one term remains on the rhs of Eq. (18), so \(v_{m}(-1+\epsilon)\) can be expressed through \(\lambda\) and the HTM parameters. Next, introducing
\[x_{n+1}=e^{-2[v_{m}(m+\epsilon)-u_{m}^{0}(m+\epsilon)]}-1, \tag{38}\]
Eq. (18) for \(x_{n}\) can be cast in the form of a nonlinear recurrence relation
\[x_{n+1}=a_{n}\frac{x_{n}}{1+x_{n}}+b_{n} \tag{39}\]
where
\[a_{n}=\frac{1+m}{1-m}e^{2u_{m}^{0}(m+\epsilon)}\frac{1+\exp[-2u _{m}^{0}(m+\epsilon)]}{1+\exp[-2u_{m}^{0}(m-\epsilon)]} \tag{40}\] \[\text{and}\] \[b_{n}=-\frac{2\lambda}{1-m}\left(1+e^{2u_{m}^{0}(m+\epsilon)} \right). \tag{41}\]
(Note that in the above equations only the subscript \(m\) stands for the discrete derivative, all others being just integer numbers.) Thus, from the recurrence relation Eq. (39) all \(v(m,\lambda)\) can be found provided a suitable value of \(\lambda\) is chosen in Eq. (41). To understand how this can be done, a more careful analysis of Eqs. (39)-(41) is in order.
To this end we first introduce a useful formal rearrangement of the recurrence. As can be seen from Eq. (40), \(a_{0}\) is equal to zero, so that according to Eq. (39) \(x_{1}\) is equal to \(b_{0}\). Thus, the recurrence can be initiated by \(x_{1}\) which, as can be seen from Eq. (36), has the lowest physically allowed subscript. However, a simpler solution of the linear recurrence in Appendix B is obtained if it is initiated by \(x_{0}=0\) and \(a_{0}=1\). This is admissible because for \(m=-1\) (\(n=0\)) the first term on the rhs of Eq. (39) vanishes, so with \(x_{0}=0\), \(a_{0}\) may be ascribed any value.
A trivial solution of the recurrence is obtained with the choice \(\lambda=0\). Then from Eqs. (39) and (41) it follows that \(x_{n}=0\). According to Eq. (38) this means that \(v_{m}=u_{m}^{0}\), that is, the equilibrium solution is recovered. This ought to be expected because \(\lambda=0\) corresponds to a stationary state. It will be convenient to visualize solutions of Eq. (39) with the use of the FFE
\[\tilde{f}(m,\lambda)=v(m,\lambda)-s(m). \tag{42}\]
Curve 1 in Fig. 6 corresponds to the fully decayed state (barring the remainder \(n_{A}\sim 10^{-38}\)), i.e., to the equilibrium state \(v=u^{0}\). A less trivial solution can be obtained for small \(x_{n}\) satisfying
\[\max|x_{n}|\ll 1 \tag{43}\]
in which case \(x_{n}\) in the denominator in Eq. (39) can be neglected and a closed-form solution obtained (see Appendix B)
\[x_{n}=\sum_{l=0}^{n-1}b_{l}\exp\left(\sum_{k=l+1\leq n-1}^{n-1} \ln a_{k}\right). \tag{44}\]
Using Eq. (41) two immediate conclusions can be made: (i) \(x_{n}\propto\lambda\) and (ii) all \(x_{n}\) for \(n\geq 1\) are negative. Thus,
Figure 5: Time derivative of FFE found numerically from Eq. (18) for HTM with the same parameters as in Fig. 2 and for the same five values of evolution time (from bottom to top) as in the text following Eq. (24).
Figure 6: (Color online) FFE Eq. (42) calculated with the use of Eqs. (39) and (38) for values of \(\lambda\) given in Table 1.
when \(\max|x_{n}|\to 1\), i.e., \(x_{n}\to-1\), a singularity at a finite \(m=-1+2n\epsilon\) will arise in \(x_{n}\), hence also in the EH and the FFE Eq. (42), which means that the solution is unphysical at this value of \(\lambda\).
Thus, the physical values should be sought in the interval \(0\leq\lambda\leq\lambda_{max}\), with the upper limit found as the largest \(\lambda\) at which the solution does not diverge. This has been done via a trial-and-error process for the same HTM parameters as in Fig. 2. The results are presented in Fig. 6 and Table 1. The range of calculations has been extended beyond \(m=0\) into the region where, according to Fig. 5, solution Eq. (37) is still valid. It has been found that \(v(m,\lambda)\) is very sensitive to the precise value of \(\lambda\) near \(\lambda_{max}\). For example, curves 7 and 8 correspond to \(\lambda\) values that differ by one unit in the eleventh significant digit. Yet curve 7 shows that the value is still too large (the solution diverges), while curve 8 already goes deep down and, if assumed to be correct, predicts \(n_{A}\sim 10^{-27}\), that is, the decay is practically terminated. In fact, the curve is drawn beyond the range of validity of Eq. (39), as can be seen by comparing Figs. 5 and 6. But even at the upper end of the region of validity, \(m\leq 0.4\), the value of \(f\) is such that \(n_{A}\sim 10^{-10}\). This, however, is an overestimation because the negative derivative of curve 8 indicates that the true value of \(\tilde{f}\) near minimum B should be noticeably smaller. In any case, we are going to compare the calculated lifetimes with those of Ref. [13] defined as weighted time averages, so the contributions of small \(n_{A}(t)\) were insignificant. Thus, in our case a single value of \(\lambda_{max}\) should be sufficient to determine the lifetime from Eq. (24). This is consistent with the data shown in Figs. 2 and 5, where \(\lambda\approx 5.55009\cdot 10^{-7}\) (see Table 1) was sufficient to describe the decay from \(n_{A}\approx 0.9\) down to \(10^{-12}\). The agreement, to all six significant digits provided by the fitting software, of \(\lambda\) found from the recurrence relation with that determined from a fit to the solution of the exact evolution equation supports the suggestion that the heuristic technique based on recurrence Eq. (39) does make possible an accurate determination of the lifetimes of the metastable states in HTM. Similar agreement between the two techniques has been found for several additional sets of HTM parameters.
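The trial-and-error search for \(\lambda_{max}\) is easily automated. The sketch below is an illustration with arbitrary tolerances and cutoff, and it assumes that the initial interval brackets \(\lambda_{max}\); it iterates Eqs. (38)-(41) and bisects on the divergence criterion \(x_{n}\to-1\).

```python
import numpy as np

N = 1000
beta, h = 1.25, 0.06
eps = 1.0 / N

def u0m(m):                                   # Eq. (15) at constant h
    return -beta * m - h

def diverges(lam, m_max=0.4):
    """Iterate Eq. (39) from x_0 = 0; True if x_n hits the singularity x = -1."""
    x = 0.0
    for n in range(int((1.0 + m_max) / (2 * eps))):
        m = -1.0 + 2 * n * eps
        a = ((1 + m) / (1 - m) * np.exp(2 * u0m(m + eps))        # Eq. (40)
             * (1 + np.exp(-2 * u0m(m + eps)))
             / (1 + np.exp(-2 * u0m(m - eps))))
        b = -2 * lam / (1 - m) * (1 + np.exp(2 * u0m(m + eps)))  # Eq. (41)
        x = a * x / (1 + x) + b
        if not np.isfinite(x) or x <= -1.0:
            return True
    return False

lo, hi = 0.0, 1e-5            # assumed to bracket lambda_max
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if not diverges(mid) else (lo, mid)
print("lambda_max ~", lo)
```

For the parameters of Fig. 2 the result should come out close to the fitted value quoted in Table 1.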
The values of \(\tau\) calculated in this way are shown in Fig. 4 by filled symbols. The excellent agreement with the results of Ref. [13] is illustrated by comparison with the analytic expression Eq. (45) suggested in that paper
\[\tilde{\tau}=\frac{\pi}{\sqrt{|m_{SP}|(h_{SP}-h)}}e^{N\Delta f^{0}}. \tag{45}\]
where \(\Delta f^{0}\) has been defined in Eq. (23). In our calculations the notion of scaling in the vicinity of the spinodal has not been used because in some cases, such as the one depicted in Fig. 1, the simulated systems were quite far from it. Therefore, in order to make comparison with Eq. (41) of Ref. [13], \(\Delta f^{0}\) has been used for the barrier height instead of the scaling expression. The comparison with the latter has also been performed, with the agreement being only slightly worse than in Fig. 4, arguably for the above-mentioned reason. The upper limit of the data presented in Fig. 4 has been set by the fact that, in order for the calculations to be meaningful, the second term in the denominator \(1+x_{n}\) of Eq. (39) should contribute to the recursion from \(n=1\) onward. Therefore, \(x_{1}=O(\lambda)\) should exceed the smallest number able to change the result when added to unity. In the double-precision arithmetic used in the calculations it should be larger than \(\sim 2\cdot 10^{-16}\). The use of arithmetic in which this quantity is much smaller would make it possible to predict much larger lifetimes.
In principle, analytical expression of the type of Eq. (45) should be possible to derive from the condition
\[\max|x_{n}|\to 1. \tag{46}\]
However, this would necessitate knowledge of an analytical expression for \(x_{n}\), which is difficult to obtain for \(x_{n}\to-1\) where Eq. (39) is strongly nonlinear. Still, the leading exponential-in-\(N\) behavior can be estimated in the linear approximation Eq. (44). To this end, in the expression Eq. (41) for \(b_{n}\) we retain only the factor \(\lambda\) which needs to be determined, and in the summation over \(k\) only the first two factors in the expression Eq. (40) will be kept because the last factor does not contribute to the logarithm in the limit \(\epsilon\to 0\). With the use of Eq. (12) the first two factors in Eq. (40) can be unified in \(\exp[2f_{m}^{0}(m_{k})]\); here and below \(m_{n}\), \(m_{l}\), and \(m_{k}\) are connected to \(n,l,k\) via Eq. (36). Next, approximating \(2\sum_{l(k)}\approx N\int dm_{l(k)}\) (\(dm_{l(k)}\simeq 2/N\)) and integrating over \(dm_{k}\) one arrives at the estimate
\[\max_{n}|x_{n}| \sim \max_{m_{n}}\lambda e^{Nf^{0}(m_{n})}\int_{-1}^{m_{n}}dm_{l}e^{- Nf^{0}(m_{l})} \tag{47}\] \[\sim \lambda e^{N[f^{0}(C)-f^{0}(A)]}\equiv\lambda e^{N\Delta f^{0}} \simeq 1.\]
where the integration over \(m_{l}\) for large \(N\) has been estimated by Laplace's method, and within the range of validity of Eq. (39) the maximum is attained at \(m_{n}=m_{C}\). The estimate Eq. (23) now follows from Eq. (24).
\begin{table}
\begin{tabular}{c|l}
Curve & \(\lambda\cdot 10^{7}\) \\ \hline
1 & 0.0 \\
2 & 6.0 \\
3 & 5.6 \\
4 & 5.551 \\
5 & 5.5501 \\
6 & 5.550092 \\
7 & 5.5500919602 \\
8 & 5.5500919601 \\ \hline
Fit & 5.55009 \\
\end{tabular}
\end{table}
Table 1: Values of the parameter \(\lambda\) for the curves shown in Fig. 6 and from the fit of the data in Fig. 2 to Eq. (22).

To conclude this section, a few words are in order about the decay of unstable states when \(h>h_{SP}\) (in the geometry of Fig. 1), which was also studied in Ref. [13]. This case has not been detailed in the present paper because of its complexity. While the decay of metastable states has a clear underlying physics of the escape over the energy barrier, the unstable case is governed by at least two competing mechanisms. First, because there is no local minimum in \(f^{0}\) at \(m_{A}\), at large \(h\gg h_{SP}\) the minimum in Eq. (19) will be moving toward \(m_{B}\) according to MF Eq. (34), in which case the time dependence of \(n_{A}\) will be close to the step function \(\theta(-m_{0}(t))\). However, near the spinodal, when \(h\) approaches \(h_{SP}\) from above, the MF evolution slows down [13] and the diffusion mechanism starts to dominate. Exponential behavior similar to Eq. (22) has been observed for \(h\to h_{SP}^{+}\), but in the absence of the double-well structure of \(f^{0}\) its origin is obscure. A second minimum at \(m_{B}\) is formed during the evolution, which presupposes the existence of a barrier in the FFE \(f\), but whether this kinetically induced shape may play the same role as the double-well structure of the equilibrium \(f^{0}\) for \(h>h_{SP}\) is not clear. Besides, because the initial NPD is to a large extent arbitrary, the FFE shape will strongly depend on it. Therefore, additional insights are needed to explain the complexities of the decay of unstable states.
## VII Conclusion
In this paper a nonlinear master equation (NLME) describing the time evolution of the effective Hamiltonian (EH) density has been suggested to overcome the numerical difficulties caused by the presence in the conventional linear ME [7] of quantities depending exponentially on the system size \(N\). In contrast, NLME scales at most as \(O(N)\). In Ising models with short-range interactions NLME is strongly nonlinear, hence difficult to solve. However, this should not be surprising because NLME is supposed to describe all possible non-equilibrium phenomena that may arise during the evolution starting from any of the uncountable number of possible initial conditions.
To illustrate some salient features of NLME in a simple framework, the problem of the decay of metastable states in the kinetic Husimi-Temperley model (HTM) has been considered. The problem was previously studied in the framework of linear equations in Refs. [10; 11; 13], whose data have been used for comparison purposes. An excellent agreement has been found between the numerical NLME results and the asymptotic analytic expression for the lifetime of the metastable state of Ref. [13]. With the use of NLME it has been possible to cover a much broader range of HTM parameters and to achieve much better accuracy.
The exponential dependence of lifetime on the system size ensures that in macroscopic systems the lifetime will reach values so large that from physical standpoint the metastable states will behave as effectively stable ones. Because of this, for large \(N\) it is reasonable to take the thermodynamic limit \(N\to\infty\). It has been shown that in this case NLME simplifies to a nonlinear first-order differential equation possessing, in particular, two locally stable solutions which can be combined to construct stationary EHs different from the equilibrium one. To solve the differential equation a system of characteristic equations has been derived which, in particular, reduces to the conventional MF equation [15] for magnetizations corresponding to the fluctuating free energy extrema.
It seems counterintuitive to switch from a linear equation to a nonlinear one. However, this seems to be a common way of dealing with many-body problems. In the case of ME this was discussed in Ref. [7] (see Remark in Ch. V.8) where it was argued that a linear/nonlinear dichotomy is purely mathematical and does not reflect the underlying physics. For example, the linear Liouville equation is equivalent to Newton's equations which in the case of interacting particles are nonlinear, yet it is the latter that are used in practical calculations.
A promising example of NLME use was discussed in Ref. [4]. In a simple pair approximation to the EH, which was also successfully used in Ref. [27], it was found that NLME leads to a qualitatively more sound description of the spinodal decomposition than the MF approximation to ME used, e.g., in Ref. [3]. Namely, NLME predicted a power-law growth of the volumes of the separating phases while the MF approximation produces an exponential behavior. The latter is incompatible with the relaxational nature of the stochastic dynamics where the growth exponents cannot be positive [7]. Besides, the characteristic length scale in the MF solution remains constant throughout the growth while NLME predicts a coarsening behavior in accordance with experiment.
Yet another example is the BBGKY hierarchy of an infinite number of linear equations which in practical calculations is usually approximated by a finite number of nonlinear equations, such as the Boltzmann transport equation.
Thus, it is quite plausible that NLME is a proper way of dealing with stochastic kinetics of Ising-type models.
###### Acknowledgements.
I would like to express my gratitude to Hugues Dreysse for his hospitality, encouragement, and interest in the work. I am grateful to Universite de Strasbourg and IPCMS for their support.
## Appendix A The Hamilton-Jacobi formalism
Because Eq. (29) contains only derivatives of \(u\) but not the function itself, according to Ref. [24] it can be cast in the form of the Hamilton-Jacobi equation [28]
\[u_{t}+{\bf H}(m,q,t)=0, \tag{A1}\]
where
\[q=u_{m}(m,t) \tag{A2}\]
and
\[{\bf H}=-\cosh^{2}q\,(m+\tanh q)[\tanh u_{m}^{0}(m,t)-\tanh q] \tag{A3}\]
is the rhs of Eq. (29) taken with the minus sign. In the Hamiltonian formalism the total time derivative of any function \(g(m,q,t)\) of the "coordinate" \(m\) and "momentum" \(q\) can be calculated as
\[\dot{g}\equiv\frac{dg}{dt}=\frac{\partial g}{\partial t}+\{g,{\bf H}\}, \tag{A4}\]
where the Poisson bracket is defined as \(\{a,b\}=a_{m}b_{q}-a_{q}b_{m}\). Now the canonical Hamiltonian equations are easily found as
\[\dot{m} = {\bf H}_{q} \tag{A5}\] \[\dot{q} = -{\bf H}_{m}. \tag{A6}\]
Further, assuming that at some \(t=t_{0}\) an initial Hamiltonian density \(u(m,t_{0})\) is known, by choosing any admissible value \(m_{0}\) and setting \(q(t_{0})\) equal to \(u_{m}(m_{0},t_{0})\) one can find \(m(t)\) and \(q(t)\) from the solution of the initial value problem for Eqs. (A5) and (A6). Next, integrating the equation
\[\dot{u}=q{\bf H}_{q}-{\bf H} \tag{A7}\]
obtained from Eqs. (A1), (A2), and (A5), one arrives at a solution for \(u(m,t)\) in parametric form, where at each \(t\) the values of \(u\) at different \(m\) are found from the solution of the above initial-value problem for all possible \(m_{0}=m(t_{0})\). Such a solution is a particular case of the general method of characteristics [23].
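The resulting initial-value problem is straightforward to integrate. The following bare-bones Python sketch (Euler stepping and finite-difference partials, with arbitrary step sizes and no accuracy control) follows one characteristic started on the constraint Eq. (33), along which Eq. (A5) reduces to the MF Eq. (34):

```python
import numpy as np

beta, h = 1.25, 0.06

def H(m, q):                                  # Eq. (A3) with u0_m = -beta*m - h
    u0m = -beta * m - h
    return -np.cosh(q)**2 * (m + np.tanh(q)) * (np.tanh(u0m) - np.tanh(q))

def step(m, q, u, dt=1e-2, d=1e-6):
    Hq = (H(m, q + d) - H(m, q - d)) / (2 * d)   # finite-difference H_q
    Hm = (H(m + d, q) - H(m - d, q)) / (2 * d)   # finite-difference H_m
    return m + dt * Hq, q - dt * Hm, u + dt * (q * Hq - H(m, q))  # (A5)-(A7)

m, q, u = -0.25, -np.arctanh(-0.25), 0.0      # start on the constraint (33)
for _ in range(20000):                        # evolve to t = 200 (tau_0 = 1)
    m, q, u = step(m, q, u)
print(m, m + np.tanh(q))   # m near the stable MF root; the constraint stays ~0
```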
A rigorous derivation of the Hamilton-Jacobi formalism in the general case of many variables can be found in Ref. [24], but for our Eq. (A1) the following heuristic arguments should suffice. First we observe that the partial derivative \(u_{m}\) in Eq. (29) couples the values of the function \(u(m,t)\) at neighboring points \(m\pm\epsilon\), so by using, e.g., the method of lines one would have to solve a system of \(N\to\infty\) coupled ordinary evolution equations. In the method of characteristics one tries to reduce the problem to only a few such equations. To this end one may observe that by taking the total time derivative of Eq. (A2) in the conventional way one gets
\[\dot{q}=u_{mt}+u_{mm}\dot{m}=-{\bf H}_{m}-{\bf H}_{q}u_{mm}+u_{mm}\dot{m} \tag{A8}\]
where the second equality has been obtained by differentiating Eq. (A1) with respect to \(m\), considering \(q\) in \({\bf H}\) as just another notation for \(u_{m}\). Now if we _demand_ that Eq. (A5) be satisfied, then Eq. (A6) follows from Eq. (A8). Eq. (A7) is also easily obtained by differentiating \(u(m,t)\) along the characteristic, using Eqs. (A1) and (A2), and applying the chain rule. Next, assuming \(m=m(t)\) on the characteristic line and substituting it in Eqs. (A5), (A6), and (A7), one arrives at a system of three evolution equations for three unknown functions. It is to be noted that Eqs. (A5) and (A6) are sufficient to derive Eq. (A4), which shows the consistency of our assumptions.
## Appendix B Linear recurrence [26]
Let us consider a linear recurrence relation
\[x_{n+1}=a_{n}x_{n}+b_{n} \tag{B1}\]
initiated by \(x_{0}\); \(a_{n}\) and \(b_{n}\), \(n=0,\ldots,N\), are presumed to be known. Now by introducing
\[X_{n}=\frac{x_{n}}{\prod_{k=0}^{n-1}a_{k}} \tag{B2}\]
and dividing both sides of Eq. (B1) by \(\prod_{k=0}^{n}a_{k}\) one arrives at a linear difference equation
\[X_{n+1}-X_{n}=b_{n}/\prod_{k=0}^{n}a_{k} \tag{B3}\]
which can be solved as
\[X_{n}=X_{0}+\sum_{l=0}^{n-1}\frac{b_{l}}{\prod_{k=0}^{l}a_{k}}. \tag{B4}\]
In the main text it was assumed that \(x_{0}=0\), which in combination with Eq. (B2) gives \(X_{0}=0\) and
\[x_{n}=\sum_{l=0}^{n-1}b_{l}\prod_{k=l+1\leq n-1}^{n-1}a_{k}=\sum_{l=0}^{n-1}b_{l}\exp\left(\sum_{k=l+1\leq n-1}^{n-1}\ln a_{k}\right). \tag{B5}\]
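As a quick numerical check of Eq. (B5), one can compare it with direct iteration of Eq. (B1) for random coefficients (a self-contained illustration only; the coefficient ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_max = 12
a = rng.uniform(0.5, 1.5, n_max)
b = rng.uniform(-1.0, 1.0, n_max)

x = 0.0
for n in range(n_max):                        # direct iteration of Eq. (B1)
    x = a[n] * x + b[n]

# closed form Eq. (B5) with x_0 = 0
closed = sum(b[l] * np.prod(a[l + 1:n_max]) for l in range(n_max))
print(x, closed)                              # agree to machine precision
```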
|
2305.02011 | Approximating Long Cycle Above Dirac's Guarantee | Parameterization above (or below) a guarantee is a successful concept in
parameterized algorithms. The idea is that many computational problems admit
``natural'' guarantees bringing to algorithmic questions whether a better
solution (above the guarantee) could be obtained efficiently. The above
guarantee paradigm has led to several exciting discoveries in the areas of
parameterized algorithms and kernelization. We argue that this paradigm could
bring forth fresh perspectives on well-studied problems in approximation
algorithms. Our example is the longest cycle problem. One of the oldest results
in extremal combinatorics is the celebrated Dirac's theorem from 1952. Dirac's
theorem provides the following guarantee on the length of the longest cycle:
for every 2-connected n-vertex graph G with minimum degree \delta(G)\leq n/2,
the length of a longest cycle L is at least 2\delta(G). Thus, the ``essential''
part in finding the longest cycle is in approximating the ``offset'' k = L - 2
\delta(G). The main result of this paper is the above-guarantee approximation
theorem for k. Informally, the theorem says that approximating the offset k is
not harder than approximating the total length L of a cycle. In other words,
for any (reasonably well-behaved) function f, a polynomial time algorithm
constructing a cycle of length f(L) in an undirected graph with a cycle of
length L, yields a polynomial time algorithm constructing a cycle of length
2\delta(G)+\Omega(f(k)). | Fedor F. Fomin, Petr A. Golovach, Danil Sagunov, Kirill Simonov | 2023-05-03T10:04:22Z | http://arxiv.org/abs/2305.02011v1 | # Approximating Long Cycle Above Dirac's Guarantee
###### Abstract
Parameterization above (or below) a guarantee is a successful concept in parameterized algorithms. The idea is that many computational problems admit "natural" guarantees bringing to algorithmic questions whether a better solution (above the guarantee) could be obtained efficiently. For example, for every boolean CNF formula on \(m\) clauses, there is an assignment that satisfies at least \(m/2\) clauses. How difficult is it to decide whether there is an assignment satisfying more than \(m/2+k\) clauses? Or, if an \(n\)-vertex graph has a perfect matching, then its vertex cover is at least \(n/2\). Is there a vertex cover of size at least \(n/2+k\) for some \(k\geq 1\) and how difficult is it to find such a vertex cover?
The above guarantee paradigm has led to several exciting discoveries in the areas of parameterized algorithms and kernelization. We argue that this paradigm could bring forth fresh perspectives on well-studied problems in approximation algorithms. Our example is the longest cycle problem. One of the oldest results in extremal combinatorics is the celebrated Dirac's theorem from 1952. Dirac's theorem provides the following guarantee on the length of the longest cycle: for every \(2\)-connected \(n\)-vertex graph \(G\) with minimum degree \(\delta(G)\leq n/2\), the length of a longest cycle \(L\) is at least \(2\delta(G)\). Thus the "essential" part in finding the longest cycle is in approximating the "offset" \(k=L-2\delta(G)\). The main result of this paper is the above-guarantee approximation theorem for \(k\). Informally, the theorem says that approximating the offset \(k\) is not harder than approximating the total length \(L\) of a cycle. In other words, for any (reasonably well-behaved) function \(f\), a polynomial time algorithm constructing a cycle of length \(f(L)\) in an undirected graph with a cycle of length \(L\), yields a polynomial time algorithm constructing a cycle of length \(2\delta(G)+\Omega(f(k))\).
## 1 Introduction
One of the concepts that had a strong impact on the development of parameterized algorithms and kernelization is the idea of the above guarantee parameterization. Above guarantee parameterization grounds on the following observation: _the natural parameterization of a maximization/minimization problem by the solution size is not satisfactory if there is a lower bound for the solution size that is sufficiently large_[23]. To make this discussion concrete, consider the example of the classical NP-complete problem Max Cut. Observe that in any graph with \(m\) edges there is always a cut containing at least \(m/2\) edges. (Actually, slightly better bounds are known in the literature
[18, 10].) Thus Max Cut is trivially fixed-parameter tractable (FPT) parameterized by the size of the max-cut. Indeed, the following simple algorithm shows that the problem is FPT: If \(k\leq m/2\), then return yes; else \(m\leq 2k\) and any brute-force algorithm will do the job. However, the question about Max Cut becomes much more meaningful and interesting, when one seeks a cut above the "guaranteed" lower bound \(m/2\).
The above guarantee approach was introduced by Mahajan and Raman [45] and it was successfully applied in the study of several fundamental problems in parameterized complexity and kernelization. For illustrative examples, we refer to [1, 4, 14, 23, 25, 32, 33, 34, 36, 37, 44], see also the recent survey of Gutin and Mnich [35]. Quite surprisingly, the theory of the above (or below) guarantee _approximation_ remains unexplored. (Notable exceptions are the works of Mishra et al. [46] on approximating the minimum vertex cover beyond the size of a maximum matching and of Bollobas and Scott on approximating max-cut beyond the \(m/2+\sqrt{m/8}\) bound [10].)
In this paper, we bring the philosophy of the above guarantee parameterization into the realm of approximation algorithms. In particular,
The goal of this paper is to study the approximability of the classical problems of finding a longest cycle and a longest \((s,t)\)-path in a graph from the viewpoint of the above guarantee parameterization.
##### Our results.
Approximating the length of a longest cycle in a graph enjoys a lengthy and rich history [6, 8, 21, 20, 28, 29, 49]. There are several fundamental results in extremal combinatorics providing lower bounds on the length of a longest cycle in a graph. The oldest of these bounds is given by Dirac's Theorem from 1952 [17]. Dirac's Theorem states that a \(2\)-connected graph \(G\) with the minimum vertex degree \(\delta(G)\) contains a cycle of length \(L\geq\min\{2\delta(G),|V(G)|\}\). Since every longest cycle in a graph \(G\) with \(\delta(G)<\frac{1}{2}|V(G)|\) (otherwise, \(G\) is Hamiltonian and a longest cycle can be found in polynomial time) always has a "complementary" part of length \(2\delta(G)\), the essence of the problem is in computing the "offset" \(k=L-2\delta(G)\). Informally, the first main finding of our paper is that Dirac's theorem is well-compatible with approximation. We prove that approximating the offset \(k\) is essentially not more difficult than approximating the length \(L\).
More precisely, recall that \(f\) is subadditive if for all \(x\), \(y\) it holds that \(f(x+y)\leq f(x)+f(y)\). Our main result is the following theorem.
**Theorem 1**.: _Let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a non-decreasing subadditive function and suppose that we are given a polynomial-time algorithm finding a cycle of length at least \(f(L)\) in graphs with the longest cycle length \(L\). Then there exists a polynomial time algorithm that finds a cycle of length at least \(2\delta(G)+\Omega(f(L-2\delta(G)))\) in a \(2\)-connected graph \(G\) with \(\delta(G)\leq\frac{1}{2}|V(G)|\) and the longest cycle length \(L\)._
The \(2\)-connectivity condition is important. As was noted in [24], deciding whether a connected graph \(G\) contains a cycle of length at least \(2\delta(G)\) is NP-complete. Theorem 1 trivially extends to approximating the longest path problem above \(2\delta(G)\). For the longest path, the requirement on \(2\)-connectivity of a graph can be relaxed to connectivity. This can be done by a standard reduction of adding an apex vertex \(v\) to the connected graph \(G\), see e.g. [24]. The minimum vertex degree in the new graph \(G+v\), which is \(2\)-connected, is equal to \(\delta(G)+1\), and \(G\) has a path of length at least \(L\) if and only if \(G+v\) has a cycle of length at least \(L+2\). Thus approximation of the longest cycle (by making use of Theorem 1) in \(G+v\), is also the approximation of the longest path in \(G\).
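The apex-vertex reduction just described is easy to state in code. The sketch below uses the networkx library as an assumed dependency; the apex label and the toy graph are arbitrary illustrative choices, and this is not the algorithm of Theorem 1 itself.

```python
import networkx as nx   # assumed dependency; any graph library would do

def add_apex(G, apex="v_apex"):               # "v_apex" is a hypothetical label
    """Return (G + v, v) with an apex v adjacent to every vertex of G."""
    H = G.copy()
    H.add_edges_from((apex, u) for u in G.nodes)
    return H, apex

def cycle_to_path(cycle, apex):
    """Turn a cycle through the apex (given as a vertex list) into a path of G."""
    i = cycle.index(apex)
    return cycle[i + 1:] + cycle[:i]          # drop the apex, keep the cyclic order

G = nx.path_graph(5)                          # a toy connected graph
H, apex = add_apex(G)
print(nx.is_biconnected(H), H.degree(apex))   # G + v is 2-connected
```

A cycle of length \(L+2\) through the apex thus yields, after deleting the apex, a path of length \(L\) in \(G\), which is exactly the correspondence used above.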
##### Related work.
The first approximation algorithms for longest paths and cycles followed the development of exact parameterized algorithms. Monien [47] and Bodlaender [9] gave parameterized
algorithms computing a path of length \(L\) in times \(\mathcal{O}(L!2^{L}n)\) and \(\mathcal{O}(L!nm)\) respectively. These algorithms imply also approximation algorithms constructing in polynomial time a path of length \(\Omega(\log L/\log\log L)\), where \(L\) is the longest path length in graph \(G\). In their celebrated work on color coding, Alon, Yuster, and Zwick [2] obtained an algorithm that in time \(\mathcal{O}(5.44^{L}n)\) finds a path/cycle of length \(L\). The algorithm of Alon et al. implies constructing in polynomial time a path of length \(\Omega(\log L)\). A significant amount of the consecutive work targets to improve the base of the exponent \(c^{L}\) in the running times of the parameterized algorithms for longest paths and cycles [42, 50, 27, 5, 7]. The surveys [26, 43], and [15, Chapter 10] provide an overview of ideas and methods in this research direction. The exponential dependence on \(L\) in the running times of these algorithms is asymptotically optimal: an algorithm finding a path (or cycle) of length \(L\) in time \(2^{o(L)}n^{\mathcal{O}(1)}\) would contradict the Exponential Time Hypothesis (ETH) of Impagliazzo, Paturi, and Zane [38]. Thus none of the further improvements in the running times of parameterized algorithms for longest cycle or path would lead to a better than \(\Omega(\log L)\) approximation bound.
Bjorklund and Husfeldt [6] made the first step "beyond color-coding" in approximating the longest path. They gave a polynomial-time algorithm that finds a path of length \(\Omega((\log L/\log\log L)^{2})\) in a graph with the longest path length \(L\). Gabow in [29] enhanced and extended this result to approximating the longest cycle. His algorithm computes a cycle of length \(2^{\Omega(\sqrt{\log L/\log\log L})}\) in a graph with a cycle of length \(L\). Gabow and Nie [31] observed that a refinement of Gabow's algorithm leads to a polynomial-time algorithm constructing cycles of length \(2^{\Omega(\sqrt{\log L})}\). This is better than \((\log L)^{\mathcal{O}(1)}\) but worse than \(L^{\varepsilon}\). Pipelining the algorithm of Gabow and Nie with Theorem 1 yields a polynomial time algorithm constructing in a 2-connected graph \(G\) a cycle of length \(2\delta(G)+\Omega(c^{\sqrt{\log k}})\). For graphs of bounded vertex degrees, better approximation algorithms are known [13, 21].
The gap between the upper and lower bounds for the longest path approximation is still big. Karger, Motwani, and Ramkumar [40] proved that the longest path problem does not belong to APX unless P = NP. They also showed that for any \(\varepsilon>0\), it cannot be approximated within \(2^{\log^{1-\varepsilon}n}\) unless NP \(\subseteq\) DTIME\((2^{O(\log^{1/\varepsilon}n)})\). Bazgan, Santha, and Tuza [3] extended these lower bounds to cubic Hamiltonian graphs. For directed graphs the gap between the upper and lower bounds is narrower [8, 30].
Our approximation algorithms are inspired by the recent work of Fomin, Golovach, Sagunov, and Simonov [24] on the parameterized complexity of the longest cycle beyond Dirac's bound. Fomin et al. were interested in computing the "offset" beyond \(2\delta(G)\) exactly. Their parameterized algorithm decides whether \(G\) contains a cycle of length at least \(2\delta(G)+k\) in time \(2^{\mathcal{O}(k)}n^{\mathcal{O}(1)}\), and thus in polynomial time computes a cycle of length \(2\delta(G)+\Omega(\log k)\). However, the tools developed in [24] are not sufficient to go beyond the \(\Omega(\log k)\) bound on the offset. The main combinatorial tools from [24] are the Erdos-Gallai decomposition and the Dirac decomposition of graphs. For the needs of approximation, we have to develop novel ("nested") variants of these decompositions or prove additional structural properties for them.
Dirac's theorem is one of the central pillars of Extremal Graph Theory. The excellent surveys [12] and [11] provide an introduction to this fundamental subarea of graph theory. Besides [24], the algorithmic applications of Dirac's theorem from the perspective of parameterized complexity were studied by Jansen, Kozma, and Nederlof in [39].
##### Paper structure.
Section 2 provides an overview of the techniques employed to achieve our results. Then, Section 3 introduces notations and lists auxiliary results. Section 4 guides through the proof of the approximation result for \((s,t)\)-paths, which is the key ingredient required for Theorem 1. Section 5 is dedicated to the proof of Theorem 1 itself. Finally, we conclude with a
summary and some open questions in Section 6.
## 2 Overview of the proofs
In this section, we provide a high-level strategy of the proof of Theorem 1, as well as key technical ideas needed along the way. The central concept of our work is an approximation algorithm for the Longest Cycle problem. Formally, such an algorithm should run in polynomial time for a given graph \(G\) and should output a cycle of length at least \(f(L)\), where \(L\) is the length of the longest cycle in \(G\). The function \(f\) here is the _approximation guarantee_ of the algorithm. In our work, we allow it to be an arbitrary non-decreasing function \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) that is also subadditive (i.e., \(f(x)+f(y)\geq f(x+y)\) for arbitrary \(x,y\)). We also note that an \(f(L)\)-approximation algorithm for Longest Cycle immediately gives a \(\frac{1}{2}f(2L)\)-approximation algorithm for Long \((s,t)\)-Path in \(2\)-connected graphs (by Menger's theorem, see Lemma 2 for details).
Our two main contributions assume that we are given such an \(f\)-approximation algorithm as a black box. In fact, we only need to run this algorithm on an arbitrary graph as an oracle and receive its output. We do not need to modify or know the algorithm routine.
While the basis of our algorithm comes from the structural results of Fomin et al. [24], in the first part of this section we do not provide the details of how they are used.
The first of our contributions is a polynomial-time algorithm that finds a long \((s,t)\)-path in a given \(2\)-connected graph \(G\) with two vertices \(s,t\in V(G)\). The longest \((s,t)\)-path in \(G\) always has length \(\delta(G-\{s,t\})+k\) for \(k\geq 0\) by Erdos-Gallai theorem, and the goal of the algorithm is to find an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+\Omega(f(k))\) in \(G\). To find such a path, this algorithm first recursively decomposes the graph \(G\) in a specific technical way. As a result, it outputs several triples \((H_{i},s_{i},t_{i})\) in polynomial time, where \(H_{i}\) is a \(2\)-connected minor of \(G\) and \(s_{i},t_{i}\in V(H_{i})\). For each triple, the algorithm runs the black box to find a \(f\)-approximation of the longest \((s_{i},t_{i})\)-path in \(H_{i}\). In the second round, our algorithm cleverly uses constructed approximations to construct a path of length at least \(\delta(G-\{s,t\})+\Omega(f(k))\) in the initial graph \(G\). This is summarized as the following theorem.
**Theorem 2**.: _Let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a non-decreasing subadditive function and suppose that we are given a polynomial-time algorithm computing an \((s,t)\)-path of length at least \(f(L)\) in graphs with given two vertices \(s\) and \(t\) having the longest \((s,t)\)-path of length \(L\). Then there is a polynomial-time algorithm that outputs an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+\Omega(f(L-\delta(G-\{s,t\})))\) in a \(2\)-connected graph \(G\) with two given vertices \(s\) and \(t\) having the longest \((s,t)\)-path length \(L\)._
The second (and main) contribution of this paper is the polynomial-time algorithm that approximates the longest cycle in a given \(2\)-connected graph \(G\) such that \(2\delta(G)\leq|V(G)|\). It employs the black-box \(f\)-approximation algorithm for Longest Cycle to find a cycle of length \(2\delta(G)+\Omega(f(k))\), where \(2\delta(G)+k\) is the length of the longest cycle in \(G\). By Dirac's theorem applied to \(G\), \(k\) is always at least \(0\).
To achieve that, our algorithm first tries to decompose the graph \(G\). However, in contrast to the first contributed algorithm, here the decomposition process is much simpler. In fact, the decomposition routine is never applied recursively, as the decomposition itself need not be used: its existence is sufficient to apply another, simple, procedure.
Similarly to the first contribution, the algorithm then outputs a series of triples \((H_{i},s_{i},t_{i})\), where \(H_{i}\) is a \(2\)-connected minor of \(G\) and \(s_{i},t_{i}\in V(H_{i})\). The difference here is that for each triple the algorithm runs not the initial black-box \(f\)-approximation algorithm but the algorithm of our first contribution, i.e., the algorithm of Theorem 2. Thus, the output of each run is an \((s_{i},t_{i})\)-path
of length \(\delta(H_{i}-\{s_{i},t_{i}\})+\Omega(f(k_{i}))\) in \(H_{i}\), where \(\delta(H_{i}-\{s_{i},t_{i}\})+k_{i}\) is the length of the longest \((s_{i},t_{i})\)-path in \(H_{i}\).
Finally, from each approximation, our algorithm constructs a cycle of length at least \(2\delta(G)+\Omega(f(k_{i}))\). It is guaranteed that \(k_{i}=\Omega(k)\) for at least one \(i\), so the longest of all constructed cycles has length at least \(2\delta(G)+\Omega(f(k))\). This yields the following theorem.
**Theorem 1**.: _Let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a non-decreasing subadditive function and suppose that we are given a polynomial-time algorithm finding a cycle of length at least \(f(L)\) in graphs with the longest cycle length \(L\). Then there exists a polynomial time algorithm that finds a cycle of length at least \(2\delta(G)+\Omega(f(L-2\delta(G)))\) in a \(2\)-connected graph \(G\) with \(\delta(G)\leq\frac{1}{2}|V(G)|\) and the longest cycle length \(L\)._
One may note that Theorem 2 actually follows from Theorem 1 (again, by Menger's theorem, see Lemma 2). However, as described above, the algorithm in Theorem 1 employs the algorithm of Theorem 2, so we have to prove the latter before the former.
In the remaining part of this section, we provide more detailed proof overviews of both theorems; in particular, we explain how the algorithms employ the structural results of [24]. In both proofs, we complement these results by showing useful properties of specific graph decompositions. For clarity, we start with Theorem 1, as its proof is less involved.
### Approximating long cycles
The basis of our algorithm is the structural result due to Fomin et al. [24]. In that work, the authors show the following: there is an algorithm that, given a cycle in a \(2\)-connected graph \(G\), either finds a longer cycle or concludes that \(G\) has a "particular structure". This algorithm can be applied to any cycle of length less than \((2+\sigma_{1})\cdot\delta(G)\) (to be specific, we use \(\sigma_{1}=\frac{1}{24}\), see Lemma 14 for details).
To see how this structural result is important, recall that we aim to find a cycle of length at least \(2\delta(G)+\Omega(f(k))\) in a \(2\)-connected graph \(G\) with the longest cycle length \(2\delta(G)+k\). Our algorithm simply starts with some cycle in \(G\) and applies the result of [24] to enlarge it exhaustively. It stops when either a cycle is of length at least \((2+\sigma_{1})\cdot\delta(G)\), or the particular structure of \(G\) is found.
The crucial observation here is that if a long cycle is found, we can trivially find a good approximation. Indeed, if \(\sigma_{1}\cdot\delta(G)\) is, e.g., less than \(\sigma_{1}/10\cdot f(k)\), then \(10\delta(G)<f(k)\). If we just apply the black-box \(f\)-approximation algorithm for the Longest Cycle problem, we get a cycle of length at least \(f(2\delta(G)+k)\geq f(k)\geq 2\delta(G)+4/5\cdot f(k)\). Hence, by taking the longer of the cycle of length \((2+\sigma_{1})\cdot\delta(G)\) and the cycle of length at least \(f(2\delta(G)+k)\), we always achieve a good approximation guarantee in terms of \(k\).
The most important part of the algorithm is employed when the "particular structure" outcome is received from the structural lemma applied to \(G\) and the current cycle \(C\). Here we need to be specific about this structure, and the outcome can be of two types. The first outcome is a bounded vertex cover of the graph. This vertex cover is of size at most \(\delta(G)+2(k^{\prime}+1)\), where \(k^{\prime}\geq 0\) is such that \(|V(C)|=2\delta(G)+k^{\prime}\). Such a vertex cover guarantees that \(C\) is not much shorter than the longest cycle in \(G\): the length of the longest cycle is bounded by twice the vertex cover size, so \(k\leq 4(k^{\prime}+1)\). Hence, \(k^{\prime}=\Omega(k)\) and \(C\) is a sufficient approximation.
The second, and last, structural outcome is the Dirac decomposition, defined in [24]. Basically, this decomposition is obtained by finding a small separator of \(G\) (that consists of just two subpaths \(P_{1},P_{2}\) of the cycle \(C\)), and the parts of this decomposition are the connected components of \(G\) after the separation. The main result on Dirac decomposition proved in [24] is that there always exists a longest cycle that contains an edge in at least one of these parts.
While the definition and properties of Dirac decomposition may seem quite involved, our algorithm does not even require the Dirac decomposition of \(G\) to be found. In fact, we prove a new, useful property of Dirac decompositions: if a Dirac decomposition for \(G\) exists, then there also exists a \(2\)-vertex separator \(\{u,v\}\) of \(G\) that divides the longest cycle in \(G\) into almost even parts. Our contribution is formulated in the following lemma.
**Lemma 1**.: _Let \(G\) be a \(2\)-connected graph and \(P_{1},P_{2}\) induce a Dirac decomposition for a cycle \(C\) of length at most \(2\delta(G)+\kappa\) in \(G\) such that \(2\kappa\leq\delta(G)\). If there exists a cycle of length at least \(2\delta(G)+k\) in \(G\), then there exist \(u,v\in V(G)\) such that_
* \(G-\{u,v\}\) _is not connected, and_
* _there is a_ \((u,v)\)_-path of length at least_ \(\delta(G)+(k-2)/4\) _in_ \(G\)_._
Our algorithm employs Lemma 1 in the following way. Since there are \(\mathcal{O}(|V(G)|^{2})\) vertex pairs in \(G\), our algorithm iterates over all of them. If a pair \(u,v\) separates the graph into at least two parts, then our algorithm finds a long \((u,v)\)-path that contains vertices in only one of the parts. Formally, it iterates over all connected components in \(G-\{u,v\}\). For a fixed connected component \(H\), our algorithm applies the algorithm of Theorem 2 to the graph \(G[V(H)\cup\{u,v\}]+uv\) (the edge \(uv\) is added to ensure \(2\)-connectivity) to find an approximation of the longest \((u,v)\)-path. By Lemma 1, if \(u,v\) is the required separating pair, then for at least one \(H\) the length of the found \((u,v)\)-path is \(\delta(G)+\Omega(k)\). If such a path is found, a sufficiently long \((u,v)\)-path outside \(H\) in \(G\) is guaranteed by the Erdos-Gallai theorem. Together, these two paths form the required cycle of length \(2\delta(G)+\Omega(k)\).
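The enumeration is straightforward to express in code; below is a hedged Python sketch of this loop (our illustrative rendition, not the authors' verbatim algorithm). Here approx_st_path stands for the algorithm of Theorem 2 and eg_st_path for an algorithm producing an Erdos-Gallai path as in Corollary 2; both are assumptions supplied by the caller, and paths are vertex lists.

```
import itertools
import networkx as nx

def longest_cycle_via_separator(G, approx_st_path, eg_st_path):
    """Sketch of the main loop behind Theorem 1 (illustrative only).
    approx_st_path(H, u, v): the algorithm of Theorem 2 (assumed).
    eg_st_path(H, u, v): a (u,v)-path of length >= delta(H - {u, v}),
    cf. Corollary 2 (assumed)."""
    best = 0
    for u, v in itertools.combinations(G.nodes, 2):
        rest = G.copy()
        rest.remove_nodes_from([u, v])
        components = list(nx.connected_components(rest))
        if len(components) < 2:
            continue  # {u, v} does not separate G
        for comp in components:
            # Long (u, v)-path inside the part induced by comp and {u, v};
            # the edge uv is added to ensure 2-connectivity (handling of
            # this artificial edge is glossed over in the sketch).
            part = G.subgraph(comp | {u, v}).copy()
            part.add_edge(u, v)
            inside = approx_st_path(part, u, v)
            # A (u, v)-path avoiding comp closes the cycle.
            outside = eg_st_path(G.subgraph(set(G.nodes) - comp).copy(), u, v)
            best = max(best, (len(inside) - 1) + (len(outside) - 1))
    return best
```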
With that, the proof overview of Theorem 1 is finished. The formal proof is presented in Section 5.
### Approximating long \((s,t)\)-paths
While the algorithm of Theorem 1 does not use the underlying Dirac decomposition explicitly, in the case of finding \((s,t)\)-paths (and proving Theorem 2), we require a deeper use of the obtained graph decomposition. While the Dirac decomposition of Fomin et al. was originally used in [24] to find long cycles above \(2\delta(G)\), for finding \((s,t)\)-paths above \(\delta(G-\{s,t\})\) the authors introduced the Erdos-Gallai decomposition.
In the formal proof of Theorem 2 in Section 4, we give a complete definition of Erdos-Gallai decomposition. In this overview, we aim to avoid the most technical details in order to provide an intuition of the structure of the decomposition and how our algorithm employs it.
Similarly to Dirac decomposition, the Erdos-Gallai decomposition is obtained through the routine that, given a graph \(G\) and an \((s,t)\)-path inside it, either enlarges the path or reports that two subpaths \(P_{1}\) (that starts with \(s\)) and \(P_{2}\) (that starts with \(t\)) of the given path induce (when deleted) an Erdos-Gallai decomposition in \(G\). This routine can be applied to an \((s,t)\)-path until it reaches \((1+\sigma_{2})\cdot\delta(G-\{s,t\})\) in length (specifically, \(\sigma_{2}=\frac{1}{4}\), see Lemma 7; in this overview, we also skip the case of a Hamiltonian \((s,t)\)-path for brevity). Note that, in contrast to the cycle enlargement routine of the Dirac decomposition, here the bounded vertex cover outcome is not possible. Similarly to the algorithm of the previous subsection, the only non-trivial part of the algorithm is dealing with the Erdos-Gallai decomposition outcome. In the other case, a single run of the black-box \(f\)-approximation algorithm for Longest Cycle provides the desired approximation immediately.
The main property of this decomposition due to [24] is as follows: if an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+k\) exists in \(G\), then there necessarily exists a path of length at least \(\delta(G-\{s,t\})+k\) that goes through one of the connected components in the decomposition. Moreover, for each of the connected components \(G_{i}\) there is exactly one pair of distinct entry points \(s_{i},t_{i}\): if an \((s,t)\)-path in \(G\) goes through \(G_{i}\), it must enter \(G_{i}\) through \(s_{i}\) (or \(t_{i}\)) exactly once and leave \(G_{i}\) through \(t_{i}\) (or \(s_{i}\)) exactly once as well.
In addition, the minimum degree within each \(G_{i}\) is not much smaller than in \(G\): \(\delta(G_{i}-\{s_{i},t_{i}\})\geq\delta(G-\{s,t\})-2\) holds true. This constant difference is always compensated by the paths from \(s\) and \(t\) to \(s_{i}\) and \(t_{i}\): if we succeed in finding an \((s_{i},t_{i})\)-path of length at least \(\delta(G_{i}-\{s_{i},t_{i}\})+k_{i}\) inside \(G_{i}\), we can always complete it with _any_ pair of disjoint paths from \(\{s,t\}\) to \(\{s_{i},t_{i}\}\) into an \((s,t)\)-path of length \(\delta(G-\{s,t\})+k_{i}\) in \(G\). Should this pair be longer than the trivial lower bound of \(2\), it grants additional length above \(\delta(G-\{s,t\})+k_{i}\).
The previous paragraph suggests the following approach for our approximation algorithm: for each \(G_{i},s_{i},t_{i}\), our algorithm applies itself recursively to find an \((s_{i},t_{i})\)-path of length \(\delta(G_{i}-\{s_{i},t_{i}\})+\Omega(f(k_{i}))\), where \(k_{i}\) comes from the length of the longest \((s_{i},t_{i})\)-path in \(G_{i}\). Since the other part of the additional length comes from two disjoint paths between \(\{s,t\}\) and \(\{s_{i},t_{i}\}\), we would like to employ the black-box \(f\)-approximation algorithm to find an \(f\)-approximation of this pair of paths.
Unfortunately, finding such a pair of paths reduces only to finding a long cycle through a given pair of vertices (it is enough to glue \(s\) with \(t\) and \(s_{i}\) with \(t_{i}\) in \(G\), and ask for a long cycle through the resulting pair of vertices). In their work, Fomin et al. have shown that a cycle of length at least \(k\) through two given vertices can be found in \(2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\) time. However, this is of little use to us, as \(k\) is only bounded by \(\mathcal{O}(\delta(G))\), but we require polynomial time. Simultaneously, we do not know of any way to force the black-box algorithm to find an \(f\)-approximation for a cycle through a given pair of vertices.
These arguments steer us away from the idea of a recursive approximation algorithm. Instead, our approximation algorithm applies the black-box algorithm to a single "complete-picture" graph that is obtained according to the structure given by the Erdos-Gallai decomposition. However, recursion remains in the sense that we apply the path-enlarging routine to each component of the decomposition. This brings us to the idea of a recursive decomposition, which we define as the _nested Erdos-Gallai decomposition_ in Section 4. This decomposition can be seen as a tree, where the root is the initial triple \((G,s,t)\), the children of a node represent the triples \((G_{i},s_{i},t_{i})\) given by the Erdos-Gallai decomposition, and the leaves of this decomposition are the graphs \(G_{i}\) where sufficient approximations of long \((s_{i},t_{i})\)-paths are found (by taking the longer of the \((1+\sigma_{2})\cdot\delta(G-\{s_{i},t_{i}\})\)-long path from the enlarging routine and the approximation obtained from the black-box algorithm). A schematic picture of this novel decomposition is presented in Figure 1.
In Section 4, we show that a long path found inside a leaf \((G_{i},s_{i},t_{i})\) of the decomposition can be contracted into a single edge \(s_{i}t_{i}\). Moreover, if \((G_{j},s_{j},t_{j})\) is a child of \((G_{i},s_{i},t_{i})\) in the decomposition, and the longest pair of paths from \(\{s_{i},t_{i}\}\) to \(\{s_{j},t_{j}\}\) is just a pair of edges (so it does not grant any additional length as described before), we contract these edges. The crucial point in our proof is the claim that, after such contractions, if an \((s,t)\)-path of length \(\delta(G-\{s,t\})+k\) exists in the initial graph, then an \((s,t)\)-path of length at least \(\Omega(k)\) exists in the graph obtained with the described contractions. After doing all the contractions, the algorithm applies the black-box algorithm to the transformed graph and finds an \((s,t)\)-path of length \(f(\Omega(k))\) (which is \(\Omega(f(k))\) by subadditivity) inside it.
The final part of our algorithm (and the proof of Theorem 2) is the routine that _transforms_ this \((s,t)\)-path inside the _contracted_ graph \(G\) into a path of length \(\delta(G-\{s,t\})+\Omega(f(k))\) in the _initial_ graph \(G\). In this part, we prove that it is always possible to transform an \((s,t)\)-path of length \(r\) in the contracted graph into a path of length \(\Omega(r)\) that goes through at least one edge corresponding to a leaf of the nested Erdos-Gallai decomposition (hence, to a good approximation of \((s_{i},t_{i})\)-path inside \(G_{i}\)). Finally, we observe that reversing the contractions in \(G\) transforms this path into the required approximation.
This finishes the overview of the proof of Theorem 2. Section 4 contains it fully, with all
technical details and formal proofs.
## 3 Preliminaries
In this section, we define the notation used throughout the paper and provide some auxiliary results. We use \([n]\) to denote the set of positive integers \(\{1,\ldots,n\}\). Recall that a function \(f\colon D\to\mathbb{R}\) is _subadditive_ if \(f(x+y)\leq f(x)+f(y)\) for all \(x,y\in D\subseteq\mathbb{R}\). We denote the set of all nonnegative real numbers by \(\mathbb{R}_{+}\).
Recall that our main theorems are stated for arbitrary non-decreasing subadditive functions \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that an algorithm achieving the respective approximation exists. Throughout the proofs, we will additionally assume that \(f(x)\leq x\) for every \(x\in\mathbb{R}_{+}\). For any integer \(x\geq 3\), this is already implied by the statement, since a consistent approximation algorithm cannot output an \((s,t)\)-path (respectively, cycle) of length greater than \(x\) in a graph where the longest \((s,t)\)-path (respectively, cycle) has length \(x\). However, for a general function \(f(\cdot)\) this does not necessarily hold on the whole \(\mathbb{R}_{+}\). If this is the case, for clarity of the proofs we redefine \(f(x):=\min\{x,f(x)\}\) for every \(x\in\mathbb{R}_{+}\). Clearly, \(f\) remains subadditive and non-decreasing, while also imposing exactly the same guarantee on the approximation algorithm.
Graphs. We consider only finite simple undirected graphs and use the standard notation (see, e.g., the book of Diestel [16]). We use \(V(G)\) and \(E(G)\) to denote the sets of vertices and edges, respectively, of a graph \(G\). Throughout the paper, we use \(n\) and \(m\) to denote the number of vertices and the number of edges of a considered graph if it does not create confusion. For a set \(X\subseteq V(G)\), \(G[X]\) is used to denote the subgraph of \(G\) _induced_ by \(X\) and we write \(G-X\) to denote the subgraph of \(G\) induced by \(V(G)\setminus X\). For a single-vertex set \(\{v\}\), we write \(G-v\) instead of \(G-\{v\}\). For a vertex \(v\), \(N_{G}(v)\) denotes the _(open) neighborhood_ of \(v\), that is, the set of the neighbors of \(v\) in \(G\). For
Figure 1: A schematic example of a nested Erdős-Gallai decomposition (left) and the corresponding recursion tree (right). Red straight paths inside \(G_{i}\) denote the pair of paths inducing an Erdős-Gallai decomposition in \(G_{i}\). Bold \((s_{i},t_{i})\)-paths are sufficient approximations of the longest \((s_{i},t_{i})\)-paths in \(G_{i}\). Dashed contours correspond to \(G_{i}\) with constant \(\delta(G_{i}-\{s_{i},t_{i}\})\), which is one of a few technical cases in the proof.
a set \(X\subseteq V(G)\), \(N_{G}(X)=\big{(}\bigcup_{v\in X}N_{G}(v)\big{)}\setminus X\). The _degree_ of a vertex \(v\) is \(\deg_{G}(v)=|N_{G}(v)|\). We denote by \(\delta(G)=\min_{v\in V(G)}\deg_{G}(v)\) the _minimum degree_ of \(G\). We may omit the subscript in the above notation if the considered graph is clear from the context. Recall that the _edge contraction_ operation for \(uv\in E(G)\) replaces \(u\) and \(v\) by a single vertex \(w_{uv}\) that is adjacent to every vertex of \(N_{G}(\{u,v\})\). A set of vertices \(X\subseteq V(G)\) is a _vertex cover_ if every edge of \(G\) has at least one endpoint in \(X\).
A _path_ \(P\) in a graph \(G\) is a subgraph of \(G\) whose set of vertices can be written as \(\{v_{0},\ldots,v_{k}\}\) where \(E(P)=\{v_{i-1}v_{i}\mid i\in[k]\}\). We may write a path \(P\) as the sequence of its vertices \(v_{0},\ldots,v_{k}\). The vertices \(v_{0}\) and \(v_{k}\) are called _endpoints_ of \(P\) and the other vertices are _internal_. For a path \(P\) with endpoints \(s\) and \(t\), we say that \(P\) is an \((s,t)\)-path. Two paths \(P_{1}\) and \(P_{2}\) are _(vertex-)disjoint_ if they have no common vertex and _internally disjoint_ if no internal vertex of either of the paths is a vertex of the other path. A _cycle_ \(C\) in \(G\) is a subgraph of \(G\) with \(V(C)=\{v_{1},\ldots,v_{k}\}\) and \(E(C)=\{v_{i-1}v_{i}\mid i\in[k]\}\), where \(k\geq 3\) and it is assumed that \(v_{0}=v_{k}\). The _length_ of a path (a cycle, respectively) is the number of its edges. For two internally disjoint paths \(P_{1}=v_{0},\ldots,v_{k}\) and \(P_{2}=u_{0},\ldots,u_{s}\) sharing exactly one endpoint \(v_{k}=u_{0}\), we write \(P_{1}P_{2}\) to denote their _concatenation_, that is, the path \(v_{0},\ldots,v_{k},u_{1},\ldots,u_{s}\). If \(P_{1}\) and \(P_{2}\) share both endpoints and at least one of them has internal vertices, we write \(P_{1}P_{2}\) to denote the cycle composed of the two paths. A path \(P\) (a cycle \(C\), respectively) is _Hamiltonian_ if \(V(P)=V(G)\) (\(V(C)=V(G)\), respectively).
Recall that \(G\) is _connected_ if for every two vertices \(s\) and \(t\), \(G\) contains an \((s,t)\)-path. A _(connected) component_ of \(G\) is an inclusion maximal connected induced subgraph. A connected graph \(G\) with at least three vertices is \(2\)_-connected_ if for every \(v\in V(G)\), \(G-v\) is connected. A vertex \(v\) of a connected graph \(G\) with at least two vertices is a _cut-vertex_ if \(G-v\) is disconnected. A _block_ of a connected graph \(G\) is an inclusion maximal induced subgraph without cut-vertices. Note that if \(G\) has at least two vertices, then each block is either isomorphic to \(K_{2}\) or a \(2\)-connected graph. For a block \(B\) of \(G\), a vertex \(v\in V(B)\) that is not a cut-vertex of \(G\) is called _inner_. Blocks in a connected graph form a tree structure (viewing each block as a vertex of the tree, where two blocks are adjacent if they share a cut-vertex). The blocks corresponding to the leaves of the block-tree are called _leaf-blocks_. For \(s,t\in V(G)\), \(S\subseteq V(G)\setminus\{s,t\}\) is an \((s,t)\)_-separator_ if \(G-S\) has no \((s,t)\)-path; we also say that \(S\)_separates_ \(s\) from \(t\). We also say that \(S\) separates two sets of vertices \(A\) and \(B\) if \(S\) separates each vertex of \(A\) from every vertex of \(B\).
The following useful observation follows immediately from Menger's theorem (see, e.g., [16, 41]).
**Lemma 2**.: _For any \(2\)-connected graph \(G\) with a cycle of length \(L\), there is a path of length at least \(L/2\) between any pair of vertices in \(G\). Moreover, given a cycle \(C\) and two distinct vertices \(s\) and \(t\), an \((s,t)\)-path of length at least \(|V(C)|/2\) can be constructed in polynomial time._
We observe that given an approximation algorithm for a longest cycle, we can use it as a black box to approximate a longest path between any two vertices.
**Lemma 3**.: _Let \(\mathcal{A}\) be a polynomial-time algorithm that finds a cycle of length at least \(f(L)\) in a graph with the longest cycle length \(L\). Then there is a polynomial-time algorithm using \(\mathcal{A}\) as a subroutine that, given a graph \(G\) and two distinct vertices \(s\) and \(t\), finds an \((s,t)\)-path of length at least \(\frac{1}{2}f(2L)\), where \(L\) is the length of a longest \((s,t)\)-path in \(G\)._
Proof.: Let \(G\) be a graph and let \(s,t\in V(G)\) be distinct vertices. We assume without loss of generality that \(G\) is connected. Let \(P\) be a longest \((s,t)\)-path in \(G\) and let \(L\) be its length. If \(st\) is a bridge of \(G\), then \(G\) has a unique \((s,t)\)-path and its length is one. In this case, our algorithm returns this path, which trivially can be found in polynomial time. Assume that this is not the case. Then \(st\notin E(P)\) and \(L\geq 2\).
We construct two copies \(G_{1}\) and \(G_{2}\) of \(G\). Denote by \(s_{1}\) and \(s_{2}\) the copies of \(s\) in \(G_{1}\) and \(G_{2}\), respectively. Similarly, let \(t_{1}\) and \(t_{2}\) be the copies of \(t\), and denote by \(P_{1}\) and \(P_{2}\) the copies of \(P\) in \(G_{1}\) and \(G_{2}\), respectively. Next, we construct the graph \(G^{\prime}\) by unifying \(s_{1}\) and \(s_{2}\), and \(t_{1}\) and \(t_{2}\) (if \(st\in E(G)\), the edges \(s_{1}t_{1}\) and \(s_{2}t_{2}\) are unified as well). Denote by \(s^{\prime}\) the vertex of \(G^{\prime}\) obtained from \(s_{1}\) and \(s_{2}\), and let \(t^{\prime}\) be the vertex obtained from \(t_{1}\) and \(t_{2}\). Note that \(P_{1}\) and \(P_{2}\) are internally disjoint \((s^{\prime},t^{\prime})\)-paths in \(G^{\prime}\). In particular, this implies that \(s^{\prime}\) and \(t^{\prime}\) are vertices of the same block \(B\) of \(G^{\prime}\), and \(P_{1}\) and \(P_{2}\) are paths in \(B\). Therefore, \(B\) contains the cycle \(C=P_{1}P_{2}\) of length \(2L\). We obtain that the longest cycle length in \(B\) is at least \(2L\). We call \(\mathcal{A}\) on \(B\) and this algorithm outputs a cycle \(C^{\prime}\) of length at least \(f(2L)\). Note that \(B\) is distinct from \(K_{2}\), i.e., is \(2\)-connected. By Lemma 2, \(B\) has an \((s^{\prime},t^{\prime})\)-path \(P^{\prime}\) of length at least \(\frac{1}{2}|V(C^{\prime})|\geq\frac{1}{2}f(2L)\) that can be constructed in polynomial time. Notice that \(\{s^{\prime},t^{\prime}\}\) separates \(V(G_{1})\setminus\{s_{1},t_{1}\}\) from \(V(G_{2})\setminus\{s_{2},t_{2}\}\). Hence, \(P^{\prime}\) is either an \((s_{1},t_{1})\)-path in \(G_{1}\) or an \((s_{2},t_{2})\)-path in \(G_{2}\). Assume that \(P^{\prime}\) is a path in \(G_{1}\) (the other case is symmetric). Since \(G_{1}\) is a copy of \(G\), the copy of \(P^{\prime}\) in \(G\) is an \((s,t)\)-path of length at least \(\frac{1}{2}f(2L)\) as required by the lemma.
Since \(G^{\prime}\) can be constructed in polynomial time and the unique block \(B\) of \(G^{\prime}\) containing \(s^{\prime}\) and \(t^{\prime}\) can be found in polynomial (linear) time (see, e.g., [41]), the overall running time is polynomial.
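The doubling construction from this proof takes only a few lines to implement; below is a minimal sketch with networkx (the function name doubled_graph is ours).

```
import networkx as nx

def doubled_graph(G: nx.Graph, s, t) -> nx.Graph:
    """Build G' from the proof of Lemma 3: two copies of G glued at s
    and at t, so that an (s,t)-path of length L in G yields a cycle of
    length 2L in G'."""
    H = nx.Graph()
    for copy_id in (1, 2):
        for u, v in G.edges:
            # s and t are shared between the copies; any other vertex
            # x becomes the pair (x, copy_id).
            a = u if u in (s, t) else (u, copy_id)
            b = v if v in (s, t) else (v, copy_id)
            H.add_edge(a, b)  # a potential edge st is unified automatically
    return H

# The unique block containing both s and t can then be located with
# next(c for c in nx.biconnected_components(doubled_graph(G, s, t)) if {s, t} <= c).
```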
We will use as a subroutine an algorithm finding two disjoint paths between two pairs of vertices of total length at least \(k\), where \(k\) is the given parameter. For us, constant values of \(k\) suffice, though in fact there exists an FPT algorithm for this problem parameterized by the total length. It follows as an easy corollary from the following result of [24] about Long (\(s\), \(t\))-Cycle, the problem of finding a cycle of length at least \(k\) through the given two vertices \(s\) and \(t\).
**Theorem 3** (Theorem 4 in [24]).: _There exists an FPT algorithm for Long (\(s\), \(t\))-Cycle parameterized by \(k\)._
For completeness, we show the corollary next.
**Corollary 1**.: _There is an FPT algorithm that, given a graph \(G\) with two pairs of vertices \(\{s,t\}\) and \(\{s^{\prime},t^{\prime}\}\), and a parameter \(k\), finds two disjoint paths between \(\{s,t\}\) and \(\{s^{\prime},t^{\prime}\}\) in \(G\) of total length at least \(k\), or correctly determines that such paths do not exist._
Proof.: Construct a new graph \(H\) that consists of the graph \(G\) together with two additional vertices \(u\) and \(v\). The vertex \(u\) has exactly two neighbors in \(H\), namely \(s\) and \(t\), and the neighbors of \(v\) are \(s^{\prime}\) and \(t^{\prime}\). Now run the algorithm for Long (\(s\), \(t\))-Cycle with the parameter \(k+4\) to find a cycle in \(H\) going through the vertices \(u\) and \(v\). If such a cycle is found, then removing the vertices \(u\) and \(v\) from it yields a pair of disjoint paths between \(\{s,t\}\) and \(\{s^{\prime},t^{\prime}\}\) in \(G\) of total length at least \(k\). In the other direction, if there is a pair of desired disjoint paths in \(G\), then together with the vertices \(u\) and \(v\) they constitute a cycle of length at least \(k+4\) in \(H\).
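The gadget is equally short to build; a sketch follows (with \(s^{\prime},t^{\prime}\) written as s2, t2, and fresh vertex labels assumed not to clash with \(V(G)\)).

```
import networkx as nx

def disjoint_paths_gadget(G: nx.Graph, s, t, s2, t2):
    """The reduction of Corollary 1: attach fresh vertices u and v to
    {s, t} and {s2, t2} respectively; a cycle through u and v of length
    >= k + 4 in H corresponds to two disjoint paths between the two
    pairs of total length >= k in G, and vice versa."""
    H = G.copy()
    u, v = "u_fresh", "v_fresh"  # assumed not to occur among G's nodes
    H.add_edges_from([(u, s), (u, t), (v, s2), (v, t2)])
    return H, u, v
```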
Finally, it is convenient to use the following corollary, which generalizes the theorem of Erdos and Gallai [19, Theorem 1.16].
**Corollary 2** (Corollary 3 in [24]).: _Let \(G\) be a \(2\)-connected graph and let \(s,t\) be a pair of distinct vertices in \(G\). For any \(B\subseteq V(G)\) there exists a path of length at least \(\delta(G-B)\) between \(s\) and \(t\) in \(G\). Moreover, there is a polynomial time algorithm constructing a path of such length._
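While we use Corollary 2 as a black box, the flavour of such minimum-degree guarantees is captured by the classical greedy argument: extend a path from its endpoint while an unvisited neighbour exists; once stuck, all neighbours of the endpoint lie on the path, so the path has length at least \(\delta(G)\). Below is a minimal sketch of this weaker, single-endpoint fact (ours, not of the \((s,t)\)-version of Corollary 2 itself).

```
import networkx as nx

def greedy_long_path(G: nx.Graph, start):
    """Greedily extend a path from `start`. When no extension is
    possible, every neighbour of the last vertex already lies on the
    path, so the returned path has length >= deg(last) >= delta(G)."""
    path, on_path = [start], {start}
    while True:
        ext = next((w for w in G[path[-1]] if w not in on_path), None)
        if ext is None:
            return path  # len(path) - 1 >= delta(G) edges
        path.append(ext)
        on_path.add(ext)
```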
## 4 Approximating \((s,t)\)-path
In this section, we provide the formal proof of Theorem 2, stating that any guarantee for approximating the longest cycle in a \(2\)-connected graph can be transferred to approximating the
longest \((s,t)\)-path above minimum degree. For the convenience of the reader, we recall the precise statement next.
**Theorem 2**.: _Let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a non-decreasing subadditive function and suppose that we are given a polynomial-time algorithm computing an \((s,t)\)-path of length at least \(f(L)\) in graphs with given two vertices \(s\) and \(t\) having the longest \((s,t)\)-path of length \(L\). Then there is a polynomial-time algorithm that outputs an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+\Omega(f(L-\delta(G-\{s,t\})))\) in a \(2\)-connected graph \(G\) with two given vertices \(s\) and \(t\) having the longest \((s,t)\)-path length \(L\)._
In order to obtain this result, we first recall the concept of the Erdos-Gallai decomposition introduced in [24], together with a few of its helpful properties established there. Then we introduce a recursive generalization of this concept, called the nested Erdos-Gallai decomposition, and show how to use it to compress the graph so that a long \((s,t)\)-path in the compressed graph can be lifted to an \((s,t)\)-path in the original graph with a large offset.
### Erdos-Gallai decomposition
This subsection encompasses the properties of an Erdos-Gallai decomposition, defined next. The definition itself and most of the technical results presented here are due to [24]. Some of the results from [24] need to be modified in order to be used for our purposes; we supply such results with full proofs. Note that the statements in [24] hold in the more general case where there is also a low-degree vertex subset in the graph; while recalling the results here, we automatically simplify the statements. Next, we recall the definition of an Erdos-Gallai decomposition.
**Definition 1** (Erdos-Gallai decomposition and Erdos-Gallai component, Definition 2 in [24]).: Let \(P\) be a path in a \(2\)-connected graph \(G\). We say that two disjoint paths \(P_{1}\) and \(P_{2}\) in \(G\) induce _an Erdos-Gallai decomposition for \(P\)_ in \(G\) if
* Path \(P\) is of the form \(P=P_{1}P^{\prime}P_{2}\), where the inner path \(P^{\prime}\) has at least \(\delta(G-\{s,t\})\) edges.
* There are at least two connected components in \(G-V(P_{1}\cup P_{2})\), and for every connected component \(H\), it holds that \(|V(H)|\geq 3\) and one of the following:
1. (R1) \(H\) is \(2\)-connected, and the maximum size of a matching in \(G\) between \(V(H)\) and \(V(P_{1})\) is one, and between \(V(H)\) and \(V(P_{2})\) is also one;
2. (R2) \(H\) is not \(2\)-connected, exactly one vertex of \(P_{1}\) has neighbors in \(H\), that is, \(|N_{G}(V(H))\cap V(P_{1})|=1\), and no inner vertex from a leaf-block of \(H\) has a neighbor in \(P_{2}\);
3. (R3) The same as (R2), but with \(P_{1}\) and \(P_{2}\) interchanged. That is, \(H\) is not \(2\)-connected, \(|N_{G}(V(H))\cap V(P_{2})|=1\), and no inner vertex from a leaf-block of \(H\) has a neighbor in \(P_{1}\).
The set of _Erdos-Gallai components_ for an Erdos-Gallai decomposition is defined as follows. First, for each component \(H\) of type (R1), \(H\) is an Erdos-Gallai component of the Erdos-Gallai decomposition. Second, for each \(H\) of type (R2), or of type (R3), all its leaf-blocks are also Erdos-Gallai components of the Erdos-Gallai decomposition.
As long as an Erdos-Gallai decomposition is available, the Erdos-Gallai components allow us to constrain the structure of optimal solutions in a number of ways. First, Fomin et al. [24] observe that some longest \((s,t)\)-path necessarily visits an Erdos-Gallai component.
**Lemma 4** (Lemma 7 in [24]).: _Let \(G\) be a graph and \(P_{1},P_{2}\) induce an Erdos-Gallai decomposition for an \((s,t)\)-path \(P\) in \(G\). Then there is a longest \((s,t)\)-path in \(G\) that enters an Erdos-Gallai component._
Next, since an Erdos-Gallai component has a very restrictive connection to the rest of the graph, it follows that any \((s,t)\)-path has only one chance of entering the component.
**Lemma 5** (Lemma 5 in [24]).: _Let \(G\) be a \(2\)-connected graph and \(P\) be an \((s,t)\)-path in \(G\). Let paths \(P_{1},P_{2}\) induce an Erdos-Gallai decomposition for \(P\) in \(G\). Let \(M\) be an Erdos-Gallai component. Then for every \((s,t)\)-path \(P^{\prime}\) in \(G\), if \(P^{\prime}\) enters \(M\), then all vertices of \(V(M)\cap V(P^{\prime})\) appear consecutively in \(P^{\prime}\)._
For the purposes of recursion, it is convenient to enclose an Erdos-Gallai component together with some of its immediate connections, so that this slightly larger subgraph behaves exactly like an \((s,t)\)-path instance. The subgraph \(K\) in the next lemma plays this role.
**Lemma 6** (Lemma 8 in [24]).: _Let paths \(P_{1},P_{2}\) induce an Erdos-Gallai decomposition for an \((s,t)\)-path \(P\) in graph \(G\). Let \(M\) be an Erdos-Gallai component in \(G\). Then there is a polynomial time algorithm that outputs a \(2\)-connected subgraph \(K\) of \(G\) and two vertices \(s^{\prime},t^{\prime}\in V(K)\), such that for every \((s,t)\)-path \(P^{\prime}\) in \(G\) that enters \(M\), the following hold:_
1. \(V(K)=(V(M)\cup\{s^{\prime},t^{\prime}\})\)_;_
2. \(P^{\prime}[V(K)]\) _is an_ \((s^{\prime},t^{\prime})\)_-subpath of_ \(P^{\prime}\) _and an_ \((s^{\prime},t^{\prime})\)_-path in_ \(K\)_;_
3. \(\delta(K-\{s^{\prime},t^{\prime}\})\geq\delta(G-\{s,t,s^{\prime},t^{\prime}\})\)_._
Most importantly, Erdos-Gallai decompositions capture extremal situations, where the current \((s,t)\)-path cannot be made longer in a "simple" way. The next lemma formalizes that intuition, stating that in polynomial time we can find either a long \((s,t)\)-path or an Erdos-Gallai decomposition. The lemma is largely an analog of Lemma 4 in [24]; however, our statement here is slightly modified. Next, we recall the statement from Section 2 and provide a proof.
**Lemma 7**.: _Let \(G\) be a \(2\)-connected graph such that \(\delta(G-\{s,t\})\geq 16\). There is a polynomial time algorithm that_
* _either outputs an_ \((s,t)\)_-path_ \(P\) _of length at least_ \(\min\{\frac{5}{4}\delta(G-\{s,t\})-3,|V(G)|-1\}\)_,_
* _or outputs an_ \((s,t)\)_-path_ \(P\) _with paths_ \(P_{1},P_{2}\) _that induce an Erdos-Gallai decomposition for_ \(P\) _in_ \(G\)_. Additionally, there is no_ \((s,t)\)_-path in_ \(G\) _that enters at least two Erdos-Gallai components of this Erdos-Gallai decomposition._
Proof.: Invoke Lemma 4 of [24] on \(G,s,t\) with \(B:=\{s,t\}\) and \(k:=\lfloor\delta(G-\{s,t\})/4\rfloor-2\). Note that the condition \(4k+8\leq\delta(G-\{s,t\})\) required by that lemma is satisfied. Now, we either get an \((s,t)\)-path of length \(\delta(G-\{s,t\})+k\), or an \((s,t)\)-path \(P\) with \(V(P)\cup\{s,t\}=V(G)\), or the required Erdos-Gallai decomposition with the paths \(P\), \(P_{1}\), \(P_{2}\). Clearly \(\delta(G-\{s,t\})+k>\frac{5}{4}\delta(G-\{s,t\})-3\), so if a path of length \(\delta(G-\{s,t\})+k\) is found, we are done. If an \((s,t)\)-path \(P\) has \(V(P)\cup\{s,t\}=V(G)\), then it is a Hamiltonian path in \(G\), so we are done in the second case as well.
If we obtain an Erdos-Gallai decomposition, then we additionally need to check whether there exists an \((s,t)\)-path that goes through at least two Erdos-Gallai components of the Erdos-Gallai decomposition induced by \(P_{1}\) and \(P_{2}\) in \(G\). To this end, iterate over all ordered pairs of Erdos-Gallai components in the Erdos-Gallai decomposition. For each pair, apply Lemma 6 to each of the two
Erdos-Gallai components and obtain two triples \((K_{1},s_{1},t_{1})\) and \((K_{2},s_{2},t_{2})\). There is an \((s,t)\)-path entering both Erdos-Gallai components in the order given by the pair if and only if there exist three disjoint paths between the pairs \((s,a_{1}),(b_{1},a_{2}),(b_{2},t)\), where \((a_{i},b_{i})\) is a permutation of \((s_{i},t_{i})\) for each \(i\in[2]\).
When the permutations are fixed, such paths, if they exist, can be found in polynomial time using the famous algorithm of Robertson and Seymour for \(k\)-Disjoint Paths [48]. Since \(\delta(K_{i}-\{s_{i},t_{i}\})\geq\delta(G-\{s,t\})-2\) for each \(i\in[2]\), these three paths together with two \((s_{i},t_{i})\)-paths inside \(K_{i}\) combine into an \((s,t)\)-path of length at least \(2\delta(G-\{s,t\})-4>\frac{5}{4}\delta(G-\{s,t\})-3\), so the algorithm outputs this path and stops. If the disjoint path triple was not found on any of the steps, then there is indeed no \((s,t)\)-path entering at least two Erdos-Gallai components.
Finally, to deal with \((s,t)\)-paths that do not enter any Erdos-Gallai component, one can observe the following. Intuitively, such a path should be far from optimal, as going through an Erdos-Gallai component would immediately give at least \(\delta(G-\{s,t\})-\mathcal{O}(1)\) additional edges of the path. The final lemma of this subsection establishes how precisely the length of a path avoiding Erdos-Gallai components can be "boosted" in this fashion. To obtain this result, we first need a technical lemma from [24] that yields long paths inside separable components.
**Lemma 8** (Lemma 6 in [24]).: _Let \(H\) be a connected graph with at least one cut-vertex. Let \(I\) be the set of inner vertices of all leaf-blocks of \(H\). Let \(S\subseteq V(H)\setminus I\) separate at least one vertex in \(V(H)\setminus I\) from \(I\) in \(H\). For any vertex \(v\) that is not an inner vertex of a leaf-block of \(H\), there is a cut-vertex \(c\) of a leaf-block of \(H\) and a \((c,v)\)-path of length at least \(\frac{1}{2}\left(\delta(H)-|S|\right)\) in \(H\). This path can be constructed in polynomial time._
Now we move to \((s,t)\)-paths that avoid Erdos-Gallai components. The following Lemma 9 has been already stated in Section 2, here we recall the statement and provide a proof.
**Lemma 9**.: _Let \(P\) be an \((s,t)\)-path of length at most \(\delta(G-\{s,t\})+k\) and let two paths \(P_{1},P_{2}\) induce an Erdos-Gallai decomposition for \(P\) in \(G\). There is a polynomial time algorithm that, given an \((s,t)\)-path of length at least \(4k+5\) in \(G\) that does not enter any Erdos-Gallai component, outputs a path of length at least \(\min\{\delta(G-\{s,t\})+k-1,\frac{3}{2}\delta(G-\{s,t\})-\frac{5}{2}k-1\}\) in \(G\)._
Proof.: For clarity, we denote \(\delta:=\delta(G-\{s,t\})\). Let \(Q\) be the given \((s,t)\)-path in \(G\). Denote by \(S\) the set of the first \(k\) vertices on \(Q\) and by \(T\) the set of the last \(k\) vertices on \(Q\). Let \(s^{\prime}\) be the first vertex on \(Q\) that is not in \(S\) and \(t^{\prime}\) be the last vertex on \(Q\) that is not in \(T\). Since \(Q\) consists of more than \(2k\) vertices, \(s^{\prime},t^{\prime}\notin S\cup T\). The length of the \((s,s^{\prime})\)-subpath of \(Q\) and the length of the \((t^{\prime},t)\)-subpath of \(Q\) are equal to \(k\).
The total length of \(P_{1}\) and \(P_{2}\) is at most \(k\), hence \(|V(P_{1})\cup V(P_{2})|\leq k+2\). The length of the \((s^{\prime},t^{\prime})\)-subpath of \(Q\) is at least \(2k+5>2|V(P_{1})\cup V(P_{2})|\). Hence, this subpath contains at least one edge of \(G\) that is not incident to vertices in \(V(P_{1})\cup V(P_{2})\). Denote the endpoints of this edge by \(u\) and \(v\). Since \(Q\) does not enter any Erdos-Gallai component, this edge is an edge of a non-leaf-block of some separable connected component \(H\) of \(G-V(P_{1}\cup P_{2})\). The component \(H\) corresponds to either (R2) or (R3) in the definition of Erdos-Gallai decomposition. Without loss of generality, we assume that \(H\) corresponds to (R2).
We now consider two cases depending on the structure of \(H-(S\cup T)\). If \(S\cup T\) separates \(u\) or \(v\) from all cut vertices of the leaf-blocks in \(H\), then we have a set of size \(2k\) in \(H\) that satisfies the condition of Lemma 8. Take a vertex \(w\) in \(H\) that has a neighbour in \(V(P_{2})\) in \(G\). By Lemma 8, a \((w,c)\)-path of length at least
\[\frac{1}{2}\delta(H)-2k\geq\frac{1}{2}\delta(G-V(P_{1}\cup P_{2}))-2k\geq\frac{ 1}{2}\delta(G-\{s,t\})-\frac{5}{2}k-1\]
exists in \(H\) for some cut vertex \(c\) of some leaf-block \(L\) of \(H\). In this leaf-block, we have a vertex \(z\) with a neighbour in \(V(P_{1})\). By Corollary 2, we have a \((c,z)\)-path of length at least \(\delta(L-c)\geq\delta(G-\{s,t\})-2\) inside \(L\). Combine the two paths and obtain a \((z,w)\)-path of length at least \(\frac{3}{2}\delta(G-\{s,t\})-\frac{5}{2}k-3\) inside \(H\). Finally, prepend to this path a prefix of \(P_{1}\) connecting \(s\) with the neighbour of \(z\), and append to this path a suffix of \(P_{2}\) connecting the neighbour of \(w\) with \(t\). The length increases by at least two as \(s\neq z\) and \(w\neq t\). The obtained path is an \((s,t)\)-path of length at least \(\frac{3}{2}\delta(G-\{s,t\})-\frac{5}{2}k-1\).
The second case is when from \(v\) we can reach a cut vertex \(c\) of some leaf-block \(L\) in \(H\) while avoiding vertices in \(S\cup T\). Note that \(V(Q)\cap V(L-c)=\emptyset\) by the properties of Erdos-Gallai decomposition. Then choose \(w\) as an arbitrary vertex in \(L-c\) with a neighbour in \(V(P_{1})\). Now construct a \((v,s)\)-path \(Q^{\prime}\) in the following way. First, follow an arbitrary \((v,c)\)-path in \(H-(S\cup T)\). Then continue with a \((c,w)\)-path of length at least \(\delta(L-c)\geq\delta(G-\{s,t\})-2\) inside \(L-c\) that exists by Corollary 2. Note that this path has no common vertices with \(Q\). Finish \(Q^{\prime}\) by going from \(w\) to the neighbour of \(w\) in \(P_{1}\) and follow \(P_{1}\) backwards down to \(s\).
Let \(x\) be the last vertex before \(c\) on \(Q^{\prime}\) that belongs to \(V(Q)\). Let \(y\) be the first vertex after \(w\) on \(Q^{\prime}\) that belongs to \(V(Q)\). Both \(x,y\) are defined correctly since \(v,s\in V(Q)\). Consider the \((x,y)\)-subpath of \(Q^{\prime}\). It strictly contains the \((c,w)\)-path inside \(L\), so its length is at least \(\delta(G-\{s,t\})-1\). Also, the length of the \((s,x)\)-subpath of \(Q\) and the length of the \((x,t)\)-subpath of \(Q\) are both at least \(k\) as \(x\notin S\cup T\).
We now construct a long \((s,t)\)-path in \(G\). If \(y\) is contained in the \((s,x)\)-subpath of \(Q\), then the \((s,t)\)-path is constructed in the following way: follow \(P_{1}\) from \(s\) to \(y\), then follow \(Q^{\prime}\) backwards from \(y\) down to \(x\), and finish by following \(Q\) from \(x\) to \(t\). The length of this path is at least \(\delta(G-\{s,t\})-1+k\). If \(y\) belongs to the \((x,t)\)-subpath of \(Q\), start by taking the \((s,x)\)-subpath of \(Q\), then follow \(Q^{\prime}\) from \(x\) to \(y\) and finish by following \(P_{2}\) from \(y\) to \(t\). This path also has length at least \(k+\delta(G-\{s,t\})-1\). The proof is complete.
### Proof of Theorem 2
To deal with the recursive structure of the solution, we introduce the following _nested_ generalization of an Erdos-Gallai decomposition. Intuitively, it captures how the structural observations of the previous subsection allow us to recursively construct Erdos-Gallai decompositions with the aim of finding a long \((s,t)\)-path. For an illustration of a nested Erdos-Gallai decomposition, see Figure 1. We recall the formal definition from Section 2.
**Definition 2** (Nested Erdos-Gallai decomposition).: A sequence of triples \((G_{1},s_{1},t_{1})\), \((G_{2},s_{2},t_{2})\),..., \((G_{\ell},s_{\ell},t_{\ell})\) is called a _nested Erdos-Gallai decomposition_ for \(G\) and two vertices \(s,t\in V(G)\) if
* \((G_{1},s_{1},t_{1})=(G,s,t)\);
* for each \(i\in[\ell]\), either
* \(\delta(G_{i}-\{s_{i},t_{i}\})<16\), or
* Lemma 7 applied to \(G_{i},s_{i},t_{i}\) gives a path \(P_{i}\) of length at least \(\min\{\frac{5}{4}\delta(G_{i}-\{s_{i},t_{i}\})-3,|V(G_{i})|-1\}\) in \(G_{i}\), or
* Lemma 7 applied to \(G_{i},s_{i},t_{i}\) gives a path \(P_{i}\) and two paths \(P_{i,1},P_{i,2}\) that induce an Erdos-Gallai decomposition for \(P_{i}\) in \(G_{i}\), and for each Erdos-Gallai component \(M\) of this decomposition there is \(j>i\) such that \((G_{j},s_{j},t_{j})\) is the result of Lemma 6 applied to \(M\) in \(G_{i}\). In this case, we say that \(G_{i}\) is _decomposed_.
* for each \(i\in\{2,\ldots,\ell\}\), there is \(e(i)<i\) such that \((G_{i},s_{i},t_{i})\) is a result of Lemma 6 applied to some Erdos-Gallai component of the Erdos-Gallai decomposition of \(G_{e(i)}\) for \(P_{e(i)}\).
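In code, a nested Erdos-Gallai decomposition as just defined is naturally stored as a flat list of triples with a parent pointer realizing \(e(i)\); a minimal sketch (the field names are ours):

```
from dataclasses import dataclass
from typing import Optional
import networkx as nx

@dataclass
class EGNode:
    """One triple (G_i, s_i, t_i) of a nested Erdos-Gallai decomposition."""
    graph: nx.Graph
    s: object
    t: object
    parent: Optional[int] = None  # the index e(i); None for the root (G, s, t)
    path: Optional[list] = None   # the path P_i returned by Lemma 7, if any
    decomposed: bool = False      # True iff Lemma 7 produced a decomposition

# The whole decomposition is a list `nodes` with nodes[0] = EGNode(G, s, t);
# every triple produced by Lemma 6 is appended with `parent` set to e(i).
```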
The proof of Theorem 2 is performed in two steps: first, we show how to obtain a nested Erdos-Gallai decomposition for a given graph \(G\), and then we use the nested Erdos-Gallai decomposition to recursively construct a good approximation to the longest \((s,t)\)-path. The first part is achieved simply by applying Lemma 7 recursively on each Erdos-Gallai component until components are no longer decomposable. The main hurdle is the second part, on which we focus for the rest of the section. For completeness, first we show that a nested Erdos-Gallai decomposition can always be constructed in polynomial time.
**Lemma 10**.: _There is a polynomial time algorithm that, given a 2-connected graph \(G\) and its two vertices \(s\) and \(t\), outputs a nested Erdos-Gallai decomposition for \(G\), \(s\), \(t\)._
Proof.: The algorithm proceeds recursively, starting with the triple \((G_{1},s_{1},t_{1})=(G,s,t)\). For the given triple \((G_{i},s_{i},t_{i})\), if \(\delta(G_{i}-\{s_{i},t_{i}\})<16\), the algorithm stops. Otherwise, invoke the algorithm of Lemma 7 on \((G_{i},s_{i},t_{i})\). If this returns a path \(P_{i}\) of length at least \(\frac{5}{4}\delta(G_{i}-\{s_{i},t_{i}\})-3\), the algorithm stops. On the other hand, if an Erdos-Gallai decomposition is returned, for each Erdos-Gallai component \(M\) run the algorithm of Lemma 6 on \(M\) to obtain a triple \((G_{j},s_{j},t_{j})\), where \(j\) is the lowest free index among the triples produced so far. Run the main algorithm recursively on each of the triples generated on this step.
By definition, the algorithm above produces a nested Erdos-Gallai decomposition. To show that the running time is polynomial, first observe that running the algorithm without the subsequent recursive calls is clearly polynomial. Assume this running time is bounded by \(\alpha n^{c}\) for some constant \(\alpha>0\) and \(c\geq 1\), where \(n=|V(G)\setminus\{s,t\}|\) and \((G,\,s,\,t)\) is the current instance. We show by induction on the depth of the resulting nested Erdos-Gallai decomposition that the running time of the recursive algorithm is at most \(\alpha n^{c+1}\). If the instance does not spawn any recursive calls, this trivially holds. Otherwise, assume \(\ell\) new instances \((G_{j_{1}},s_{j_{1}},t_{j_{1}})\),..., \((G_{j_{\ell}},s_{j_{\ell}},t_{j_{\ell}})\) are produced, and denote \(n_{i}=|V(G_{j_{i}})\setminus\{s_{j_{i}},t_{j_{i}}\}|\). Note that \(\ell\geq 2\) since there are always at least 2 components in an Erdos-Gallai decomposition. By induction, the running time is bounded by \(\alpha\cdot\left(n^{c}+\sum_{i=1}^{\ell}n_{i}^{c+1}\right)\). We now bound \(\sum_{i=1}^{\ell}n_{i}^{c+1}\). Observe first that \(\sum_{i=1}^{\ell}n_{i}\leq n\), as all the sets \(V(G_{j_{i}})\setminus\{s_{j_{i}},t_{j_{i}}\}\) are disjoint and do not contain \(s\) or \(t\). We use the following numerical observation proven in [24].
**Claim 1** (Proposition 3 in [24]).: _Let \(a_{1},a_{2},\ldots,a_{q}\) be a sequence of \(q\geq 2\) positive integers with \(\sum_{i=1}^{q}a_{i}=n\). Let \(x>1\) be an integer. Then \(\sum_{i=1}^{q}a_{i}^{x}\leq(n-1)^{x}+1\leq n^{x}-n^{x-1}\)._
By Claim 1, we can bound the running time by
\[\alpha\cdot\left(n^{c}+\sum_{i=1}^{\ell}n_{i}^{c+1}\right)\leq\alpha\cdot \left(n^{c}+n^{c+1}-n^{c}\right)=\alpha n^{c+1},\]
completing the proof.
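Claim 1 is also easy to sanity-check numerically; the following brute-force snippet (ours, purely for verification) confirms the bound for small \(n\) and \(x=3\).

```
from itertools import combinations_with_replacement

def check_claim_1(n_max: int = 12, x: int = 3) -> None:
    """Brute-force check of Claim 1: for every sequence of q >= 2
    positive integers summing to n, sum(a_i^x) <= (n-1)^x + 1 <= n^x - n^(x-1)."""
    for n in range(2, n_max + 1):
        assert (n - 1) ** x + 1 <= n ** x - n ** (x - 1)
        for q in range(2, n + 1):
            for parts in combinations_with_replacement(range(1, n), q):
                if sum(parts) == n:
                    assert sum(a ** x for a in parts) <= (n - 1) ** x + 1

check_claim_1()  # passes silently
```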
Clearly, it follows that the size of a nested Erdos-Gallai decomposition returned by Lemma 10 is also polynomial. Observe also that the construction algorithm invokes Lemma 7 for all sufficiently large \(G_{i}\); thus, in what follows we assume that the corresponding paths \(P_{i}\) are already computed.
Now we focus on using a constructed nested Erdos-Gallai decomposition for approximating the longest \((s,t)\)-path. First of all, we present the algorithm long_nested_st_path that, given a nested Erdos-Gallai decomposition of \(G\), computes a long \((s,t)\)-path by going over the decomposition.
The pseudocode of long_nested_st_path is given in Algorithm 3. Intuitively, the algorithm first computes a compression \(H\) of the graph \(G\) that respects the nested Erdos-Gallai decomposition: components that are not decomposed are replaced by single edges, and edges that are "unavoidable" for visiting a component are contracted. The computation of this compression is encapsulated in the nested_compress function presented in Algorithm 1. As a subroutine, this function uses the two_long_disjoint_paths algorithm given by Corollary 1, which finds two disjoint paths of at least the given total length between the given pairs of vertices.
Next, the black-box approximation algorithm long_st_path_approx is used to compute an \((s,t)\)-path \(Q\) in \(H\). The function nested_decompress then reconstructs this path in the original graph \(G\); see Algorithm 2 for the pseudocode. Later we argue (Lemma 11) that any \((s,t)\)-path in \(H\) of length \(r\) yields in this way an \((s,t)\)-path in \(G\) of length at least \(\delta(G-\{s,t\})+r/8-3\). Finally, either the length of \(Q\) in \(H\) was large enough, in which case the reconstructed path provides the desired approximation, or a long path can be found inside one of the components in a "simple" way and then connected arbitrarily to \(\{s,t\}\). Specifically, in this component, it suffices to take either an approximation of the longest path computed by long_st_path_approx, or a long Erdos-Gallai path returned by the algorithm from Corollary 2, long_eg_st_path. Thus, in the final few lines, long_nested_st_path checks whether any of these paths is longer than the reconstructed path \(Q\). The path from inside the component is extended to an \((s,t)\)-path in \(G\) using the algorithm two_long_disjoint_paths, given by Corollary 1, with the parameter \(0\).
```
nested_compress(\((G_{1},s_{1},t_{1}),(G_{2},s_{2},t_{2}),\ldots,(G_{\ell},s_{\ell},t_{\ell})\))
Input: a nested Erdos-Gallai decomposition for \(G\), \(s\) and \(t\).
Output: the compressed graph \(H\).

1   \(H\longleftarrow G\);
2   foreach \(i\in\{2,\ldots,\ell\}\) do
3       \(j\longleftarrow e(i)\);
4       \(d_{i}\longleftarrow|\{s_{j},t_{j}\}\setminus\{s_{i},t_{i}\}|\);
5       if two_long_disjoint_paths(\(G_{i},\{s_{j},t_{j}\},\{s_{i},t_{i}\},d_{i}+1\)) is No then
6           contract all edges of a maximum matching between \(\{s_{j},t_{j}\}\) and \(\{s_{i},t_{i}\}\) in \(H\);
7       end if
8       if \(G_{i}\) is not decomposed then
9           remove all vertices in \(V(G_{i})\setminus\{s_{i},t_{i}\}\) from \(H\);
10          add edge \(s_{i}t_{i}\) to \(H\) and mark it with \(G_{i}\);
11      end if
12  end foreach
13  return \(H\);
```
**Algorithm 1** The algorithm compressing a given graph \(G\) with a given nested Erdos-Gallai decomposition.
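For concreteness, here is a Python rendition of the compression loop (a sketch under our assumptions: the decomposition arrives as a list of triples with a parent map realizing \(e(i)\), two_long_disjoint_paths is the subroutine of Corollary 1, and the label bookkeeping after contractions is glossed over).

```
import networkx as nx

def maximum_matching_between(H, A, B):
    """Maximum matching between two vertex sets of size at most two;
    brute force suffices at this scale."""
    edges = [(a, b) for a in A for b in B if a != b and H.has_edge(a, b)]
    for x in range(len(edges)):
        for y in range(x + 1, len(edges)):
            if not set(edges[x]) & set(edges[y]):
                return [edges[x], edges[y]]
    return edges[:1]

def nested_compress(G, triples, parent, decomposed, two_long_disjoint_paths):
    """Sketch of Algorithm 1. Here triples[0] = (G, s, t) plays the role
    of (G_1, s_1, t_1), parent[i] = e(i), decomposed(i) tells whether G_i
    was decomposed, and two_long_disjoint_paths is the (assumed) algorithm
    of Corollary 1 returning a truthy value iff the paths exist."""
    H = G.copy()
    for i in range(1, len(triples)):  # corresponds to i = 2, ..., ell
        G_i, s_i, t_i = triples[i]
        _, s_j, t_j = triples[parent[i]]
        d_i = len({s_j, t_j} - {s_i, t_i})
        if not two_long_disjoint_paths(G_i, {s_j, t_j}, {s_i, t_i}, d_i + 1):
            # Only a matching of total length d_i connects the two pairs:
            # contract its edges (Line 1.6), keeping the inner labels
            # s_i / t_i so that later iterations can still refer to them.
            for a, b in maximum_matching_between(H, {s_j, t_j}, {s_i, t_i}):
                H = nx.contracted_nodes(H, b, a, self_loops=False)
        if not decomposed(i):
            # Replace the whole component by a single marked edge.
            H.remove_nodes_from(set(G_i) - {s_i, t_i})
            H.add_edge(s_i, t_i, component=i)
    return H
```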
Now, our goal is to show that the path constructed by the long_nested_st_path algorithm indeed serves as the desired approximation of the longest \((s,t)\)-path in \(G\). For the rest of this section, let \(G_{1},\ldots,G_{\ell}\) be the given nested Erdos-Gallai decomposition for \(G,s,t\). An important piece of intuition about the nested Erdos-Gallai decomposition is that, as we go deeper into the nested Erdos-Gallai components, the minimum degree \(\delta(G_{i}-\{s_{i},t_{i}\})\) of the component decreases, but we gain more and more edges that we collect while going from \(\{s,t\}\) to \(\{s_{i},t_{i}\}\). We introduce values that help us measure this difference between the nested components: for each \(i\in\{2,\ldots,\ell\}\), denote \(d_{i}=|\{s_{e(i)},t_{e(i)}\}\setminus\{s_{i},t_{i}\}|\). In particular, by Lemma 6 we know that for any \(i\in\{2,\ldots,\ell\}\) we have \(\delta(G_{i})\geq\delta(G_{e(i)})-d_{i}\). On the other hand, any pair of disjoint paths that connects \(\{s_{e(i)},t_{e(i)}\}\) to \(\{s_{i},t_{i}\}\) contains at least \(d_{i}\) edges. This leads to the following simple observation about extending an \((s_{j},t_{j})\)-path in a component \(G_{j}\) to an \((s,t)\)-path in \(G\).
**Claim 2**.: _For each \(j\in[\ell]\), let \(G_{j_{1}},\ldots,G_{j_{c}}\) be such that \(j_{c}=j\) and \(j_{1}=1\) and \(e(j_{i+1})=j_{i}\) for each \(i\in[c-1]\). Let \(P\) be an \((s_{j},t_{j})\)-path in \(G_{j}\). Then \(P\) combined with any pair of disjoint paths connecting \(\{s,t\}\) to \(\{s_{j},t_{j}\}\) yields an \((s,t)\)-path in \(G\) of length at least \(|E(P)|+\sum_{i\in[c-1]}d_{j_{i+1}}\)._
However, there might also exist longer paths connecting the nested components \(G_{e(i)}\) and \(G_{i}\). When we construct the compressed graph \(H\) in Algorithm 1, we distinguish between two cases. Either every pair of such paths has total length exactly \(d_{i}\), meaning that the only option is to use the edges of a matching between \(\{s_{e(i)},t_{e(i)}\}\) and \(\{s_{i},t_{i}\}\); in that case we simply contract these edges, as we know that there is no choice in how to reach \(G_{i}\) from \(G_{e(i)}\). Or, there is a pair of disjoint paths of total length at least \(d_{i}+1\). This situation is beneficial to us in a different way: since we can find such a pair of paths in polynomial time, we can traverse at least \(d_{i}+1\) edges going from \(G_{e(i)}\) to \(G_{i}\), while we only lose at most \(d_{i}\) in the minimum degree. This dichotomy on the structure of
the "slice" between two nested components is the main leverage that allows us to lift the length of an \((s,t)\)-path in \(H\) to an offset above the minimum degree in \(G\). We formally show this crucial property of the compressed graph \(H\) and the nested_decompress routine in the next lemma.
**Lemma 11**.: _The nested_decompress routine transforms an \((s,t)\)-path \(Q\) in \(H\) of length \(r\) into an \((s,t)\)-path in \(G\) of length at least \(\delta(G-\{s,t\})+r/8-3\)._
Proof.: Observe that in the tree of the nested Erdos-Gallai decomposition, the path \(Q\) visits a rooted subpath of components \(G_{i}\). That is, there are indices \(j_{1},j_{2},\ldots,j_{c}\in[\ell]\) such that \(j_{1}=1\) and \(e(j_{i+1})=j_{i}\) for each \(i\in[c-1]\). This holds since, at each level of the nested Erdos-Gallai decomposition, \(Q\) visits at most one Erdos-Gallai component by Lemma 7. Here we say that \(Q\) visits a component \(G_{i}\) if \(Q\) contains an edge of \(G_{i}\) that was not contracted in \(H\); for non-decomposed components \(G_{i}\) this means that \(Q\) contains the edge \(s_{i}t_{i}\) in \(H\).
By Lemma 6, \(\delta(G_{j_{i+1}})\geq\delta(G_{j_{i}})-d_{j_{i+1}}\). Let \(h\in[\ell]\) be the largest integer such that \(Q\) enters \(G_{h}\); that is, \(h=j_{c}\). Denote by \(p\) the number of edges in \(E(Q)\setminus E(G_{h})\) and by \(q\) the length of the \((s_{h},t_{h})\)-subpath of \(Q\); then \(p+q=r\).
We now analyze the length of \(Q\) after performing the replacement operations in Lines 2.1-2.9. Denote by \(Y\) the set of all \(i\in[c-1]\) such that no contraction was made in Line 1.6 between \(\{s_{j_{i}},t_{j_{i}}\}\) and \(\{s_{j_{i+1}},t_{j_{i+1}}\}\). For each \(i\in Y\) with \(d_{j_{i+1}}>0\), performing the replacement operation in Line 2.7 between \(\{s_{j_{i}},t_{j_{i}}\}\) and \(\{s_{j_{i+1}},t_{j_{i+1}}\}\) in \(Q\) yields
\[|E(Q)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|\geq d_{j_{i+1}}+1\geq 3d_{j_ {i+1}}/2.\]
Let \(p^{\prime}\) be the length of \(Q\) outside of \(G_{h}\) after all these replacements; from the above, \(p^{\prime}\geq\frac{3}{2}\sum_{i\in Y}d_{j_{i+1}}\). Also, \(p^{\prime}\geq p\) since the replacement only takes place if it makes the path longer.
Denote by \(X:=[c-1]\setminus Y\) the set of all \(i\in[c-1]\) such that a contraction was made in Line 1.6 between \(\{s_{j_{i}},t_{j_{i}}\}\) and \(\{s_{j_{i+1}},t_{j_{i+1}}\}\). For each \(i\in X\) the algorithm reverses the respective edge contractions done in Line 1.6 in \(Q\). This increases the length of \(Q\) by \(d_{j_{i+1}}\), so after Line 2.9 it holds that \(|E(Q)\setminus E(G_{h})|\geq p^{\prime}+\sum_{i\in X}d_{j_{i+1}}\).
We now observe that any long \((s_{h},t_{h})\)-subpath in \(G_{h}\) can be combined with \(Q\) to preserve at least a constant fraction of \(p\) in the offset.
**Claim 3**.: _After Line 2.9, replacing the \((s_{h},t_{h})\)-subpath of \(Q\) with a path \(P\) in \(G_{h}\) of length \(\delta(G_{h}-\{s_{h},t_{h}\})+k^{\prime}\), where \(k^{\prime}\) is a nonnegative integer, yields an \((s,t)\)-path in \(G\) of length at least_
\[\delta(G-\{s,t\})+k^{\prime}+p/3.\]
Proof of Claim 3.: The length of the resulting path is at least
\[|E(Q)\setminus E(G_{h})|+|E(P)|\geq p^{\prime}+\sum_{i\in X}d_{j_ {i+1}}+\delta(G_{h}-\{s_{h},t_{h}\})+k^{\prime}\\ \geq p^{\prime}+\sum_{i\in X}d_{j_{i+1}}+\delta(G-\{s,t\})-\sum_ {i\in[c-1]}d_{j_{i+1}}+k^{\prime}\geq\delta(G-\{s,t\})+k^{\prime}+p^{\prime}- \sum_{i\in Y}d_{j_{i+1}}\\ \geq\delta(G-\{s,t\})+k^{\prime}+p/3.\]
Note that the last inequality holds since \(p^{\prime}\) is at least \(\frac{3}{2}\sum_{i\in Y}d_{j_{i+1}}\) and also at least \(p\). The path obtained at this point is an \((s,t)\)-path in \(G\) with possibly some contracted edges, since not all edge contractions were reversed. Reverse all remaining edge contractions affecting \(Q\) and obtain an \((s,t)\)-path in \(G\) of at least the same length.
For estimating the length of the \((s_{h},t_{h})\)-subpath, consider two cases depending on the type of \(G_{h}\).
\(G_{h}\) **is not decomposed.** In this case, \(Q\) contains the edge \(s_{h}t_{h}\) in \(H\), and in Line 2.12 this edge is replaced with the path \(P_{h}\). By Claim 3, this yields a path of length at least \(\delta(G-\{s,t\})+(r-1)/3\), since the length of \(P_{h}\) is at least \(\delta(G_{h}-\{s_{h},t_{h}\})\), and \(p=r-1\).
\(G_{h}\) **is decomposed.** By the choice of \(h\), the \((s_{h},t_{h})\)-subpath of \(Q\) does not enter any Erdos-Gallai component in the Erdos-Gallai decomposition induced by \(P_{h,1}\) and \(P_{h,2}\) in \(G_{h}\).
By Claim 3, an \((s_{h},t_{h})\)-path of length \(\delta(G_{h}-\{s_{h},t_{h}\})+k^{\prime}\) inside \(G_{h}\) combined with the outer part of \(Q\) obtains an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+p/3+k^{\prime}\) inside \(G\). We now focus on identifying a long enough \((s_{h},t_{h})\)-path inside \(G_{h}\).
Let \(k^{\prime}:=\lfloor(q-5)/8\rfloor\), so \(q\geq 8k^{\prime}+5\geq 4k^{\prime}+5\). If \(P_{h}\) is longer than \(\delta(G_{h}-\{s_{h},t_{h}\})+k^{\prime}\), then plugging \(P_{h}\) into Claim 3 gives an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+p/3+k^{\prime}+1\geq\delta(G-\{s,t\})+p/3+(q-5)/8>\delta(G- \{s,t\})+r/8-1\). Otherwise, we apply Lemma 9 to \(G_{h}\), \(P_{h}\) and the \((s_{h},t_{h})\)-subpath of \(Q\) to obtain an \((s_{h},t_{h})\)-path \(R\) in \(G_{h}\).
If the length of \(R\) is at least \(\delta(G_{h}-\{s_{h},t_{h}\})+k^{\prime}-1\), Claim 3 gives the desired bound of \(\delta(G-\{s,t\})+r/8-3\). Otherwise, \(\frac{1}{2}\delta(G_{h}-\{s_{h},t_{h}\})-\frac{5}{2}k^{\prime}<k^{\prime}\), then \(7k^{\prime}>\delta(G_{h}-\{s_{h},t_{h}\})\). It follows that \(q>\delta(G_{h}-\{s_{h},t_{h}\})+q/8+5\). Hence, by applying Claim 3 to the initial \((s_{h},t_{h})\)-subpath of \(Q\) we get a path of length at least \(\delta(G-\{s,t\})+p/3+q/8+5>\delta(G-\{s,t\})+r/8\). Since Algorithm 2 takes the longest of \(R\) and the original subpath of \(Q\), both cases are covered.
It will also be helpful to observe that in the "slice" between a decomposed component and the nested components, at most two edges of any path can be contracted. Note that this does not follow immediately, as a pair of edges to _each_ of the nested components is potentially contracted.
**Claim 4**.: _Let \(Q\) be an \((s_{j},t_{j})\)-path inside a decomposed graph \(G_{j}\). Then all edges \(E(Q)\cap E(G_{j})\setminus\bigcup_{e(i)=j}E(G_{i})\) are unchanged in \(H\) except for, possibly, contraction of the first and the last edge of \(Q\)._
Proof of Claim 4.: Let \(i\) be such that a contraction is made for \(G_{i}\) in Line 1.6 with \(e(i)=j\) and \(d_{i}>0\). Then there are no two disjoint paths between \(\{s_{j},t_{j}\}\) and \(\{s_{i},t_{i}\}\) of total length at least \(d_{i}+1\).
Without loss of generality, we assume that \(s_{i}\neq s_{j}\), \(t_{i}\neq s_{j}\) and \(s_{i}\neq t_{j}\) and the edge \(s_{j}s_{i}\) is contracted. If \(s_{i}\notin V(Q)\), then \(Q\) is not affected in \(H\). We assume that \(s_{i}\in V(Q)\). If \(s_{i}\) is the second vertex of \(Q\), then \(s_{j}s_{i}\) is the first edge of \(Q\), as required.
Suppose now that \(s_{i}\) is not the second vertex in \(Q\). Then the \((s_{j},s_{i})\)-subpath of \(Q\) is of length at least two. If \(t_{i}\) does not belong to this subpath, we add the trivial (of length zero or one) \((t_{i},t_{j})\)-path in \(G_{j}\) and obtain two disjoint paths between \(\{s_{j},t_{j}\}\) and \(\{s_{i},t_{i}\}\) of total length at least \(2+|\{t_{i},t_{j}\}|-1>d_{i}\), which is a contradiction.
Hence, \(t_{i}\) is present on the \((s_{j},s_{i})\)-subpath of \(Q\). Then \(t_{i}\neq t_{j}\), so \(d_{i}=2\) and \(s_{i},t_{i},s_{j},t_{j}\) are all distinct. We have an \((s_{j},t_{i})\)-subpath of \(Q\) and an \((s_{i},t_{j})\)-subpath of \(Q\) which are disjoint. If one of them is of length at least two, then we have two disjoint paths of total length more than \(d_{i}\). Hence, \(s_{j}t_{i}\) and \(s_{i}t_{j}\) are the first and the last edge in \(Q\). The proof of the claim is complete.
\(\lrcorner\)
Now we are ready to prove the main lemma that bounds the length of the \((s,t)\)-path returned by Algorithm 3.
**Lemma 12**.: _long_nested_st_path outputs an \((s,t)\)-path in \(G\) of length at least \(\delta(G-\{s,t\})+f(k)/32-3\), where \(k=L-\delta(G-\{s,t\})\) and \(L\) is the length of the longest \((s,t)\)-path in \(G\)._
Proof.: Let \(T\) be the longest \((s,t)\)-path in \(G\), \(|E(T)|=L=\delta(G-\{s,t\})+k\). Our aim is to show that either \(T\) yields a sufficiently long path in \(H\) to use Lemma 11, or conclude that after contractions most of the path stays inside one Erdos-Gallai component, the deepest component visited. In case of the latter, we show that it suffices to take a long path inside this component.
We now introduce some notation for \(T\) with respect to the nested Erdos-Gallai decomposition structure, similarly to the proof of Lemma 11. Let \(h\in[\ell]\) be the largest integer such that \(T\) enters \(G_{h}\). Let \(j_{1},j_{2},\ldots,j_{c}\) be such that \(j_{c}=h\), \(j_{1}=1\) and \(e(j_{i+1})=j_{i}\) for each \(i\in[c-1]\). Denote by \(Y\) the set of all \(i\in[c-1]\) such that no contraction was made in Line 1.6 between \(\{s_{j_{i}},t_{j_{i}}\}\) and \(\{s_{j_{i+1}},t_{j_{i+1}}\}\). Denote \(X:=[c-1]\setminus Y\).
Consider what happens to the path \(T\) in the graph \(H\) between two consecutive nested components \(G_{j_{i}}\) and \(G_{j_{i+1}}\). Since \(T\) is the longest \((s,t)\)-path in \(G\), edges in \(E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})\) form two disjoint paths between \(\{s_{j_{i}},t_{j_{i}}\}\) and \(\{s_{j_{i+1}},t_{j_{i+1}}\}\) of longest possible total length. If \(|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|=d_{j_{i+1}}\), all edges in \(E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})\) are contracted in \(H\).
Otherwise, \(|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|>d_{j_{i+1}}\). By Claim 4, all edges in \(E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})\) are present in \(H\), except for possibly \(d_{j_{i+1}}\) of them (the first and/or the last). Also, recall that by properties of nested Erdos-Gallai decomposition, \(T\) does not enter any \(G_{i}\) with \(e(i)=j_{i}\) except for \(G_{j_{i+1}}\). Then the removal of the internal vertices of non-decomposed components does not affect edges in \(E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})\). Hence, at least \(|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|-d_{j_{i+1}}\) of the edges are present in \(H\) in this case, which is at least one third of the edges in \(E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})\) since \(d_{j_{i+1}}\leq 2\).
Let \(T^{\prime}\) be the path \(T\) with all contractions applied to \(H\). If \(G_{h}\) is not decomposed, we assume that \(T^{\prime}\) contains the edge \(s_{h}t_{h}\) marked with \(G_{h}\). By the above, we have
\[|E(T^{\prime})\setminus E(G_{h})|\geq\sum_{i\in[c-1]}\left(|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|-d_{j_{i+1}}\right)\geq\frac{1}{3}\Big(|E(T)\setminus E(G_{h})|-\sum_{i\in X}d_{j_{i+1}}\Big).\]
The last inequality holds since for each \(i\in Y\) with \(d_{j_{i+1}}>0\), \(|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|-d_{j_{i+1}}\) is at least \(\frac{1}{3}|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|\), while for all the remaining indices \(i\) it is trivially at least \(\frac{1}{3}\left(|E(T)\cap E(G_{j_{i}})\setminus E(G_{j_{i+1}})|-d_{j_{i+1}}\right)\). Denote \(p:=|E(T^{\prime})\setminus E(G_{h})|\), and from the above obtain the equivalent
\[|E(T)\setminus E(G_{h})|\leq 3p+\sum_{i\in X}d_{j_{i+1}}. \tag{1}\]
If \(p\geq k/4\), then an \((s,t)\)-path of length at least \(k/4+1\) is present in \(H\). In this case, an approximation of the longest \((s,t)\)-path in \(H\) in Line 3.2 of long_nested_st_path gives a path of length at least \(f(k/4+1)\geq f(k)/4\). By Lemma 11, running nested_decompress on this path results in an \((s,t)\)-path of length at least \(\delta(G-\{s,t\})+f(k)/32-3\) in \(G\), so in this case the proof is finished.
Otherwise, \(p<k/4\). Denote by \(T_{h}\) the \((s_{h},t_{h})\)-subpath of \(T\). For simplicity, denote \(\delta:=\delta(G-\{s,t\})\) and \(\delta_{h}:=\delta(G_{h}-\{s_{h},t_{h}\})\). We consider two cases.
\(G_{h}\)**is decomposed.** In this case, \(T_{h}\) does not enter any Erdos-Gallai component of \(G_{h}\). By Claim 4, at most two edges of \(T_{h}\) can be contracted in \(H\), denote the number of such edges by \(d^{\prime}_{h}\). Also, at least one edge is not contracted, so \(|E(T_{h})|-d^{\prime}_{h}\geq|E(T_{h})|/3\). Thus, \(T^{\prime}\) is an \((s,t)\)-path of length at least \(p+|E(T_{h})|-d^{\prime}_{h}\geq\frac{1}{3}\left(|E(T)|-\sum_{i\in X}d_{j_{i+1} }\right)\) in \(H\). If \(\sum_{i\in X}d_{j_{i+1}}\geq\delta+k/4-3\), then long_nested_st_path outputs an \((s,t)\)-path of length at least \(\delta+k/4-3\) by Claim 2, regardless of the length of \(P_{h}\) at Line 3.5.
Otherwise, \(T^{\prime}\) is an \((s,t)\)-path in \(H\) of length at least \(\frac{1}{3}(3k/4+3)=k/4+1\). Analogously to the case \(p\geq k/4\), long_nested_st_path then finds a path of length at least \(f(k)/4\), and the path \(Q\) returned by nested_decompress is of length at least \(\delta(G-\{s,t\})+f(k)/32-3\).
\(G_{h}\)**is not decomposed.** Here, our goal is to show that taking in \(G_{h}\) one of the \((s_{h},t_{h})\)-paths computed on Line 3.5 together with an arbitrary connection from \(\{s,t\}\) to \(\{s_{h},t_{h}\}\) gives a long enough \((s,t)\)-path in \(G\). Let \(k_{h}:=|E(T_{h})|-\delta_{h}\). Note that by the choice of \(T\), \(T_{h}\) is the longest \((s_{h},t_{h})\)-path in \(G_{h}\). We first show the following.
**Claim 5**.: _If \(G_{h}\) is not decomposed, then at Line 3.6 the length of \(P_{h}\) is at least \(\delta_{h}+f(k_{h})/8-3\)._
Proof of Claim 5.: First note that if the length of \(P_{h}\) from definition of nested Erdos-Gallai decomposition is at least \(|V(G_{h})|-1\), then \(P_{h}\) is a hamiltonian \((s_{h},t_{h})\)-path in \(G_{h}\). Then its length is maximum possible and is equal to \(|E(T_{h})|=\delta_{h}+k_{h}\geq\delta_{h}+f(k_{h})\). Hence, we can assume that the length of \(P_{h}\) given by nested Erdos-Gallai decomposition is at least \(\frac{5}{4}\delta_{h}-3\).
If \(f(k_{h})\geq\frac{8}{7}\delta_{h}\), then long_st_path_approx(\(G_{h}\), \(s_{h}\), \(t_{h}\)), the blackbox \((s,t)\)-path approximation algorithm, returns a path of length at least \(f(\delta_{h}+k_{h})\geq f(k_{h})\geq\delta_{h}+f(k_{h})/8\), so we are done.
Otherwise, \(f(k_{h})\leq\frac{8}{7}\delta_{h}\). If \(f(k_{h})\leq 24\), then it suffices for \(P_{h}\) to have length \(\delta_{h}\). In this case long_eq_st_path(\(G_{h}\), \(s_{h}\), \(t_{h}\)), the exact \((s,t)\)-path algorithm from Corollary 2, returns an \((s_{h},t_{h})\)-path of length at least \(\delta_{h}\).
It only remains to deal with the case where \(f(k_{h})>24\), and \(\delta_{h}\geq\frac{7}{8}f(k_{h})\). Since \(G_{h}\) is not decomposed and \(\delta_{h}>16\), by definition of nested Erdos-Gallai decomposition, \(P_{h}\) is of length at least
\[\frac{5}{4}\delta_{h}-3\geq\delta_{h}+\frac{1}{4}\delta_{h}-3\geq\delta_{h}+ \frac{1}{4}\cdot\frac{7}{8}f(k_{h})-3\geq\delta_{h}+\frac{1}{8}f(k_{h})-3.\]
Using (1), we can lower-bound \(k_{h}\) by
\[k_{h}=|E(T)|-|E(T)\setminus E(G_{h})|-\delta_{h}\geq|E(T)|-3p-\sum_{i\in X}d_{ j_{i+1}}-\delta_{h}=\delta+k-3p-\sum_{i\in X}d_{j_{i+1}}-\delta_{h}. \tag{2}\]
On Line 3.6, \(P_{h}\) is transformed into an \((s,t)\)-path in \(G\) of length at least \(|E(P_{h})|+\sum_{i\in X\cup Y}d_{j_{i+1}}\), by Claim 2. By Claim 5, this length is at least
\[\delta_{h}+f(k_{h})/8-3+\sum_{i\in X\cup Y}d_{j_{i+1}}\geq\delta+(\delta_{h}-\delta)+f\left(k+\delta-3p-\sum_{i\in X}d_{j_{i+1}}-\delta_{h}\right)/8+\sum_{i\in X\cup Y}d_{j_{i+1}}-3\]
\[\geq\delta+\underbrace{\Big(\delta_{h}+\sum_{i\in X\cup Y}d_{j_{i+1}}-\delta\Big)}_{\geq 0\text{ by Lemma 6}}+f\Big(k-3p-\Big(\delta_{h}+\sum_{i\in X}d_{j_{i+1}}-\delta\Big)\Big)/8-3\geq\delta+f(k-3p)/8-3.\]
Here the underbraced term is nonnegative since, by Lemma 6, \(\delta_{h}\geq\delta-\sum_{i\in[c-1]}d_{j_{i+1}}\). The last inequality holds because the amount subtracted inside \(f\) is at most the underbraced term, and \(x+f(y-x)/8\geq f(y)/8\) for all \(0\leq x\leq y\) (as \(x\geq f(8x)/8\) and \(f\) is non-decreasing and subadditive). Since \(p<k/4\), we have \(k-3p>k/4\), so long_nested_st_path outputs an \((s,t)\)-path of length at least \(\delta+f(k/4)/8-3\geq\delta+f(k)/32-3\). This completes the proof of Lemma 12.

We are now ready to move from paths to cycles and prove Theorem 1 for a \(2\)-connected graph \(G\). We start with the definition of a Dirac decomposition, which can be seen as an analogue of an Erdos-Gallai decomposition for cycles.
**Definition 3** (Dirac decomposition and Dirac component, Definition 5 in [24]).: Let \(G\) be a \(2\)-connected graph and let \(C\) be a cycle in \(G\) of length at least \(2\delta(G)\). We say that two disjoint paths \(P_{1}\) and \(P_{2}\) in \(G\) induce _a Dirac decomposition for \(C\)_ in \(G\) if
* The cycle \(C\) is of the form \(C=P_{1}P^{\prime}P_{2}P^{\prime\prime}\), where each of the paths \(P^{\prime}\) and \(P^{\prime\prime}\) has at least \(\delta(G)-2\) edges.
* For every connected component \(H\) of \(G-V(P_{1}\cup P_{2})\), it holds that \(|V(H)|\geq 3\) and one of the following.
    * (D1) \(H\) is \(2\)-connected, the maximum size of a matching in \(G\) between \(V(H)\) and \(V(P_{1})\) is one, and between \(V(H)\) and \(V(P_{2})\) is also one;
    * (D2) \(H\) is not \(2\)-connected, exactly one vertex of \(P_{1}\) has neighbors in \(H\), that is, \(|N_{G}(V(H))\cap V(P_{1})|=1\), and no inner vertex from a leaf-block of \(H\) has a neighbor in \(P_{2}\);
    * (D3) The same as (D2), but with \(P_{1}\) and \(P_{2}\) interchanged. That is, \(H\) is not \(2\)-connected, \(|N_{G}(V(H))\cap V(P_{2})|=1\), and no inner vertex from a leaf-block of \(H\) has a neighbor in \(P_{1}\).
* There is exactly one connected component \(H\) in \(G-V(P_{1}\cup P_{2})\) with \(V(H)=V(P^{\prime})\setminus\{s^{\prime},t^{\prime}\}\), where \(s^{\prime}\) and \(t^{\prime}\) are the endpoints of \(P^{\prime}\). Analogously, there is exactly one connected component \(H\) in \(G-V(P_{1}\cup P_{2})\) with \(V(H)=V(P^{\prime\prime})\setminus\{s^{\prime\prime},t^{\prime\prime}\}\).
The set of _Dirac components_ for a Dirac decomposition is defined as follows. First, for each component \(H\) of type (D1), \(H\) is a Dirac component of the Dirac decomposition. Second, for each leaf-block of each \(H\) of type (D2), or of type (D3), this leaf-block is also a Dirac component of the Dirac decomposition.
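Leaf-blocks can be extracted with standard biconnectivity machinery. The following Python sketch (using networkx; the helper name is ours) computes them for a connected graph \(H\), treating a block as a leaf-block when it contains at most one cut vertex of \(H\) — in the block-cut tree these are exactly the blocks of degree at most one:

```python
import networkx as nx

def leaf_blocks(H):
    """Leaf-blocks of a connected graph H: biconnected components (blocks)
    containing at most one cut vertex of H. If H is 2-connected, the single
    block is H itself and it has no cut vertices."""
    cut_vertices = set(nx.articulation_points(H))
    blocks = (set(block) for block in nx.biconnected_components(H))
    return [block for block in blocks if len(block & cut_vertices) <= 1]
```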
First, we recall an important property of a Dirac decomposition that restricts how a cycle can pass through a Dirac component.
**Lemma 13** (Lemma 17 in [24]).: _Let \(G\) be a \(2\)-connected graph and \(C\) be a cycle in \(G\). Let paths \(P_{1},P_{2}\) induce a Dirac decomposition for \(C\) in \(G\). Let \(M\) be a Dirac component of the Dirac decomposition and \(P\) be a path in \(G\) such that \(P\) contains at least one vertex in \(V(P_{1})\cup V(P_{2})\). If \(P\) enters \(M\), then all vertices of \(M\) hit by \(P\) appear consecutively on \(P\)._
We now restate the result of [24] on the construction of a Dirac decomposition for a given graph. Note that here we state it in a slightly different form, which is more convenient in the setting of this paper. The statement is given below and the main difference is highlighted in bold.
**Lemma 14** (Lemma 20 in [24]).: _Let \(G\) be an \(n\)-vertex \(2\)-connected graph and \(k\) be an integer such that \(\delta(G)\geq 12\), \(0<k\leq\frac{1}{24}\delta(G)\), and_
\[2k+12\leq\delta(G)<\frac{\boldsymbol{n}}{\boldsymbol{2}}.\]
_Then there is an algorithm that, given a **non-hamiltonian** cycle \(C\) of length less than \(2\delta(G)+k\), in polynomial time finds either_
* _Longer cycle in_ \(G\)_, or_
* _Vertex cover of_ \(G\) _of size at most_ \(\delta(G)+2k\)_, or_
* _Two paths_ \(P_{1},P_{2}\) _that induce a Dirac decomposition for_ \(C\) _in_ \(G\)_._
To clarify this form, note that in the original statement of Lemma 20 in [24], the upper bound for \(\delta(G-B)\) is \(\frac{n}{2}-\frac{|B|+k}{2}\), where \(B\) is a given set of small-degree vertices. In the proof of Lemma 20 in [24], one can easily note that the only reason for this bound is the existence of at least one vertex in \(V(G)\setminus V(C)\setminus B\). Since in our work \(B\) is always empty, this is equivalent to \(V(G)\neq V(C)\), i.e. non-hamiltonicity of \(C\). Hence, the replacement of this bound with the condition on non-hamiltonicity of \(C\) is legitimate.
We finish this subsection with another important property of Dirac decomposition stating that there is a long cycle that enters at least one Dirac component. Unfortunately, this property, Lemma 19 in [24], is stated in a way requiring the offset above \(2\delta(G)\) for this long cycle to be much smaller than \(\delta(G)\). Here we provide this property in the form that does not require this and is much more convenient in our setting. Since it differs significantly from the original statement, we provide a proof of this result that is based on the proof of Lemma 19 from [24].
**Lemma 15** (Modified Lemma 19 from [24]).: _Let \(G\) be a graph and \(P_{1},P_{2}\) induce a Dirac decomposition for a cycle \(C\) of length at most \(2\delta(G)+\kappa\) in \(G\) such that \(2\kappa\leq\delta(G)\). If there exists a cycle of length at least \(2\delta(G)+k\) in \(G\) that contains at least one vertex in \(V(P_{1})\cup V(P_{2})\), then there exists a cycle of length at least \(2\delta(G)+k/2-1\) in \(G\) that enters a Dirac component._
Proof.: Suppose that there exists a cycle \(C^{\prime}\) of length at least \(2\delta(G)+k\) in \(G\) that contains at least one vertex in \(V(P_{1})\cup V(P_{2})\). If \(C^{\prime}\) already contains an edge of a Dirac component, we are done. We now assume that \(C^{\prime}\) does not contain any edge of any Dirac component. We show how to use \(C^{\prime}\) to construct a cycle of length at least \(2\delta(G)+k/2-1\) in \(G\) that contains an edge of a Dirac component of the given Dirac decomposition.
Let \(W\) be the set of all vertices of \(G\) that are vertices of non-leaf-blocks of (D2)-type or (D3)-type components in the Dirac decomposition. We start with the following claim.
**Claim 6**.: \(|W\cap V(C^{\prime})|>0\)_._
Proof of Claim 6.: This is a counting argument. Note that \(C^{\prime}\) cannot contain an edge with both endpoints inside a Dirac component of \(G\). Since Dirac components of \(G\) are (D1)-type components of the Dirac decomposition and leaf-blocks of (D2)-type or (D3)-type connected components, each edge of \(C^{\prime}\) has an endpoint either in \(V(P_{1})\cup V(P_{2})\), or inside a non-leaf-block of a (D2)-type or a (D3)-type connected component. The union of the vertex sets of the non-leaf-blocks forms the set \(W\). Hence, \((W\cap V(C^{\prime}))\cup V(P_{1})\cup V(P_{2})\) is a vertex cover of \(C^{\prime}\).
Note that a vertex cover of any cycle consists of at least half of its vertices. Then
\[2|(W\cap V(C^{\prime}))\cup V(P_{1})\cup V(P_{2})|\geq|V(C^{\prime})|.\]
By definition of a Dirac decomposition, \(|V(P_{1})\cup V(P_{2})|\leq\kappa-2\). Immediately we get that
\[2|W\cap V(C^{\prime})|\geq 2\delta(G)+k-2|V(P_{1})\cup V(P_{2})|\geq 2\delta(G)+k -2(\kappa-2)>0.\]
We now take a vertex \(w_{1}\in W\cap V(C^{\prime})\). The following claim allows constructing a long chord of \(C^{\prime}\) starting in \(w_{1}\).
**Claim 7**.: _Let \(H\) be a (D2)-type or a (D3)-type component in the Dirac decomposition. \(C^{\prime}\) does not contain any inner vertex of the leaf-blocks of \(H\)._
Proof of Claim 7.: Suppose that \(C^{\prime}\) contains some vertex \(u\in V(H)\) that is an inner vertex of some leaf-block \(L\) of \(H\). As \(L\) is a Dirac component of \(G\), \(C^{\prime}\) cannot contain any edge of \(L\), so \(C^{\prime}\) should enter \(L\) from \(V(P_{1})\cup V(P_{2})\) through \(u\) and leave it immediately. By definition of Dirac decompositions, the only option to enter or leave \(L\) is to go through the only vertex in \(V(P_{1})\) (if \(H\) is of type (D2)) or in \(V(P_{2})\) (if \(H\) is of type (D3)). As \(C^{\prime}\) cannot contain any vertex twice, this is not possible.
Now construct the chord of \(C^{\prime}\) starting in \(w_{1}\). Since \(w_{1}\) is a vertex of a separable component \(H\), there is a cut vertex \(c_{1}\) of a leaf-block \(L_{1}\) of \(H\) reachable from \(w_{1}\) inside \(H\). The leaf-block \(L_{1}\) also contains at least one vertex \(v_{1}\neq c_{1}\) that is adjacent to a vertex in \(V(P_{1})\) (if \(H\) is of type (D2)) or in \(V(P_{2})\) (if \(H\) is of type (D3)) outside \(H\). We know that \(\delta(L_{1}-c_{1})\geq\delta(G-c_{1})-1\geq\delta(G)-2\), since the only outside neighbour of vertices in \(L_{1}-c_{1}\) is a single vertex in \(V(P_{1})\) or \(V(P_{2})\). By Corollary 2, there exists a \((c_{1},v_{1})\)-path inside \(L_{1}\) of length at least \(\delta(G)-2\). Combine this with a \((w_{1},c_{1})\)-path inside \(H\) and obtain a \((w_{1},v_{1})\)-path inside \(H\).
Note that the constructed \((w_{1},v_{1})\)-path can contain vertices from \(V(C^{\prime})\) apart from \(w_{1}\). Let \(w^{\prime}_{1}\in V(C^{\prime})\) be the vertex from \(V(C^{\prime})\) on the \((w_{1},v_{1})\)-path farthest from \(w_{1}\). Note that the \((w^{\prime}_{1},v_{1})\)-subpath still contains the \((c_{1},v_{1})\)-path as a subpath by Claim 7. Hence, we obtain a \((w^{\prime}_{1},v_{1})\)-path of length at least \(\delta(G)-2\) inside \(H\) that does not contain any vertex in \(V(C^{\prime})\setminus\{w^{\prime}_{1}\}\). To obtain a long chord of \(C^{\prime}\), it is left to reach the vertex in \(V(P_{1})\cup V(P_{2})\) from \(v_{1}\) outside \(H\), and then follow the cycle \(C\) until a vertex \(v^{\prime}_{1}\) of \(C^{\prime}\) is reached. This is always possible since \(V(C)\cap V(C^{\prime})\supseteq(V(P_{1})\cup V(P_{2}))\cap V(C^{\prime})\neq\emptyset\). We obtain a chord of length at least \(\delta(G)-1\) connecting \(w^{\prime}_{1}\) and \(v^{\prime}_{1}\).
The \((w^{\prime}_{1},v^{\prime}_{1})\)-chord of \(C^{\prime}\) splits \(C^{\prime}\) into two \((w^{\prime}_{1},v^{\prime}_{1})\)-arcs, and one of the arcs has length at least \(\delta(G)+k/2\). Combine this arc with the chord and obtain a cycle of length at least \(2\delta(G)+k/2-1\) in \(G\). This cycle contains an edge of a leaf-block of \(H\), i.e. of a Dirac component. The proof is complete.
### Existence of a separating pair
This subsection encapsulates the new combinatorial result behind Dirac decomposition that is crucial to our proof of Theorem 1. It helps us avoid using the Dirac decomposition explicitly in our algorithm, so that we can instead reduce to the algorithm for approximating \((s,t)\)-paths. The formal statement is recalled next.
**Lemma 1**.: _Let \(G\) be a \(2\)-connected graph and \(P_{1},P_{2}\) induce a Dirac decomposition for a cycle \(C\) of length at most \(2\delta(G)+\kappa\) in \(G\) such that \(2\kappa\leq\delta(G)\). If there exists a cycle of length at least \(2\delta(G)+k\) in \(G\), then there exist \(u,v\in V(G)\) such that_
* \(G-\{u,v\}\) _is not connected, and_
* _there is an_ \((u,v)\)_-path of length at least_ \(\delta(G)+(k-2)/4\) _in_ \(G\)_._
Proof.: Consider the longest cycle \(C^{\prime}\) in \(G\). We assume that this cycle is of length at least \(2\delta(G)+k\) and consider four cases.
**Case 1.**\(C^{\prime}\) _is completely contained in some connected component \(H\) of \(G-V(P_{1}\cup P_{2})\), and \(H\) is \(2\)-connected._ Then \(H\) is a Dirac component of type (D1). Since the matching size between \(V(H)\) and \(V(P_{i})\) for each \(i\in\{1,2\}\) is exactly one, by Konig's theorem all edges between \(V(H)\)
and \(V(P_{i})\) are covered by a single vertex. Denote this vertex by \(u\) for \(i=1\) and by \(v\) for \(i=2\). Since \(G\) is \(2\)-connected, \(u\) and \(v\) are distinct. As \(\{u,v\}\) separates \(H\) from the rest of the graph and \(V(C)\not\subset V(H\cup P_{1}\cup P_{2})\), we have that \(G-\{u,v\}\) is not connected. It is left to show that there exists a long \((u,v)\)-path in \(G\). Toward this, denote \(u^{\prime}=u\) if \(u\in V(H)\), and \(u^{\prime}\in N_{G}(u)\cap V(H)\) if \(u\in V(P_{1})\). Choose \(v^{\prime}\in V(H)\) similarly, i.e. \(v^{\prime}\) either equals \(v\) or is a neighbour of \(v\). Since \(G\) is \(2\)-connected, there is always a way to choose distinct \(u^{\prime}\) and \(v^{\prime}\). By Lemma 2, there is a \((u^{\prime},v^{\prime})\)-path of length at least \(\delta(G)+k/2\) in \(H\), hence there is also a \((u,v)\)-path of length at least \(\delta(G)+k/2\) in \(G\).
**Case 2.**\(C^{\prime}\) _is completely contained in a leaf-block of a connected component \(H\) of \(G-V(P_{1}\cup P_{2})\)._ That is, \(C^{\prime}\) is contained in a Dirac component of type (D2) or (D3). The choice of \(u\) and \(v\) is similar to Case 1. That is, if the Dirac component is of type (D2), choose \(u\) such that \(u\in N_{G}(V(H))\cap V(P_{1})\) and choose \(v\) equal to the cut vertex of the Dirac component. \(G-\{u,v\}\) is not connected as \(\{u,v\}\) separates the Dirac component from the rest of \(H\). There is an \((u,v)\)-path of length at least \(\delta(G)+k/2+1\) since there exists a \((z,v)\)-path of length at least \(\delta(G)+k/2\) by Lemma 2, where \(z\in N_{G}(u)\cap V(H-v)\). The choice of \(u\) and \(v\) for type (D3) is symmetrical.
**Case 3.**\(C^{\prime}\) _is completely contained in a non-leaf-block of a connected component \(H\) of \(G-V(P_{1}\cup P_{2})\)._ In this case, \(C^{\prime}\) is not contained in a Dirac component. Denote the non-leaf-block of \(H\) that contains \(C^{\prime}\) by \(K\). Without loss of generality, we assume that \(H\) corresponds to (D2), i.e. \(|N_{G}(V(H))\cap V(P_{1})|=1\). By Menger's theorem, there are either two vertices separating \(V(K)\) from \(V(P_{1}\cup P_{2})\) in \(G\) or three disjoint paths going from \(V(K)\) to \(V(P_{1}\cup P_{2})\). If the former is the case, denote these two vertices by \(u\) and \(v\). Obviously, \(G-\{u,v\}\) is not connected. There are two disjoint paths going from \(V(K)\) to \(V(P_{1}\cup P_{2})\), and one of these paths contains \(u\) and the other contains \(v\). Connect the endpoints of these paths in \(V(K)\) using a path of length at least \(\delta(G)+k/2\) inside \(V(K)\) given by Lemma 2. Clearly, the obtained long path contains an \((u,v)\)-path of length at least \(\delta(G)+k/2\) as a subpath.
If the latter is the case, then two of the three paths necessarily end in \(V(P_{2})\). Let these two paths start respectively from \(u_{1}\) and \(u_{2}\) in \(V(K)\) and end in \(v_{1}\) and \(v_{2}\) in \(V(P_{2})\). Note that these paths use only edges of \(H\) and edges between \(V(H)\) and \(V(P_{2})\) in \(G\). Moreover, none of the paths has an internal vertex in \(V(K)\) or \(V(P_{2})\). By Lemma 2, there is an \((u_{1},u_{2})\)-path of length at least \(\delta(G)+k/2\) in \(K\). Now construct a cycle in \(G\) by combining the \((u_{1},u_{2})\)-path with the \((u_{1},v_{1})\)-path, \((u_{2},v_{2})\)-path, and the subpath of \(C\) that goes between \(v_{1}\) and \(v_{2}\) outside of \(P_{2}\). Since that subpath contains \(P^{\prime}\) and \(P^{\prime\prime}\) from the definition of Dirac decomposition as subpaths, the obtained cycle is of length at least \((\delta(G)+k/2)+1+1+2\cdot(\delta(G)-2)\geq 3\delta(G)+k/2-2\geq 2\delta(G)+k/2\). This cycle contains an edge of \(P^{\prime}\), so it enters a Dirac component, so we can replace \(C^{\prime}\) with this cycle and apply the following case.
**Case 4.**\(C^{\prime}\) _has a common vertex with \(V(P_{1}\cup P_{2})\). By Lemma 15, we can assume that \(C^{\prime}\) enters a Dirac component \(K\) but its length is at least \(2\delta(G)+k/2-1\)._ Following Case 1 and Case 2, we know that there are \(u,v\in V(K\cup P_{1}\cup P_{2})\) such that in \(G-\{u,v\}\) vertices in \(V(K)\) are separated from the rest of the graph. By Lemma 13, vertices in \(V(K)\) appear consecutively on \(C^{\prime}\), so vertices and edges of \(C^{\prime}\) induce a path inside \(K\). Since \(C^{\prime}\) is not contained in \(V(K)\), at least one of \(u\) and \(v\) is present in \(C^{\prime}\), so we have two cases depending on \(|V(C^{\prime})\cap\{u,v\}|\). If \(u,v\in V(C^{\prime})\), then the longest arc of \(C^{\prime}\) going between \(u\) and \(v\) simply yields a path of length at least \((2\delta(G)+k/2-1)/2=\delta(G)+(k-2)/4\). If exactly one of \(u\) and \(v\) is present on \(C^{\prime}\), without loss of generality we assume \(u\in V(C^{\prime})\). Then \(V(C^{\prime})\subseteq V(K)\cup\{u\}\), as \(C^{\prime}\) does not pass through \(v\) -- the only other entry to \(K\). Similarly to Case 1 and Case 2, we have a vertex \(v^{\prime}\in V(K)\), which is either equal to \(v\) or is a neighbour of \(v\). Take a shortest path from \(v^{\prime}\) to \(V(C^{\prime})\) inside \(K\). Denote its endpoint by \(w\). Prolong the path starting in \(v^{\prime}\) with the longest arc of \(C^{\prime}\) that goes between \(w\) and \(u\). This yields a \((v^{\prime},u)\)-path, hence a \((v,u)\)-path, of length at least \(\delta(G)+(k-2)/4\) in \(G\)
### Proof of Theorem 1
In this subsection, we combine Theorem 2 and the results presented earlier in this section into the proof of Theorem 1.
Proof of Theorem 1.: Assume that we are given a blackbox algorithm that finds a cycle of length \(f(L)\) in a graph with the longest cycle length \(L\). We now describe the desired approximation algorithm that finds a cycle of length at least \(2\delta(G)+h(k)\) based on the blackbox algorithm, where
\[h(k)=\frac{1}{128}f(k)-8.\]
The input to our algorithm is a graph \(G\); for convenience, denote \(\delta:=\delta(G)\), let \(L\) be the length of the longest cycle in \(G\), and let \(k=L-2\delta\). The goal of our algorithm is to find a cycle of length at least \(2\delta+h(k)\) in \(G\). Note that the algorithm does not estimate \(h(k)\) in any way; it merely outputs the longest cycle that was found during its run. We focus on showing that this cycle always has length at least \(2\delta+h(k)\).
The pseudocode of our algorithm is presented in Algorithm 4. The first few lines of the algorithm are dedicated to eliminating various corner cases in which either the blackbox approximation or a long Dirac cycle already suffices. This will help us avoid dealing with extreme parameter values later in the analysis.
If \(2\delta\geq n\), the algorithm will find and output a Hamiltonian cycle in \(G\) following Dirac's theorem on Line 4.5. For the rest of the analysis, we assume \(2\delta<n\). On Line 4.1 our algorithm applies the blackbox \(f(L)\)-approximation algorithm to \(G\). If \(f(L)\geq\frac{49}{24}\delta\), then the resulting cycle is of length at least \(2\delta+(f(L)-2\delta)\geq 2\delta+\frac{1}{49}f(L)\), which is at least \(2\delta+h(k)\). As the algorithm never makes the current cycle shorter, in this case the output will be automatically valid. We now also assume that \(f(L)<\frac{49}{24}\delta\).
On Line 4.2 our algorithm applies the FPT algorithm for Long Dirac Cycle to find a cycle of length at least \(2\delta+1\) in \(G\) in polynomial time. If such a cycle is found, then our algorithm keeps the longest of this cycle and the previously computed approximation. If \(h(k)=\frac{1}{128}f(k)-8\leq 1\), then this cycle is a required approximation. On the other hand, if a cycle of length at least \(2\delta+1\) does not exist in \(G\), then \(k=L-2\delta=0\), so a cycle of length \(2\delta\) is a valid approximation. Hence, in this case our algorithm just outputs a cycle of length at least \(2\delta\) guaranteed by Dirac's theorem and stops.
We now can assume that \(f(k)\geq 9\cdot 128\). Since \(f(L)<\frac{49}{24}\delta\), it follows that \(\delta>24\). Thus, on Line 4.7, if \(\delta\leq 24\), our algorithm just stops as the required approximation cycle was already encountered by the algorithm.
Now we reach the main case of the algorithm, where we use the structural results on Dirac decomposition to find a long cycle. Before Line 4.9, the current cycle \(C\) has length at least \(2\delta\). If the length of \(C\) is less than \(\lfloor 2\frac{1}{24}\delta\rfloor\), then the algorithm of Lemma 14 is applied to the graph \(G\) and the cycle \(C\) with the parameter \(k^{\prime}=|V(C)|-2\delta+1\). If the outcome is a cycle longer than \(C\), then it replaces \(C\) with this cycle. If it still holds that \(|V(C)|<\lfloor 2\frac{1}{24}\delta\rfloor\), then our algorithm applies Lemma 14 to \(G\) and \(C\) again. This process repeats until one of the three possible structures is found in \(G\):
* Cycle \(C\) of length at least \(\lfloor 2\frac{1}{24}\delta\rfloor\);
* Vertex cover of size at most \(\delta+2(|V(C)|-2\delta+1)\);
* Two paths \(P_{1},P_{2}\) that induce a Dirac decomposition for \(C\).
```
longest_cycle_above_degree_approx(G)
Input: 2-connected graph G of minimum degree δ
Output: a cycle C of length at least 2δ+h(k), where k = L−2δ and L is the length of the longest cycle in G

4.1   C ← longest_cycle_approx(G);
4.2   C′ ← a cycle of length at least 2δ+1 in G found by the Long Dirac Cycle algorithm, if one exists;
4.3   C ← the longest of C, C′;
4.4   if 2δ ≥ n then
4.5       return a hamiltonian cycle of G guaranteed by Dirac's theorem;
4.6   if |V(C)| < 2δ then C ← a cycle of length at least 2δ guaranteed by Dirac's theorem;
4.7   if δ ≤ 24 or no cycle was found on Line 4.2 then return C;
4.8   while |V(C)| < ⌊(2+1/24)·δ⌋ do
4.9       apply Lemma 14 to G and C with the parameter k′ = |V(C)|−2δ+1;
4.10      if the outcome is a cycle longer than C then C ← that cycle, else break;
4.11  if |V(C)| ≥ ⌊(2+1/24)·δ⌋ or Lemma 14 gave a vertex cover of G then return C;
4.12  foreach u, v ∈ V(G) such that G−{u,v} is not connected do
4.13      Q, R ← empty paths;
4.14      foreach connected component H in G−{u,v} do
4.15          S ← longest_st_path_above_degree_approx(G[V(H)∪{u,v}]+uv, u, v);
4.16          Q, R ← the two longest paths among Q, R, S;
4.17      C ← the longest of C, Q∪R;
4.18  return C;
```

**Algorithm 4** The algorithm finding a cycle of length at least \(2\delta(G)+h(k)\) in a \(2\)-connected graph \(G\).
The first outcome is the desired \(h(k)\)-approximation of the offset since \(|V(C)|\geq 2\delta+\frac{1}{49}f(L)-1\geq 2\delta+h(k)\) in this case. In the second outcome, since twice the size of a vertex cover upper-bounds the length of any cycle, \(L\leq 2\cdot(2\cdot|V(C)|-3\delta+2)\), hence \(|V(C)|-2\delta\geq\frac{L-4-2\delta}{4}\), so \(|V(C)|\geq 2\delta+\frac{k}{4}-1\). Automatically, \(C\) is a valid approximation in this case as well. Thus, if either of the two situations occurs, our algorithm simply returns the current cycle \(C\).
We move on to the most involved case where the two paths \(P_{1},P_{2}\) inducing a Dirac decomposition are found. Our goal now is to use Lemma 1 to find a separating pair of vertices that has a long path between them, and then use the already established algorithm from Theorem 2 to approximate the length of this path.
Hence, before moving further, we need to obtain an approximation algorithm for finding a long \((s,t)\)-path. For this, we apply Lemma 3 to the blackbox \(f(L)\)-approximation algorithm for the longest cycle and obtain an algorithm that finds an \((s,t)\)-path of length at least \(\frac{1}{2}f(2p)\) in a graph with the longest \((s,t)\)-path length \(p\). Finally, we apply Theorem 2 to the latter algorithm and obtain an algorithm finding an \((s,t)\)-path of length at least \(\delta+(\frac{1}{64}f(2k^{\prime})-3)\) where \(k^{\prime}=p-\delta\) for the longest \((s,t)\)-path length \(p\).
Since \(2(|V(C)|-2\delta)<\delta\), we can apply Lemma 1 to \(G\) and \(C\). We obtain that there exists a pair of vertices \(u,v\in V(G)\) such that \(G-\{u,v\}\) is not connected and there is a path of length at least \(\delta+(L-2\delta-2)/4\) between \(u\) and \(v\) in \(G\). Towards encountering this pair of vertices, our algorithm iterates over all possible \(u,v\in V(G)\) such that \(G-\{u,v\}\) is not connected. Assume that the pair \(\{u,v\}\) is fixed. Then for each connected component \(H\) of \(G-\{u,v\}\), our algorithm applies the \((s,t)\)-path approximation algorithm to \(G[V(H)\cup\{u,v\}]+uv\) to find a long \((u,v)\)-path. Note that this is a legitimate application of the algorithm since \(G[V(H)\cup\{u,v\}]+uv\) is \(2\)-connected.
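As an illustration, here is a minimal Python sketch of this enumeration step (networkx-based; `st_path_approx` is a hypothetical stand-in for the blackbox longest_st_path_above_degree_approx, and cycles and paths are represented as vertex lists):

```python
import itertools
import networkx as nx

def best_cycle_from_separating_pairs(G, C, st_path_approx):
    """For every separating pair {u, v}, run the (s,t)-path approximation in
    each component of G - {u, v} (with the edge uv added, so the input is
    2-connected) and glue the two longest (u, v)-paths into a cycle.
    C is the current best cycle as a vertex list."""
    best = C
    for u, v in itertools.combinations(G.nodes, 2):
        components = list(nx.connected_components(
            G.subgraph(set(G.nodes) - {u, v})))
        if len(components) < 2:
            continue  # {u, v} does not separate G
        paths = []
        for comp in components:
            H = G.subgraph(comp | {u, v}).copy()
            H.add_edge(u, v)  # the graph G[V(H) + {u, v}] + uv
            paths.append(st_path_approx(H, u, v))  # a (u, v)-path
        paths.sort(key=len, reverse=True)
        Q, R = paths[0], paths[1]
        cycle = Q + R[-2:0:-1]  # follow Q from u to v, return along R
        if len(cycle) > len(best):
            best = cycle
    return best
```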
Now, each application of the algorithm yields some \((u,v)\)-path. There are at least two connected components in \(G-\{u,v\}\), so at least two \((u,v)\)-paths are produced, and our algorithm simply combines the longest two paths among them in a cycle. The length of each path is at least \(\delta-2\). Moreover, if \(\{u,v\}\) is the pair given by Lemma 1, for at least one component the longest \((u,v)\)-path has length at least \(\delta+(k-2)/4\). Then for this connected component \(H\) the approximation algorithm finds an \((u,v)\)-path \(Q\) of length at least \(\delta(H-\{u,v\})+\frac{1}{64}f(2k^{\prime})-3\), where
\[k^{\prime}=\delta(G)+(k-2)/4-\delta(H-\{u,v\}).\]
Denote \(x=\delta(G)-\delta(H-\{u,v\})\). Note that \(x\leq 2\) and that \(x\) can be negative. Then the length of \(Q\) is at least
\[\delta(G)-x+\frac{1}{64}f(2(x+(k-2)/4))-3.\]
If \(x\geq 0\), then the length of \(Q\) is at least
\[\delta(G)-2+\frac{1}{64}f((k-2)/2)-3.\]
If \(x<0\), then, as \(-x\geq f(|x|)\), the length of \(Q\) is at least
\[\delta(G) +f(|x|)+\frac{1}{64}f(-2|x|+(k-2)/2)-3\] \[\geq\delta(G)+\frac{1}{64}f(64|x|)+\frac{1}{64}f(-2|x|+(k-2)/2)-3\] \[\geq\delta(G)+\frac{1}{64}f(62|x|+(k-2)/2)-3\] \[\geq\delta(G)+\frac{1}{64}f((k-2)/2)-3.\]
In any case, the length of \(Q\) is at least
\[\delta+\frac{1}{64}f(k/2-1)-5\geq\delta+\frac{1}{128}f(k-2)-5\geq\delta+\frac {1}{128}f(k)-\frac{1}{128}f(2)-5\geq\delta+\frac{1}{128}f(k)-6.\]
Hence, if our algorithm combines the longest two paths given by the pair \(\{u,v\}\), it obtains a cycle of length at least \(2\delta+\frac{1}{128}f(k)-8=2\delta+h(k)\). Since the algorithm outputs the longest cycle among those constructed, the proof is complete.
Finally, we show a corollary of Theorem 1 that allows us to approximate the longest path in a graph that is not necessarily \(2\)-connected.
**Corollary 3**.: _Let \(f:\mathbb{R}_{+}\to\mathbb{R}\) be a non-decreasing subadditive function. If there exists a polynomial-time algorithm finding a cycle of length \(f(L)\) in a \(2\)-connected graph with the longest cycle length \(L\), then there is a polynomial-time algorithm that outputs a path of length at least \(2\delta(G)+f(L-2\delta(G))/128-8\) in a graph \(G\) with \(\delta(G)<\frac{1}{2}|V(G)|\) and the longest path length \(L\)._
Proof of Corollary 3.: Take a graph \(G\). We assume that \(G\) is connected, otherwise we apply the same algorithm to each of its connected components.
Add a universal vertex to \(G\) and obtain a \(2\)-connected graph \(G^{\prime}\). Note that \(\delta(G^{\prime})=\delta(G)+1\) and there is a cycle of length at least \(c\) in \(G^{\prime}\) if and only if there is a path of length at least \(c-2\) in \(G\). Hence, \(c-2\delta(G^{\prime})=c-2\delta(G)-2=p-2\delta(G)\) for the longest cycle length \(c\) in \(G^{\prime}\) and the longest path length \(p\) in \(G\). Moreover, a cycle of length \(c\) in \(G^{\prime}\) can be transformed into a path of length at least \(c-2\) in \(G\).
Apply Theorem 1 to \(G^{\prime}\). The obtained cycle is of length at least \(2\delta(G^{\prime})+f(k)/128-8\) where \(k=c-2\delta(G^{\prime})\) for the longest cycle length \(c\) in \(G^{\prime}\). Finally, transform this cycle into a path of length at least \(2\delta(G)+f(k)/128-8\) in \(G\).
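A minimal Python sketch of this reduction (with `long_cycle` a hypothetical blackbox implementing the algorithm of Theorem 1, and cycles and paths represented as vertex lists):

```python
import networkx as nx

def long_path_via_universal_vertex(G, long_cycle):
    """Approximate the longest path in a connected graph G through the
    2-connected graph G' obtained by adding a universal vertex w."""
    Gp = G.copy()
    w = object()  # a fresh vertex label, guaranteed not to clash with V(G)
    Gp.add_node(w)
    Gp.add_edges_from((w, x) for x in G.nodes)
    cycle = long_cycle(Gp)          # a cycle of length c in G'
    if w in cycle:                  # rotate so that w comes first, drop it:
        i = cycle.index(w)          # the rest is a path with c - 2 edges in G
        return cycle[i + 1:] + cycle[:i]
    return cycle                    # the cycle minus one edge: c - 1 edges
```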
## 6 Conclusion
In this article, we have shown a general theorem that allows us to leverage all the algorithmic machinery for approximating the length of the longest cycle to approximate the "offset" of the longest cycle provided by the classical Dirac's theorem. As long as one can compute a cycle of length \(f(L)\) in a \(2\)-connected graph \(G\) with the longest cycle length \(L\), we can also construct a cycle of length \(2\delta(G)+\Omega(f(L-2\delta(G)))\). In particular, we can use the state-of-the-art approximation algorithm for Longest Cycle due to Gabow and Nie [30]. They achieve an algorithm finding a cycle of length \(f(L)=c^{\sqrt{\log L}}\) for some constant \(c>1\) in a graph with the longest cycle length \(L\). Note that \(f\) is non-decreasing and subadditive (as \(f\) is concave on \([1,+\infty)\), and any concave function is subadditive; we also can formally set \(f(x)=\min\{x,c^{\sqrt{\log x}}\}\) for \(x\geq 1\) and \(f(x)=x\) for \(x<1\) to fit the statement of Theorem 1). By substituting this into Theorem 1, we achieve a polynomial-time algorithm that outputs a cycle of length \(2\delta(G)+2^{\Omega(\sqrt{\log(L-2\delta(G))})}\) in a \(2\)-connected graph \(G\) with the longest cycle length \(L>2\delta(G)\).
In the field of parameterized algorithms, there are many results on computing longest cycles or paths above some guarantees. It is a natural question whether approximation results similar to ours hold for other types of "offsets". To give a few concrete questions, recall that the _degeneracy_ \(\operatorname{dg}(G)\) of a graph \(G\) is the maximum \(d\) such that \(G\) has an induced subgraph of minimum degree \(d\). By Erdos and Gallai [19], a graph of degeneracy \(d\geq 2\) contains a cycle of length at least \(d+1\). It was shown by Fomin et al. in [22] that a cycle of length at least \(L=\operatorname{dg}(G)+k\) in a \(2\)-connected graph can be found in \(2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\) time. This immediately yields a polynomial-time algorithm for computing a cycle of length at least \(\operatorname{dg}(G)+\Omega(\log(L-\operatorname{dg}(G)))\). Is there a better approximation of the longest cycle above the degeneracy?
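For reference, the degeneracy itself is computable by the classical greedy peeling procedure; a small self-contained Python sketch:

```python
def degeneracy(adjacency):
    """Degeneracy of a graph given as {vertex: iterable_of_neighbours}:
    repeatedly delete a vertex of minimum degree and record the largest
    minimum degree seen. Runs in roughly O(n^2) time in this naive form."""
    adj = {v: set(ns) for v, ns in adjacency.items()}
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # a minimum-degree vertex
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# Example: a triangle with a pendant vertex has degeneracy 2.
print(degeneracy({1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}))  # 2
```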
Here is another concrete question. Bezakova et al. [4] gave an FPT algorithm that for \(s,t\in V(G)\) finds a detour in an undirected graph \(G\). In other words, they gave an algorithm that finds an \((s,t)\)-path of length at least \(L=\operatorname{dist}_{G}(s,t)+k\) in \(2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\) time. Here \(\operatorname{dist}_{G}(s,t)\) is the distance between \(s\) and \(t\). Therefore, in an undirected graph we can find an \((s,t)\)-path of length \(\operatorname{dist}_{G}(s,t)+\Omega(\log(L-\operatorname{dist}_{G}(s,t)))\) in polynomial time. The existence of any better bound is open. For directed graphs, the question of whether finding a long detour is FPT is wide open [4]. Nothing is known about the (in)approximability of long detours in directed graphs.
|
2303.11485 | Trirefringence and the M5-brane | The Hamiltonian formulation for nonlinear chiral 2-form electrodynamics in
six-dimensional Minkowski spacetime is used to show that small-amplitude
plane-wave perturbations of a generic uniform constant `magnetic' background
exhibit trirefringence: all three independent wave-polarisations have distinct
dispersion relations. While two coincide for Lorentz invariant theories, all
three coincide uniquely for the chiral 2-form theory on the worldvolume of the
M5-brane of M-theory. We argue that this is because, in this M-theory context,
the waves propagate in a planar M5-M2-M2 bound-state preserving 16
supersymmetries. We also show how our results imply analogous results for
nonlinear electrodynamics in a Minkowski spacetime of five and four dimensions. | Igor Bandos, Kurt Lechner, Dmitri Sorokin, Paul K. Townsend | 2023-03-20T22:45:59Z | http://arxiv.org/abs/2303.11485v4 | # Trirefringence and the M5-brane
###### Abstract
The Hamiltonian formulation for nonlinear chiral 2-form electrodynamics in six-dimensional Minkowski spacetime is used to show that small-amplitude plane-wave perturbations of a generic uniform constant'magnetic' background exhibit trirefringence: all three independent wave-polarisations have distinct dispersion relations. While two coincide for Lorentz invariant theories, all three coincide uniquely for the chiral 2-form theory on the worldvolume of the M5-brane of M-theory. We argue that this is because, in this M-theory context, the waves propagate in a planar M5-M2-M2 bound-state preserving 16 supersymmetries. We also show how our results imply analogous results for nonlinear electrodynamics in a Minkowski spacetime of five and four dimensions.
###### Contents
* 1 Introduction
* 2 Hamiltonian field equations
* 2.1 Expansion about a constant background
* 2.2 Plane waves on the background
* 2.3 Zero trirefringence conditions
* 2.4 Lorentz invariance
* 3 Relation to M-theory
* 4 Implications for 5D and 4D NLED
## 1 Introduction
The dispersion relation for an electromagnetic wave in an optically anisotropic medium is typically polarisation dependent; this is the phenomenon of birefringence. In theories of nonlinear electrodynamics (NLED), such as those arising as effective field theories for QED or SQED, a constant uniform electromagnetic background can be interpreted as an optical medium for small-amplitude plane-wave disturbances, which typically exhibit birefringence [1]. However, the Born-Infeld (BI) theory [2] is an exception to the rule [3]; in fact, Boillat [4] and Plebanski [5] have shown (in the closely related context of shock waves) that BI is the unique NLED with a weak-field limit for which there is no birefringence. There are others without a weak-field limit but, in contrast to BI, they are not electromagnetic-duality invariant [6].
Implicit in the above summary of NLED birefringence results of relevance here is the choice of a four-dimensional (4D) Minkowski spacetime. In higher dimensions there are more independent polarisations. In the 5D case, for example, the gauge vector field has three independent polarisations and one could expect to find three distinct dispersion relations for small-amplitude plane waves in a generic constant uniform electromagnetic background; i.e. trirefringence. As far as we are aware, this possibility has not been investigated, presumably because there is no obvious physical motivation but it is also less interesting from a purely theoretical perspective since one cannot expect to find any conformal limits of such theories. However, 5D NLEDs can be obtained by dimensional reduction from nonlinear theories of 6D chiral 2-form electrodynamics, without truncation since the chiral restriction ensures that there are still only three independent polarisations [7]. In this new context there are both weak-field and strong-field limits to interacting conformal chiral 2-form theories [8].
We have also shown in [8], extending observations of Perry and Schwarz [9], that there is a one-to-one correspondence, assuming Lorentz-invariance, between 6D chiral 2-form theories and 4D NLEDs that are electromagnetic duality invariant. This
correspondence suggests that the 6D partner to the 4D BI theory will have special trirefringence properties. In fact, we find that it is the unique chiral 2-form electrodynamics theory for which all three dispersion relations coincide; i.e. the unique "zero-trirefringence" theory. This is an apparently stronger uniqueness result than could have been expected from the "zero-birefringence" property of the 4D BI theory because more conditions are needed to ensure coincidence of three dispersion relations than are needed for two. However, Lorentz invariance is not manifest in the Hamiltonian formulation used here and, as we shall show, 6D Lorentz invariance requires, by itself, a coincidence of two of the three dispersion relations.
Previous investigations into bifrefringence in the 4D NLED context have all started with a manifestly Lorentz invariant Lagrangian function of the electric and magnetic fields. The analogous starting point for 6D chiral 2-form electrodynamics is not immediately available because the (nonlinear) chirality condition on the 3-form field strength already implies the field equations. This difficulty can be circumvented by the inclusion of additional fields, in various ways but never without the need for some other non-manifest symmetry that imposes constraints on interactions (see [8] and references therein). As we briefly review below, chirality is trivially incorporated in the Hamiltonian formulation, which also applies to any theory with the same phase-space as the free-field theory, irrespective of whether it is Lorentz invariant.
Our motivation for the investigation leading to these results comes from the importance of BI and its 6D chiral 2-form partner in String/M-theory. The worldvolume action for the D3-brane of IIB superstring theory is (for suitable boundary conditions) the \({\cal N}=4\) supersymmetrization of the 4D BI theory [10]. The worldvolume action for the M5-brane of M-theory [11, 12] can be similarly interpreted as the \((2,0)\)-supersymmetrization of the 6D chiral 2-form electrodynamics partner to the 4D BI theory [9]. In this context the 4D/6D pairing is a reflection of the String/M-theory dualities that relate the D3-brane with the M5-brane [13, 14]. We leave to the end of this article a discussion of implications of our results in this domain.
## 2 Hamiltonian field equations
In the Hamiltonian formulation of 6D chiral 2-form electrodynamics, the only independent field is a 2-form gauge potential \(A\) on the Euclidean 5-space. For time-space coordinates \(\{t,x^{i};i=1,\ldots,5\}\), the phase-space Lagrangian density takes the form [7, 8]
\[{\cal L}=\frac{1}{2}\dot{A}\cdot B-{\cal H}(B)\,, \tag{2.1}\]
where \(\dot{A}=\partial_{t}A\) and \(B\) is the 'magnetic' 2-form field, with components
\[B^{ij}=(\mathbf{\nabla}\times A)^{ij}:=\frac{1}{2}\varepsilon^{ijklm }\partial_{k}A_{lm}\,, \tag{2.2}\]
and \(C\cdot C^{\prime}=\frac{1}{2}C^{ij}C^{\prime}_{ij}\) for any two 5-space 2-forms \((C,C^{\prime})\). The Hamiltonian density \({\cal H}\) and the 5-vector field-momentum density \({\bf p}\), with components
\[p_{i}:=(B\times B)_{i}=\frac{1}{8}\varepsilon_{ijklm}B^{jk}B^{lm}\,, \tag{2.3}\]
are the Noether charge densities associated to time and space translation invariance; we follow here the notation and conventions of [8]. The field equation that follows from (2.1) is
\[\dot{B}=\mathbf{\nabla}\times H\,,\qquad H:=\partial{\cal H}/ \partial B\,. \tag{2.4}\]
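As a concrete illustration of these conventions, the following numpy sketch (our own, with arbitrary numerical values) evaluates the momentum density (2.3) for a block-diagonal background of the canonical form introduced in section 2.2, and confirms that only one component, \(p_{5}=\mathrm{B}_{1}\mathrm{B}_{2}\), is non-zero:

```python
import itertools
import numpy as np

def levi_civita(n=5):
    """The rank-n Levi-Civita symbol as a dense numpy array."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        inversions = sum(a > b for a, b in itertools.combinations(perm, 2))
        eps[perm] = (-1) ** inversions
    return eps

eps = levi_civita()

B1, B2 = 3.0, 2.0                 # arbitrary sample values with B1 >= B2 >= 0
B = np.zeros((5, 5))
B[0, 1], B[1, 0] = B1, -B1        # block-diagonal 'magnetic' background
B[2, 3], B[3, 2] = B2, -B2

p = np.einsum('ijklm,jk,lm->i', eps, B, B) / 8   # eq. (2.3)
print(p)                          # [0, 0, 0, 0, 6.0]: p_5 = B1*B2
```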
A basis for rotationally invariant functions of \(B\) is
\[s=\frac{1}{2}|B|^{2}=\frac{1}{4}B^{ij}B_{ij}\,,\qquad p\equiv|{\bf p}|=|B \times B|\,, \tag{2.5}\]
but it is convenient to impose rotational invariance by requiring \({\cal H}\) to be a function of \((s,p^{2})\), in which case
\[H={\cal H}_{s}B+2{\cal H}_{p^{2}}\left({\bf p}\times B\right), \tag{2.6}\]
where, here and below, subscripts \(s\) and \(p^{2}\) denote partial derivatives with respect to these independent variables.
Lorentz boost invariance remains non-manifest but it is a symmetry iff \(B\times B=H\times H\) (assuming unit speed of light) [8], and this is equivalent to
\[{\cal I}:={\cal H}_{s}^{2}+4s{\cal H}_{s}{\cal H}_{p^{2}}+4p^{2}{\cal H}_{p^{2 }}^{2}=1\,. \tag{2.7}\]
The choice \({\cal H}=s\) yields the free field theory.
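As a quick check of the constraint (2.7), the following sympy sketch (ours) verifies that \({\cal I}=1\) both for the free theory \({\cal H}=s\) and for the Born-Infeld-like choice \({\cal H}=\sqrt{1+2s+p^{2}}-1\); the latter is our illustrative candidate for a nontrivial solution of (2.7), anticipating the M5-brane theory discussed in Section 3:

```python
import sympy as sp

s, p2 = sp.symbols('s p2', positive=True)   # p2 stands for p**2

def lorentz_check(H):
    """The invariant I = H_s**2 + 4*s*H_s*H_{p^2} + 4*p^2*H_{p^2}**2 of
    eq. (2.7); Lorentz boost invariance requires I = 1."""
    Hs, Hp2 = sp.diff(H, s), sp.diff(H, p2)
    return sp.simplify(Hs**2 + 4*s*Hs*Hp2 + 4*p2*Hp2**2)

print(lorentz_check(s))                           # free theory: 1
print(lorentz_check(sp.sqrt(1 + 2*s + p2) - 1))   # BI-like choice: 1
```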
### Expansion about a constant background
For any choice of \({\cal H}\), the field equations are solved by \(B=\bar{B}\), where \(\bar{B}\) is both uniform and constant. We may expand the field equation for \(B\) about this 'background' solution, which can then be viewed as a stationary homogeneous 'optical' fluid medium of energy density \(\bar{\cal H}={\cal H}(\bar{B})\) and momentum density \(\bar{\bf p}\). We may consider perturbations about this background by setting \(A=\bar{A}+{\rm a}\), where \(\mathbf{\nabla}\times\bar{A}=\bar{B}\) (note that \(\bar{A}\) cannot be uniform for non-zero \(\bar{B}\)). This implies that
\[B=\bar{B}+{\rm b}\,,\qquad{\rm b}=\mathbf{\nabla}\times{\rm a}\,, \tag{2.8}\]
and hence that \(\partial_{i}b^{ij}\equiv 0\). Expanding the field equation (2.4) to first order in \(b\) we find that
\[\dot{b}=\mathbf{\nabla}\times h(b)\,, \tag{2.9}\]
where \(h(b)\) is a two-form depending linearly on \(b\):
\[h_{ij} = Q\,b_{ij}+2\bar{\cal H}_{p^{2}}\left(\bar{B}b\bar{B}+\bar{B}^{2} b+b\bar{B}^{2}\right)_{ij} \tag{2.10}\] \[+\left[(Y+4\bar{s}X)(\bar{B}\cdot b)+2X(\bar{B}^{3}\cdot b)\right] \bar{B}_{ij}\] \[+2\left[X(\bar{B}\cdot b)+2\bar{\cal H}_{p^{2}p^{2}}(\bar{B}^{3} \cdot b)\right](\bar{B}^{3})_{ij}\]
for coefficient functions
\[\begin{split} Q=&\ \bar{\mathcal{H}}_{s}+4\bar{s}\,\bar{\mathcal{H}}_{p^{2}}\,,\\ X=&\ \bar{\mathcal{H}}_{sp^{2}}+4\bar{s}\,\bar{\mathcal{H}}_{p^{2}p^{2}}\,,\\ Y=&\ 4\bar{\mathcal{H}}_{p^{2}}+\bar{\mathcal{H}}_{ss}+4\bar{s}\,\bar{\mathcal{H}}_{sp^{2}}\,.\end{split} \tag{2.11}\]
In what follows we shall omit the bars on the background values of \((s,p)\), and on \(\mathcal{H}\) and its derivatives; it should be clear from the context when we are considering a constant uniform background and when we are considering generic field configurations. However, we will retain the \(\bar{B}\) notation for the background value of \(B\).
### Plane waves on the background
For plane-wave solutions of (2.9) with angular frequency \(\omega\) and wave 5-vector \(\mathbf{k}\), the amplitudes \(\mathrm{b}_{ij}\) satisfy
\[-\omega\mathrm{b}=\mathbf{k}\times h(\mathrm{b})\,,\qquad k_{i}\mathrm{b}^{ij }=0\,, \tag{2.12}\]
but the second of these equations is implied by the first unless \(\omega=0\). The first equation can be written as a \(10\times 10\) matrix equation of the form \(M(\omega,k)\mathrm{b}=0\), where the components of \(\mathrm{b}\) are the ten independent components of the 2-form \(\mathrm{b}\). Each non-zero solution corresponds to a zero of \(\det M(\omega,k)\), a 10th-order polynomial in \(\omega\).
Using the \(O(5)\) rotation/reflection symmetry, we may choose the 5-space axes such that the only non-zero components of \(\bar{B}\) are
\[\bar{B}^{12}=-\bar{B}^{21}=\mathrm{B}_{1}\,,\qquad\bar{B}^{34}=-\bar{B}^{43}= \mathrm{B}_{2}\,, \tag{2.13}\]
for constants \(\mathrm{B}_{1}\geq\mathrm{B}_{2}\geq 0\), so that \(\mathrm{B}_{1}\mathrm{B}_{2}=p\). This canonical form for \(\bar{B}\) preserves an \(SO(2)\times SO(2)\) subgroup of \(O(5)\), which we may use to set
\[k_{2}=k_{4}=0\,. \tag{2.14}\]
The matrix \(M(\omega,k)\) is now block diagonal if we choose the first four components of \(\mathrm{b}\) to be \((\mathrm{b}_{24},\mathrm{b}_{13},\mathrm{b}_{15},\mathrm{b}_{35})\), so its determinant must factorise: \(\det M=\Delta_{4}\Delta_{6}\). One finds that
\[\Delta_{4}=\omega^{2}P_{2}(\omega)\,,\qquad\Delta_{6}=\omega^{2}P_{4}(\omega)\,, \tag{2.15}\]
for polynomials \(P_{2}\) and \(P_{4}\) of, respectively, second and fourth order in \(\omega\). The four linearly independent solutions with \(\omega=0\) are eliminated by the four conditions \(k_{i}\mathrm{b}^{ij}=0\), so plane-wave solutions correspond to zeros of either \(P_{2}\) or \(P_{4}\).
A calculation yields
\[P_{2}=(\omega+2k_{5}p\,\mathcal{H}_{p^{2}})^{2}-\chi \tag{2.16}\]
where
\[\chi=\mathcal{H}_{s}^{2}k_{5}^{2}+\mathcal{H}_{s}\left(Q_{1}k_{1}^{2}+Q_{2}k_{ 3}^{2}\right)\,, \tag{2.17}\]
with
\[Q_{\alpha}=\mathcal{H}_{s}+2\mathrm{B}_{\alpha}^{2}\mathcal{H}_{p^{2}}\qquad( \alpha=1,2). \tag{2.18}\]
The dispersion relation for one polarisation is therefore \(P_{2}=0\), with \(P_{2}\) given by (2.16). As expected, it reduces to \(\omega^{2}=|{\bf k}|^{2}\) in the free-field limit. More generally, it depends on the direction of the wave-vector because of the term in (2.16) with the factor of \(k_{5}p\) (i.e. \({\bf k}\cdot{\bf p}\)).
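This reduction can be checked mechanically; a minimal sympy sketch, substituting the free-field values \({\cal H}_{s}=1\) and \({\cal H}_{p^{2}}=0\) into (2.16)-(2.18):

```python
import sympy as sp

# Free-field check of the dispersion polynomial (2.16): with H = s we have
# H_s = 1 and H_{p^2} = 0, and P_2 should reduce to omega^2 - |k|^2.
omega, k1, k3, k5, B1, B2 = sp.symbols('omega k1 k3 k5 B1 B2', real=True)
Hs, Hp2 = 1, 0                                  # derivatives of H = s
p = B1*B2                                       # background momentum density
Q1, Q2 = Hs + 2*B1**2*Hp2, Hs + 2*B2**2*Hp2     # (2.18)
chi = Hs**2*k5**2 + Hs*(Q1*k1**2 + Q2*k3**2)    # (2.17)
P2 = (omega + 2*k5*p*Hp2)**2 - chi              # (2.16)
print(sp.expand(P2))                            # omega**2 - k1**2 - k3**2 - k5**2
```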
The remaining two dispersion relations must be obtained from the condition \(P_{4}=0\). A calculation yields
\[P_{4}=\left\{[\omega+k_{5}p\,(2{\cal H}_{p^{2}}+\Lambda)]^{2}-\chi^{\prime} \right\}P_{2}+\Upsilon k_{1}^{2}k_{3}^{2}\,, \tag{2.19}\]
where
\[\Lambda=2{\cal H}_{p^{2}}+{\cal H}_{ss}+4s{\cal H}_{sp^{2}}+4p^{2}{\cal H}_{p^ {2}p^{2}} \tag{2.20}\]
and
\[\chi^{\prime}=\Xi_{1}\Xi_{2}k_{5}^{2}+\Xi_{1}Q_{2}k_{1}^{2}+\Xi_{2}Q_{1}k_{3} ^{2}\,, \tag{2.21}\]
for the additional coefficient functions
\[\Xi_{\alpha}= {\cal H}_{s}+2s{\cal H}_{ss}+4p^{2}{\cal H}_{sp^{2}} \tag{2.22}\] \[+{\rm B}_{\alpha}^{2}(\Lambda-4s{\cal H}_{sp^{2}}-2{\cal H}_{ss})\,,\]
and finally
\[\Upsilon=N_{1}N_{2}-Q_{1}Q_{2}\,p^{2}\Lambda^{2} \tag{2.23}\]
with
\[N_{1}=Q_{2}\Xi_{1}-{\cal H}_{s}Q_{1}\,, \tag{2.24}\] \[N_{2}=Q_{1}\Xi_{2}-{\cal H}_{s}Q_{2}\,.\]
### Zero trirefringence conditions
The condition required for all three dispersion relations to coincide is \(P_{4}=P_{2}^{2}\). From (2.19) we see that \(P_{2}\) is a factor of \(P_{4}\) for generic \({\bf k}\) only if \(\Upsilon=0\). The other factor is also \(P_{2}\) only if both \(\Lambda=0\) and \(\chi^{\prime}=\chi\) for all \({\bf k}\), which requires only that \(N_{1}=N_{2}=0\) since these two relations imply \(\Xi_{1}\Xi_{2}={\cal H}_{s}^{2}\). Moreover, the three relations
\[N_{1}=N_{2}=\Lambda=0 \tag{2.25}\]
imply \(\Upsilon=0\), so these three relations are the necessary and sufficient conditions for coincidence of all three dispersion relations. They may be simplified by the observation that
\[N_{1}+N_{2}=2\left(s{\cal H}_{s}+2p^{2}{\cal H}_{p^{2}}\right) \Lambda-8\left(s^{2}-p^{2}\right)\Lambda_{1}, \tag{2.26}\] \[N_{1}-N_{2}=2\sqrt{s^{2}-p^{2}}\left(4s\Lambda_{1}+8p^{2}\Lambda _{2}-{\cal H}_{s}\Lambda\right), \tag{2.27}\]
where
\[\Lambda_{1}:= {\cal H}_{s}{\cal H}_{sp^{2}}-{\cal H}_{p^{2}}{\cal H}_{ss} \tag{2.28}\] \[\Lambda_{2}:= {\cal H}_{s}{\cal H}_{p^{2}p^{2}}-{\cal H}_{p^{2}}{\cal H}_{sp^{2 }}\,.\]
This shows that equations (2.25) are jointly equivalent to the following three "zero trirefringence" conditions:
\[\Lambda_{1}=0\,,\qquad\Lambda_{2}=0\,,\qquad\Lambda=0\,. \tag{2.29}\]
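The identities (2.26) and (2.27) behind this equivalence are straightforward but tedious to verify by hand; a minimal sympy sketch of the check, with the background in the canonical form (2.13) and the point values of the derivatives of \({\cal H}\) treated as independent symbols:

```python
import sympy as sp

# Symbolic check of the identities (2.26)-(2.27). With the background in the
# canonical form (2.13), s = (B1^2 + B2^2)/2 and p^2 = B1^2 B2^2, and the
# derivatives of H at a point are independent symbols.
B1, B2 = sp.symbols('B1 B2', positive=True)       # B1 >= B2 >= 0
Hs, Hp2, Hss, Hsp2, Hp2p2 = sp.symbols('Hs Hp2 Hss Hsp2 Hp2p2')
s, p2 = (B1**2 + B2**2)/2, B1**2 * B2**2

Lam  = 2*Hp2 + Hss + 4*s*Hsp2 + 4*p2*Hp2p2                      # (2.20)
Lam1 = Hs*Hsp2 - Hp2*Hss                                         # (2.28)
Lam2 = Hs*Hp2p2 - Hp2*Hsp2
Q  = {1: Hs + 2*B1**2*Hp2, 2: Hs + 2*B2**2*Hp2}                  # (2.18)
Xi = {a: Hs + 2*s*Hss + 4*p2*Hsp2 + B**2*(Lam - 4*s*Hsp2 - 2*Hss)
      for a, B in [(1, B1), (2, B2)]}                            # (2.22)
N1, N2 = Q[2]*Xi[1] - Hs*Q[1], Q[1]*Xi[2] - Hs*Q[2]              # (2.24)

root = (B1**2 - B2**2)/2                  # sqrt(s^2 - p^2) for B1 >= B2
print(sp.expand(N1 + N2 - (2*(s*Hs + 2*p2*Hp2)*Lam - 8*(s**2 - p2)*Lam1)))  # 0
print(sp.expand(N1 - N2 - 2*root*(4*s*Lam1 + 8*p2*Lam2 - Hs*Lam)))          # 0
```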
The first two of these equations are trivially solved if \({\cal H}_{p^{2}}=0\), but then the third requires \({\cal H}\) to be a linear function of \(s\). Excluding this free field case, we may assume that \({\cal H}_{p^{2}}\neq 0\) and then define a new function \(T(s,p^{2})\) by the relation
\[{\cal H}_{s}=2T{\cal H}_{p^{2}}\,. \tag{2.30}\]
Using this in the equations \(\Lambda_{1}=\Lambda_{2}=0\) we find that
\[T_{s}{\cal H}_{p^{2}}^{2}=T_{p^{2}}{\cal H}_{p^{2}}^{2}=0\,, \tag{2.31}\]
from which we conclude that \(T\) is a constant. Using this fact to simplify the \(\Lambda=0\) condition, we then find the following simple second-order ODE for \({\cal H}\) as a function of \(p^{2}\):
\[{\cal H}_{p^{2}}+2\left(T^{2}+2Ts+p^{2}\right){\cal H}_{p^{2}p^{2}}=0\,. \tag{2.32}\]
The general solution has two integration constants. One is fixed by requiring positive energy and a choice of energy scale. The other is then fixed by requiring zero vacuum energy. The result is
\[{\cal H}={\cal H}_{\rm M5}:=\sqrt{T^{2}+2Ts+p^{2}}-T\,. \tag{2.33}\]
This is the Hamiltonian density for the chiral 2-form theory on the M5-brane [15, 16]; for brevity we shall call it the 'M5' theory. It is actually a family of theories labelled by the constant \(T\) (the M5-brane tension) which has dimensions of energy density, and the free-field theory is included as the \(T\to\infty\) limit. As already mentioned this 'M5' theory is the 6D partner to the 4D BI theory. It would be of interest to see whether there is a generalisation to 6D chiral 2-form dynamics of the recent characterisation of zero-birefringence NLEDs as those with a Lagrangian satisfying a particular "\(T\bar{T}\)-like flow equation" [17].
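As a consistency check, a minimal sympy sketch verifying that \({\cal H}_{\rm M5}\) solves the ODE (2.32) and satisfies both the zero-trirefringence conditions (2.29) and the Lorentz invariance condition (2.7):

```python
import sympy as sp

# H_M5 = sqrt(T^2 + 2*T*s + p^2) - T: check the ODE (2.32), the
# zero-trirefringence conditions (2.29), and Lorentz invariance (2.7).
s, p2, T = sp.symbols('s p2 T', positive=True)    # p2 stands for p^2
H = sp.sqrt(T**2 + 2*T*s + p2) - T

Hs, Hp2 = H.diff(s), H.diff(p2)
Hss, Hsp2, Hp2p2 = Hs.diff(s), Hs.diff(p2), Hp2.diff(p2)

ode  = Hp2 + 2*(T**2 + 2*T*s + p2)*Hp2p2          # left-hand side of (2.32)
Lam  = 2*Hp2 + Hss + 4*s*Hsp2 + 4*p2*Hp2p2        # (2.20)
Lam1 = Hs*Hsp2 - Hp2*Hss                           # (2.28)
Lam2 = Hs*Hp2p2 - Hp2*Hsp2
I    = Hs**2 + 4*s*Hs*Hp2 + 4*p2*Hp2**2            # (2.7)

print([sp.simplify(e) for e in (ode, Lam, Lam1, Lam2)])   # [0, 0, 0, 0]
print(sp.simplify(I))                                      # 1
```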
To summarise: within the class of chiral 2-form electrodynamics invariant under rotations and time-space translations, and with the same phase space as the standard free-field theory, only the one-parameter 'M5' family exhibits "zero-trirefringence". For this exceptional family, the one dispersion relation for the three independent wave-polarizations is found from setting \({\cal H}={\cal H}_{\rm M5}\) in (2.16) and then setting the resulting expression for \(P_{2}\) to zero; this yields
\[\left[\omega+\frac{{\bf k}\cdot{\bf p}}{T_{\rm eff}}\right]^{2}=\frac{T^{2}|{\bf k}|^{2}+T|\bar{B}{\bf k}|^{2}}{T_{\rm eff}^{2}}\,, \tag{2.34}\]
where
\[T_{\rm eff}=\sqrt{T^{2}+2T\bar{s}+\bar{p}^{2}}\,. \tag{2.35}\]
Here we revert to the bar notation for background fields as a reminder that \(T_{\rm eff}\) (which will play a role later) is constant. In the \(T\to\infty\) (weak-field) limit this dispersion relation reduces to \(\omega^{2}=|{\bf k}|^{2}\), as expected.
We may also take the \(T\to 0\) (strong-field) limit for which \({\cal H}=p\). This defines an interacting conformal 6D chiral 2-form electrodynamics theory [16, 18]; its 4D partner is Bialynicki-Birula electrodynamics [1, 19]. All constant uniform background solutions now have \(p\neq 0\), and (2.34) reduces to the linear dispersion relation \(\omega+{\bf k}\cdot{\bf n}=0\), where \({\bf n}={\bf p}/p\). In this case \(b\) is a Fourier component of the first term in an expansion about \(B=\bar{B}\) of an exact solution of the full field equations of the form \(B=B_{\perp}(t-{\bf x}\cdot{\bf n},{\bf x}_{\perp})\), where \(n_{i}B_{\perp}^{ij}=0\) and \({\bf n}\cdot{\bf x}_{\perp}=0\) for fixed direction \({\bf n}\).
### Lorentz invariance
Surprisingly, the above results were obtained without the use of the Lorentz invariance condition (2.7), which is actually a consequence of the zero-trirefringence conditions (2.29). We shall now show that the Lorentz invariance condition by itself restricts trirefringence to birefringence; i.e. it implies that two of the three independent dispersion relations coincide. We begin with the observation that
\[{\cal I}_{s} = 2({\cal H}_{s}\Lambda-2s\Lambda_{1}-4p^{2}\Lambda_{2})\,,\] \[{\cal I}_{p^{2}} = 2({\cal H}_{p^{2}}\Lambda+\Lambda_{1}+2s\Lambda_{2})\,. \tag{2.36}\]
Next, we observe that (2.27) may be rewritten as
\[N_{1}-N_{2}=2\sqrt{s^{2}-p^{2}}\left({\cal H}_{s}\Lambda-{\cal I}_{s}\right)\,. \tag{2.37}\]
Now, using (2.37) together with (2.26) and (2.36) we obtain
\[N_{1}N_{2} \equiv \frac{1}{4}[(N_{1}+N_{2})^{2}-(N_{1}-N_{2})^{2}]\] \[= 8(s^{2}-p^{2})p^{2}\left(\Lambda_{2}{\cal I}_{s}-\Lambda_{1}{\cal I}_{p^{2}}\right)+{\cal I}p^{2}\Lambda^{2}\,. \tag{2.38}\]
Substituting this expression into (2.23), and using the identity
\[Q_{1}Q_{2}\equiv{\cal I}\,, \tag{2.39}\]
where \({\cal I}\) is the expression defined in (2.7), we deduce that
\[\Upsilon=8(s^{2}-p^{2})p^{2}\left(\Lambda_{2}{\cal I}_{s}-\Lambda_{1}{\cal I} _{p^{2}}\right)\,. \tag{2.40}\]
This result shows that \(\Upsilon=0\) for any \({\cal H}\) such that \({\cal I}=1\), i.e. any Lorentz invariant theory. It then follows from (2.19) that Lorentz invariance implies \(P_{4}=P_{2}^{\prime}P_{2}\), where \(P_{2}^{\prime}(\omega)\) is another quadratic polynomial in \(\omega\). Thus, two of the three independent polarisations have coincident dispersion relations for generic (\(P_{2}^{\prime}\neq P_{2}\)) Lorentz invariant theories while \(P_{2}^{\prime}=P_{2}\) uniquely for the 'M5' case.
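Both ingredients of this argument, the identity (2.39) and the expressions (2.36), are polynomial identities once the background is put in the canonical form (2.13); a minimal sympy sketch:

```python
import sympy as sp

# Checks of (2.39) and (2.36), with the chain-rule expansions of I_s and
# I_{p^2} written out explicitly in terms of H's derivatives at a point.
B1, B2 = sp.symbols('B1 B2', positive=True)
Hs, Hp2, Hss, Hsp2, Hp2p2 = sp.symbols('Hs Hp2 Hss Hsp2 Hp2p2')
s, p2 = (B1**2 + B2**2)/2, B1**2 * B2**2

Q1, Q2 = Hs + 2*B1**2*Hp2, Hs + 2*B2**2*Hp2                       # (2.18)
I = Hs**2 + 4*s*Hs*Hp2 + 4*p2*Hp2**2                              # (2.7)
print(sp.expand(Q1*Q2 - I))                                       # 0, i.e. (2.39)

Lam  = 2*Hp2 + Hss + 4*s*Hsp2 + 4*p2*Hp2p2
Lam1 = Hs*Hsp2 - Hp2*Hss
Lam2 = Hs*Hp2p2 - Hp2*Hsp2
Is  = 2*Hs*Hss + 4*Hs*Hp2 + 4*s*(Hss*Hp2 + Hs*Hsp2) + 8*p2*Hp2*Hsp2
Ip2 = 2*Hs*Hsp2 + 4*s*(Hsp2*Hp2 + Hs*Hp2p2) + 4*Hp2**2 + 8*p2*Hp2*Hp2p2
print(sp.expand(Is  - 2*(Hs*Lam - 2*s*Lam1 - 4*p2*Lam2)))         # 0, i.e. (2.36)
print(sp.expand(Ip2 - 2*(Hp2*Lam + Lam1 + 2*s*Lam2)))             # 0
```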
## 3 Relation to M-theory
It is natural to wonder whether there is some M-theory explanation for the zero-trirefringence property of the 'M5' chiral 2-form theory. In the context of the M5-brane worldvolume dynamics, the Minkowski vacuum for the 'M5' theory is a planar static M5-brane, and perturbations about it are propagated by the free-field equations of a (2,0)-supersymmetric 6D field theory; its on-shell supermultiplet includes the three polarisation modes of the 'M5' chiral 2-form electrodynamics and five others, one for each of the five scalars representing transverse fluctuations of the planar M5-brane in an 11-dimensional space-time [20]. It might appear that some of the 16 supersymmetries of this (2,0)-supermultiplet must be broken when constant uniform background fields are introduced on the M5-brane, but this is not necessarily the case, as we now explain.
It was shown in [21] that a static planar M5-brane with constant uniform 3-form field strength is \(\frac{1}{2}\)-supersymmetric; i.e. it preserves 16 of the 32 supersymmetries of the M-theory 11D Minkowski vacuum, independently of the strength of the 'background' 3-form field. If the skew eigenvalues \(\{{\rm B}_{\alpha};\alpha=1,2\}\) of the background 'magnetic' 2-form are identified as "dissolved" M2-branes with charges
\[\zeta_{\alpha}=\sqrt{T}\,{\rm B}_{\alpha} \tag{3.1}\]
then the \(\frac{1}{2}\) supersymmetry is also implied by the supertranslation algebra associated to the M5-brane worldvolume dynamics provided that [21, 22]
\[P_{5}=\zeta_{1}\zeta_{2}/T\,, \tag{3.2}\]
which is the background field-momentum. The 'effective' M5-brane tension (i.e. total energy density) of these bound states is
\[P^{0}=\sqrt{T^{2}+\zeta_{1}^{2}+\zeta_{2}^{2}+P_{5}^{2}}\,\equiv T_{\rm eff}\,. \tag{3.3}\]
The construction in [22] of 11D supergravity solutions sourced by these M5-M2-M2 "bound states" (generalizing the simpler M5-M2 \(\frac{1}{2}\)-supersymmetric solution of [23]) further confirmed their \(\frac{1}{2}\) supersymmetry. These bound states are related by String/M dualities to the D2-D0-F1 "supertube" bound states of IIA superstring theory [24, 25] but with a planar D2-brane for which the generic \(\frac{1}{4}\) supersymmetry is enhanced to \(\frac{1}{2}\) supersymmetry [26].
Reverting to the bar notation for background fields, we may rewrite (3.3) as
\[T_{\rm eff}=\sqrt{T^{2}+2T\bar{s}+\bar{p}^{2}}\,. \tag{3.4}\]
This is the energy density of the M5-brane plus \(\bar{B}\) background, which we can view as a new worldvolume 'vacuum', preserving all 16 supersymmetries. We should then re-normalize the M5-brane vacuum energy to be zero when \(B=\bar{B}\). This means that we should replace \({\cal H}_{M5}\) by
\[{\cal H}^{\prime}_{M5}=\sqrt{T^{2}+2Ts+p^{2}}-T_{\rm eff}\,. \tag{3.5}\]
Now, in the expansion about the \(B=\bar{B}\) background, the energy is zero when \(b=0\). We thus expect the field equations (2.9) to be part of a larger set of equations for disturbances of a planar M5-M2-M2 bound state configuration preserving 16 supersymmetries. This leads us to conjecture that the zero-trirefringence property of the 'M5' theory is a consequence of its unique status as a consistent truncation of the maximally-supersymmetric 6D field theory found from expansion of the full M5-brane dynamics about a novel 1/2-supersymmetric vacuum.
## 4 Implications for 5D and 4D NLED
We conclude with a brief explanation of how our results for 6D chiral 2-form electrodynamics imply analogous results for 5D and 4D NLEDs by means of dimensional reduction. As we used symmetries preserved by the 6D \(B=\bar{B}\) background solution to choose the wave-vector of perturbations to have zero \(k_{2}\) and \(k_{4}\) components, we will first take all fields to be independent of \(x^{2}\), to get 5D results, and then of both \(x^{2}\) and \(x^{4}\), to get 4D results. In the former case the 5-space 2-form \(A=\frac{1}{2}dx^{i}\wedge dx^{j}A_{ij}\) can be written as
\[\frac{1}{2}dx^{a}\wedge dx^{b}\mathbb{A}_{ab}+dx^{2}\wedge dx^{a}V_{a}\,,\quad (a,b=1,3,4,5). \tag{4.1}\]
Correspondingly,
\[\begin{split} B^{ab}&=\frac{1}{2}\varepsilon^{abcd} F_{cd}\,,\quad(F_{ab}=2\partial_{[a}V_{b]})\\ B^{a2}&=\frac{1}{2}\varepsilon^{abcd}\partial_{b} \mathbb{A}_{cd}=:D^{a}\,.\end{split} \tag{4.2}\]
The gauge-invariant 4-space fields are therefore the two-form field strength \(F_{ab}\) and a divergence-free 4-vector field \(D^{a}\) that can be 'promoted' to an unconstrained 4-vector field by introducing a Lagrange multiplier field \(V_{0}\) to impose the constraint \(\partial_{a}D^{a}=0\). One then finds (ignoring total derivative terms) that
\[\frac{1}{2}\dot{A}\cdot B=E_{a}\,D^{a}\qquad(E_{a}=\partial_{a}V_{0}-\dot{V}_{ a})\,. \tag{4.3}\]
This is the 'symplectic' term in the phase-space Lagrangian density (2.1) reduced to a 5D NLED. Its Hamiltonian is a function of \((s,p^{2})\) but now
\[s=\frac{1}{2}\left[|D|^{2}+|F|^{2}\right]\,, \tag{4.4}\]
and, since the components of \(\mathbf{p}\) in (2.3) are now
\[p_{2}=\frac{1}{8}\varepsilon^{abcd}F_{ab}F_{cd}\,,\qquad p_{a}=F_{ab}D^{b}\,, \tag{4.5}\]
we also have
\[p^{2}=\det F+|FD|^{2}\,. \tag{4.6}\]
For the special class of 5D NLEDs with Hamiltonian densities that are functions only of \((s,p^{2})\), our 6D results imply that the unique zero-trirefringence family has a Hamiltonian density that is formally the same as \({\cal H}_{M5}\) of (2.33), but this is now equivalent to
\[\sqrt{T^{2}\det\left(\mathbb{I}_{4}+F/\sqrt{T}\right)+T|D|^{2}+|FD|^{2}}-T\,. \tag{4.7}\]
This is the 5D BI Hamiltonian density, as can be seen from previous results on the Hamiltonian dynamics of the bosonic worldvolume fields for Dp-branes [27, 28].
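The equivalence of (4.7) with \({\cal H}_{\rm M5}\) under the identifications (4.4) and (4.6) can be spot-checked numerically; a minimal numpy sketch, with \(|F|^{2}=\frac{1}{2}F_{ab}F^{ab}\) as in the 6D conventions:

```python
import numpy as np

# Numerical spot-check that (4.7) equals H_M5 of (2.33) with the 5D
# identifications s = (|D|^2 + |F|^2)/2 and p^2 = det F + |FD|^2.
rng = np.random.default_rng(0)
T = 2.7
M = rng.standard_normal((4, 4))
F = M - M.T                                   # a generic antisymmetric F_{ab}
D = rng.standard_normal(4)

s  = 0.5*(D @ D + 0.5*np.sum(F*F))            # (4.4)
p2 = np.linalg.det(F) + (F @ D) @ (F @ D)     # (4.6)

lhs = np.sqrt(T**2*np.linalg.det(np.eye(4) + F/np.sqrt(T))
              + T*(D @ D) + (F @ D) @ (F @ D)) - T         # (4.7)
rhs = np.sqrt(T**2 + 2*T*s + p2) - T                       # (2.33)
print(np.isclose(lhs, rhs))                                # True
```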
We may now further dimensionally reduce by taking all fields to be independent of \(x^{4}\). In this case \(V_{4}\) becomes a scalar field, with canonical conjugate \(D^{4}\), and if we truncate by setting to zero this conjugate pair then we are left with the 3-vector \({\bf D}\) (conjugate to \({\bf V}\)) and the 3-space restriction of \(F\), which is the Hodge dual of the magnetic 3-vector field \({\bf B}\). The phase space Lagrangian density is now that of a 4D NLED with Hamiltonian density \({\cal H}\) that is again a function of \((s,p^{2})\), but now
\[s=\frac{1}{2}\big{(}|{\bf D}|^{2}+|{\bf B}|^{2}\big{)}\,,\qquad p^{2}=|{\bf D}\times{\bf B}|^{2}\,, \tag{4.8}\]
which implies that \({\cal H}\) is electromagnetic duality invariant. Our 6D trirefringence results now imply that all zero-birefringence 4D NLEDs in this class are members of the family for which \({\cal H}(s,p^{2})\) is formally the same as \({\cal H}_{M5}\) of (2.33), but this now defines the 4D BI theory. In other words, the 4D BI theory is the unique zero-birefringence duality invariant NLED; this is a known result within a class of NLEDs with "good propagation" properties and a Lagrangian density that is an analytic function of Lorentz scalars [29]; the main novelty here is that we find it in Hamiltonian form, and by dimensional reduction of the 'M5' theory of 6D chiral 2-form electrodynamics.
From the 6D perspective, the truncation described above (following dimensional reduction) amounts to setting to zero the components \((B_{24},B_{13},B_{15},B_{35})\) of \(B\), only two of which are independent because of the identity \(\partial_{i}B^{ij}\equiv 0\). This truncation also removes the two linearly independent combinations of the \(({\rm b}_{24},{\rm b}_{13},{\rm b}_{15},{\rm b}_{35})\) perturbations of \(B\) about the \(\bar{B}\) background. In 6D these were the amplitudes describing the phase space for the single polarisation mode with dispersion relation \(P_{2}=0\) (as required for consistency of the truncation). In 4D they are the amplitudes for the 'extra' scalar field in the 4D \({\cal N}=4\) Maxwell supermultiplet relative to the 6D (2,0) antisymmetric tensor supermultiplet.
In the String/M-theory context, the 'dissolved' pair of orthogonal M2-branes (representing the \(\bar{B}\) background on a static planar M5-brane) becomes a 'dissolved' orthogonal F1-D1 pair of IIB strings, i.e. a constant uniform background of orthogonal \(({\bf D},{\bf B})\) fields, which generate the momentum \({\bf D}\times{\bf B}\) needed to preserve all 16 supersymmetries of a static planar D3-brane in the absence of the electromagnetic background fields.
## Acknowledgements
IB and DS have been partially supported by Spanish AEI MCIN and FEDER (ERDF EU) under grant PID2021-125700NB-C21 and by the Basque Government Grant IT1628-22. PKT has been partially supported by STFC consolidated grant ST/T000694/1.
|
2307.03576 | One Step of Gradient Descent is Provably the Optimal In-Context Learner
with One Layer of Linear Self-Attention | Recent works have empirically analyzed in-context learning and shown that
transformers trained on synthetic linear regression tasks can learn to
implement ridge regression, which is the Bayes-optimal predictor, given
sufficient capacity [Aky\"urek et al., 2023], while one-layer transformers with
linear self-attention and no MLP layer will learn to implement one step of
gradient descent (GD) on a least-squares linear regression objective [von
Oswald et al., 2022]. However, the theory behind these observations remains
poorly understood. We theoretically study transformers with a single layer of
linear self-attention, trained on synthetic noisy linear regression data.
First, we mathematically show that when the covariates are drawn from a
standard Gaussian distribution, the one-layer transformer which minimizes the
pre-training loss will implement a single step of GD on the least-squares
linear regression objective. Then, we find that changing the distribution of
the covariates and weight vector to a non-isotropic Gaussian distribution has a
strong impact on the learned algorithm: the global minimizer of the
pre-training loss now implements a single step of $\textit{pre-conditioned}$
GD. However, if only the distribution of the responses is changed, then this
does not have a large effect on the learned algorithm: even when the response
comes from a more general family of $\textit{nonlinear}$ functions, the global
minimizer of the pre-training loss still implements a single step of GD on a
least-squares linear regression objective. | Arvind Mahankali, Tatsunori B. Hashimoto, Tengyu Ma | 2023-07-07T13:09:18Z | http://arxiv.org/abs/2307.03576v1 | One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention
###### Abstract
Recent works have empirically analyzed in-context learning and shown that transformers trained on synthetic linear regression tasks can learn to implement ridge regression, which is the Bayes-optimal predictor, given sufficient capacity (Akyurek et al., 2023), while one-layer transformers with linear self-attention and no MLP layer will learn to implement one step of gradient descent (GD) on a least-squares linear regression objective (von Oswald et al., 2022). However, the theory behind these observations remains poorly understood. We theoretically study transformers with a single layer of linear self-attention, trained on synthetic noisy linear regression data. First, we mathematically show that when the covariates are drawn from a standard Gaussian distribution, the one-layer transformer which minimizes the pre-training loss will implement a single step of GD on the least-squares linear regression objective. Then, we find that changing the distribution of the covariates and weight vector to a non-isotropic Gaussian distribution has a strong impact on the learned algorithm: the global minimizer of the pre-training loss now implements a single step of _pre-conditioned_ GD. However, if only the distribution of the responses is changed, then this does not have a large effect on the learned algorithm: even when the response comes from a more general family of _nonlinear_ functions, the global minimizer of the pre-training loss still implements a single step of GD on a least-squares linear regression objective.
## 1 Introduction
Large language models (LLMs) demonstrate the surprising ability of in-context learning, where an LLM "learns" to solve a task by conditioning on a prompt containing input-output exemplars (Brown et al., 2020; Lieber et al., 2021; Radford et al., 2019; Wang and Komatsuzaki, 2021). Recent works have advanced the understanding of in-context learning via empirical analysis (Min et al., 2022; Wei et al., 2023; Akyurek et al., 2023; von Oswald et al., 2022; Dai et al., 2023), but theoretical analysis remains limited (Xie et al., 2022).
A recent line of work (Garg et al., 2022; Akyurek et al., 2023; von Oswald et al., 2022; Dai et al., 2023) empirically finds that transformers can be trained to implement algorithms that solve linear regression problems in-context. Specifically, in each input sequence the transformer is given a set of in-context examples \((x_{i},y_{i})\), where \(y_{i}=w^{\top}x_{i}+\epsilon_{i}\) with a shared and hidden random coefficient vector \(w\) and random noise \(\epsilon_{i}\), and a test example \(x\).1 The transformer is then trained to predict \(y=w^{\top}x+\epsilon\), where \(\epsilon\) denotes random noise from the same distribution as \(\epsilon_{i}\). These works find that the transformer outputs a prediction \(\hat{y}\) which is similar to the predictions of existing, interpretable linear regression algorithms, such as gradient descent (GD) or ordinary least squares, applied to the dataset consisting of the pairs \((x_{i},y_{i})\). In particular, von Oswald et al. (2022) empirically show that a one-layer transformer with linear self-attention and no MLP layer will implement a single step of gradient descent when trained on such a distribution.
Footnote 1: In some settings in these works, the noise is set to \(0\).
Several works (e.g. Akyurek et al. (2023); Liu et al. (2023); Giannou et al. (2023)) theoretically study the expressive power of transformers. In the context of linear regression tasks, Akyurek et al. (2023) describe how transformers can represent gradient descent, or Sherman-Morrison updates, and Giannou et al. (2023) describe how transformers can represent Newton's algorithm for matrix inversion. However, in addition to the expressive power of transformers, it is also of interest to understand the behavior of transformers trained with gradient-based algorithms. Furthermore, it is still useful to understand the behavior of models with restricted capacity--though practical LLMs are very expressive, they need to perform many tasks simultaneously, and therefore the capacity per problem may still be relatively limited. Thus, motivated by von Oswald et al. (2022), we theoretically study the global minima of the pre-training loss for one-layer transformers with linear self-attention on the linear regression data distribution described above.
**Contributions.** In this paper, we study transformers with one linear self-attention layer, and mathematically investigate which algorithms the transformers implement for synthetically generated linear regression datasets. We prove that the transformer which implements a single step of gradient descent on a least squares linear regression objective is the global minimizer of the pre-training loss. This exactly matches the empirical findings of von Oswald et al. (2022).
Concretely, we consider a setup similar to von Oswald et al. (2022); Akyurek et al. (2023). The model we study is a transformer with one linear single-head self-attention layer, which is the same model as the one empirically studied by von Oswald et al. (2022). The training data for this transformer consist of sequences of the form \((x_{1},y_{1},\ldots,x_{n},y_{n})\), where the \(x_{i}\) are sampled from \(\mathcal{N}(0,I_{d\times d})\) and \(y_{i}=w^{\top}x_{i}+\epsilon_{i}\), where \(w\) is sampled from \(\mathcal{N}(0,I_{d\times d})\) once per sequence, and the \(\epsilon_{i}\) are i.i.d. Gaussian noise with variance \(\sigma^{2}\). The pre-training loss is the expected error that the transformer achieves when predicting \(y=w^{\top}x\) given the test example \(x\) and the context \((x_{1},y_{1},\ldots,x_{n},y_{n})\), i.e. the pre-training loss is \(L=\mathbb{E}_{(x_{1},y_{1},\ldots,x_{n},y_{n}),x,y}[(y-\hat{y})^{2}]\), where \(\hat{y}\) is the output of the transformer given \((x_{1},y_{1},\ldots,x_{n},y_{n})\) and \(x\) as input.
We show in Section 3 that the transformer which is the global minimizer of the pre-training loss \(L\) implements one step of gradient descent on a linear regression objective with the dataset consisting of the \((x_{i},y_{i})\). More concretely, the transformer implements the prediction algorithm
\[\hat{y}=\eta\sum_{i=1}^{n}y_{i}x_{i}^{\top}x\,. \tag{1}\]
However, one step of GD is also preferred in part due to the distribution of the \(x_{i}\). In particular, if the covariance of \(x_{i}\) is no longer the identity matrix, we show (Section 4) that the global minimum of the pre-training loss corresponds to one step of GD, but with pre-conditioning.
Moreover, interestingly, our theory also suggests that the distribution of \(y_{i}|x_{i}\) does not play such a significant role in the algorithm learned by the transformer. In Section 5, we study a setting where \(y_{i}|x_{i}\) is _nonlinear_, but satisfies some mild assumptions, such as invariance to rotations of the distribution of the \(x_{i}\). As a concrete special case, the target function can be a neural network with any depth/width and i.i.d. random Gaussian weights. We show in Section 5 that a one-layer transformer with linear self-attention, which minimizes the pre-training loss, still implements one step of GD on a **linear regression** objective. Intuitively, this is likely because of the constraint imposed by the architecture, which prevents the transformer from making use of any more complex structure in the \(y_{i}\).
**Concurrent Works.** We discuss the closely related works of Ahn et al. (2023) and Zhang et al. (2023) which are concurrent with and independent with our work and were posted prior to our work on arXiv. The work of Ahn et al. (2023) gives theoretical results very similar to ours. They study one-layer transformers with linear self-attention with the same parameterization as von Oswald et al. (2022), and show that with isotropic \(x_{i}\), the global minimizer of the pre-training loss corresponds to one step of gradient descent on a linear model. They also show that for more general covariance matrices, the global minimizer of the pre-training loss corresponds to one step of pre-conditioned gradient descent, where the pre-conditioner matrix can be computed in terms of the covariance of \(x_{i}\).2
Different from our work, Ahn et al. (2023) also show additional results for multi-layer transformers (with linear self-attention) with residual connections trained on linear regression data. First, they study a restricted parameterization where in each layer, the product of the projection and value matrices has only one nonzero entry. In this setting, for two-layer transformers with linear self-attention, they show that the global minimizer corresponds to two steps of pre-conditioned GD with diagonal pre-conditioner matrices, when the data is isotropic. For linear transformers with \(k\) layers, they show that \(k\) steps of pre-conditioned GD corresponds to a critical point of the pre-training loss,3 where the pre-conditioner matrix is the inverse of the covariance matrix of the \(x_{i}\).4 Next, they study a less restrictive parameterization where the product of the projection and value matrices can be almost fully dense, and show that a certain critical point of the pre-training loss for \(k\)-layer linear transformers corresponds to \(k\) steps of a generalized version of the GD++ algorithm, which was empirically observed by von Oswald et al. (2022) to be the algorithm learned by \(k\)-layer linear transformers.
Footnote 3: One technical point is that they show there exist transformers representing this form of pre-conditioned GD having arbitrarily small gradient, but not that there exists a transformer with gradient exactly \(0\) which represents this form of pre-conditioned GD.
Footnote 4: Here, they assume that \(x_{i}\sim\mathcal{N}(0,\Sigma)\) and \(w\sim\mathcal{N}(0,\Sigma^{-1})\), which is the same assumption as our result in Section 4 but different from their result for one-layer transformers where \(x_{i}\sim\mathcal{N}(0,\Sigma)\).
Zhang et al. (2023) also theoretically study a setting similar to ours. They not only show that the global minimizer of the pre-training loss implements one step of GD (the same result as ours), but also show that a one-layer linear transformer trained with gradient flow will converge to a global minimizer. They also show that the transformer implements a step of pre-conditioned GD when the \(x_{i}\) are non-isotropic. They also characterize how the training prompt lengths and test prompt length affect the test-time prediction error of the trained transformer. Additionally, they consider the behavior of the trained transformer under distribution shifts, as well as the training dynamics when the covariance matrices of the \(x_{i}\) in different training prompts can be different.
One additional contribution of our work is that we also consider the case where the target function in the pre-training data is not a linear function (Section 5). This suggests that, compared to the distribution of the covariates, the distribution of the responses at training time does not have as strong of an effect on the algorithm learned by the transformer. We note that our proof in this setting is not too different from our proof in Section 3. Zhang et al. (2023) consider the case where the \(y_{i}\)'s in the test time prompt are obtained from a nonlinear target function, and consider the performance on this prompt of the transformer trained on prompts with a linear target function -- this is different from our setting in Section 5 since we consider the case where the training prompts themselves are obtained with a nonlinear target function.
## 2 Setup
Our setup is similar to von Oswald et al. (2022).
**One-Layer Transformer with Linear Self-Attention.** A linear self-attention layer with width \(s\) consists of the following parameters: a key matrix \(W_{K}\in\mathbb{R}^{s\times s}\), a query matrix \(W_{Q}\in\mathbb{R}^{s\times s}\), and a value matrix \(W_{V}\in\mathbb{R}^{s\times s}\). Given a sequence of \(T>1\) tokens \((v_{1},v_{2},\dots,v_{T})\), the output of the linear self-attention layer is defined to be \((\hat{v}_{1},\hat{v}_{2},\dots,\hat{v}_{T})\), where for \(i\in[T]\) with \(i>1\),
\[\hat{v}_{i}=\sum_{j=1}^{i-1}(W_{V}v_{j})(v_{j}^{\top}W_{K}^{\top}W_{Q}v_{i})\,, \tag{2}\]
and \(\hat{v}_{1}=0\). In particular, the output on the \(T^{\text{th}}\) token is
\[\hat{v}_{T}=\sum_{j=1}^{T-1}(W_{V}v_{j})(v_{j}^{\top}W_{K}^{\top}W_{Q}v_{T})\,. \tag{3}\]
As in the theoretical construction of von Oswald et al. (2022), we do not consider the attention score between a token \(v_{i}\) and itself. Our overall transformer is then defined to be a linear self-attention layer with key matrix
\(W_{K}\), query matrix \(W_{Q}\), and value matrix \(W_{V}\), together with a linear head \(h\in\mathbb{R}^{s}\) which is applied to the last token. Thus, the final output of the transformer is \(h^{\top}\hat{v}_{T}\). We will later instantiate this one-layer transformer with \(s=d+1\), where \(d\) is the dimension of the inputs \(x_{i}\). We note that this corresponds to a single head of linear self-attention, while one could also consider multi-head self-attention.
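For concreteness, a minimal numpy sketch of this model (our transcription, not code from any of the cited works); it computes the prediction \(h^{\top}\hat{v}_{T}\) of Equation (3):

```python
import numpy as np

# One-layer, single-head linear self-attention: the prediction on the last
# token is h^T vhat_T, with vhat_T = sum_{j<T} (W_V v_j)(v_j^T W_K^T W_Q v_T).
def linear_self_attention_prediction(V, W_K, W_Q, W_V, h):
    """V: (T, s) array whose rows are the tokens v_1, ..., v_T."""
    v_T = V[-1]
    scores = V[:-1] @ W_K.T @ W_Q @ v_T      # v_j^T W_K^T W_Q v_T for j < T
    vhat_T = W_V @ (V[:-1].T @ scores)       # sum_j (W_V v_j) * score_j
    return h @ vhat_T
```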
**Linear Regression Data Distribution.** The pretraining data distribution consists of sequences \(D=(x_{1},y_{1},\ldots,x_{n+1},y_{n+1})\). Here, the _exemplars_\(x_{i}\) are sampled i.i.d. from \(\mathcal{N}(0,I_{d\times d})\). Then, a weight vector \(w\in\mathbb{R}^{d}\) is sampled from \(\mathcal{N}(0,I_{d\times d})\), freshly for each sequence. Finally, \(y_{i}\) is computed as \(y_{i}=w^{\top}x_{i}+\epsilon_{i}\) where \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\) for some \(\sigma>0\). We consider the vector \(v_{i}=\left[\begin{array}{c}x_{i}\\ y_{i}\end{array}\right]\in\mathbb{R}^{d+1}\) to be a _token_ -- in other words, the sequence \((x_{1},y_{1},\ldots,x_{n+1},y_{n+1})\) is considered to have \(n+1\) tokens (rather than \(2(n+1)\) tokens). We use \(\mathcal{T}\) to denote the distribution of sequences defined in this way.
At both training and test time, \((x_{1},y_{1},\ldots,x_{n},y_{n},x_{n+1},y_{n+1})\) is generated according to the pretraining distribution \(\mathcal{T}\), i.e. the \(x_{i}\) are sampled i.i.d. from \(\mathcal{N}(0,I_{d\times d})\), a new weight vector \(w\in\mathbb{R}^{d}\) is also sampled from \(\mathcal{N}(0,I_{d\times d})\), and \(y_{i}=w^{\top}x_{i}+\epsilon_{i}\) where the \(\epsilon_{i}\) are sampled i.i.d. from \(\mathcal{N}(0,\sigma^{2})\). Then, the in-context learner is presented with \(x_{1},y_{1},\ldots,x_{n},y_{n},x_{n+1}\), and must predict \(y_{n+1}\). We refer to \(x_{1},\ldots,x_{n}\) as the _support_ exemplars and \(x_{n+1}\) as the _query_ exemplar. Here, \(v_{1},\ldots,v_{n}\) are defined as above, but \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\), following the notation of von Oswald et al. [2022].5 We note that this is not significantly different from the standard in-context learning setting, since even though the final token \(v_{n+1}\) has \(0\) as an extra coordinate, it does not provide the transformer with any additional information about \(y_{n+1}\).
Footnote 5: If we were to treat \(x_{i}\) and \(y_{i}\) as separate tokens, then we would need to deal with attention scores between \(y_{i}\) and \(y_{j}\) for \(i\neq j\), as well as attention scores between \(y_{i}\) and \(x_{j}\) for \(i\neq j\). Our current setup simplifies the analysis.
**Loss Function.** Given a one-layer transformer with linear self-attention and width \(d+1\), with key matrix \(W_{K}\in\mathbb{R}^{(d+1)\times(d+1)}\), query matrix \(W_{Q}\in\mathbb{R}^{(d+1)\times(d+1)}\), and value matrix \(W_{V}\in\mathbb{R}^{(d+1)\times(d+1)}\), and with a head \(h\in\mathbb{R}^{d+1}\), the loss of this transformer on our linear regression data distribution is formally defined as
\[L(W_{K},W_{Q},W_{V},h)=\mathbb{E}_{D\sim\mathcal{T}}[(h^{\top}\hat{v}_{n+1}-y_ {n+1})^{2}]\,, \tag{4}\]
where as defined above, \(\hat{v}_{n+1}\) is the output of the linear self-attention layer on the \((n+1)^{\text{th}}\) token, which in this case is \(\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\).
We now rewrite the loss function and one-layer transformer in a more convenient form. As a convenient shorthand, for any test-time sequence \(D=(x_{1},y_{1},\ldots,x_{n+1},0)\), we write \(\widetilde{D}=(x_{1},y_{1},\ldots,x_{n},y_{n})\), i.e. the prefix of \(D\) that does not include \(x_{n+1}\) and \(y_{n+1}\). We also define
\[G_{\widetilde{D}}=\sum_{i=1}^{n}\left[\begin{array}{c}x_{i}\\ y_{i}\end{array}\right]\left[\begin{array}{c}x_{i}\\ y_{i}\end{array}\right]^{\top}\,. \tag{5}\]
With this notation, we can write the prediction obtained from the transformer on the final token as
\[\hat{y}_{n+1}=h^{\top}W_{V}G_{\widetilde{D}}W_{K}^{\top}W_{Q}v_{n+1}\,. \tag{6}\]
where \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\). Additionally, we also define the matrix \(X\in\mathbb{R}^{n\times d}\) as the matrix whose \(i^{th}\) row is the row vector \(x_{i}^{\top}\), i.e.
\[X=\left[\begin{array}{ccc}\cdots&x_{1}^{\top}&\cdots\\ \cdots&x_{2}^{\top}&\cdots\\ \vdots&\vdots&\vdots\\ \cdots&x_{n}^{\top}&\cdots\end{array}\right]\,, \tag{7}\]
and we define the vector \(\vec{y}\in\mathbb{R}^{n}\) as the vector whose \(i^{\text{th}}\) entry is \(y_{i}\), i.e.
\[\vec{y}=\left[\begin{array}{c}y_{1}\\ y_{2}\\ \vdots\\ y_{n}\end{array}\right]\,. \tag{8}\]
Finally, it is worth noting that we can write the loss function as
\[L(W_{K},W_{Q},W_{V},h)=\mathbb{E}_{D\sim\mathcal{T}}[(h^{\top}W_{V}G_{\widetilde {D}}W_{K}^{\top}W_{Q}v_{n+1}-y_{n+1})^{2}]\,. \tag{9}\]
Thus, for \(w\in\mathbb{R}^{d+1}\) and \(M\in\mathbb{R}^{(d+1)\times(d+1)}\), if we define
\[L(w,M)=\mathbb{E}_{D\sim\mathcal{T}}[(w^{\top}G_{\widetilde{D}}Mv_{n+1}-y_{n+ 1})^{2}]\,, \tag{10}\]
then \(L(W_{V}^{\top}h,W_{K}^{\top}W_{Q})=L(W_{K},W_{Q},W_{V},h)\). Note that we have a slight abuse of notation, and \(L\) has two different meanings depending on the number of arguments. Finally, with the change of variables \(M=W_{K}^{\top}W_{Q}\) and \(w=W_{V}^{\top}h\), we can write the prediction of the transformer as \(w^{\top}G_{\widetilde{D}}Mv_{n+1}\). Thus, the output of the transformer only depends on the parameters through \(w^{\top}G_{\widetilde{D}}M\).
**Additional Notation.** For a matrix \(A\in\mathbb{R}^{d\times d}\), we write \(A_{i:j,:}\) to denote the sub-matrix of \(A\) that contains the rows of \(A\) with indices between \(i\) and \(j\) (inclusive). Similarly, we write \(A_{:,i:j}\) to denote the sub-matrix of \(A\) that contains the columns of \(A\) with indices between \(i\) and \(j\) (inclusive). We write \(A_{i:j,k:l}\) to denote the sub-matrix of \(A\) containing the entries with row indices between \(i\) and \(j\) (inclusive) and column indices between \(k\) and \(l\) (inclusive).
## 3 Main Result for Linear Models
**Theorem 1** (Global Minimum for Linear Regression Data).: _Suppose \((W_{K}^{*},W_{Q}^{*},W_{V}^{*},h^{*})\) is a global minimizer of the loss \(L\). Then, the corresponding one-layer transformer with linear self-attention implements one step of gradient descent on a linear model with some learning rate \(\eta>0\). More concretely, given a query token \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\), the transformer outputs \(\eta\sum_{i=1}^{n}y_{i}x_{i}^{\top}x_{n+1}\), where \(\eta=\frac{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\hat{w}_{\widetilde{D}}^ {\top}X^{\top}\vec{y}]}{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\vec{y}^{ \top}XX^{\top}\vec{y}]}.\) Here given a prefix \(\widetilde{D}\) of a test-time data sequence \(D\), we let \(\hat{w}_{\widetilde{D}}\) denote the solution to ridge regression on \(X\) and \(\vec{y}\) with regularization strength \(\sigma^{2}\)._
One such construction is as follows. von Oswald et al. (2022) describe essentially the same construction, but our result shows that it is a global minimum of the loss function, while von Oswald et al. (2022) do not theoretically study the construction aside from showing that it is equivalent to one step of gradient descent. We define
\[W_{K}^{*}=\begin{pmatrix}I_{d\times d}&0\\ 0&0\end{pmatrix}\,,W_{Q}^{*}=\begin{pmatrix}I_{d\times d}&0\\ 0&0\end{pmatrix}\,,W_{V}^{*}=\begin{pmatrix}0&0\\ 0&\eta\end{pmatrix}\,,h^{*}=\left[\begin{array}{c}0\\ 1\end{array}\right]\,. \tag{11}\]
Here, the unique value of \(\eta\) which makes this construction a global minimum is \(\eta=\frac{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\hat{w}_{\widetilde{D}}^{\top}X^{\top}\vec{y}]}{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\vec{y}^{\top}XX^{\top}\vec{y}]}\).
To see why this construction implements a single step of gradient descent on a linear model, note that given test time inputs \(x_{1},y_{1},\ldots,x_{n},y_{n},x_{n+1}\), if we write \(v_{i}=\left[\begin{array}{c}x_{i}\\ y_{i}\end{array}\right]\) for \(i\leq n\) and \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\), then the output of the corresponding transformer would be
\[(h^{*})^{\top}\sum_{i=1}^{n}(W_{V}^{*}v_{i})(v_{i}^{\top}(W_{K}^{* })^{\top}W_{Q}^{*}v_{n+1}) =\begin{pmatrix}0&\eta\end{pmatrix}\sum_{i=1}^{n}\left[\begin{array} []{c}x_{i}\\ y_{i}\end{array}\right]\left[\begin{array}{c}x_{i}\\ y_{i}\end{array}\right]^{\top}\begin{pmatrix}I_{d\times d}&0\\ 0&0\end{pmatrix}\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right] \tag{12}\] \[=\eta\sum_{i=1}^{n}y_{i}x_{i}^{\top}x_{n+1}\,. \tag{13}\]
On the other hand, consider linear regression with total squared error as the loss function, using the \(x_{i}\) and \(y_{i}\). Here, the loss function would be
\[L(w)=\frac{1}{2}\sum_{i=1}^{n}(w^{\top}x_{i}-y_{i})^{2}\,, \tag{14}\]
meaning that the gradient is
\[\nabla_{w}L(w)=\sum_{i=1}^{n}(w^{\top}x_{i}-y_{i})x_{i}\,. \tag{15}\]
In particular, if we initialize gradient descent at \(w_{0}=0\), then after one step of gradient descent with learning rate \(\eta\), the iterate would be at \(w_{1}=\eta\sum_{i=1}^{n}y_{i}x_{i}\) -- observe that the final expression in Equation (13) is exactly \(w_{1}^{\top}x_{n+1}\).
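This equivalence is easy to confirm numerically; a minimal sketch reusing `linear_self_attention_prediction` from the sketch in Section 2, with illustrative values of \(d\), \(n\), \(\eta\), and \(\sigma\):

```python
import numpy as np

# Sanity check that the construction in Equation (11) reproduces the
# one-step-GD prediction of Equation (13); all sizes are illustrative.
rng = np.random.default_rng(0)
d, n, eta, sigma = 5, 20, 0.01, 0.1
w = rng.standard_normal(d)
X = rng.standard_normal((n + 1, d))
y = X @ w + sigma * rng.standard_normal(n + 1)
V = np.hstack([X, y[:, None]])               # tokens v_i = [x_i; y_i]
V[-1, -1] = 0.0                              # query token v_{n+1} = [x_{n+1}; 0]

W_K = W_Q = np.block([[np.eye(d), np.zeros((d, 1))],
                      [np.zeros((1, d)), np.zeros((1, 1))]])
W_V = np.zeros((d + 1, d + 1)); W_V[-1, -1] = eta
h = np.zeros(d + 1); h[-1] = 1.0
pred_tf = linear_self_attention_prediction(V, W_K, W_Q, W_V, h)

w1 = eta * X[:n].T @ y[:n]                   # one GD step on (14) from w_0 = 0
assert np.allclose(pred_tf, w1 @ X[-1])

# the same prediction in the (w, M) parameterization of Equation (6)
G = V[:n].T @ V[:n]                          # G_Dtilde of Equation (5)
assert np.allclose((W_V.T @ h) @ G @ (W_K.T @ W_Q) @ V[-1], pred_tf)
```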
Now, we give an overview of the proof of Theorem 1. By the discussion in Section 2, it suffices to show that \(L((W_{V}^{*})^{\top}h^{*},(W_{K}^{*})^{\top}W_{Q}^{*})\) is the global minimum of \(L(w,M)\). The first step of the proof is to rewrite the loss in a more convenient form, getting rid of the expectation over \(x_{n+1}\) and \(y_{n+1}\):
**Lemma 1**.: _Let \(\hat{w}_{\widetilde{D}}\) be the solution to ridge regression with regularization strength \(\sigma^{2}\) on the exemplars \((x_{1},y_{1}),\ldots,(x_{n},y_{n})\) given in a context \(\widetilde{D}\). Then, there exists a constant \(C\geq 0\), which is independent of \(w,M\), such that_
\[L(w,M)=C+\mathbb{E}_{D\sim\mathcal{T}}\|M^{\top}_{:,1:d}G_{ \widetilde{D}}w-\hat{w}_{\widetilde{D}}\|_{2}^{2}\,. \tag{16}\]
As discussed towards the end of Section 2, the prediction can be written as \(w^{\top}G_{\widetilde{D}}Mv_{n+1}\) where \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\), meaning that the effective linear predictor implemented by the transformer is the linear function from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) with weight vector \(M^{\top}_{:,1:d}G_{\widetilde{D}}w\). Thus, we can interpret Lemma 1 as saying that the loss function encourages the effective linear predictor to match the Bayes-optimal predictor \(\hat{w}_{\widetilde{D}}\). Note that it is not possible for the effective linear predictor of the transformer to match \(\hat{w}_{\widetilde{D}}\) exactly, since the transformer can only implement a linear or quadratic function of the \(x_{i}\), while representing \(\hat{w}_{\widetilde{D}}\) requires computing \((X^{\top}X+\sigma^{2}I)^{-1}\), and this is a much more complex function of the \(x_{i}\).
Proof of Lemma 1.: We can write
\[L(w,M) =\mathbb{E}_{D\sim\mathcal{T}}[(y_{n+1}-w^{\top}G_{\widetilde{D} }Mv_{n+1})^{2}] \tag{17}\] \[=\mathbb{E}_{\widetilde{D},x_{n+1}}\Big{[}\mathbb{E}_{y_{n+1}}[(y _{n+1}-w^{\top}G_{\widetilde{D}}Mv_{n+1})^{2}\mid x_{n+1},\widetilde{D}]\Big{]}\,. \tag{18}\]
Let us simplify the inner expectation. For convenience, fix \(\widetilde{D}\) and \(x_{n+1}\), and consider the function \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}\) given by
\[g(u)=\mathbb{E}_{y_{n+1}}[(u\cdot x_{n+1}-y_{n+1})^{2}\mid \widetilde{D},x_{n+1}]\,. \tag{19}\]
It is a well-known fact that the minimizer of \(g(u)\) is given by \(\hat{w}_{\widetilde{D}}\) where \(\hat{w}_{\widetilde{D}}=(X^{\top}X+\sigma^{2}I)^{-1}X^{\top}\vec{y}\) is the solution to ridge regression on \(X\) and \(\vec{y}\) with regularization strength \(\sigma^{2}\). Furthermore,
\[0=\nabla_{u}g(\hat{w}_{\widetilde{D}})=\mathbb{E}_{y_{n+1}}[2( \hat{w}_{\widetilde{D}}\cdot x_{n+1}-y_{n+1})x_{n+1}\mid\widetilde{D},x_{n+1} ]\,, \tag{20}\]
and in particular, taking the dot product of both sides with \(u-\hat{w}_{\widetilde{D}}\) (for any vector \(u\in\mathbb{R}^{d}\)) gives
\[\mathbb{E}_{y_{n+1}}[(\hat{w}_{\widetilde{D}}\cdot x_{n+1}-y_{n+ 1})\cdot(u\cdot x_{n+1}-\hat{w}_{\widetilde{D}}\cdot x_{n+1})]=0\,. \tag{21}\]
Thus, letting \(u=w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}\), we can simplify the inner expectation in Equation (18):
\[\mathbb{E}_{y_{n+1}}[(y_{n+1}-w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}x_{n+1})^{2}\mid x_{n+1},\widetilde{D}] \tag{22}\] \[\qquad=\mathbb{E}_{y_{n+1}}[(y_{n+1}-\hat{w}_{\widetilde{D}}^{\top}x_{n+1}+\hat{w}_{\widetilde{D}}^{\top}x_{n+1}-w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}x_{n+1})^{2}\mid x_{n+1},\widetilde{D}]\] (23) \[\qquad=\mathbb{E}_{y_{n+1}}[(y_{n+1}-\hat{w}_{\widetilde{D}}^{\top}x_{n+1})^{2}\mid x_{n+1},\widetilde{D}]+(\hat{w}_{\widetilde{D}}^{\top}x_{n+1}-w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}x_{n+1})^{2}\] (24) \[\qquad\qquad+2\cdot\mathbb{E}_{y_{n+1}}[(y_{n+1}-\hat{w}_{\widetilde{D}}^{\top}x_{n+1})(\hat{w}_{\widetilde{D}}^{\top}x_{n+1}-w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}x_{n+1})\mid x_{n+1},\widetilde{D}]\] (25) \[\qquad=\mathbb{E}_{y_{n+1}}[(y_{n+1}-\hat{w}_{\widetilde{D}}^{\top}x_{n+1})^{2}\mid x_{n+1},\widetilde{D}]+(\hat{w}_{\widetilde{D}}^{\top}x_{n+1}-w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}x_{n+1})^{2}\,.\qquad(\text{By Equation (21)})\]
We can further write the final expression as \(C_{x_{n+1},\widetilde{D}}+(\hat{w}_{\widetilde{D}}^{\top}x_{n+1}-w^{\top}G_{ \widetilde{D}}M_{\cdot,1:d}x_{n+1})^{2}\), where \(C_{x_{n+1},\widetilde{D}}\) is a constant that depends on \(x_{n+1}\) and \(\widetilde{D}\) but is independent of \(w\) and \(M\). Thus, we have
\[L(w,M) =\mathbb{E}_{\widetilde{D},x_{n+1}}[C_{x_{n+1},\widetilde{D}}]+\mathbb{E}_{\widetilde{D},x_{n+1}}[(\hat{w}_{\widetilde{D}}^{\top}x_{n+1}-w^{\top}G_{\widetilde{D}}M_{\cdot,1:d}x_{n+1})^{2}] \tag{26}\] \[=C+\mathbb{E}_{\widetilde{D}}\|\hat{w}_{\widetilde{D}}-M_{\cdot,1:d}^{\top}G_{\widetilde{D}}w\|_{2}^{2}\,,\qquad\qquad(\text{since $x_{n+1}\sim\mathcal{N}(0,I_{d\times d})$, so $\mathbb{E}_{x_{n+1}}[(a^{\top}x_{n+1})^{2}]=\|a\|_{2}^{2}$})\]
where \(C\) is a constant which is independent of \(w\) and \(M\).
Next, the key step is to replace \(\hat{w}_{\widetilde{D}}\) in the above lemma by \(\eta X^{\top}\vec{y}\).
**Lemma 2**.: _There exists a constant \(C_{1}\geq 0\) which is independent of \(w,M\), such that_
\[L(w,M)=C_{1}+\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|M_{\cdot,1:d}^{\top}G _{\widetilde{D}}w-\eta X^{\top}\vec{y}\|_{2}^{2}\,. \tag{27}\]
Lemma 2 says that the loss depends entirely on how far the effective linear predictor is from \(\eta X^{\top}\vec{y}\). It immediately follows from this lemma that \((W_{K},W_{Q},W_{V},h)\) is a global minimizer of the loss if and only if the effective linear predictor of the corresponding transformer is \(\eta X^{\top}\vec{y}\). Thus, Theorem 1 follows almost directly from Lemma 2, and in the rest of this section, we give an outline of the proof of Lemma 2 -- the detailed proofs of Theorem 1 and Lemma 2 are in Appendix A.
**Proof Strategy for Lemma 2.** Our overall proof strategy is to show that the gradients of
\[L(w,M)=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|M_{\cdot,1:d}^{\top}G_{ \widetilde{D}}w-\hat{w}_{\widetilde{D}}\|_{2}^{2} \tag{28}\]
and
\[L^{\prime}(w,M)=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|M_{ \cdot,1:d}^{\top}G_{\widetilde{D}}w-\eta X^{\top}\vec{y}\|_{2}^{2} \tag{29}\]
are equal at every \(w,M\), from which Lemma 2 immediately follows.6 For simplicity, we write \(A=M_{\cdot,1:d}^{\top}\), so without loss of generality we can show that the gradients of the following two loss functions are identical:
Footnote 6: This is because \(L\) and \(L^{\prime}\) are defined everywhere on \(\mathbb{R}^{d+1}\times\mathbb{R}^{(d+1)\times(d+1)}\), and for any two differentiable functions \(f,g\) defined on an open connected subset \(S\subset\mathbb{R}^{k}\), if the gradients of \(f\) and \(g\) are identical on \(S\), then \(f\) and \(g\) are equal on \(S\) up to an additive constant. This can be shown using the fundamental theorem of calculus.
\[J_{1}(A,w)=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|AG_{ \widetilde{D}}w-\hat{w}_{\widetilde{D}}\|_{2}^{2} \tag{30}\]
and
\[J_{2}(A,w)=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|AG_{ \widetilde{D}}w-\eta X^{\top}\vec{y}\|_{2}^{2}\,. \tag{31}\]
In this section, we discuss the gradients with respect to \(w\) -- we use the same proof ideas to show that the gradients with respect to \(A\) are the same. We have
\[\nabla_{w}J_{1}(A,w)=2\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{ \widetilde{D}}A^{\top}(AG_{\widetilde{D}}w-\hat{w}_{\widetilde{D}}) \tag{32}\]
\[\nabla_{w}J_{2}(A,w)=2\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{ \widetilde{D}}A^{\top}(AG_{\widetilde{D}}w-\eta X^{\top}\vec{y})\,. \tag{33}\]
Thus, showing that these two gradients are equal for all \(A,w\) reduces to showing that for all \(A\), we have
\[\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{\widetilde{D}}A^{ \top}\hat{w}_{\widetilde{D}}=\eta\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{ \widetilde{D}}A^{\top}X^{\top}\vec{y}\,. \tag{34}\]
Recall that \(G_{\widetilde{D}}\) is defined as
\[G_{\widetilde{D}}=\sum_{i=1}^{n}\left[\begin{array}{cc}x_{i}x _{i}^{\top}&y_{i}x_{i}\\ y_{i}x_{i}^{\top}&y_{i}^{2}\end{array}\right]\,. \tag{35}\]
Our first key observation is that for any \(i\in[n]\) and any odd positive integer \(k\), \(\mathbb{E}[y_{i}^{k}\mid X]=0\), since \(y_{i}=w^{\top}x_{i}+\epsilon_{i}\), and both \(w^{\top}x_{i}\) and \(\epsilon_{i}\) have distributions which are symmetric around \(0\). This observation also extends to any odd-degree monomial of the \(y_{i}\). Using this observation, we can simplify the left-hand side of Equation (34) as follows. We can first write it as
\[\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{\widetilde{D}}A^{ \top}\hat{w}_{\widetilde{D}}=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\sum_{ i=1}^{n}\left[\begin{array}{cc}x_{i}x_{i}^{\top}(A^{\top})_{1:d,:}\hat{w}_{ \widetilde{D}}+y_{i}x_{i}(A^{\top})_{d+1,:}\hat{w}_{\widetilde{D}}\\ y_{i}x_{i}^{\top}(A^{\top})_{1:d,:}\hat{w}_{\widetilde{D}}+y_{i}^{2}(A^{\top})_ {d+1,:}\hat{w}_{\widetilde{D}}\end{array}\right]\,. \tag{36}\]
Then, since \(\hat{w}_{\widetilde{D}}=(X^{\top}X+\sigma^{2}I)^{-1}X^{\top}\vec{y}\), each entry of \(\hat{w}_{\widetilde{D}}\) has an odd degree in the \(y_{i}\), meaning the terms \(x_{i}x_{i}^{\top}(A^{\top})_{1:d,:}\hat{w}_{\widetilde{D}}\) and \(y_{i}^{2}(A^{\top})_{d+1,:}\hat{w}_{\widetilde{D}}\) in the above equation vanish after taking the expectation. Thus, we obtain
\[\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{\widetilde{D}}A^{\top}\hat{w}_{\widetilde{D}}=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\sum_{i=1}^{n}\left[\begin{array}{c}y_{i}x_{i}\,A_{:,d+1}^{\top}\hat{w}_{\widetilde{D}}\\ y_{i}x_{i}^{\top}A_{:,1:d}^{\top}\hat{w}_{\widetilde{D}}\end{array}\right]=\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\left[\begin{array}{c}X^{\top}\vec{y}\,A_{:,d+1}^{\top}\hat{w}_{\widetilde{D}}\\ \vec{y}^{\top}X\,A_{:,1:d}^{\top}\hat{w}_{\widetilde{D}}\end{array}\right]\,. \tag{37}\]
Since each entry of \(\eta X^{\top}\vec{y}\) has an odd degree in the \(y_{i}\), in order to simplify the right-hand side of Equation (34), we can apply the same argument but with \(\hat{w}_{\widetilde{D}}\) replaced by \(\eta X^{\top}\vec{y}\), obtaining
\[\eta\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}G_{\widetilde{D}}A^{\top}X^{\top}\vec{y}=\eta\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\left[\begin{array}{c}X^{\top}\vec{y}\,A_{:,d+1}^{\top}X^{\top}\vec{y}\\ \vec{y}^{\top}X\,A_{:,1:d}^{\top}X^{\top}\vec{y}\end{array}\right]\,. \tag{38}\]
Thus, showing Equation (34) reduces to showing that
\[\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\left[\begin{array}{c}X^{\top}\vec{y}\,A_{:,d+1}^{\top}\hat{w}_{\widetilde{D}}\\ \vec{y}^{\top}X\,A_{:,1:d}^{\top}\hat{w}_{\widetilde{D}}\end{array}\right]=\eta\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\left[\begin{array}{c}X^{\top}\vec{y}\,A_{:,d+1}^{\top}X^{\top}\vec{y}\\ \vec{y}^{\top}X\,A_{:,1:d}^{\top}X^{\top}\vec{y}\end{array}\right]\,. \tag{39}\]
To show Equation (39), our key tool is Lemma 4, which follows from Lemma 3.
**Lemma 3**.: _There exists a scalar \(c_{1}\) such that \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\vec{y}^{\top}X]=c_{1 }I_{d\times d}\), and there exists a scalar \(c_{2}\) such that \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\hat{w}_{\widetilde{D} }^{\top}]=c_{2}I_{d\times d}\)._
**Lemma 4**.: _If \(\eta=\frac{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\hat{w}_{\widetilde{D}}^{ \top}X^{\top}\vec{y}]}{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\vec{y}^{ \top}XX^{\top}\vec{y}]}\), then \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\eta X^{\top}\vec{y}\vec{y}^{\top}X -X^{\top}\vec{y}\hat{w}_{\widetilde{D}}^{\top}]=0\)._
**Overview of Proof of Lemma 3 and Lemma 4.** We give an overview of how we prove Lemma 3 and Lemma 4 here, and defer the full proofs to Appendix A. To show that \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\vec{y}^{\top}X]\) is a scalar multiple of the identity, we first use the fact that even when all of the \(x_{i}\) are rotated by a rotation matrix \(R\), the distribution of \(\vec{y}|X\) remains the same, since the weight vector \(w\) is drawn from \(\mathcal{N}(0,I_{d\times d})\) which is a rotationally invariant distribution. Thus, if we define \(M(X)=\mathbb{E}[\vec{y}\vec{y}^{\top}\mid X]\) as a function of \(X\in\mathbb{R}^{n\times d}\), then for any rotation matrix \(R\in\mathbb{R}^{d\times d}\), we have
\[M(XR^{\top})=\mathbb{E}[\vec{y}\vec{y}^{\top}\mid XR^{\top}]= \mathbb{E}[\vec{y}\vec{y}^{\top}\mid X]=M(X)\,, \tag{40}\]
where the second equality is because multiplying \(X\) on the right by \(R^{\top}\) corresponds to rotating each of the \(x_{i}\) by \(R\). Additionally, if we rotate the \(x_{i}\) by \(R\), then \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\vec{y}^{\top}X]\) remains the same -- this is because the distribution of the \(x_{i}\) is unchanged due to the rotational invariance of the Gaussian distribution, and the conditional distribution \(y_{i}\mid x_{i}\) is unchanged when we rotate \(x_{i}\) by \(R\). This implies that
\[\mathbb{E}[X^{\top}\vec{y}\vec{y}^{\top}X]=\mathbb{E}[X^{\top}M(X)X]=\mathbb{ E}[(XR^{\top})^{\top}M(XR^{\top})XR^{\top}]\,, \tag{41}\]
where the second equality is because, as we observed above, \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\vec{y}^{\top}X]\) remains the same when we rotate each of the \(x_{i}\) by \(R\). Since \(M(XR^{\top})=M(X)\), we have
\[\mathbb{E}[(XR^{\top})^{\top}M(XR^{\top})XR^{\top}]=R\mathbb{E}[X^{\top}M(X)X] R^{\top}\,, \tag{42}\]
which implies that \(\mathbb{E}[X^{\top}\vec{y}\vec{y}^{\top}X]=R\mathbb{E}[X^{\top}\vec{y}\vec{y}^{\top}X]R^{\top}\) for any rotation matrix \(R\), and therefore \(\mathbb{E}[X^{\top}\vec{y}\vec{y}^{\top}X]\) is a scalar multiple of the identity matrix. To finish the proof of Lemma 3, we show that \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\hat{w}_{\widetilde{D}}^{\top}]\) is a scalar multiple of the identity using essentially the same argument. To show Lemma 4, we simply take the trace of \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\eta X^{\top}\vec{y}\vec{y}^{\top}X-X^{\top}\vec{y}\hat{w}_{\widetilde{D}}^{\top}]\), and select \(\eta\) so that this trace is equal to \(0\).
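A Monte Carlo illustration of Lemmas 3 and 4 (a sketch with illustrative sample sizes, not a proof):

```python
import numpy as np

# Monte Carlo sketch: E[X^T y y^T X] and E[X^T y what^T] are approximately
# multiples of the identity (Lemma 3), and the eta of Lemma 4 makes
# E[eta X^T y y^T X - X^T y what^T] vanish up to Monte Carlo error.
rng = np.random.default_rng(1)
d, n, sigma2, trials = 3, 8, 0.25, 100_000
M1, M2 = np.zeros((d, d)), np.zeros((d, d))
for _ in range(trials):
    X = rng.standard_normal((n, d))
    w = rng.standard_normal(d)
    y = X @ w + np.sqrt(sigma2) * rng.standard_normal(n)
    Xty = X.T @ y
    w_hat = np.linalg.solve(X.T @ X + sigma2 * np.eye(d), Xty)   # ridge solution
    M1 += np.outer(Xty, Xty) / trials
    M2 += np.outer(Xty, w_hat) / trials
eta = np.trace(M2) / np.trace(M1)           # the eta of Lemma 4
print(np.round(M1, 1))                      # approximately c1 * I   (Lemma 3)
print(np.abs(eta * M1 - M2).max())          # approximately 0        (Lemma 4)
```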
**Finishing the Proof of Lemma 2.** Recall that, to show that the gradients of \(J_{1}\) and \(J_{2}\) (defined in Equation (30) and Equation (31)) with respect to \(w\) are equal, it suffices to show Equation (39). However, this is a direct consequence of Lemma 4. This is because we can rewrite Equation (39) as
\[\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\left[\begin{array}{c}X^{\top}\vec{y}\hat{w}_{\widetilde{D}}^{\top}A_{:,d+1}\\ \operatorname{tr}(\hat{w}_{\widetilde{D}}\vec{y}^{\top}XA_{:,1:d}^{\top})\end{array}\right]=\eta\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\left[\begin{array}{c}X^{\top}\vec{y}\vec{y}^{\top}XA_{:,d+1}\\ \operatorname{tr}(X^{\top}\vec{y}\vec{y}^{\top}XA_{:,1:d}^{\top})\end{array}\right]\,. \tag{43}\]
This shows that the gradients of \(J_{1}\) and \(J_{2}\) with respect to \(w\) are equal, and we use similar arguments to show that the gradients of \(J_{1}\) and \(J_{2}\) with respect to \(A\) are equal. As mentioned above, this implies that \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|M_{:,1:d}^{\top}G_{\widetilde{D}}w -\hat{w}_{\widetilde{D}}\|_{2}^{2}-\mathbb{E}_{\widetilde{D}\sim\mathcal{T}} \|M_{:,1:d}^{\top}G_{\widetilde{D}}w-\eta X^{\top}\vec{y}\|_{2}^{2}\) is a constant that is independent of \(M\) and \(w\), as desired.
## 4 Results for Different Data Covariance Matrices
In this section, we consider the setting where the \(x_{i}\)'s have a covariance that is different from the identity matrix, and we show that the loss is minimized when the one-layer transformer implements one step of gradient descent with preconditioning. This suggests that the distribution of the \(x_{i}\)'s has a significant effect on the algorithm that the transformer implements.
**Data Distribution.** Concretely, the data distribution is the same as before, but the \(x_{i}\) are sampled from \(\mathcal{N}(0,\Sigma)\), where \(\Sigma\in\mathbb{R}^{d\times d}\) is a positive semi-definite (PSD) matrix. The outputs are generated according to \(y_{i}=w^{\top}x_{i}+\epsilon_{i}\), where \(w\sim\mathcal{N}(0,\Sigma^{-1})\). This can equivalently be written as \(x_{i}=\Sigma^{1/2}u_{i}\), where \(u_{i}\sim\mathcal{N}(0,I_{d\times d})\), and \(y_{i}=(w^{\prime})^{\top}u_{i}+\epsilon_{i}\), where \(w^{\prime}\sim\mathcal{N}(0,I_{d\times d})\). We keep all other definitions, such as the loss function, the same as before.
**Theorem 2** (Global Minimum for 1-Layer 1-Head Linear Self-Attention with Skewed Covariance).: _Suppose \((W_{K}^{*},W_{Q}^{*},W_{V}^{*},h^{*})\) is a global minimizer of the loss \(L\) when the data is generated according to the distribution given in this section. Then, the corresponding one-layer transformer implements one step of preconditioned gradient descent, on the least-squares linear regression objective, with preconditioner \(\Sigma^{-1}\), for some learning rate \(\eta>0\). Specifically, given a query token \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\), the transformer outputs \(\eta\sum_{i=1}^{n}y_{i}(\Sigma^{-1}x_{i})^{\top}x_{n+1}\), where \(\eta=\frac{\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[\vec{y}^{\top}X(X^{\top}X +\sigma^{2}\Sigma)^{-1}X^{\top}\vec{y}]}{\mathbb{E}_{\widetilde{D}\sim \mathcal{T}}[\vec{y}^{\top}X\Sigma^{-1}X^{\top}\vec{y}]}\)._
To prove this result, we essentially perform a change of variables to reduce this problem to the setting of the previous section -- then, we directly apply Theorem 1. The detailed proof is given in Appendix B.
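As a concrete illustration of this statement (a sketch of our own; the covariance matrix, problem sizes, and learning rate below are arbitrary), one can verify that a single preconditioned gradient step from \(w_{0}=0\) on the least-squares objective reproduces the predictor form in Theorem 2:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, eta = 4, 12, 0.05

A = rng.standard_normal((d, d))
Sigma = A @ A.T + np.eye(d)                       # an arbitrary PSD covariance
Sigma_inv = np.linalg.inv(Sigma)
L = np.linalg.cholesky(Sigma)

X = rng.standard_normal((n, d)) @ L.T             # x_i ~ N(0, Sigma)
w = np.linalg.solve(L.T, rng.standard_normal(d))  # w ~ N(0, Sigma^{-1})
y = X @ w + 0.1 * rng.standard_normal(n)
x_query = L @ rng.standard_normal(d)

# one preconditioned GD step from w0 = 0 on (1/2)||Xw - y||^2
w1 = eta * Sigma_inv @ (X.T @ y)

# predictor form of Theorem 2: eta * sum_i y_i (Sigma^{-1} x_i)^T x_query
out = eta * sum(y[i] * (Sigma_inv @ X[i]) @ x_query for i in range(n))

print(np.allclose(w1 @ x_query, out))             # True
```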
## 5 Results for Nonlinear Target Functions
In this section, we extend to a setting where the target function is nonlinear -- our conditions on the target function are mild, and for instance allow the target function to be a fully-connected neural network with arbitrary depth/width. However, we keep the model class the same (i.e. 1-layer transformer with linear self-attention). We find that the transformer which minimizes the pre-training loss still implements one step of GD on the _linear regression objective_ (Theorem 3), even though the target function is nonlinear. This suggests that the distribution of \(y_{i}|x_{i}\) does not affect the algorithm learned by the transformer as much as the distribution of \(x_{i}\).
**Data Distribution.** In this section, we consider the same setup as Section 3, but we change the distribution of the \(y_{i}\)'s. We now assume \(y_{i}=f(x_{i})+\epsilon_{i}\), where \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\) as before, but \(f\) is drawn from a family of nonlinear functions satisfying the following assumption:
**Assumption 1**.: _We assume that the target function \(f\) is drawn from a family \(\mathcal{F}\), with a probability measure \(\mathbb{P}\) on \(\mathcal{F}\), such that the following conditions hold: (1) for any fixed rotation matrix \(R\in\mathbb{R}^{d\times d}\), the distribution of functions \(f\) is the same as the distribution of \(f\circ R\) (where \(\circ\) denotes function composition); and (2) the distribution of \(f\) is symmetric under negation. In other words, if \(E\subset\mathcal{F}\) is measurable under \(\mathbb{P}\), then \(\mathbb{P}(E)=\mathbb{P}(-E)\), where \(-E=\{-f\mid f\in E\}\)._
For example, Assumption 1 is satisfied when \(f(x)\) is a fully connected neural network, with arbitrary depth and width, where the first and last layers have i.i.d. \(\mathcal{N}(0,1)\) entries. Under this assumption, we prove the following result:
**Theorem 3** (Global Minimum for 1-Layer 1-Head Linear Self-Attention with Nonlinear Target Function).: _Suppose Assumption 1 holds, and let \((W^{*}_{K},W^{*}_{Q},W^{*}_{V},h^{*})\) be a global minimizer of the pre-training loss. Then, the corresponding one-layer transformer implements one step of gradient descent on the least-squares linear regression objective, given \((x_{1},y_{1},\ldots,x_{n},y_{n})\). More concretely, given a query token \(v_{n+1}=\left[\begin{array}{c}x_{n+1}\\ 0\end{array}\right]\), the transformer outputs \(\eta\sum_{i=1}^{n}y_{i}x_{i}^{\top}x_{n+1}\), where \(\eta=\frac{\mathbb{E}_{\mathbb{P}}[\overline{u}_{\widetilde{D}}^{\top}X^{ \top}\widetilde{y}]}{\mathbb{E}_{\mathbb{P}}[\widetilde{y}^{\top}XX^{\top} \widetilde{y}]}\) and \(\overline{u}_{\widetilde{D}}=\operatorname*{argmin}_{u}\mathbb{E}_{x_{n+1},y _{n+1}}[(u\cdot x_{n+1}-y_{n+1})^{2}\mid\widetilde{D}]\)._
The result is essentially the same as that of Theorem 1 -- note that the learning rate is potentially different, as it may depend on the function family \(\mathcal{F}\). The proof is analogous to the proof of Theorem 1. First we prove the analogue of Lemma 1, defining \(L(w,M)\) as in Section 2:
**Lemma 5**.: _There exists a constant \(C\geq 0\) such that_
\[L(w,M)=C+\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|M^{\top}_{,:1:d}G_{ \widetilde{D}}w-\overline{u}_{\widetilde{D}}\|_{2}^{2}\,. \tag{44}\]
_Here \(\overline{u}_{\widetilde{D}}=\operatorname*{argmin}_{u}\mathbb{E}_{x_{n+1},y _{n+1}}[(u\cdot x_{n+1}-y_{n+1})^{2}\mid\widetilde{D}]\)._
Next, in the proof of Lemma 2, we used the fact that odd-degree polynomials of the \(y_{i}\) have expectation \(0\) -- the corresponding lemma in our current setting is as follows:
**Lemma 6**.: _For even integers \(k\) and for \(i\in[n]\), \(\mathbb{E}[y_{i}^{k}\overline{u}_{\widetilde{D}}\mid X]=0\). This also holds with \(y_{i}^{k}\) replaced by an even-degree monomial of the \(y_{i}\)._
_Additionally, for odd integers \(k\) and for \(i\in[n]\), \(\mathbb{E}[y_{i}^{k}\mid X]=0\). This also holds with \(y_{i}^{k}\) replaced by an odd-degree monomial of the \(y_{i}\)._
Proof of Lemma 6.: This follows from Assumption 1. This is because for each outcome (i.e. choice of \(f\) and \(\epsilon_{1},\ldots,\epsilon_{n}\)) which leads to \((x_{1},y_{1},\ldots,x_{n},y_{n})\), the corresponding outcome \(-f,-\epsilon_{1},\ldots,-\epsilon_{n}\) which leads to \((x_{1},-y_{1},\ldots,x_{n},-y_{n})\) is equally likely. The \(\overline{u}_{\widetilde{D}}\) which is obtained from the second outcome is the negative of the \(\overline{u}_{\widetilde{D}}\) which is obtained from the first outcome. If \(k\) is even, then \(y_{i}^{k}\) is the same under both outcomes, so the average of \(y_{i}^{k}\overline{u}_{\widetilde{D}}\) over these two outcomes is \(0\). If \(k\) is odd, then \(y_{i}^{k}\) under the second outcome is the negative of \(y_{i}^{k}\) under the first outcome, so the average of \(y_{i}^{k}\) over these two outcomes is \(0\). This completes the proof of the lemma.
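This sign-symmetry argument is straightforward to check by simulation. The sketch below (our own illustration; the two-layer tanh network, its width, and the trial count are arbitrary choices) draws many functions \(f\) from a family of the kind satisfying Assumption 1 and verifies that odd moments of \(y\) vanish:

```python
import numpy as np

rng = np.random.default_rng(2)
d, width, trials = 3, 16, 100_000
x = rng.standard_normal(d)               # one fixed input

ys = np.empty(trials)
for t in range(trials):
    B = rng.standard_normal((width, d))  # first layer, i.i.d. N(0, 1) entries
    a = rng.standard_normal(width)       # last layer, i.i.d. N(0, 1) entries
    ys[t] = a @ np.tanh(B @ x)           # y = f(x) for a fresh draw of f

# (B, a) and (B, -a) are equally likely, so f and -f have the same law and
# odd moments of y vanish, as used in the proof above
print(ys.mean(), (ys**3).mean())         # both close to 0
```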
Next, we show the analogue of Lemma 3.
**Lemma 7**.: \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\vec{y}^{\top}X]\) _and \(\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\overline{u}_{\widetilde{D}}^{\top}]\) are scalar multiples of the identity. Thus,_
\[\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\overline{u}_{\widetilde{D}}^{\top}]=\eta\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}[X^{\top}\vec{y}\vec{y}^{\top}X] \tag{45}\]
_where \(\eta=\frac{\mathbb{E}_{\mathcal{D}}[\overline{u}_{\widetilde{D}}^{\top}X^{\top }\vec{y}]}{\mathbb{E}_{\mathcal{D}}[\vec{y}^{\top}XX^{\top}\vec{y}]}\)._
The proof of Lemma 7 is nearly identical to that of Lemma 3, with Assumption 1 used where appropriate. For completeness, we include the proof in Appendix C. We now state the analogue of Lemma 2:
**Lemma 8**.: _There exists a constant \(C_{1}\geq 0\) which is independent of \(w,M\), such that_
\[L(w,M)=C_{1}+\mathbb{E}_{\widetilde{D}\sim\mathcal{T}}\|M_{:,1:d}^{\top}G_{\widetilde{D}}w-\eta X^{\top}\vec{y}\|_{2}^{2}\,, \tag{46}\]
_where \(\eta=\frac{\mathbb{E}_{\mathcal{D}}[\overline{u}_{\widetilde{D}}^{\top}X^{ \top}\vec{y}]}{\mathbb{E}_{\mathcal{D}}[\vec{y}^{\top}XX^{\top}\vec{y}]}\)._
Theorem 3 now follows from Lemma 8 because \(M_{:,1:d}^{\top}G_{\widetilde{D}}w\) is the weight vector for the effective linear predictor implemented by the transformer. All missing proofs are in Appendix C.
## 6 Conclusion
We theoretically study one-layer transformers with linear self-attention trained on noisy linear regression tasks with randomly generated data. We confirm the empirical finding of von Oswald et al. (2022) by mathematically showing that the global minimum of the pre-training loss for one-layer transformers with linear self-attention corresponds to one step of GD on a least-squares linear regression objective, when the covariates are from an isotropic Gaussian distribution. We find that when the covariates are not from an isotropic Gaussian distribution, the global minimum of the pre-training loss instead corresponds to pre-conditioned GD, while if the covariates are from an isotropic Gaussian distribution and the response is obtained from a _nonlinear_ target function, then the global minimum of the pre-training loss will still correspond to one step of GD on a least-squares linear regression objective. We study single-head linear self-attention layers -- it is an interesting direction for future work to study the global minima of the pre-training loss for a multi-head linear self-attention layer. Another interesting direction is to study the algorithms learned by multi-layer transformers when the response is obtained from a _nonlinear_ target function. We note that Ahn et al. (2023) have studied the case of multi-layer transformers when the target function is linear. They show that for certain restricted parameterizations of multi-layer linear transformers, the global minima or critical points of the pre-training loss correspond to interpretable gradient-based algorithms.
## Acknowledgments
The authors would like to thank the support from NSF IIS 2211780 and a gift from Open Philanthropy.
|
2307.04219 | Large Satellite Constellations and Their Potential Impact on VGOS
Operations | Large LEO satellite constellations (or so-called Mega-constellations) will
significantly change the view of the sky in some radio frequency bands. For
VGOS telescopes it is important to understand the potential impact these
constellations will have in their operations, what is the risk of its receivers
going into non-linear behaviour and how much additional power would a telescope
receive if observing in the same frequencies where satellites are transmitting.
This work describes three of these new constellations (as they would look fully
deployed) and summarizes the results of a particular study considering two VGOS
telescopes (Onsala and Wettzell). | Federico Di Vruno, Vincenza Tornatore | 2023-07-09T16:22:47Z | http://arxiv.org/abs/2307.04219v1 | # Large satellite constellations and their potential impact on VGOS operations
###### Abstract
Large LEO satellite constellations (or so-called Mega-constellations) will significantly change the view of the sky in some radio frequency bands. For VGOS telescopes it is important to understand the potential impact these constellations will have on their operations, what the risk is of their receivers going into non-linear behaviour, and how much additional power a telescope would receive if observing at the same frequencies where satellites are transmitting. This work describes three of these new constellations (as they would look fully deployed) and summarizes the results of a particular study considering two VGOS telescopes (Onsala and Wettzell).
VGOS, RFI, Mega-constellations, Satellite constellations
## 1 Introduction
The industrialization of spacecraft construction and the lowering costs of space launches have paved the way for big plans in Low Earth Orbit (LEO). Large satellite constellations like Starlink phase 1 (with 4400 satellites) and OneWeb phase 1 (with 648 satellites) are already in the deployment phase; others like Project Kuiper (from Amazon) or Guowang (from China) are in their development phase, and others with even larger numbers are being filed into the International Telecommunication Union (ITU) system (see Table 1). With altitudes between 500 km and 1200 km, these new constellations will surround the planet almost homogeneously. From a radio telescope point of view, the situation in the sky will change considerably. This change is already evident in the number of active satellites in LEO, from about 2000 in 2018 to more than 5000 in 2022, and the trend suggests it may reach hundreds of thousands in this decade [5].
Until now, most of the satellites for internet communication were located in the geostationary belt (at approximately 35780 km altitude), appearing fixed in the sky for a terrestrial observer [7]. The new LEO satellites will orbit the Earth with a period of about 90 minutes and will be seen as hundreds to thousands of bright and fast-moving radio sources in the sky with downlinks in frequency bands from 10.7 GHz up to 76 GHz (see Section 2.2).
Contrary to the situation with terrestrial radio frequency interference (RFI), it is not possible to build radio telescopes far away from satellite transmissions [1]; the challenge is further increased by the opposing pointing directions of radio telescope beams and user downlink antenna beams.
The typical power flux density (\(PFD\)) of satellite constellations is on the order of \(-146\ dBW/m^{2}\) in \(4\ kHz\) ([12], [6]), equivalent to \(62\times 10^{6}\ Jy\), i.e. more than 7 orders of magnitude brighter than a typical VGOS source [8]. These strong signals will require a radio astronomy receiver to have a large dynamic range to accommodate the RFI and still be able to detect faint cosmic sources in other frequency channels within the receiver band. This is normally possible for modern radio astronomy receivers, but it can be different in some particular situations such as total-power bolometric receivers or receivers with a low effective number of bits (ENB) [3].
## 2 Large LEO Constellations
Radio astronomy has been dealing with satellite transmissions since the very first satellites were launched back in the 1960s. Implementing different strategies such as using analog receivers with large dynamic ranges, smart scheduling, and RFI flagging among others, radio telescopes have been more or less able to mitigate (or avoid) the effect of these strong radio transmissions towards Earth [1]. In conjunction with these strategies, spectrum management has also played a key role in dealing with the effects of satellites: several radio astronomy groups have worked at national, regional and international level for the protection of the radio astronomy service (RAS) frequency bands allocated by the International Telecommunication Union (ITU). Some of these efforts were successful, as in the GLONASS example, while others involve battles that are still ongoing 20 years after satellite deployment, as in the IRIDIUM case [2].
The exponential growth in the number of active satellites in Low Earth Orbit [5] could result in more than 2000 satellites above the local horizon at any moment in time. Radio telescopes are sensitive to any transmitter in line of sight through their main beam or antenna sidelobes.
### Walker-Delta constellations
All these new constellations follow a "Walker Delta" type of distribution, composed of orbital _shells_ at a certain altitude; each shell contains several _orbital planes_ with a certain inclination with respect to the Equator, distributed homogeneously over the 360 degrees of right ascension. Each one of the constellation's planes contains \(N\) satellites; a representation of Starlink Phase 2 can be found in Figure 1.
A shell of a Walker-Delta constellation [17] is described by \(i=t/p/f\) where \(i\) is the inclination, \(t\) is the total number of satellites, \(p\) is the number of equally spaced planes, and \(f\) is the relative spacing between satellites in adjacent planes. This description makes it very simple to simulate any of these constellations with the purpose of studying its geometric distribution in LEO and also its effect on radio telescopes. It is also possible to use existing Two-Line Elements (TLEs) to obtain the approximate position of existing satellites in space, which can be useful to compare observations to simulation.
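As a minimal sketch of this parameterization (our own illustrative code, not from the paper; circular orbits, a spherical Earth, and the phasing value used in the example are assumptions), a snapshot of the satellite positions of a single \(i=t/p/f\) shell can be generated as follows:

```python
import numpy as np

def walker_delta(i_deg, t, p, f, altitude_km, anomaly0=0.0):
    """Snapshot positions (km, Earth-centered) of an i=t/p/f Walker-Delta shell.

    Circular orbits and a spherical Earth are assumed; planes are spaced
    360/p deg in RAAN and the phase offset between adjacent planes is f*360/t.
    """
    r = 6371.0 + altitude_km
    inc = np.radians(i_deg)
    s = t // p                       # satellites per plane
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(inc), -np.sin(inc)],
                   [0.0, np.sin(inc), np.cos(inc)]])
    pos = []
    for plane in range(p):
        raan = 2.0 * np.pi * plane / p
        rz = np.array([[np.cos(raan), -np.sin(raan), 0.0],
                       [np.sin(raan), np.cos(raan), 0.0],
                       [0.0, 0.0, 1.0]])
        for k in range(s):
            u = 2.0 * np.pi * k / s + 2.0 * np.pi * f * plane / t + anomaly0
            pos.append(rz @ rx @ (r * np.array([np.cos(u), np.sin(u), 0.0])))
    return np.array(pos)

# e.g. the 550 km Starlink phase 1 shell of Table 4: i=53, 72 planes x 22 sats
shell = walker_delta(53.0, 1584, 72, 1, 550.0)   # f=1 chosen arbitrarily here
print(shell.shape)                               # (1584, 3)
```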
Figure 2 shows a qualitative view of the sky from the Wettzell VGOS station (lat 49 degrees), with the position of different satellite constellations simulated for 100 seconds. It is simple to see how the density of satellites in the sky will drastically change in the near future if all constellations planned are deployed.
\begin{table}
\begin{tabular}{|l|c|c|} \hline Constellation & Number of Satellites & Altitude [km] \\ \hline Starlink Phase 1 & 4,400 & 550 \\ OneWeb Phase 1 & 648 & 1200 \\ Amazon Phase 1 & 3,200 & \(\sim\) 600 \\ Guowang (GW) & 13,000 & 590 to 1145 \\ Starlink VLEO & 7600 & 340 \\ Telesat & 298 & 1000 \\ Starlink Phase 2 & 30,000 & 328 to 614 \\ OneWeb Phase 2 & 6,372 & 1200 \\ Cinnamon–937 & 327,320 & 550 to 643 \\ \hline \end{tabular}
\end{table}
Table 1: Some of the large LEO constellations in deployment and planned.
Figure 1: View of Starlink Phase 2 constellation with 30000 satellites; different colors are used for each of the shells of the constellation.
### Radio frequencies
Satellite constellations transmit their downlink signals in frequencies allocated to the Fixed Satellite Service (FSS). Table 2 contains some of the currently in-use and planned FSS bands; it is important to note that several ITU-protected RAS bands lie immediately adjacent or in very close proximity.
The close vicinity of the satellites' downlinks to radio astronomy bands is a matter of concern for radio astronomers and spectrum managers. As an example, the protection of the 10.6-10.7 GHz Radio Astronomy Service (RAS) band, which includes a _passive band_ in 10.68-10.7 GHz protected by the footnote RR No. 5.340 in the ITU-R Radio Regulations (RR), was studied for the Starlink Phase 1 and OneWeb Phase 1 constellations in [4], with the conclusion that both systems should not use the first 250 MHz channel to protect the RAS band. These signals can not only impact sensitive observations in the RAS protected bands, but can also affect wideband receivers which include the frequency range of user downlinks. Such wideband receivers (from 1 to 14 GHz in the case of VGOS) are necessary to conduct cutting edge science or Geodesy [8].
This paper focuses on the downlink frequency range 10.7 to 12.75 GHz, where both OneWeb and Starlink have divided the band into 8 channels of 250 MHz each. The study can be replicated for higher frequency bands with the appropriate modification of satellite and telescope characteristics.
## 3 Potential impact on VGOS
By using large reflector antennas pointed towards the sky and wideband receivers covering the frequency range 1 to 14 GHz [8], VGOS telescopes can be impacted by downlinks of the large satellite constellations in different ways. In fact, the VGOS bandwidth is wide, the protected radio astronomy band is very narrow, and the Starlink and OneWeb frequencies use a considerable portion of spectrum. The severity of this impact depends on the interaction between the radio telescope beam and the satellite downlink beams. One of the most important aspects is how much a correlated baseline can be affected, as the primary product of a VGOS observation. Nevertheless, the multi-dimensionality of this problem requires an analysis of the complete signal reception mechanisms and how each part of the signal chain may be impacted.
In a typical VGOS schedule, targets are observed with durations in the order of seconds to tens of seconds; the position of the target in the local sky and the density of satellites deployed will define how much
\begin{table}
\begin{tabular}{|l|c|c|} \hline Frequency & Band name & Protected RAS bands \\ & & (primary) \\ \hline
10.7 - 12.75 GHz & Ku & 10.6 - 10.7 GHz \\
19.7 - 20.2 GHz & Ka & 22.21 - 22.5 GHz \\
37.5 - 42.5 GHz & V & 42.5 - 43.5 GHz \\
71.0 - 76.0 GHz & E & 76 - 77.5 GHz \\ \hline \end{tabular}
\end{table}
Table 2: Frequency bands used by some of the Satellite Constellations.
Figure 2: Sky view from the Wettzell VGOS station with only Geostationary satellites (left), simulation of SL1 and OW1 constellations fully deployed (middle) and simulation of 6 large LEO constellations fully deployed (right). The term ”visible” is used for satellites above the horizon, as radio telescopes can detect satellites in any direction in the sky.
interference will be seen by the telescope. The instantaneous received power from all satellites above the horizon may saturate the analog signal chain (low noise amplifiers, mixers, etc.), causing non-linearities that would render the complete receiver band unusable, even if the digitizer band is tuned to a completely different frequency than the satellite downlink channels. If the RFI power is not as strong and the analog signal chain remains linear, then there can be two possible scenarios:
* First scenario: when the observed band is outside of the satellite downlink frequency range, in which case out-of-band emissions from the satellites could be a problem depending on their level. This work does not focus on this case, but [4] has studied it.
* Second scenario: if the observing band falls within one satellite downlink band (250 MHz channels) or vice versa, strong RFI will be received by the VGOS antenna. This RFI can potentially be mitigated by correlation as long as the number of bits in the digitizer is enough to correctly digitize the signal. Since a VGOS digitizer has only two bits, the total integrated RFI needs to be lower (practically at least 10 dB lower or \(1/10\)) than the integrated noise power of the receiver [3].
Non-linearities and lack of headroom for RFI are transient phenomena and can be considered in terms of a data loss associated with the moments when one satellite is going through the main beam of the radio telescope. The issue of out-of-band emission is related to long integrations and needs a comparison between the level of integrated RFI and the integrated level of the astronomical source under observation. The following section describes a simulation method and presents a particular case for the Starlink phase 1, OneWeb phase 1 and Starlink phase 2 constellations to estimate data loss due to strong received power and the total aggregated RFI; the effects of correlation are not included in this work, as they are currently under study by the authors.
## 4 Simulation methodology
The simulation is based on the Equivalent Power Flux Density (\(epfd\)) concept (see [11]), where the satellite constellation is propagated for a defined time duration, obtaining the coordinates and attitude of every satellite for each time step. Then, the telescope antenna is pointed towards a defined _sky-cell_ in azimuth and elevation and for each of the simulated time steps, the received power from all satellites above the horizon is calculated with the formula:
\[P_{rx_{(t,p)}}=\sum_{i=0}^{N_{sat}}(PFD_{sat_{(t,i)}}*A_{eff_{RAS_{(t,i)}}}) \tag{1}\]
where:
\(t\) = time step
\(p\) = pointing direction
\(i\) = satellite index
\(PFD_{sat}\) = Satellite power flux density in \(W/m^{2}\) towards the telescope location
\(A_{eff_{RAS}}\) = Effective area of the telescope antenna in \(m^{2}\) towards the satellite position
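The summation in equation (1) can be sketched directly in code (our own illustration; the satellite count, per-satellite \(PFD\), sidelobe gain and frequency are assumed round numbers, not values from this study), with the effective area obtained from the antenna gain as in equation (2):

```python
import numpy as np

def received_power(pfd_w_m2, gain_lin, freq_hz):
    """Equation (1): sum over visible satellites of PFD times the telescope
    effective area in their direction, A_eff = G * lambda^2 / (4 pi)."""
    lam = 3.0e8 / freq_hz
    a_eff = gain_lin * lam**2 / (4.0 * np.pi)
    return np.sum(pfd_w_m2 * a_eff)

# illustrative: 1000 satellites at -146 dBW/m^2 per 4 kHz, scaled to one
# 250 MHz channel, all seen through an assumed 0 dBi far sidelobe
n_sat = 1000
pfd = 10 ** (-146.0 / 10.0) * (250e6 / 4e3) * np.ones(n_sat)   # W/m^2
gain = np.ones(n_sat)                                          # 0 dBi
p_rx = received_power(pfd, gain, 11.7e9)
print(10.0 * np.log10(p_rx) + 30.0, "dBm")   # ~ -81 dBm, well below -50 dBm
```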
Figure 3: OneWeb \(SPFD\) model(top), Starlink \(SPFD\) model(bottom), the red line marks the maximum \(SPFD\) level of \(-182dB/m^{2}/Hz\).
This calculation is iterated for a number of _trials_ (typically hundreds to thousands), where each trial has a random start time of the constellation and therefore contributes to a statistically representative result. In situations where multiple frequencies are calculated, as for example in the case of OneWeb with its 16 fixed-beam antenna (see Figure 3), the number of channels is added to the result. Therefore the final calculation results in a data cube with four dimensions, namely the number of iterations, pointing directions, time steps, and channels: \(N_{iters}\), \(N_{pointing}\), \(N_{time}\) and \(N_{channel}\).
Although the original \(epfd\) calculation as defined by the ITU uses telescope pointings in local coordinates (Alt,Az), this work considers pointings in celestial coordinates (Ra,Dec), as this allows one to understand how celestial positions at different declinations can be impacted by satellite constellation transmissions.
### Satellite position propagation
Using the Python package Cysgp4 [18] and the Astropy Coordinates package [14], the positions of the satellites in horizontal coordinates (Alt,Az) and sky coordinates (Ra,Dec) are calculated for each time step and each iteration (see Figure 5).
### Satellite power flux density \((Pfd)\)
The \(PFD\) from each satellite in a constellation is modelled based on publicly available information (ITU documents and FCC filings). To calculate the power flux density towards the telescope site, the coordinates of the telescope in the satellite reference frame are also calculated using the Python package cysgp4 [18].
OneWeb satellites are modelled based on the information available in the ECC report 271 [4], with 8 channels in the \(10.7-12.75\ GHz\) band. A fixed-beam antenna pattern, like that of the OneWeb system, makes it simpler to calculate the received power in a deterministic way.
The \(PFD\) from Starlink satellites is more complex to model since they have an antenna array that can produce, and electronically steer, several beams in one or multiple frequency channels. The mean \(PFD\) from a Starlink satellite is modelled as a function of the elevation of the satellite, obtained from a Monte Carlo simulation in which the steering angle, the number of beams and the positions of satellite and observer were varied a large number of times. Starlink satellites are modeled as one frequency channel at a time.
### Radio Telescope antenna
The radio telescope antenna is modelled based on [10]. While this model is not a real measurement of the antenna pattern of a radio telescope, it is based on real measurements and is considered a worst case for compatibility studies. To obtain the gain towards a satellite, the angle between the pointing direction and the position of the satellite is calculated.
The Effective Area of the antenna is calculated with the following equation:
\[A_{eff}=G_{RAS}*(\lambda^{2}/(4*\pi)) \tag{2}\]
### Correlation
Interferometry can greatly mitigate the effects of RFI, especially when the baselines are large, as in the case of VLBI [16]. Although Thompson and others have studied the effect that long baselines have on single
Figure 4: Antenna pattern as defined in ITU-R RA.1631.
stationary RFI transmitters, the situation is not the same when potentially hundreds of transmitters using the same frequency and bandwidth are received simultaneously, as can happen now.
For example, in [7] Petrachenko identified the \(10.7-12.75\ GHz\) range as a usable frequency range, as only geostationary satellites were using those frequencies at that time. Now the received RFI signal at one antenna will be the sum of the signals from all satellites above the horizon (of course with different levels of attenuation). This analysis is deferred to a further update of this work.
### Saturation Limit threshold
Digital processing operations in a radio telescope can be applied as long as the analog and digital signal chains behave in a linear manner; strong enough signals will generate non-linearities corrupting the complete receiver band for the duration of the interference. Defining the level where a receiver goes non-linear is not a simple task and will depend on each particular receiver. In the case of the VGOS receivers a conservative value for total power of \(-50\ dBm\) is considered to keep the analog signal chain within the linear regime.
If the received power is below this linearity threshold, the analog signal can then be correctly digitized with a bandwidth of \(1\ GHz\). Two scenarios can be identified:
1. Digitizing a frequency range outside of the \(10.7-12.75\ GHz\), which should not have any complications since the signal chain behaves in a linear way and therefore this case will not be further studied;
2. Digitizing in a frequency range within the \(10.7-12.75\ GHz\). In this case it is interesting to understand when the RFI produces a significant amount of power compared to the RMS noise of the receiver.
Given the distinct characteristic of VGOS systems using a 2-bit correlator, it is reasonable to consider that there is not much headroom in the digital signal chain to accommodate RFI; this work considers that any signal at or above the receiver's noise power will result in a data loss. This defines the second threshold as a spectral power flux density equal to the RMS noise of a \(20\ K\) receiver system (\(-215\ dBW/Hz\)).
These two thresholds are used in the simulation; a first set of flags is produced when the total integrated power (considering the 8 channels of \(250\ MHz\) for each constellation) is higher than \(-50\ dBm\) (representing a total data loss) and the second one representing a data loss in the case of observing in the same frequency range as the satellite transmissions.
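A minimal sketch of how the two thresholds translate into flags for a single time sample (our own illustration; the example input values are assumed):

```python
def flag_sample(p_total_dbm, spfd_dbw_hz,
                linearity_dbm=-50.0, noise_dbw_hz=-215.0):
    """Apply the two thresholds of this section to one time sample.

    The -215 dBW/Hz figure corresponds to k_B * 20 K ~ 2.8e-22 W/Hz, the
    noise spectral density of the assumed 20 K receiver system.
    Returns (full_band_loss, digitizer_loss).
    """
    return p_total_dbm >= linearity_dbm, spfd_dbw_hz >= noise_dbw_hz

print(flag_sample(-81.0, -210.0))  # (False, True): analog chain stays linear,
                                   # but an in-band sample would be flagged
```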
After these two flagging stages, low-level RFI will still be present; it is of interest to understand how this will affect the correlation of the baseline. This will be further studied in a future update of this work and compared to the thresholds defined in RA.769 [9].
### Metrics
Based on the threshold limits defined in the previous section, the following metrics are used:
1. Full Band Data Loss (**FBDL**): percentage of time that the complete band is lost due to very strong RFI, where the total received power is \(>-50\ dBm\);
2. Digitizer Data Loss (**DDL**): Percentage of the total observation time (single run multiplied by the number of iterations) that the instantaneous power spectral density is above 10% of the integrated noise power of the receiver. This can be calculated as a function of the declination of the source;
3. Average Equivalent Spectral Power Flux Density (**aESPFD**): average value of the equivalent Spectral Power Flux Density during the observation time in each antenna. The eSPFD is calculated as the received spectral power flux density \([W/m^{2}/Hz]\) divided by the maximum effective antenna area, and it is useful to compare to the SPFD (in units of \(Jy\)) of a celestial source in the main beam of the antenna;
## 5 Case study simulation
A specific case study was selected to understand the impact from several satellite constellations on two telescopes normally involved in VGOS observations; it is the intent to further expand this work into how correlation over the long baseline mitigates the RFI. The VGOS stations in Sweden (Onsala Observatory) and Germany (Wettzell Observatory) were selected as the
test stations, using the parameters in Table 3, and Starlink phase 1, OneWeb phase 1 and Starlink phase 2 as constellations (see Table 4). The simulated observations were run for 100 seconds in 1-second time steps with 100 iterations.
Originally it was intended to use a real VGOS schedule, using the real Ra, Dec of observed sources, but to get more representative results of the impact as a function of source declination, the number of sources was artificially increased to 277 in a random fashion; see Figure 6 for a plot of the source distribution. Figure 5 shows the view of the local sky in (Alt,Az) and how the celestial sources and the satellite constellation (in this case Starlink Phase 1) move across the sky in that timeframe.
## 6 Results
The results for each one of the selected metrics are summarized here for each constellation simulated.
### Full Band Data Loss (FBDL)
Notably, the analog saturation threshold was not reached due to the combination of maximum PFD from the satellites (\(-98\ dBW/m^{2}\) in \(250\ MHz\)) and
maximum effective area of the VGOS antennas (\(106\ m^{2}\) or \(20.3\ dBm^{2}\)), as can be seen in Figure 7. This shows that even with large constellations such as
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Constellation & Altitude & Inclination & Number & Satellites \\ & & & of planes & per plane \\ \hline Starlink ph 1 & 550 & 53 & 72 & 22 \\ \cline{2-5} & 540 & 53.2 & 72 & 22 \\ \cline{2-5} & 570 & 70 & 36 & 20 \\ \cline{2-5} & 560 & 97.6 & 6 & 58 \\ \hline Oneweb ph 1 & 1200 & 87.9 & 18 & 40 \\ \hline Starlink ph 2 & 340 & 53 & 48 & 110 \\ \cline{2-5} & 345 & 46 & 48 & 110 \\ \cline{2-5} & 350 & 38 & 48 & 110 \\ \cline{2-5} & 360 & 96.9 & 30 & 120 \\ \cline{2-5} & 525 & 53 & 28 & 120 \\ \cline{2-5} & 530 & 43 & 28 & 120 \\ \cline{2-5} & 535 & 33 & 28 & 120 \\ \cline{2-5} & 604 & 148 & 12 & 12 \\ \cline{2-5} & 614 & 115.7 & 18 & 18 \\ \hline \end{tabular}
\end{table}
Table 4: Parameters of satellite constellations used for the study [12][13][6].
\begin{table}
\begin{tabular}{|l|c|c|} \hline Station & Wettzell & Onsala \\ \hline Location (lon,lat) (deg) & (12.88, 49.14) & (11.92, 57.39) \\ Height (m) & 600 & 20 \\ Antenna Diameter (m) & 13 & 13 \\ Antenna Efficiency (\%) & 80 & 80 \\ Receiver bandwidth (MHz) & 1000 & 1000 \\ System Temperature (K) & 20 & 20 \\ ITU-R RA.769 & -240 & - 240 \\ threshold (\(dBW/m^{2}/Hz\)) & & \\ \hline \end{tabular}
\end{table}
Table 3: VGOS stations parameters used for the simulation
Figure 5: Horizontal (Alt,Az) view of the pointing directions (in colours) and movement of the Starlink Phase 1 satellites (black) as seen from the Onsala Observatory for a time duration of 100 seconds.
Figure 6: VGOS schedule (2022Jan27) sources in red boxes, selected telescopes pointings for the simulation in blue circles.
Starlink phase 2, the analog receivers would still behave in a linear fashion.
### Digital Data Loss (DDL)
When considering an observation coinciding in frequency with the downlinks of satellites (i.e. within the \(10.7-12.75\ GHz\) range), the DDL varies as a function of the declination of the observed source and the observatory latitude. This effect is attributable to the different satellite-density structures of each constellation around the Earth and the latitude of the observer. This shows that the impact on VGOS stations (and radio telescopes in general) will strongly depend on the observatory latitude. See Figure 8.
### Average Equivalent Spectral Power Flux Density (aESPFD)
After a certain percentage of the observed data was lost as DDL (see Section 6.2), the aESPFD is calculated for each constellation as a function of declination. In this case the flagged percentage is calculated as the product of the flags from the previous section for each antenna.
Considering that the ITU-R RA.769 thresholds for harmful interference for VLBI are defined as \(-193\ dBW/m^{2}/Hz\), representing an ESPFD of 250 \(Jy\) in an antenna of 13 \(m\) diameter, the results show that VGOS observations could in principle be conducted inside the satellite downlink bands (considering the percentage of data lost). See Figure 9.
## 7 Conclusions
This paper proposed metrics to evaluate the impact of large satellite constellations on VGOS operations using an \(epfd\)-like simulation for Starlink ph1 and ph2, and OneWeb ph1, with two European stations as receivers.
Through calculations and simulations it was shown that the maximum received power, even in a beam-to-beam coupling condition with satellites, will not be enough to saturate the analog chain of a VGOS receiver. As for the digitized part, the simulations show that observations in the same band as the downlinks from satellites can have a significant percentage of data loss due to strong signals compared to the thermal
Figure 7: Instantaneous power received by both VGOS antennas as a function of pointing declination with Starlink phase 2 constellation. The Linearity threshold of \(-50~{}dBm\) was not reached in any situation.
noise of the receiver. Nevertheless, the results show that the ESPFD for both antennas and all
Figure 8: Flagged percentage for each antenna and each constellation, a flag is raised when the power spectral density received is above the noise spectral density.
Figure 9: Average Equivalent Spectral Power Flux Density (aEPSPD) as a function of declination for each constellation.
constellations is lower than the thresholds defined by ITU-R for VLBI. Observations outside of the satellite downlink bands should not be impacted by satellite downlinks in this frequency range.
As further work, the authors will continue investigating how correlation can help mitigate these signals from satellite constellations and how the aggregation of all constellations scales the impact.
## Acknowledgements
The authors would like to thank the IVS Coordinating Center at NASA Goddard Space Flight Center (GSFC) for maintaining the archive of IVS sessions. The schedule used in this work is available at the [https://ivscc.gsfc.nasa.gov/sessions/2022/vo2027](https://ivscc.gsfc.nasa.gov/sessions/2022/vo2027) web page. We are grateful to Salvo Butaccio for the assistance with the VGOS schedule, to Dr. Benjamin Winkel for assistance with the use of the Cysgp4 Python package, and to Dr. Jose Antonio Lopez-Perez and Dr. Hayo Hase for useful discussions about VGOS receivers and operations.
|
2303.10943 | On massive neutral lepton scattering on nucleus | The paper presents a theoretical approach to the description of the
relativistic scattering of a massive (neutral) lepton on a nucleus, in which
the latter retains its integrity. The measurable cross section of this process
includes the elastic (or coherent) contribution, when the nucleus remains in
its original quantum state and the inelastic (incoherent) contribution, when
the nucleus goes into another (excited) quantum state. Transition from the
elastic scattering regime to the inelastic scattering regime is regulated
automatically by the dependence of the form factors on the momentum transferred
to the nucleus. At small momentum transfers elastic scattering dominates. As
the transferred momentum increases, the contribution of the inelastic
scattering increases, and the latter becomes dominant at sufficiently large
transferred momenta. The scattering of massive (anti)neutrinos interacting with
nucleons through the $V\mp A$ currents of the Standard Model is considered in
detail. Because of the nonzero masses, an additional channel arises for elastic
and inelastic scattering of these (anti)neutrinos on nuclei due to the
possibility of changing the helicity of these (anti)neutrinos. The expressions
obtained for the cross sections are applicable to any precision data analysis
involving neutrinos and antineutrinos, especially when non-zero neutrino masses
can be taken into account. These expressions can also be used in the analysis
of experiments on direct detection of (neutral) massive weakly interacting
relativistic dark matter particles since, unlike the generally accepted case,
they simultaneously take into account both elastic and inelastic interactions
of the particles. The presence of an "inelastic signal" with its characteristic
signature may be the only registrable evidence of interaction of the dark
matter particle with the nucleus. | V. A. Bednyakov | 2023-03-20T08:56:16Z | http://arxiv.org/abs/2303.10943v1 | # On massive neutral lepton scattering on nucleus
###### Abstract
The paper presents a theoretical approach to the description of the relativistic scattering of a massive (neutral) lepton on a nucleus, in which the latter retains its integrity. The measurable cross section of this process includes the elastic (or coherent) contribution, when the nucleus remains in its original quantum state and the inelastic (incoherent) contribution, when the nucleus goes into another (excited) quantum state. Transition from the elastic scattering regime to the inelastic scattering regime is regulated _automatically_ by the dependence of the nucleon-nucleus form factors on the momentum transferred to the nucleus. At small momentum transfers elastic scattering dominates. AS the transferred momentum increases, the contribution of the inelastic scattering increases, and the latter becomes dominant at sufficiently large transferred momenta. The interaction of a pointlike lepton with structureless nucleons of the target nucleus is parameterized with four effective coupling constants, reflecting the (axial)vector nature of the weak interaction.
The scattering of massive (anti)neutrinos interacting with nucleons through the \(V\mp A\) currents of the Standard Model is considered in detail. Because of the nonzero masses, an additional channel arises for elastic and inelastic scattering of these (anti)neutrinos on nuclei due to the possibility of changing the helicity of these (anti)neutrinos. For example, despite the smallness of the masses at (kinetic) energies of (anti)neutrinos much lower than the neutrino masses (for example, relic ones), the cross section of their interaction with the nucleus turns out to be many times enhanced, at least due to the "nucleus coherence effect".
The expressions obtained for the cross sections are applicable to any precision data analysis involving neutrinos and antineutrinos, especially when non-zero neutrino masses can be taken into account. These expressions can also be used in the analysis of experiments on direct detection of (neutral) massive weakly interacting _relativistic_ dark matter particles since, unlike the generally accepted case, they simultaneously take into account both elastic and inelastic interactions of the particles. The presence of an "inelastic signal" with its characteristic signature may be the only registrable evidence of interaction of the dark matter particle with the nucleus.
###### Contents
* 1 Introduction
* 2 Kinematics and cross section of \(\chi A\) scattering
* 3 The amplitude of \(\chi\) particle scattering on nucleus
* 4 Cross sections of \(\chi\) particle-nucleus scattering
* 4.1 _Coherent and incoherent contributions to the \(\chi A\to\chi A^{(*)}\) scattering_
* 4.2 _Scalar products for \(\chi A\to\chi A^{(*)}\) scattering_
* 4.3 _Relativistic cross sections of the process \(\chi A\to\chi A^{(*)}\)_
* 4.4 _Massive (anti)neutrinos and \(V\mp A\) interaction_
* 5 Conclusions
* 6 Appendix
## 1 Introduction
In [1; 2; 3], an approach was formulated to describe the interaction of _massless_ neutrinos and antineutrinos with a nucleus, in which the nucleus either remains in its original state or goes into an excited state while maintaining its integrity, \(\nu(\bar{\nu})+A\to\nu(\bar{\nu})+A^{(*)}\). In [4], this approach was generalized to the case of nonrelativistic scattering of a massive neutral weakly interacting lepton on nuclei. On this basis, the balance of coherence and incoherence was studied in the problem of direct dark matter particle search with allowance for specific kinematic conditions and excitation energy levels of the nucleus. It was noted that one usually underestimates the role of inelastic processes in this problem [5].
Recall that the approach [1; 2; 3] is based on the description of the nucleus as a bound state of its structureless nucleons, whose interaction with the lepton is given by the effective 4-fermionic Lagrangian and is described by means of the scalar products of the lepton and nucleon currents, which allows one to control the spin states of nucleons in the initial and final states of the nucleus. Due to the constructive use of the completeness of the wave functions of nuclear states (conservation of probability), it is possible to represent the observable cross section as a sum of two terms. One term corresponds to the elastic interaction, in which the initial quantum state of the nucleus is preserved. The other term is the total contribution
to the observable cross section of all other (potentially possible) inelastic processes, which involve a change in the quantum state of the nucleus.
It was found that as the 3-momentum \(\mathbf{q}\) transferred to the nucleus increases, the nuclear form factors of the proton/neutron \(F_{p/n}(\mathbf{q})\) _automatically regulate_ the transition from the elastic regime to the inelastic regime of the lepton-nucleus interaction. This phenomenon is the _fundamental difference_ of the approach [1; 2; 3] from Friedman's concept of coherence [6; 7], when, based on the result of comparing the radius of the target nucleus and the characteristic momentum transferred to it, one decided which formula -- coherent or incoherent -- should be used to calculate the cross section for (anti)neutrino scattering by the nucleus.
It was also noted that elastic and inelastic neutrino \(\nu(\bar{\nu})A\) processes turn out to be experimentally indistinguishable when the recoil energy of the nucleus is the only observable quantity. Therefore, in experiments aimed at studying the coherent scattering of (anti)neutrinos (at sufficiently high energies) by detecting only the recoil energy of the nucleus [8], an incoherent background can occur that is indistinguishable from the signal when \(\gamma\)-quanta de-exciting the nucleus cannot be registered.
A similar situation takes place in the direct detection of dark matter particles. The incoherent (inelastic) contribution to the expected event rate dominates in a number of kinematic regions at quite admissible values of the New Physics parameters [4; 5]. This incoherent contribution is of independent interest because it is an additional source of important information about the nature (unknown in the case of dark matter) of the interaction. It can be measured directly by detection of photons emitted by the target nuclei excited as a result of inelastic processes [9]. The number of such photons is proportional to the ratio of the cross section of the inelastic interaction channel to the cross section of the elastic channel, and these photons should have an energy spectrum characteristic of the nucleus \(A\), which, as a rule, is much larger than the recoil energy of the nucleus, which could greatly simplify their detection.
In this context, the goal of this paper is to further generalize both the _massless neutrino_ [1; 2; 3] and the _massive lepton_ [4; 5] approaches to the _relativistic_ case of interaction of _massive_ neutral weakly interacting particles (\(\chi\) leptons) with nuclei \(\chi A\to\chi A^{(*)}\), and elucidate new regularities that arise within the generalization.
## 2 Kinematics and cross section of \(\chi A\) scattering
When two particles interact to produce two particles, \(\chi+A\to\chi+A^{(*)}\), the 4-momenta of the incoming and outgoing \(\chi\) particles are denoted as \(k=(k_{0}=E_{\chi},\mathbf{k})\) and \(k^{\prime}=(k^{\prime}_{0}=E^{\prime}_{\chi},\mathbf{k}^{\prime})\), and the 4-momenta of the initial and the final state of the nucleus, respectively, as \(P_{n}=(P^{0}_{n},\mathbf{P}_{n})\) and \(P^{\prime}_{m}=(P^{0}_{m},\mathbf{P}_{m})\) (Fig. 1, left). The total energy of the nuclear state \(|P_{n}\rangle\) is \(P^{0}_{n}=E_{\mathbf{P}}+\varepsilon_{n}\), where \(\varepsilon_{n}\) is the internal energy of the \(n\)th quantum state of the nucleus. If the \(\chi\) particle with mass \(m_{\chi}\) and momentum \(\mathbf{k}\) hits the nucleus
at rest along the \(z\) axis and flies away at angle \(\theta\) to the \(z\)-axis with momentum \(\mathbf{k}^{\prime}\), then all 4-momenta can be written as
\[k = \big{(}k_{0}=\sqrt{m_{\chi}^{2}+|\mathbf{k}|^{2}},0,0,k_{z}=|\mathbf{k}| \big{)},\qquad P_{n}=\big{(}P_{n}^{0}=m_{A}+\varepsilon_{n},0,0,0\big{)},\] \[k^{\prime} = \big{(}k_{0}^{\prime}=\sqrt{m_{\chi}^{2}+|\mathbf{k}^{\prime}|^{2}},k _{x}^{\prime}=|\mathbf{k}^{\prime}|\sin\theta,0,k_{z}^{\prime}=|\mathbf{k}^{\prime}| \cos\theta\big{)},\] \[P_{m}^{\prime} = \big{(}P_{m}^{0}=\varepsilon_{m}+\sqrt{m_{A}^{2}+\mathbf{q}^{2}},-| \mathbf{k}^{\prime}|\sin\theta,0,|\mathbf{k}|-|\mathbf{k}^{\prime}|\cos\theta\big{)},\]
where \(m_{A}\) is the mass of the \(A\) nucleus, and \(\varepsilon_{m}\) is the excitation energy of the \(m\)th level of this nucleus [4]. The 4-momentum \(q=(q_{0},\mathbf{q})\) transferred to the nucleus is related to these quantities in the following way:
\[q^{2} \equiv (k-k^{\prime})^{2}=2\big{(}m_{\chi}^{2}-\sqrt{(m_{\chi}^{2}+|\bm {k}^{\prime}|^{2})(m_{\chi}^{2}+|\mathbf{k}|^{2})}+|\mathbf{k}||\mathbf{k}^{\prime}|\cos \theta\big{)},\] \[q_{0} = k_{0}-k_{0}^{\prime}=P_{m}^{0}-P_{n}^{0}=\Delta\varepsilon_{mn} +T_{A}, \tag{1}\] \[\mathbf{q}^{2} = (\mathbf{k}-\mathbf{k}^{\prime})^{2}=(-|\mathbf{k}^{\prime}|\sin\theta)^{2}+( |\mathbf{k}|-|\mathbf{k}^{\prime}|\cos\theta)^{2}=|\mathbf{k}|^{2}+|\mathbf{k}^{\prime}|^{2}-2 |\mathbf{k}||\mathbf{k}^{\prime}|\cos\theta.\]
The notation for the energy difference between the nuclear \(|m\rangle\) and \(|n\rangle\) states is introduced:
\[\Delta\varepsilon_{mn}\equiv\varepsilon_{m}-\varepsilon_{n}. \tag{2}\]
The kinetic energy of the motion of the recoil nucleus is defined as
\[T_{A}(|\mathbf{k}^{\prime}|,\cos\theta)=\sqrt{m_{A}^{2}+|\mathbf{k}^{\prime}|^{2}+|\bm {k}|^{2}-2|\mathbf{k}||\mathbf{k}^{\prime}|\cos\theta}-m_{A}. \tag{3}\]
The maximum value of the recoil energy, \(T_{A}^{\rm max}\), corresponds to the maximum momentum transferred, which is obtained at \(\cos\theta=-1\), i.e. \(\mathbf{q}_{\rm max}^{2}=(|\mathbf{k}|+|\mathbf{k}^{\prime}|)^{2}\). As a rule, \(T_{A}^{\rm max}\leq 200\) keV, and for the most interesting target nuclei \(\Delta\varepsilon_{mn}\) is noticeably less than 1 MeV.
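As a numerical illustration of equation (3) (a sketch of our own; the xenon-like target mass and the 30 MeV/c incident momentum are assumed example values), the maximum recoil energy at \(\cos\theta=-1\) can be estimated directly:

```python
import numpy as np

def t_recoil(k, kp, cos_theta, m_a):
    """Recoil kinetic energy T_A of equation (3); momenta and masses in keV (c=1)."""
    q2 = k**2 + kp**2 - 2.0 * k * kp * cos_theta
    return np.sqrt(m_a**2 + q2) - m_a

m_xe = 122.3e6   # a ~131 u xenon nucleus in keV, an illustrative target
k = 30.0e3       # 30 MeV/c incident momentum, assumed for the example
# backscattering (cos(theta) = -1), taking |k'| ~ |k| as a rough upper bound
print(t_recoil(k, k, -1.0, m_xe))   # ~ 15 keV, consistent with T_A^max <~ 200 keV
```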
Figure 1: An example of \(\chi A\)-interaction due to the exchange of the neutral \(Z\)-boson (left).
Kinematics of this process in the laboratory frame where the nucleus \(A\) is at rest (right).
Since in the lab frame the target nucleus is at rest in some \(|n\rangle\)-state, the differential cross section of the process \(\chi A_{n}\to\chi A_{m}^{(\star)}\) (see, for example, [10; 11; 12; 1; 4; 1]) takes the form
\[\frac{d^{2}\sigma_{mn}}{d|\mathbf{k}^{\prime}|d\cos\theta}=\frac{-|i\mathcal{M}_{mn }|^{2}|\mathbf{k}^{\prime}|^{2}}{2^{5}\pi\sqrt{m_{\chi}^{2}+|\mathbf{k}^{\prime}|^{2}} }\frac{\delta\Big{(}k_{0}-\sqrt{m_{\chi}^{2}+|\mathbf{k}^{\prime}|^{2}}-\Delta \varepsilon_{mn}-T_{A}(|\mathbf{k}^{\prime}|,\cos\theta)\Big{)}}{(m_{A}+ \varepsilon_{m}+T_{A}(|\mathbf{k}^{\prime}|,\cos\theta))\sqrt{k_{0}^{2}(m_{A}+ \varepsilon_{n})^{2}-m_{\chi}^{2}m_{A}^{2}}}. \tag{4}\]
This expression and the recoil energy of the nucleus (3) both depend on momentum \(|\mathbf{k}^{\prime}|\) and \(\cos\theta\). Integration over \(\cos\theta\) is performed using the \(\delta\)-function of the energy conservation law from (4) represented as a function of \(\cos\theta\)
\[\delta(f(\cos\theta))=\sum_{i}\frac{\delta(\cos\theta-\cos\theta_{i})}{|df(\cos\theta_{i})/d\cos\theta|}. \tag{5}\]
Here \(\cos\theta_{i}\) are the solutions (possibly more than one) of the equation
\[f(\cos\theta_{i})\equiv k_{0}-\sqrt{m_{\chi}^{2}+|\mathbf{k}^{\prime}|^{2}}- \Delta\varepsilon_{mn}-T_{A}(|\mathbf{k}^{\prime}|,\cos\theta_{i})=0. \tag{6}\]
The derivative of this function included in (5) is \(\frac{df(\cos\theta)}{d\cos\theta}=\frac{|\mathbf{k}^{\prime}||\mathbf{k}|}{T_{A}(|\bm {k}^{\prime}|,\cos\theta)+m_{A}}\). After integrating expression (4) over \(\cos\theta\) one has
\[\frac{d\sigma_{mn}}{d|\mathbf{k}^{\prime}|}=\frac{-1}{2^{5}\pi\sqrt{k_{0}^{2}(m_{ A}+\varepsilon_{n})^{2}-m_{\chi}^{2}m_{A}^{2}}}\frac{|i\mathcal{M}_{mn}|^{2}|\bm {k}^{\prime}|}{|\mathbf{k}|\sqrt{m_{\chi}^{2}+|\mathbf{k}^{\prime}|^{2}}}\frac{m_{A}+ T_{A}(|\mathbf{k}^{\prime}|,\cos\theta_{i})}{(m_{A}+T_{A}(|\mathbf{k}^{\prime}|,\cos \theta_{i})+\varepsilon_{m})}. \tag{7}\]
Using the Jacobian of the transition from the variable \(|\mathbf{k}^{\prime}|\) to the observable variable \(T_{A}\)
\[\frac{dT_{A}(|\mathbf{k}^{\prime}|,\cos\theta)}{d|\mathbf{k}^{\prime}|}=-\frac{|\mathbf{k} ^{\prime}|}{\sqrt{m_{\chi}^{2}+|\mathbf{k}^{\prime}|^{2}}}, \tag{8}\]
from formula (7) one gets the expression for the differential cross section of \(\chi A\) scattering
\[\frac{d\sigma_{mn}}{dT_{A}}=\frac{d\sigma_{mn}}{d|\mathbf{k}^{\prime}|}\frac{d|\bm {k}^{\prime}|}{dT_{A}}=\frac{|i\mathcal{M}_{mn}|^{2}}{2^{5}\pi\sqrt{w}}\frac{ 1}{|\mathbf{k}|}\frac{T_{A}+m_{A}}{T_{A}+m_{A}+\varepsilon_{m}}, \tag{9}\]
where the notation for the initial particle flux is
\[\sqrt{w}\equiv\sqrt{(m_{\chi}^{2}+|\mathbf{k}|^{2})(m_{A}+\varepsilon_{n})^{2}-m_ {\chi}^{2}m_{A}^{2}}\simeq|\mathbf{k}|m_{A}\sqrt{1+\frac{2\varepsilon_{n}}{m_{A}} \Big{(}1+\frac{m_{\chi}^{2}}{|\mathbf{k}|^{2}}\Big{)}}. \tag{10}\]
Finally, the relativistic cross section of the process \(\chi A_{n}\to\chi A_{m}\) has the form
\[\frac{d\sigma_{mn}}{dT_{A}}\big{(}\chi A_{n}\to\chi A_{m}\big{)}= \frac{|i\mathcal{M}_{mn}|^{2}}{2^{5}\pi|\mathbf{k}|^{2}m_{A}}C_{mn}(T_{A}),\qquad \text{where} \tag{11}\] \[C_{mn}(T_{A})=\frac{1}{\sqrt{1+\frac{2\varepsilon_{n}}{m_{A}} \Big{(}1+\frac{m_{\chi}^{2}}{|\mathbf{k}|^{2}}\Big{)}}}\frac{T_{A}+m_{A}}{T_{A}+m _{A}+\varepsilon_{m}}\simeq O(1),\]
since \(m_{A}\gg T_{A}+\varepsilon_{m}\) in the considered approximation.
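A quick numerical check (our own sketch; all inputs are assumed, illustrative values) confirms that the kinematic factor is indeed of order unity:

```python
import numpy as np

def c_mn(t_a, eps_n, eps_m, k, m_chi, m_a):
    """Kinematic factor C_mn(T_A) of equation (11); all inputs in keV (c=1)."""
    flux = np.sqrt(1.0 + (2.0 * eps_n / m_a) * (1.0 + m_chi**2 / k**2))
    return (t_a + m_a) / (flux * (t_a + m_a + eps_m))

# xenon-like target, 100 keV excited level, 1 MeV lepton, 30 MeV/c momentum
print(c_mn(t_a=10.0, eps_n=0.0, eps_m=100.0, k=30.0e3, m_chi=1.0e3, m_a=122.3e6))
# ~ 0.9999992, i.e. C_mn = O(1) since m_A >> T_A + eps_m
```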
## 3 The amplitude of \(\chi\) particle scattering on nucleus
The probability amplitude of relativistic \(\chi\) lepton scattering on a nucleus, as in the non-relativistic case [4], is based on the assumption that the interaction is between the lepton and the structureless nucleon. This allows one to use the effective 4-fermion Lagrangian [1; 3] written as the product of the lepton \(L_{\mu}(x)=h_{\chi}\overline{\psi}_{\chi}(x)O_{\mu}\psi_{\chi}(x)\) and nucleon \(H^{\mu}(x)=\sum_{f=n,p}h_{f}\overline{\psi}_{f}(x)O^{\mu}\psi_{f}(x)\) currents, where the \(O^{\mu}\) are combinations of \(\gamma\)-matrices.
The element of the \(\mathbb{S}\)-matrix, \(\langle P^{\prime}_{m},k^{\prime}|\mathbb{S}|P_{n},k\rangle\), which determines the probability for the nucleus and the \(\chi\) particle to transform from the initial state \(|P_{n},k\rangle\) to the final state \(\langle P^{\prime}_{m},k^{\prime}|\) due to their interaction is written in the standard way
\[\langle P^{\prime}_{m},k^{\prime}|\mathbb{S}|P_{n},k\rangle=(2\pi)^{4}\delta^{4}(q+P_{n}-P^{\prime}_{m})i\mathcal{M}_{mn}=\frac{iG_{\rm F}}{\sqrt{2}}\int d^{4}x\,H^{\mu}_{mn}(x)\,L^{\chi}_{\mu}(x), \tag{3.1}\]
where \(H^{\mu}_{mn}(x)\equiv\langle P^{\prime}_{m}|H^{\mu}(x)|P_{n}\rangle\) is the matrix element of the transition of the nucleus from the state \(|P_{n}\rangle\) to the state \(\langle P^{\prime}_{m}|\) due to the hadronic current \(H^{\mu}(x)\). Here, the nuclear state wave function is determined by the expression [1; 2; 3; 4]
\[|P_{n}\rangle=\int\Big{(}\prod_{i}^{A}d\mathbf{\widetilde{p}}_{i}^{\star}\Big{)}\frac{\widetilde{\psi}_{n}(\{p^{\star}\})}{\sqrt{A!}}\Phi_{n}(p)|\{p^{\star}\}\rangle,\quad\text{where}\quad\widetilde{\mathbf{p}}_{i}^{\star}=\frac{d\mathbf{p}_{i}^{\star}}{(2\pi)^{3}\sqrt{2E_{\mathbf{p}_{i}^{\star}}}}. \tag{3.2}\]
The function \(\Phi_{n}(p)=(2\pi)^{3}\sqrt{2P_{n}^{0}}\delta^{3}(\mathbf{p}-\mathbf{P})\) corresponds to the nucleus with a certain 3-momentum \(\mathbf{P}\) and energy \(P_{n}^{0}=E_{\mathbf{P}}+\varepsilon_{n}\). In formula (3.2) one has \(\{p^{\star}\}\equiv(p_{1}^{\star},\ldots,p_{A}^{\star})\), and \(p_{i}^{\star}\) is the 4-momentum of the \(i\)th nucleon in the center-of-mass system of the nucleus (at rest). After transforming the right side of (3.1), the matrix element, defined in (3.3), is written as follows:
\[i\mathcal{M}_{mn}=\frac{iG_{\rm F}}{\sqrt{2}}\sqrt{4P_{m}^{0^{\prime}}P_{n}^{0}}\ l_{\mu}(k^{\prime},k,s^{\prime},s)\,h^{\mu}_{mn}(\mathbf{q}), \tag{3.3}\]
where the lepton current is denoted as
\[l_{\mu}(k^{\prime},k,s^{\prime},s)\equiv\overline{u}_{\chi}(\mathbf{k}^{\prime},s^{\prime})O_{\mu}u_{\chi}(\mathbf{k},s). \tag{3.4}\]
The hadronic current \(h^{\mu}_{mn}(\mathbf{q})=\langle m|H^{\mu}(0)|n\rangle\), defined in terms of the nuclear state \(|n\rangle\) in the rest frame of the nucleus, has the form
\[h^{\mu}_{mn}(\mathbf{q}) = \sum_{k}^{A}\frac{\overline{u}(\mathbf{\bar{p}}_{k}^{\star}+\mathbf{q},r_{k}^{\prime})\,O_{k}^{\mu}\,u(\mathbf{\bar{p}}_{k}^{\star},r_{k})}{\sqrt{4E_{\mathbf{\bar{p}}_{k}^{\star}}E_{\mathbf{\bar{p}}_{k}^{\star}+\mathbf{q}}}}\times \tag{3.5}\] \[\times\int\prod_{i=1}^{A}\frac{d\mathbf{p}_{i}^{\star}\delta\big{(}f(\mathbf{p}_{k}^{\star})\big{)}}{(2\pi)^{3}}\widetilde{\psi}_{m}^{\star}(\{p_{\star}^{(k)}\},\mathbf{p}_{k}^{\star}+\mathbf{q})\widetilde{\psi}_{n}(\{p^{\star}\})(2\pi)^{3}\delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{\star}),\]
where \(\mathbf{\bar{p}}_{k}^{\star}(\mathbf{q})\) is the solution of the equation1
Footnote 1: It is assumed here that the mass of the proton is equal to the mass of the neutron, i.e. \(m_{p}=m_{n}\equiv m\).
\[f(\mathbf{\bar{p}})\equiv\sqrt{m^{2}+\mathbf{\bar{p}}^{2}}-\sqrt{m^{2}+(\mathbf{\bar{p}}+\mathbf{q})^{2}}-T_{A}-\Delta\varepsilon_{mn}=0. \tag{3.6}\]
It is the condition for the simultaneous fulfillment of the energy conservation law and the nuclear integrity (Fig. 2).
It "selects" the momentum of the active nucleon \(\bar{\mathbf{p}}=(p_{L}(\mathbf{q}),p_{T})\) depending on \(\mathbf{q}\) in the form [1; 3]
\[p_{L}(\mathbf{q})=-\frac{|\mathbf{q}|}{2}\Bigg{[}1-\sqrt{\beta}\sqrt{1+\frac{4m_{T}^{2}}{\mathbf{q}^{2}(1-\beta)}}\Bigg{]},\ \text{where}\ \beta=\frac{(T_{A}+\Delta\varepsilon_{mn})^{2}}{\mathbf{q}^{2}},\quad m_{T}^{2}=m^{2}+p_{T}^{2}. \tag{3.7}\]
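As a consistency check, note that squaring the condition (3.6) twice turns it into the polynomial identity \((a^{2}-\mathbf{q}^{2}-2p_{L}|\mathbf{q}|)^{2}=4a^{2}(m_{T}^{2}+p_{L}^{2})\) with \(a=T_{A}+\Delta\varepsilon_{mn}\), which the root (3.7) must satisfy. The minimal Python sketch below verifies this numerically; the MeV-scale values are illustrative assumptions, not numbers from the text.

```python
import math

def p_L(q, a, m_T):
    """Longitudinal momentum of the active nucleon, closed form (3.7);
    here a = T_A + Delta_eps_mn and m_T^2 = m^2 + p_T^2."""
    beta = (a / q) ** 2
    return -(q / 2.0) * (1.0 - math.sqrt(beta)
                         * math.sqrt(1.0 + 4.0 * m_T**2 / (q**2 * (1.0 - beta))))

# Illustrative values in MeV (assumptions for this check only):
m, p_T, q, a = 938.0, 100.0, 300.0, 5.0
m_T = math.hypot(m, p_T)

pl = p_L(q, a, m_T)
lhs = (a**2 - q**2 - 2.0 * pl * q) ** 2   # squared form of condition (3.6)
rhs = 4.0 * a**2 * (m_T**2 + pl**2)
print(pl, lhs / rhs)                      # the ratio comes out as 1.0
```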
In formula (3.5), the expression \(\{p_{\star}^{(k)}\}\) coincides with \(\{p_{\star}\}\) except for the \(k\)th element, which is equal to \((\mathbf{p}_{k}^{\star}+\mathbf{q},s_{k})\).
As before in [1; 2; 3; 4], one assumes that the wave function \(\widetilde{\psi}_{n}\) has the form of a product of mutually independent momentum (\(\widetilde{\psi}_{n}\)) and spin (\(\chi_{n}\)) components
\[\widetilde{\psi}_{n}(\{p^{\star}\})=\widetilde{\psi}_{n}(\{\mathbf{p}_{\star}\})\chi_{n}(\{r\}). \tag{3.8}\]
Here the momentum \(\{\mathbf{p}_{\star}\}=(\mathbf{p}_{1}^{\star},\ldots,\mathbf{p}_{A}^{\star})\) and spin \(\{r\}=(r_{1},\ldots,r_{A})\) variables are introduced. Spin functions are normalized by the conditions
\[\chi_{m}^{\star}(\{r\})\chi_{n}(\{r\})=\delta_{nm}\quad\text{and}\quad\chi_{n}^{\star}(\{r^{\prime}\})\chi_{n}(\{r\})=\delta_{\{r^{\prime}\}\{r\}}, \tag{3.9}\]
which mean that if (after the interaction) all the spins of the nucleons remained unchanged, then the spin state of the nucleus also did not change, and, vice versa, if the spin state of the nucleus has not changed, then all the spins of the nucleons must also have remained unchanged.
Since after the interaction the spin \(r_{k}^{\prime}\) of the active nucleon in the \(|m\rangle\) state may turn out to be different from the spin \(r_{k}\) of this nucleon in the \(|n\rangle\) state, we define the product of spin functions
\[\lambda^{mn}(r^{\prime},r)\equiv\lambda^{mn}(\{r^{\prime}\},\{r\})\equiv\chi_{m}^{\star}(\{r^{(k)}\})\chi_{n}(\{r\})=\delta_{mn}\delta_{r^{\prime}r}+(1-\delta_{mn})\lambda_{r^{\prime}r}^{mn}, \tag{3.10}\]
Figure 2: The requirement of simultaneous fulfillment of the energy conservation law and integrity of the nucleus "chooses" the momentum of the active nucleon \(\bar{\mathbf{p}}=(p_{L}(\mathbf{q}),p_{T})\), interaction with which occurs at the point marked with a red star.
where \(\{r^{(k)}\}\equiv\{r^{\prime}\}\) is the same as \(\{r\}\) except for the \(k\)th element, which is equal to \(r^{\prime}_{k}\), and the notation \(\lambda^{mn}_{r^{\prime}r}\equiv\lambda^{mn}(r^{\prime},r)\) for \(m\neq n\) is introduced. Given (3.10) and \(\bar{\mathbf{p}}(\mathbf{q})\) from (3.6), expression (3.5) is rewritten as follows:
\[h^{\mu}_{mn}(\mathbf{q})=\sum_{k=1}^{A}\frac{\bar{u}(\bar{\mathbf{p}}+\mathbf{q},r^{\prime} _{k})O^{\mu}_{k}u(\bar{\mathbf{p}},r_{k})}{\sqrt{4E_{\bar{\mathbf{p}}}E_{\bar{\mathbf{p}}+ \mathbf{q}}}}\lambda^{mn}(r^{\prime},r)\ \langle m|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|n\rangle, \tag{3.11}\]
where the multidimensional integral from (3.5) can be written as a nuclear matrix element of the operator \(\hat{\mathbf{X}}_{k}\) that implements the three-dimensional shift of the \(k\)th nucleon
\[f^{k}_{mn}(\mathbf{q})\equiv\langle m|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|n \rangle\!=\!\!\int\!\!\prod_{i=1}^{A}\frac{d\mathbf{p}^{\star}_{i}}{(2\pi)^{3}} \delta\big{(}f(\mathbf{p}^{\star}_{k})\big{)}\widetilde{\psi}^{\ast}_{m}(\{\mathbf{p} ^{(k)}_{\star}\},\mathbf{p}^{\star}_{k}\!+\!\mathbf{q})\widetilde{\psi}_{n}(\{\mathbf{p}_{ \star}\})(2\pi)^{3}\delta^{3}\big{(}\sum_{l=1}^{A}\mathbf{p}^{\star}_{l}\big{)}. \tag{3.12}\]
Here \(\delta\big{(}f(\mathbf{p}^{\star}_{k})\big{)}\) ensures the integrity of the nucleus after the \(\hat{\mathbf{X}}_{k}\)-operator shift of the momentum of the active \(k\)th nucleon caused by an external action.
Finally, for the matrix element (3.3) that defines the probability amplitude of the process \(\chi_{s}A_{n}\to\chi_{s^{\prime}}A^{(\ast)}_{m}\), one gets the _final_ expression
\[i\mathcal{M}^{s^{\prime}s,r^{\prime}r}_{mn}(\mathbf{q})=i\frac{G_{F}}{\sqrt{2}} \frac{m_{A}}{m}C^{1/2}_{1,mn}\sum_{k=1}^{A}f^{k}_{mn}(\mathbf{q})\lambda^{mn}(r^{ \prime},r)(l_{s^{\prime}s}\,h^{k}_{r^{\prime}r}),\quad\text{where} \tag{3.13}\]
\[(l_{s^{\prime}s}\,h^{k}_{r^{\prime}r})\equiv l_{\mu}(k^{\prime},k,s^{\prime},s )\ \bar{u}(\bar{\mathbf{p}}+\mathbf{q},r^{\prime}_{k})O^{\mu}_{k}u(\bar{\mathbf{p}},r_{k}) \tag{3.14}\]
is the scalar product of the lepton current and the \(k\)th nucleon current, containing all the specifics of the interaction between them. In formula (3.13), the auxiliary factor is introduced,
\[C_{1,mn}\equiv\frac{P^{0}_{n}\,P^{{}^{\prime}0}_{m}}{E_{\bar{\mathbf{p}}}\,E_{\bar {\mathbf{p}}+\mathbf{q}}}\frac{m^{2}}{m_{A}^{2}}\simeq O(1), \tag{3.15}\]
whose value is close to 1 with good accuracy; the dependence on \(n,m\) and \(T_{A}\) manifests itself only at the level of \(O(10^{-3})\).
## 4 Cross sections of \(\chi\) particle-nucleus scattering
### _Coherent and incoherent contributions to the \(\chi A\to\chi A^{(\ast)}\) scattering_
The _observable_ differential cross section of the process \(\chi_{s}A\to\chi_{s^{\prime}}A^{(\ast)}\) can be obtained by averaging over all possible initial \(|n\rangle\) states and summation over all final \(|m\rangle\) states of the nuclear cross section (2.11). After summation over the spin \(r,r^{\prime}\) indices of the (active) nucleon _at the level of_ the matrix element (3.13), the observable cross section can be written as [1, 2, 3, 4]
\[\frac{d\sigma_{s^{\prime}s}}{dT_{A}}(\chi A\to\chi A^{(\ast)})=\frac{G_{F}^{2 }m_{A}}{2^{6}\pi|\mathbf{k}^{\chi}_{l}|^{2}m^{2}}\Big{[}T^{s^{\prime}s}_{m=n}+T^{ s^{\prime}s}_{m\neq n}\Big{]},\quad\text{where} \tag{4.1}\]
\[T^{s^{\prime}s}_{m=n} = g_{\rm c}\sum_{k,j}^{A}\sum_{n}\omega_{n}\Big{[}f^{k}_{nn}f^{j*}_{nn}\sum_{r}(l_{s^{\prime}s},h^{k}_{rr})\sum_{x}(l_{s^{\prime}s}\,h^{j}_{xx})^{*}\Big{]}, \tag{4.2}\] \[T^{s^{\prime}s}_{m\neq n} = g_{\rm i}\sum_{k,j}^{A}\sum_{n}\omega_{n}\Big{[}\sum_{m\neq n}f^{k}_{mn}f^{j*}_{mn}\sum_{r^{\prime}r}\lambda^{mn}_{r^{\prime}r}(l_{s^{\prime}s},h^{k}_{r^{\prime}r})\Big{(}\sum_{x^{\prime}x}\lambda^{mn}_{x^{\prime}x}(l_{s^{\prime}s}\,h^{j}_{x^{\prime}x})\Big{)}^{\dagger}\Big{]}. \tag{4.3}\]
Here \(\sum_{n}\omega_{n}=1\) is the probability sum of all possible initial states of nucleus \(A\). For the sake of completeness, the correction kinematic coefficients \(g_{\rm c}=C_{1,nn}C_{nn}\) and \(g_{\rm i}=C_{1,mn}C_{mn}\) are kept, whose values are close to \(1\) at the \(O(10^{-3})\) level.
Summation over the index \(n\) in formula (4.2) determines the nucleon form factors averaged over all possible initial states of the nucleus [1; 2; 3; 4]
\[\sum_{n}\omega_{n}f^{k}_{nn}f^{j*}_{nn}=\begin{cases}|F_{p/n}(\mathbf{q})|^{2},&\text{when }(k,j)=(p,p)\ \text{ or }(n,n);\\ F_{p}(\mathbf{q})F^{*}_{n}(\mathbf{q}),&\text{when }(k,j)=(p,n);\\ F_{n}(\mathbf{q})F^{*}_{p}(\mathbf{q}),&\text{when }(k,j)=(n,p).\end{cases} \tag{4.4}\]
With this definition in mind, expression (4.2) for the contribution to the cross section corresponding to the preservation of the initial state of the nucleus, when the projection of the spin of the active nucleon does not change, can be represented as the squared modulus of the sum of the proton and neutron contributions
\[T^{s^{\prime}s}_{m=n}(\mathbf{q})=g_{\rm c}\Big{|}\sum_{f=p,n}\sum_{k=1}^{A_{f}}\sum_{r}(l_{s^{\prime}s},h^{f}_{rr}(\mathbf{q}))F_{f}(\mathbf{q})\Big{|}^{2}. \tag{4.5}\]
Here \(A_{f}\) denotes the total number of \(f\)-type nucleons in the nucleus.
To sum over the nuclear indices \(m,n\) in formula (4.3), note that from the spin function normalization (3.9), for _the same_ \(k\)th nucleon there follows the relation
\[\lambda^{mn}_{r^{\prime}_{k}r_{k}}[\lambda^{mn}_{x^{\prime}_{k}x_{k}}]^{*}\equiv\delta_{r^{\prime}_{k}x^{\prime}_{k}}\delta_{r_{k}x_{k}}|\lambda^{mn}_{r^{\prime}_{k}r_{k}}|^{2},\quad\text{where}\quad|\lambda^{mn}_{r^{\prime}_{k}r_{k}}|^{2}=1. \tag{4.6}\]
Therefore, one can assume that \(\lambda^{mn}_{r^{\prime}r}\) do not depend on the indices \(m\) and \(n\); however, \(\lambda^{mn}_{r^{\prime}r}\simeq\lambda^{p/n}_{r^{\prime}r}\) may differ for protons and neutrons. In other words, for any initial spin orientation of the active nucleon (index \(r\)), any orientation of the spin of this nucleon (index \(r^{\prime}\)) is possible after the nuclear \(|n\rangle\to|m\rangle\) transition3. As a result, \(\lambda^{mn}_{r^{\prime}r}\) can be taken out of the summation over \(m,n\) in (4.3). Then, if the indices \(k\) and \(j\) in formula (4.3) "indicate" the same nucleon, e.g., the proton, the summation gives
Footnote 3: Discussion of this approximation is given in Appendix 6.
\[\sum_{n}\omega_{n}\sum_{m\neq n}f^{k}_{mn}f^{k*}_{mn}=\sum_{n}\omega_{n}\Big{[}\langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\sum_{m}|m\rangle\langle m|e^{-i\mathbf{q}\hat{\mathbf{X}}_{k}}|n\rangle\Big{]}-|F_{p}(\mathbf{q})|^{2}=1-|F_{p}(\mathbf{q})|^{2}. \tag{4.7}\]
If \(k\neq j\), but they still indicate protons (index \(p\)), it can be written that
\[\sum_{n}\omega_{n}\sum_{m\neq n}f^{k}_{mn}f^{j*}_{mn}=\sum_{n}\omega_{n}\text{cov}_{nn}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}})\equiv\langle\text{cov}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}})\rangle_{p}, \tag{4.8}\]
where the covariance operator of the shift operators \(e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}\) and \(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\) with respect to the state \(|n\rangle\) is introduced in the form
\[\text{cov}_{nn}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}})\equiv\langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\,e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}|n\rangle-\langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|n\rangle\langle n|e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}|n\rangle. \tag{4.9}\]
Expression (4.8) vanishes at small and at large transferred momenta. A similar consideration applies to neutrons and can be generalized to the common case of protons and neutrons. It is believed that when correlations between nucleons in the nucleus are sufficiently weak, one can neglect covariance functions like (4.8). For example, in nuclear shell models, where multiparticle wave functions of nuclei are constructed in the form of a product of single-particle wave functions [13; 14], the covariance (4.9) identically vanishes. Therefore, in what follows, as in [1; 2; 3; 4], covariance contributions like (4.8) are _completely neglected_. In view of the above, the term (4.3) can be finally presented as a sum over all nucleons
\[T^{s^{\prime}s}_{m\neq n} = g_{\text{i}}\sum_{f=p,n}\big{[}1-|F_{f}(\mathbf{q})|^{2}\big{]}\sum_{k=1}^{A_{f}}\sum_{r^{\prime}r}\big{|}(l_{s^{\prime}s}\,h^{f}_{r^{\prime}r}(\mathbf{q}))\big{|}^{2}. \tag{4.10}\]
Thus, the observable differential cross section (4.1) for the process \(\chi A\to\chi A^{(*)}\) can be written in the form of two fundamentally different terms
\[\frac{d\sigma_{s^{\prime}s}}{dT_{A}} = \frac{d\sigma_{\text{coh}}^{s^{\prime}s}}{dT_{A}}(\chi A\to\chi A)+\frac{d\sigma_{\text{inc}}^{s^{\prime}s}}{dT_{A}}(\chi A\to\chi A^{*}),\quad\text{where} \tag{4.11}\] \[\frac{d\sigma_{\text{coh}}^{s^{\prime}s}}{dT_{A}} = c_{A}g_{\text{c}}\Big{|}\sum_{f=n,p}\sum_{k=1}^{A_{f}}\sum_{r}(l_{s^{\prime}s}\,h^{f}_{rr}(\mathbf{q}))F_{f}(\mathbf{q})\Big{|}^{2}\quad\text{and}\] \[\frac{d\sigma_{\text{inc}}^{s^{\prime}s}}{dT_{A}} = c_{A}g_{\text{i}}\sum_{f=n,p}\sum_{k=1}^{A_{f}}\sum_{r^{\prime}r}|(l_{s^{\prime}s}\,h^{f}_{r^{\prime}r}(\mathbf{q}))|^{2}[1-|F_{f}(\mathbf{q})|^{2}],\]
These two terms respectively correspond to the elastic and inelastic interactions of the \(\chi\) particle with the nucleus \(A\). For convenience, in formulas (4.11) universal factors are introduced,
\[c_{A}\equiv\frac{G_{F}^{2}m_{A}}{2^{6}\pi|\mathbf{k}_{l}^{\chi}|^{2}m^{2}}\quad\text{and}\quad\hat{c}_{A}\equiv\frac{G_{F}^{2}m_{A}}{4\pi}\frac{1}{|\mathbf{k}_{l}^{\chi}|^{2}m^{2}}=4^{2}c_{A}, \tag{4.12}\]
where \(\mathbf{k}_{l}^{\chi}\) is the momentum of the incident \(\chi\) particle in the rest frame of the target nucleus. The helicities of the \(\chi\) particle (in the general case, the projections of its spin onto some direction) in the initial \(s\) and final \(s^{\prime}\) states are assumed to be fixed in (4.11). Later on they can be averaged (summed) over.
Since the scalar products \((l_{s^{\prime}s}\,h^{f}_{r^{\prime}r})\) in formulas (4.11) depend only on the type of the active nucleon and do not depend on its number (that is, on the summation index \(k\)), the summation over \(k\) in formulas (4.11) gives the total numbers of protons and neutrons in
the nucleus. As a result, the differential cross sections for \(\chi\) particle scattering on the nucleus \(\chi_{s}A\to\chi_{s^{\prime}}A^{(*)}\) take the form
\[\frac{d\sigma^{s^{\prime}s}_{\rm inc}}{dT_{A}} = c_{A}g_{\rm i}\sum_{f=p,n}\big{[}1-|F_{f}(\mathbf{q})|^{2}\big{]}\Big{[} A_{+}^{f}\sum_{r^{\prime}=\pm}|(l_{s^{\prime}s}\,h^{\eta,f}_{r^{\prime}+})|^{2}+A_{-} ^{f}\sum_{r^{\prime}=\pm}|(l_{s^{\prime}s}\,h^{\eta,f}_{r^{\prime}-})|^{2} \Big{]},\] \[\frac{d\sigma^{s^{\prime}s}_{\rm coh}}{dT_{A}} = c_{A}g_{\rm c}\Big{|}\sum_{f=p,n}\,F_{f}(\mathbf{q})[A_{+}^{f}(l_{s^ {\prime}s}\,h^{\eta,f}_{++})+A_{-}^{f}(l_{s^{\prime}s}\,h^{\eta,f}_{--})]\Big{|} ^{2}. \tag{4.13}\]
Here the sums over the initial projections of the nucleon spin on the direction of arrival of the \(\chi\) particle, which is marked by the index \(\eta\) for the hadronic currents, are explicitly distinguished. In (4.13), \(A_{\pm}^{f}\) is the number of \(f\)-type nucleons with the spin projection \(\pm 1\) onto this direction.
It is convenient to _finally_ rewrite formulas (4.13) in terms of the total number of \(f\)-type nucleons, \(A_{f}=A_{+}^{f}+A_{-}^{f}\), and the difference in the number of nucleons, \(\Delta A_{f}=A_{+}^{f}-A_{-}^{f}\), having positive and negative spin projections on the selected direction
\[\frac{d\sigma^{s^{\prime}s}_{\rm coh}(\mathbf{q})}{g_{\rm c}dT_{A}} = c_{A}\Big{|}\sum_{f=p,n}F_{f}(\mathbf{q})\frac{A_{f}}{2}Q^{s^{\prime} s}_{f}\Big{|}^{2}, \tag{4.14}\] \[\frac{d\sigma^{s^{\prime}s}_{\rm inc}(\mathbf{q})}{g_{\rm i}dT_{A}} = c_{A}\sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q})\frac{A_{f}}{2}\Big{[} \,Q^{s^{\prime}s}_{+}+\frac{\Delta A_{f}}{A_{f}}\,Q^{s^{\prime}s}_{-}\Big{]}, \quad\text{where}\quad\widehat{F}_{f}^{2}(\mathbf{q})\equiv\big{[}1-F_{f}^{2}(\mathbf{ q})\big{]}.\]
Thus, coherent and incoherent cross sections are defined, respectively, by the following combinations of scalar products:
\[Q^{s^{\prime}s}_{f}\equiv\hat{Q}^{s^{\prime}s}_{+}+\frac{\Delta A_{f}}{A_{f}} \hat{Q}^{s^{\prime}s}_{-},\quad\text{where}\quad\hat{Q}^{s^{\prime}s}_{\pm} \equiv(l_{s^{\prime}s}\,h^{f}_{++})\pm(l_{s^{\prime}s}\,h^{f}_{--}), \tag{4.15}\]
\[\text{and}\quad Q^{s^{\prime}s}_{\pm}\equiv\sum_{r^{\prime}=\pm}|(l_{s^{\prime }s}\,h^{f}_{r^{\prime}+})|^{2}\pm\sum_{r^{\prime}=\pm}|(l_{s^{\prime}s}\,h^{f} _{r^{\prime}-})|^{2}. \tag{4.16}\]
As a rule, if a nucleus has spin zero, then \(\Delta A_{f}=0\). Almost always \(\Delta A_{f}\ll A_{f}\), except for such nuclei as helium [15], which, perhaps, deserves special attention. To complete the derivation of the differential cross section of the process \(\chi A\to\chi A^{(*)}\), it is necessary to have explicit expressions for scalar products \((l_{s^{\prime}s}\,h^{p/n}_{r^{\prime}r})\). These values are given in section 4.2.
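Before turning to the explicit scalar products, it is instructive to look at the qualitative \(A\)-scaling encoded in (4.14): the coherent part grows as \(A^{2}|F(\mathbf{q})|^{2}\), while the incoherent one grows as \(A[1-|F(\mathbf{q})|^{2}]\). The minimal Python sketch below illustrates this with a Gaussian form factor; both the form-factor model and the numerical values are illustrative assumptions, not the form factors used in the text.

```python
import numpy as np

A = 131                            # xenon-like mass number (assumption)
R = 4.8 / 197.3                    # nuclear radius 4.8 fm in MeV^-1
q = np.array([1.0, 50.0, 100.0, 200.0, 300.0])  # |q| in MeV

F2 = np.exp(-(q * R) ** 2 / 3.0)   # |F(q)|^2 for a Gaussian F = exp(-q^2 R^2/6)
coh = A**2 * F2                    # relative coherent strength, cf. (4.14)
inc = A * (1.0 - F2)               # relative incoherent strength, cf. (4.14)
for row in zip(q, coh, inc):
    print("|q| = %6.1f MeV   coh ~ %10.2f   inc ~ %8.2f" % row)
# Small |q|: the A^2 coherent enhancement dominates; at large |q| the form
# factor dies out and the incoherent term, linear in A, takes over.
```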
### _Scalar products for \(\chi A\to\chi A^{(*)}\) scattering_
Scalar products of \(\chi\)-lepton and nucleon currents obtained in [16] are necessary for self-consistent calculations of coherent and incoherent \(\chi A\to\chi A^{(*)}\) scattering cross sections. They are defined by the expression
\[(l^{i}_{s^{\prime}s}\,h^{k}_{r^{\prime}r})\equiv\sum_{\mu,\nu}^{4}J^{i,\mu}_{s ^{\prime}s}(\mathbf{k}^{\prime})\,g_{\mu\nu}\,J^{k,\nu}_{r^{\prime}r}(\mathbf{p}^{ \prime}),\]
where the indices \(s\) and \(r\) denote the initial spin states of the lepton and the nucleon, and the indices \(s^{\prime}\) and \(r^{\prime}\) denote their final states. The indices \(i\) and \(k\) denote the vector (\(J^{v,\mu}\equiv V^{\mu}\)) and axial-vector (\(J^{a,\mu}\equiv A^{\mu}\)) lepton (argument \(\mathbf{k}^{\prime}\)) and nucleon (argument \(\mathbf{p}^{\prime}\)) currents. The scalar products are expressed in terms of the masses of the nucleon and the \(\chi\) particle
\[m,\quad m_{\chi},\quad\text{and parameters}\quad\lambda_{\pm}=\sqrt{E_{p}\pm m },\qquad\xi_{\pm}=\sqrt{E_{\chi}\pm m_{\chi}},\]
as well as the scattering angle \(\theta\) of the \(\chi\) particle in the lepton and nucleon center-of-mass system (c.m.s.), where \(\mathbf{k}+\mathbf{p}=\mathbf{k}^{\prime}+\mathbf{p}^{\prime}=0\). Here \(p\) and \(k\) are the 4-momenta of the nucleon and \(\chi\) particles, in addition,
\[E_{\chi} \equiv \sqrt{m_{\chi}^{2}+|\mathbf{k}|^{2}}=\sqrt{m_{\chi}^{2}+|\mathbf{k}^{ \prime}|^{2}}=\frac{s+m_{\chi}^{2}-m^{2}}{2\sqrt{s}},\] \[E_{p} \equiv \sqrt{m^{2}+|\mathbf{p}|^{2}}=\sqrt{m^{2}+|\mathbf{p}^{\prime}|^{2}}= \frac{s+m^{2}-m_{\chi}^{2}}{2\sqrt{s}}, \tag{4.17}\] \[|\mathbf{p}| = \sqrt{E_{p}^{2}-m^{2}}=\lambda_{+}\lambda_{-}=|\mathbf{k}|=\sqrt{E_{ \chi}^{2}-m_{\chi}^{2}}=\xi_{+}\xi_{-}=\frac{\lambda(s,m^{2},m_{\chi}^{2})}{2 \sqrt{s}},\ \ \text{where}\] \[\lambda^{2}(s,m^{2},m_{\chi}^{2})\equiv[s-(m+m_{\chi})^{2}][s-(m- m_{\chi})^{2}].\]
The invariant square of the total energy in the c.m.s. has the form
\[s_{\text{c.m.s.}}=(p+k)^{2}=(p_{0}+k_{0})^{2}-(\mathbf{p}+\mathbf{k})^{2}=(E_{\chi}+E_{p})^{2}.\]
In this case, the momenta of the particles do not change: \(|\mathbf{k}^{\prime}|=|\mathbf{k}|=|\mathbf{p}^{\prime}|=|\mathbf{p}|\). Therefore, there is only one free variable for elastic \(\chi\)-lepton scattering on the nucleon in the c.m.s.: the angle \(\theta\) between the direction of the initial lepton momentum \(\mathbf{k}\) and the direction of its final momentum \(\mathbf{k}^{\prime}\). It enters the 3-momentum transferred to the nucleon
\[\mathbf{q}^{2}=2|\mathbf{k}|^{2}(1-\cos\theta)\equiv\mathbf{q}_{\text{max}}^{2}\sin^{2} \frac{\theta}{2}. \tag{4.18}\]
In the lab frame, where the nucleon is at rest, \(p_{l}=(m,\mathbf{p}_{l}=\mathbf{0})\), one has
\[s=(k_{l}+p_{l})^{2}=m_{\chi}^{2}+m^{2}+2m\sqrt{m_{\chi}^{2}+[\mathbf{k}_{\chi}^{l} ]^{2}}=m_{\chi}^{2}+m^{2}+2mm_{\chi}\Big{[}1+\frac{2T_{0}}{m_{\chi}}\Big{]}^{1 /2}, \tag{4.19}\]
where \(T_{0}=\frac{|\mathbf{k}_{\chi}^{l}|^{2}}{2m_{\chi}}\) is the kinetic energy of the lepton incident on the nucleon at rest. The following relation is also useful:
\[\frac{|\mathbf{k}|^{2}}{|\mathbf{k}_{\chi}^{l}|^{2}}=\frac{m^{2}}{s}. \tag{4.20}\]
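Relation (4.20) is an exact kinematic identity; indeed, for a nucleon at rest one finds from (4.19) that \(\lambda^{2}(s,m^{2},m_{\chi}^{2})=4m^{2}|\mathbf{k}_{\chi}^{l}|^{2}\), so the c.m.s. momentum of (4.17) obeys \(|\mathbf{k}|^{2}=m^{2}|\mathbf{k}_{\chi}^{l}|^{2}/s\). A minimal Python check with illustrative masses (assumptions) follows.

```python
import math

# Illustrative values in MeV (assumptions): nucleon, chi lepton, lab momentum.
m, m_chi, k_lab = 938.0, 10.0, 300.0

s = m_chi**2 + m**2 + 2.0 * m * math.hypot(m_chi, k_lab)   # eq. (4.19)
lam2 = (s - (m + m_chi) ** 2) * (s - (m - m_chi) ** 2)     # lambda^2 of (4.17)
k_cm2 = lam2 / (4.0 * s)                                   # |k|^2 in the c.m.s.

print(k_cm2 / k_lab**2, m**2 / s)   # both sides of (4.20) coincide
```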
Considering the definition of the angle \(\theta\) in terms of the recoil energy of the nucleus \(T_{A}\)4
Footnote 4: Since \(\mathbf{q}^{2}\ll m_{A}^{2}\), then \(\mathbf{q}^{2}\simeq 2m_{A}T_{A}\), and from (2.3) follows \(T_{A}^{\text{max}}\simeq m_{A}\Big{(}1+\frac{4|\mathbf{k}_{\chi}^{l}|^{2}}{2m_{A} ^{2}}\Big{)}-m_{A}=\frac{4m_{\chi}T_{0}}{m_{A}}\).
\[\sin^{2}\frac{\theta}{2}=\frac{\mathbf{q}^{2}}{\mathbf{q}_{\text{max}}^{2}}\simeq\frac {T_{A}}{T_{A}^{\text{max}}},\quad\text{where}\quad T_{A}^{\text{max}}\simeq \frac{4m_{\chi}T_{0}}{m_{A}}, \tag{4.21}\]
the parameters \(T_{0}\) (the initial kinetic energy of the \(\chi\) particle) and \(T_{A}\) (the kinetic energy of the nuclear recoil), through relations (4.17) and (4.19), determine all quantities necessary for calculating the cross sections.
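The approximation \(T_{A}^{\text{max}}\simeq 4m_{\chi}T_{0}/m_{A}\) of footnote 4 can be checked against the exact two-body recoil maximum \(T_{A}^{\text{max}}=2m_{A}|\mathbf{k}_{\chi}^{l}|^{2}/s_{A}\), where \(s_{A}=m_{\chi}^{2}+m_{A}^{2}+2m_{A}E_{\chi}\) is the \(\chi\)-nucleus invariant. A minimal Python sketch with illustrative values (assumptions, not numbers from the text) is given below.

```python
import math

# Illustrative values in MeV (assumptions): light chi on a xenon-like nucleus.
m_chi, T0, m_A = 10.0, 50.0, 122_000.0

k_lab2 = 2.0 * m_chi * T0                    # |k_chi^l|^2, since T0 = |k|^2/(2 m_chi)
E_chi = math.sqrt(m_chi**2 + k_lab2)
s_A = m_chi**2 + m_A**2 + 2.0 * m_A * E_chi  # chi-nucleus invariant s

T_exact = 2.0 * m_A * k_lab2 / s_A           # exact two-body recoil maximum
T_approx = 4.0 * m_chi * T0 / m_A            # footnote-4 approximation
print(T_exact, T_approx, T_exact / T_approx) # ratio ~ 0.9995 for m_A >> E_chi
```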
Recall in this regard that according to the condition of preserving the integrity of the nucleus (3.6), _in the rest frame of the nucleus_ the active nucleon "meets" the \(\chi\) particle incident on the nucleus with a nonzero momentum from (3.7), and therefore the \(s\) invariant in the rest frame of the nucleus (lab frame) should be recalculated
\[s = (k_{l}+p_{l})^{2}=m_{\chi}^{2}+m^{2}+2mm_{\chi}\Bigg{\{}\sqrt{1+ \frac{|\mathbf{k}_{\chi}^{l}|^{2}}{m_{\chi}^{2}}}\sqrt{1+\frac{p_{L}^{2}}{m^{2}}}- \frac{|\mathbf{k}_{\chi}^{l}|}{m_{\chi}}\frac{p_{L}}{m}\Bigg{\}}. \tag{4.22}\]
In other words, this value depends both on the kinetic energy of the incident \(\chi\) particle \(T_{0}\) (or \(|\mathbf{k}_{\chi}^{l}|\)) and on the kinetic energy of the recoil of the nucleus \(T_{A}\). However, as a rule, one has \(p_{L}\leq 0.1\,m\), and this correction is small enough.
The scalar products of the lepton and nucleon currents with the spin projection of the nucleon (\(r^{\prime},r=\pm 1\)) and the \(\chi\)-particle (\(s^{\prime},s=\pm 1\)), given below are defined as follows [16]:
\[(l_{s^{\prime}s}^{w}\,h_{r^{\prime}r}^{w,f}) = \alpha_{f}(l_{s^{\prime}s}^{v}\,h_{r^{\prime}r}^{v})+\beta_{f}(l _{s^{\prime}s}^{v}\,h_{r^{\prime}r}^{a})+\gamma_{f}(l_{s^{\prime}s}^{q}\,h_{r^ {\prime}r}^{v})+\delta_{f}(l_{s^{\prime}s}^{q}\,h_{r^{\prime}r}^{a}). \tag{4.23}\]
Recall that the index \(f\) denotes a neutron or a proton.
When the \(\chi\)-lepton helicity and the nucleon spin projection are preserved, these scalar products have the form
\[(l_{\pm\pm}^{w}\,h_{\pm\pm}^{w,f}) = 4\cos\frac{\theta}{2}\Big{[}(\alpha_{f}-\delta_{f})\big{[}\xi_{ +}\xi_{-}\lambda_{+}\lambda_{-}\cos^{2}\frac{\theta}{2}+(m_{\chi}+\xi_{-}^{2} )(m+\lambda_{-}^{2}\cos^{2}\frac{\theta}{2})\big{]}\] \[\pm\lambda_{+}\lambda_{-}(\gamma_{f}-\beta_{f})\big{[}(m_{\chi}+ \xi_{-}^{2})\cos^{2}\frac{\theta}{2}+m+\lambda_{-}^{2}\cos^{2}\frac{\theta}{2 }\big{]}\Big{]},\] \[(l_{\pm\pm}^{w}\,h_{\mp\mp}^{w,f}) = 4\cos\frac{\theta}{2}\Big{[}(\alpha_{f}+\delta_{f})(m_{\chi}+ \xi_{-}^{2})m+\xi_{+}\xi_{-}\lambda_{+}\lambda_{-}\big{(}\alpha_{f}+\alpha_{f} \sin^{2}\frac{\theta}{2}+\delta_{f}\cos^{2}\frac{\theta}{2}\big{)}+\] \[+(m_{\chi}+\xi_{-}^{2})\lambda_{-}^{2}\big{(}\delta_{f}+\delta_{ f}\sin^{2}\frac{\theta}{2}+\alpha_{f}\cos^{2}\frac{\theta}{2}\big{)}\pm \lambda_{+}\lambda_{-}\big{\{}(\beta_{f}+\gamma_{f})m\] \[+\lambda_{-}^{2}\big{(}\beta_{f}+\beta_{f}\sin^{2}\frac{\theta}{ 2}+\gamma_{f}\cos^{2}\frac{\theta}{2}\big{)}+(m_{\chi}+\xi_{-}^{2})\big{(} \gamma_{f}+\gamma_{f}\sin^{2}\frac{\theta}{2}+\beta_{f}\cos^{2}\frac{\theta}{ 2}\big{)}\}\Big{]}.\]
When the helicity of the \(\chi\) lepton is preserved, but the projection of the nucleon spin changes:
\[(l_{\pm\pm}^{w}\,h_{\pm\mp}^{w,f}) = \mp 4\sin\frac{\theta}{2}e^{\mp i\phi}\Big{[}2m\delta_{f}(m_{\chi}+ \xi_{-}^{2})+\xi_{+}\xi_{-}\lambda_{+}\lambda_{-}(\alpha_{f}+\alpha_{f}\sin^{2 }\frac{\theta}{2}+\delta_{f}\cos^{2}\frac{\theta}{2})\] \[+(\xi_{-}^{2}+m_{\chi})\lambda_{-}^{2}(\delta_{f}+\delta_{f}\sin^ {2}\frac{\theta}{2}+\alpha_{f}\cos^{2}\frac{\theta}{2})\pm\lambda_{+}\lambda_{ -}\big{\{}2m\beta_{f}+\] \[+(\xi_{-}^{2}+m_{\chi})(\gamma_{f}+\gamma_{f}\sin^{2}\frac{\theta }{2}+\beta_{f}\cos^{2}\frac{\theta}{2})+\lambda_{-}^{2}(\beta_{f}+\beta_{f} \sin^{2}\frac{\theta}{2}+\gamma_{f}\cos^{2}\frac{\theta}{2})\}\Big{]},\] \[(l_{\pm\pm}^{w}\,h_{\mp\pm}^{w,f}) = 4\sin\frac{\theta}{2}\cos^{2}\frac{\theta}{2}e^{\pm i\phi}\Big{[} (\gamma_{f}-\beta_{f})\lambda_{+}\lambda_{-}[m_{\chi}+\lambda_{-}^{2}+\xi_{-} ^{2}]\] \[\pm(\alpha_{f}-\delta_{f})[\xi_{+}\xi_{-}\lambda_{+}\lambda_{-}+( \xi_{-}^{2}+m_{\chi})\lambda_{-}^{2}]\Big{]}.\]
When the helicity of the \(\chi\) lepton changes, but the projection of the nucleon spin is preserved:
\[(l^{w}_{\pm\mp}\,h^{w,f}_{\mp\mp}) = 4m_{\chi}\sin\frac{\theta}{2}e^{\mp i\phi}\Big{[}(\beta_{f}-\gamma _{f})\lambda_{+}\lambda_{-}\cos^{2}\frac{\theta}{2}\pm(\alpha_{f}-\delta_{f})(m +\lambda_{-}^{2}\cos^{2}\frac{\theta}{2})\Big{]},\] \[(l^{w}_{\pm\mp}\,h^{w,f}_{\pm\pm}) = 4m_{\chi}\sin\frac{\theta}{2}e^{\mp i\phi}\Big{[}(\gamma_{f}- \beta_{f})\lambda_{+}\lambda_{-}\cos^{2}\frac{\theta}{2}\pm\big{\{}(\alpha_{f} +\delta_{f})m+(\alpha_{f}-\delta_{f})\lambda_{-}^{2}\cos^{2}\frac{\theta}{2} \big{\}}\Big{]}.\]
Finally, when both the helicity of the \(\chi\) lepton and the projection of the nucleon spin change:
\[(l^{w}_{\pm\mp}\,h^{w,f}_{\mp\pm}) = 4m_{\chi}\cos\frac{\theta}{2}\Big{[}(\alpha_{f}-\delta_{f}) \lambda_{-}^{2}\sin^{2}\frac{\theta}{2}-2\delta_{f}m\pm(\gamma_{f}-\beta_{f}) \lambda_{+}\lambda_{-}\sin^{2}\frac{\theta}{2}\Big{]},\] \[(l^{w}_{\pm\mp}\,h^{w,f}_{\pm\mp}) = -4m_{\chi}\lambda_{-}\cos\frac{\theta}{2}\sin^{2}\frac{\theta}{2 }e^{\mp 2i\phi}\big{[}(\alpha_{f}-\delta_{f})\lambda_{-}\mp(\gamma_{f}-\beta _{f})\lambda_{+}\big{]}.\]
It is convenient to have scalar products in the form of an expansion in the coupling constants of weak currents \(\alpha,\beta,\gamma,\delta\) (the index \(f\) is temporarily omitted) in the following compact form
\[\frac{(l^{w}_{\pm\pm}\,h^{w,f}_{\pm\pm})}{4\cos\frac{\theta}{2}} = \alpha f_{1}(\theta)-\delta f_{1}(\theta)\pm\gamma f_{2}(\theta)\mp\beta f_{2}(\theta),\] \[\frac{(l^{w}_{\pm\pm}\,h^{w,f}_{\mp\mp})}{4\cos\frac{\theta}{2}} = \alpha f_{3}(\theta)+\delta f_{4}(\theta)\pm\beta f_{5}(\theta)\pm\gamma f_{6}(\theta),\] \[\frac{(l^{w}_{\pm\pm}\,h^{w,f}_{\pm\mp})}{\mp 4\sin\frac{\theta}{2}e^{\mp i\phi}} = \alpha f_{7}(\theta)+\delta f_{8}(\theta)\pm\beta f_{9}(\theta)\pm\gamma f_{10}(\theta),\] \[\frac{(l^{w}_{\pm\pm}\,h^{w,f}_{\mp\pm})}{4\sin\frac{\theta}{2}e^{\pm i\phi}} = \pm\alpha f_{11}(\theta)\mp\delta f_{11}(\theta)+\gamma f_{12}(\theta)-\beta f_{12}(\theta);\] \[\frac{(l^{w}_{\pm\mp}\,h^{w,f}_{\mp\mp})}{4\sin\frac{\theta}{2}e^{\mp i\phi}} = \pm\alpha f_{13}(\theta)\mp\delta f_{13}(\theta)+\beta f_{14}(\theta)-\gamma f_{14}(\theta), \tag{4.24}\] \[\frac{(l^{w}_{\pm\mp}\,h^{w,f}_{\pm\pm})}{4\sin\frac{\theta}{2}e^{\mp i\phi}} = \pm\alpha f_{13}(\theta)\pm\delta f_{15}(\theta)+\gamma f_{14}(\theta)-\beta f_{14}(\theta),\] \[\frac{(l^{w}_{\pm\mp}\,h^{w,f}_{\mp\pm})}{4\cos\frac{\theta}{2}} = +\alpha f_{16}(\theta)-\delta f_{17}(\theta)\pm\gamma f_{14}(\theta)\mp\beta f_{14}(\theta),\] \[\frac{(l^{w}_{\pm\mp}\,h^{w,f}_{\pm\mp})}{4\cos\frac{\theta}{2}e^{\mp 2i\phi}} = -\alpha f_{16}(\theta)+\delta f_{16}(\theta)\pm\gamma f_{14}(\theta)\mp\beta f_{14}(\theta).\]
The following "form-factor-type" notations are introduced here:
\[f_{1}(\theta) \equiv E_{\chi}m+(E_{\chi}+\lambda_{+}^{2})\lambda_{-}^{2}\cos^{2} \frac{\theta}{2},\qquad f_{2}(\theta)\equiv\lambda_{+}\lambda_{-}\big{[}m+(E_{ \chi}+\lambda_{-}^{2})\cos^{2}\frac{\theta}{2}\big{]},\] \[f_{3}(\theta) \equiv E_{\chi}(m+\lambda_{-}^{2}\cos^{2}\frac{\theta}{2})+\lambda_{+}^ {2}\lambda_{-}^{2}(1+\sin^{2}\frac{\theta}{2}),\] \[f_{4}(\theta) \equiv E_{\chi}m+E_{\chi}\lambda_{-}^{2}(1+\sin^{2}\frac{\theta}{2})+ \lambda_{+}^{2}\lambda_{-}^{2}\cos^{2}\frac{\theta}{2},\] \[f_{5}(\theta) \equiv \lambda_{+}\lambda_{-}[m+\lambda_{-}^{2}(1+\sin^{2}\frac{\theta}{2 })+E_{\chi}\cos^{2}\frac{\theta}{2}],\] \[f_{6}(\theta) \equiv \lambda_{+}\lambda_{-}[m+\lambda_{-}^{2}\cos^{2}\frac{\theta}{2} +E_{\chi}(1+\sin^{2}\frac{\theta}{2})],\] \[f_{7}(\theta) \equiv \lambda_{+}^{2}\lambda_{-}^{2}(1+\sin^{2}\frac{\theta}{2})+E_{ \chi}\lambda_{-}^{2}\cos^{2}\frac{\theta}{2}, \tag{4.25}\] \[f_{8}(\theta) \equiv 2E_{\chi}m+\lambda_{+}^{2}\lambda_{-}^{2}\cos^{2}\frac{\theta}{2 }+E_{\chi}\lambda_{-}^{2}(1+\sin^{2}\frac{\theta}{2}),\] \[f_{9}(\theta) \equiv \lambda_{+}\lambda_{-}[2m+E_{\chi}\cos^{2}\frac{\theta}{2}+ \lambda_{-}^{2}(1+\sin^{2}\frac{\theta}{2})]\] \[f_{10}(\theta) \equiv \lambda_{+}\lambda_{-}[E_{\chi}(1+\sin^{2}\frac{\theta}{2})+ \lambda_{-}^{2}\cos^{2}\frac{\theta}{2}],\qquad f_{11}(\theta)\equiv(E_{\chi} +\lambda_{+}^{2})\lambda_{-}^{2}\cos^{2}\frac{\theta}{2},\] \[f_{12}(\theta) \equiv \lambda_{+}\lambda_{-}(E_{\chi}+\lambda_{-}^{2})\cos^{2}\frac{ \theta}{2},\qquad\qquad\qquad f_{13}(\theta)\equiv m_{\chi}(m+\lambda_{-}^{2} \cos^{2}\frac{\theta}{2}),\] \[f_{14}(\theta) \equiv m_{\chi}\lambda_{+}\lambda_{-}\cos^{2}\frac{\theta}{2}, \qquad f_{15}(\theta)\equiv m_{\chi}(m-\lambda_{-}^{2}\cos^{2}\frac{\theta}{2 })=f_{13}(\theta)-2m_{\chi}\lambda_{-}^{2}\cos^{2}\frac{\theta}{2},\] \[f_{16}(\theta) \equiv m_{\chi}\lambda_{-}^{2}\sin^{2}\frac{\theta}{2},\qquad \qquad\quad f_{17}(\theta)\equiv m_{\chi}(2m+\lambda_{-}^{2}\sin^{2}\frac{ \theta}{2})=2m_{\chi}m+f_{16}(\theta).\]
The squares of these scalar products are written as follows:
\[\frac{|(l_{\pm\pm}^{w}\,h_{\pm\pm}^{w,f})|^{2}}{16\cos^{2}\frac{ \theta}{2}} = (\alpha^{2}+\delta^{2})f_{1}^{2}-2\alpha\delta f_{1}^{2}+(\beta^{ 2}+\gamma^{2})f_{2}^{2}-2\beta\gamma f_{2}^{2}\mp 2(\alpha\beta+\gamma\delta)f_{1}f_{2} \pm 2(\alpha\gamma+\beta\delta)f_{1}f_{2},\] \[\frac{|(l_{\pm\pm}^{w}\,h_{\mp\mp}^{w,f})|^{2}}{16\cos^{2}\frac{ \theta}{2}} = \alpha^{2}f_{3}(\theta)^{2}+\beta^{2}f_{5}(\theta)^{2}+\gamma^{2} f_{6}(\theta)^{2}+\delta^{2}f_{4}(\theta)^{2}+2\alpha\delta f_{3}(\theta)f_{4}(\theta)+2 \beta\gamma f_{5}(\theta)f_{6}(\theta)\] \[\pm 2\alpha\beta f_{3}(\theta)f_{5}(\theta)\pm 2\alpha\gamma f_{3}( \theta)f_{6}(\theta)\pm 2\beta\delta f_{4}(\theta)f_{5}(\theta)\pm 2\gamma\delta f _{4}(\theta)f_{6}(\theta),\] \[\frac{|(l_{\pm\pm}^{w}\,h_{\pm\mp}^{w,f})|^{2}}{16\sin^{2}\frac{ \theta}{2}} = \alpha^{2}f_{7}(\theta)^{2}+\delta^{2}f_{8}(\theta)^{2}+2\alpha \delta f_{7}(\theta)f_{8}(\theta)+\beta^{2}f_{9}(\theta)^{2}+\gamma^{2}f_{10}( \theta)^{2}+2\beta\gamma f_{9}(\theta)f_{10}(\theta)\] \[\pm 2\alpha\beta f_{7}(\theta)f_{9}(\theta)\pm 2\alpha\gamma f_{7}( \theta)f_{10}(\theta)\pm 2\beta\delta f_{9}(\theta)f_{8}(\theta)\pm 2\gamma\delta f _{8}(\theta)f_{10}(\theta),\] \[\frac{|(l_{\pm\pm}^{w}\,h_{\mp\pm}^{w,f})|^{2}}{16\sin^{2}\frac{ \theta}{2}} = (\alpha^{2}+\delta^{2}-2\alpha\delta)f_{11}(\theta)^{2}+(\beta^{2}+ \gamma^{2}-2\beta\gamma)f_{12}(\theta)^{2}\] \[\mp 2(\alpha\beta+\gamma\delta)f_{11}(\theta)f_{12}(\theta)\pm 2( \alpha\gamma+\beta\delta)f_{11}(\theta)f_{12}(\theta);\]
\[\frac{|(l_{\pm\mp}^{w}\,h_{\mp\mp}^{w,f})|^{2}}{16\sin^{2}\frac{\theta} {2}} = (\alpha^{2}+\delta^{2}-2\alpha\delta)f_{13}(\theta)^{2}+(\beta^{2}+ \gamma^{2}-2\beta\gamma)f_{14}(\theta)^{2}\] \[\pm 2(\alpha\beta+\gamma\delta)f_{13}(\theta)f_{14}(\theta)\mp 2( \alpha\gamma+\beta\delta)f_{13}(\theta)f_{14}(\theta),\] \[\frac{|(l_{\pm\mp}^{w}\,h_{\pm\pm}^{w,f})|^{2}}{16\sin^{2}\frac{ \theta}{2}} = \alpha^{2}f_{13}(\theta)^{2}+\delta^{2}f_{15}(\theta)^{2}+2\alpha \delta f_{13}(\theta)f_{15}(\theta)+(\beta^{2}+\gamma^{2}-2\beta\gamma)f_{14} (\theta)^{2}\] \[\mp 2(\alpha\beta-\alpha\gamma)f_{13}(\theta)f_{14}(\theta)\mp 2( \beta\delta-\gamma\delta)f_{14}(\theta)f_{15}(\theta),\] \[\frac{|(l_{\pm\mp}^{w}\,h_{\mp\pm}^{w,f})|^{2}}{16\cos^{2}\frac{ \theta}{2}} = \alpha^{2}f_{16}(\theta)^{2}+\delta^{2}f_{17}(\theta)^{2}-2 \alpha\delta f_{16}(\theta)f_{17}(\theta)+(\beta^{2}+\gamma^{2}-2\beta\gamma) f_{14}(\theta)^{2}\] \[\pm 2\alpha(\gamma-\beta)f_{14}(\theta)f_{16}(\theta)\pm 2\delta( \beta-\gamma)f_{14}(\theta)f_{17}(\theta),\] \[\frac{|(l_{\pm\mp}^{w}\,h_{\pm\mp}^{w,f})|^{2}}{16\cos^{2}\frac{ \theta}{2}} = (\alpha^{2}+\delta^{2}-2\alpha\delta)f_{16}(\theta)^{2}+(\beta^{ 2}+\gamma^{2}-2\beta\gamma)f_{14}(\theta)^{2}\] \[\pm 2(\alpha\beta+\gamma\delta)f_{14}(\theta)f_{16}(\theta)\mp 2( \alpha\gamma+\beta\delta)f_{14}(\theta)f_{16}(\theta).\]
These scalar products of the nucleon and lepton currents are the basis for calculation of coherent (elastic) and incoherent (inelastic) cross sections for the \(\chi\) particle-nucleus interaction in the relativistic approximation.
### _Relativistic cross sections of the process \(\chi A\to\chi A^{(*)}\)_
_Coherent cross sections._ According to definition (4.14), the coherent \(\chi A\) cross sections are given by the \(\hat{Q}_{\pm}^{s^{\prime}s}\) combinations of scalar products (4.15), which look like
\[\frac{\hat{Q}_{+}^{\mp\mp}}{8\cos\frac{\theta}{2}} \equiv \alpha f_{\alpha+}(\theta)+\delta f_{\delta-}(\theta)\mp\beta f_{\beta-}(\theta)\mp\gamma f_{\gamma+}(\theta),\] \[\frac{\hat{Q}_{-}^{\mp\mp}}{8\cos\frac{\theta}{2}} \equiv \pm[\alpha f_{\alpha-}(\theta)+\delta f_{\delta+}(\theta)\mp\beta f_{\beta+}(\theta)\mp\gamma f_{\gamma-}(\theta)];\] \[\frac{\hat{Q}_{+}^{\mp\pm}}{8\sin\frac{\theta}{2}e^{\pm i\phi}} = \mp\alpha\hat{f}_{\alpha}(\theta)\pm\delta\hat{f}_{\delta-}(\theta),\qquad\frac{\hat{Q}_{-}^{\mp\pm}}{8\sin\frac{\theta}{2}e^{\pm i\phi}}=\mp(\gamma-\beta)\hat{f}_{\beta\gamma}(\theta)+\delta\hat{f}_{\delta+}(\theta).\]
Then the \(Q_{f}^{s^{\prime}s}\) values from (4.15) are
\[\frac{Q_{f}^{\mp\mp}}{8\cos\frac{\theta}{2}} = \alpha\big{[}f_{\alpha+}(\theta)\pm\frac{\Delta A_{f}}{A_{f}}f_{ \alpha-}(\theta)\big{]}+\delta\big{[}f_{\delta-}(\theta)\pm\frac{\Delta A_{f}} {A_{f}}f_{\delta+}(\theta)\big{]}\] \[\mp\beta\big{[}f_{\beta-}(\theta)\pm\frac{\Delta A_{f}}{A_{f}}f_{ \beta+}(\theta)\big{]}\mp\gamma\big{[}f_{\gamma+}(\theta)\pm\frac{\Delta A_{f}} {A_{f}}f_{\gamma-}(\theta)\big{]}, \tag{4.26}\] \[\frac{Q_{f}^{\mp\pm}}{8\sin\frac{\theta}{2}} = \mp e^{\pm i\phi}\big{[}\alpha\hat{f}_{\alpha}(\theta)-\delta \hat{f}_{\delta-}(\theta)+(\gamma-\beta)\frac{\Delta A_{f}}{A_{f}}\hat{f}_{ \beta\gamma}(\theta)\mp\delta\frac{\Delta A_{f}}{A_{f}}\hat{f}_{\delta+}( \theta)\big{]}.\]
Here the following notations for form-factor functions are introduced:
\[f_{\alpha+}(\theta) \equiv \frac{1}{2}[f_{1}(\theta)+f_{3}(\theta)]=E_{\chi}(m+\lambda_{-}^{2} \cos^{2}\frac{\theta}{2})+\lambda_{+}^{2}\lambda_{-}^{2}=E_{\chi}(m\sin^{2} \frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2})+|\mathbf{k}|^{2},\] \[f_{\delta+}(\theta) \equiv \frac{1}{2}[f_{4}(\theta)+f_{1}(\theta)]=E_{\chi}(m+\lambda_{-}^{2 })+\lambda_{+}^{2}\lambda_{-}^{2}\cos^{2}\frac{\theta}{2}=E_{\chi}E_{p}+|\mathbf{k}| ^{2}\cos^{2}\frac{\theta}{2},\] \[f_{\beta+}(\theta) \equiv \frac{1}{2}[f_{5}(\theta)+f_{2}(\theta)]=\lambda_{+}\lambda_{-}(m+ E_{\chi}\cos^{2}\frac{\theta}{2}+\lambda_{-}^{2})=|\mathbf{k}|(E_{p}+E_{\chi}\cos^{2} \frac{\theta}{2}),\] \[f_{\gamma+}(\theta) \equiv \frac{1}{2}[f_{6}(\theta)+f_{2}(\theta)]=\lambda_{+}\lambda_{-}(m+ E_{\chi}+\lambda_{-}^{2}\cos^{2}\frac{\theta}{2})=|\mathbf{k}|(E_{\chi}+m\sin^{2} \frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}),\] \[f_{\alpha-}(\theta) \equiv \frac{1}{2}[f_{3}(\theta)-f_{1}(\theta)]=\lambda_{+}^{2}\lambda_{ -}^{2}\sin^{2}\frac{\theta}{2}=|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2},\] \[f_{\delta-}(\theta) \equiv \frac{1}{2}[f_{4}(\theta)-f_{1}(\theta)]=E_{\chi}\lambda_{-}^{2} \sin^{2}\frac{\theta}{2}=E_{\chi}(E_{p}-m)\sin^{2}\frac{\theta}{2},\] \[f_{\beta-}(\theta) \equiv \frac{1}{2}[f_{5}(\theta)-f_{2}(\theta)]=\lambda_{+}\lambda_{-} \lambda_{-}^{2}\sin^{2}\frac{\theta}{2}=|\mathbf{k}|(E_{p}-m)\sin^{2}\frac{\theta }{2},\] \[f_{\gamma-}(\theta) \equiv \frac{1}{2}[f_{6}(\theta)-f_{2}(\theta)]=\lambda_{+}\lambda_{-}E_{ \chi}\sin^{2}\frac{\theta}{2}=|\mathbf{k}|E_{\chi}\sin^{2}\frac{\theta}{2};\] \[\hat{f}_{\alpha}(\theta) \equiv f_{13}(\theta)=m_{\chi}m+m_{\chi}\lambda_{-}^{2}\cos^{2}\frac{ \theta}{2}=m_{\chi}(m+(E_{p}-m)\cos^{2}\frac{\theta}{2}), \tag{4.27}\] \[\hat{f}_{\beta\gamma}(\theta) \equiv f_{14}(\theta)=m_{\chi}\lambda_{+}\lambda_{-}\cos^{2}\frac{ \theta}{2}=m_{\chi}|\mathbf{k}|\cos^{2}\frac{\theta}{2},\] \[\hat{f}_{\delta-}(\theta) \equiv \frac{1}{2}[f_{13}(\theta)-f_{15}(\theta)]=m_{\chi}\lambda_{-}^{2} \cos^{2}\frac{\theta}{2}=m_{\chi}(E_{p}-m)\cos^{2}\frac{\theta}{2},\] \[\hat{f}_{\delta+}(\theta) \equiv \frac{1}{2}[f_{15}(\theta)+f_{13}(\theta)]=m_{\chi}m.\]
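The combinations in (4.27) are straightforward to verify against the definitions (4.25). As an example, the minimal Python sketch below spot-checks the first identity, \(f_{\alpha+}=\tfrac{1}{2}[f_{1}+f_{3}]=E_{\chi}(m\sin^{2}\tfrac{\theta}{2}+E_{p}\cos^{2}\tfrac{\theta}{2})+|\mathbf{k}|^{2}\), at a few angles; the c.m.s. kinematics values are illustrative assumptions.

```python
import math

# Illustrative c.m.s. kinematics in MeV (assumptions for this check):
m, m_chi, k = 938.0, 10.0, 200.0
E_p, E_chi = math.hypot(m, k), math.hypot(m_chi, k)
lm2, lp2 = E_p - m, E_p + m            # lambda_-^2 and lambda_+^2

for theta in (0.3, 1.2, 2.7):
    c2, s2 = math.cos(theta / 2) ** 2, math.sin(theta / 2) ** 2
    f1 = E_chi * m + (E_chi + lp2) * lm2 * c2        # f_1 from (4.25)
    f3 = E_chi * (m + lm2 * c2) + k**2 * (1 + s2)    # f_3 from (4.25)
    lhs = 0.5 * (f1 + f3)
    rhs = E_chi * (m * s2 + E_p * c2) + k**2         # closed form in (4.27)
    print(theta, lhs - rhs)                          # vanishes to rounding
```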
Substitution of \(Q_{f}^{s^{\prime}s}\) parameters from (4.26) into formula (4.14) gives the following expressions for _relativistic_ cross sections of _coherent_\(\chi A\) scattering of the massive \(\chi\) particle on the nucleus (process \(\chi A\to\chi A\)) due to weak interaction (4.23):
\[\frac{d\sigma_{\rm coh}^{\mp\mp}(\mathbf{q})}{g_{\rm c}\hat{c}_{A}dT_{A}} = \cos^{2}\frac{\theta}{2}\Big{|}\sum_{f=p,n}G_{\alpha}^{f}(\mathbf{q})f_{\alpha+}(\theta)\pm\Delta G_{\alpha}^{f}(\mathbf{q})f_{\alpha-}(\theta)+G_{\delta}^{f}(\mathbf{q})f_{\delta-}(\theta)\pm\Delta G_{\delta}^{f}(\mathbf{q})f_{\delta+}(\theta)\] \[\mp G_{\beta}^{f}(\mathbf{q})f_{\beta-}(\theta)-\Delta G_{\beta}^{f}(\mathbf{q})f_{\beta+}(\theta)\mp G_{\gamma}^{f}(\mathbf{q})f_{\gamma+}(\theta)-\Delta G_{\gamma}^{f}(\mathbf{q})f_{\gamma-}(\theta)\Big{|}^{2},\] \[\frac{d\sigma_{\rm coh}^{\mp\pm}(\mathbf{q})}{g_{\rm c}\hat{c}_{A}dT_{A}} = \sin^{2}\frac{\theta}{2}\Big{|}\sum_{f=p,n}G_{\alpha}^{f}(\mathbf{q})\hat{f}_{\alpha}(\theta)-G_{\delta}^{f}(\mathbf{q})\hat{f}_{\delta-}(\theta)\mp\Delta G_{\delta}^{f}(\mathbf{q})\hat{f}_{\delta+}(\theta)\] \[+[\Delta G_{\gamma}^{f}(\mathbf{q})-\Delta G_{\beta}^{f}(\mathbf{q})]\hat{f}_{\beta\gamma}(\theta)\Big{|}^{2}. \tag{4.28}\]
In formulas (4.28), coherent structural factors that accumulate dependence on both the nucleus and the weak interaction constants are introduced:
\[G_{\alpha}^{f}(\mathbf{q}) \equiv \alpha_{f}F_{f}(\mathbf{q})A_{f},\qquad\Delta G_{\alpha}^{f}(\mathbf{q}) \equiv\alpha_{f}F_{f}(\mathbf{q})\Delta A_{f},\] \[G_{\beta}^{f}(\mathbf{q}) \equiv \beta_{f}F_{f}(\mathbf{q})A_{f},\qquad\Delta G_{\beta}^{f}(\mathbf{q}) \equiv\beta_{f}F_{f}(\mathbf{q})\Delta A_{f},\] \[G_{\gamma}^{f}(\mathbf{q}) \equiv \gamma_{f}F_{f}(\mathbf{q})A_{f},\qquad\Delta G_{\gamma}^{f}(\mathbf{q}) \equiv\gamma_{f}F_{f}(\mathbf{q})\Delta A_{f}, \tag{4.29}\] \[G_{\delta}^{f}(\mathbf{q}) \equiv \delta_{f}F_{f}(\mathbf{q})A_{f},\qquad\Delta G_{\delta}^{f}(\mathbf{q}) \equiv\delta_{f}F_{f}(\mathbf{q})\Delta A_{f}.\]
An important simplification can be obtained if one takes into account that the masses of the proton and the neutron are the same5, \(m\equiv m_{p}=m_{n}\). Then the form-factor \(f\)-functions in (4.28) are also independent of the nucleon type, and formulas (4.28) can be written as follows:
Footnote 5: Previously this was not assumed, although the \(f\)-index of the mass \(m\) and of the \(f\)-functions was omitted.
\[\frac{d\sigma_{\rm coh}^{\mp\mp}(\mathbf{q})}{g_{\rm c}\hat{c}_{A}dT_{A}} = \cos^{2}\frac{\theta}{2}\Big{|}G_{\alpha}(A,\mathbf{q})f_{\alpha+}(\theta)\pm\Delta G_{\alpha}(A,\mathbf{q})f_{\alpha-}(\theta)+G_{\delta}(A,\mathbf{q})f_{\delta-}(\theta)\pm\Delta G_{\delta}(A,\mathbf{q})f_{\delta+}(\theta)\] \[\mp G_{\beta}(A,\mathbf{q})f_{\beta-}(\theta)-\Delta G_{\beta}(A,\mathbf{q})f_{\beta+}(\theta)\mp G_{\gamma}(A,\mathbf{q})f_{\gamma+}(\theta)-\Delta G_{\gamma}(A,\mathbf{q})f_{\gamma-}(\theta)\Big{|}^{2},\] \[\frac{d\sigma_{\rm coh}^{\mp\pm}(\mathbf{q})}{g_{\rm c}\hat{c}_{A}dT_{A}} = \sin^{2}\frac{\theta}{2}\Big{|}G_{\alpha}(A,\mathbf{q})\hat{f}_{\alpha}(\theta)-G_{\delta}(A,\mathbf{q})\hat{f}_{\delta-}(\theta)\mp\Delta G_{\delta}(A,\mathbf{q})\hat{f}_{\delta+}(\theta)\] \[\qquad\qquad+[\Delta G_{\gamma}(A,\mathbf{q})-\Delta G_{\beta}(A,\mathbf{q})]\hat{f}_{\beta\gamma}(\theta)\Big{|}^{2}. \tag{4.30}\]
In (4.30), new \(\mathbf{q}\)-dependent effective coupling constants are introduced, which take into account the influence of the nuclear and nucleon structure (\(\tau\) "runs through" \(\alpha,\beta,\gamma,\delta\)):
\[G_{\tau}(A,\mathbf{q}) \equiv G_{\tau}^{p}(\mathbf{q})+G_{\tau}^{n}(\mathbf{q})=\tau_{p}A_{p}F_{p}( \mathbf{q})+\tau_{n}A_{n}F_{n}(\mathbf{q}),\] \[\Delta G_{\tau}(A,\mathbf{q}) \equiv \Delta G_{\tau}^{p}(\mathbf{q})+\Delta G_{\tau}^{n}(\mathbf{q})=\tau_{p} \Delta A_{p}F_{p}(\mathbf{q})+\tau_{n}\Delta A_{n}F_{n}(\mathbf{q}). \tag{4.31}\]
One can say that the nuclear form factors "participate" in the weak interaction together with the corresponding number of (unpaired) protons or neutrons.
Taking into account expressions for the form-factor \(f\)-functions, (4.30) can be written as
\[\frac{d\sigma_{\rm coh}^{\mp\mp}}{g_{\rm c}dT_{A}} = \cos^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{E_{ \chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{|}G_{\alpha}(A,\mathbf{q})\Big{(}\sin^{2} \frac{\theta}{2}+\frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+\frac{|\mathbf{k}|^{2}}{ mE_{\chi}}\Big{)}\pm\Delta G_{\alpha}(A,\mathbf{q})\frac{|\mathbf{k}|^{2}}{mE_{\chi}}\sin^{2} \frac{\theta}{2} \tag{4.32}\] \[\mp G_{\beta}(A,\mathbf{q})\frac{|\mathbf{k}|}{E_{\chi}}\Big{(}\frac{E_{p }}{m}-1\Big{)}\sin^{2}\frac{\theta}{2}-\Delta G_{\beta}(A,\mathbf{q})\frac{|\mathbf{k}| }{m}\Big{(}\frac{E_{p}}{E_{\chi}}+\cos^{2}\frac{\theta}{2}\Big{)}\] \[\mp G_{\gamma}(A,\mathbf{q})\frac{|\mathbf{k}|}{m}\big{(}1+\frac{E_{p}}{E _{\chi}}\cos^{2}\frac{\theta}{2}+\frac{m}{E_{\chi}}\sin^{2}\frac{\theta}{2} \big{)}-\Delta G_{\gamma}(A,\mathbf{q})\frac{|\mathbf{k}|}{m}\sin^{2}\frac{\theta}{2} \Big{|}^{2},\] \[\frac{d\sigma_{\rm coh}^{\mp\pm}}{g_{\rm c}dT_{A}} = \sin^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{ 2}}{|\mathbf{k}_{\chi}|^{2}}\Big{|}\big{(}G_{\alpha}(A,\mathbf{q})-G_{\delta}(A,\mathbf{q })\big{)}\frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+G_{\alpha}(A,\mathbf{q})\sin^{2} \frac{\theta}{2}+G_{\delta}(A,\mathbf{q})\cos^{2}\frac{\theta}{2}\] \[+\big{(}\Delta G_{\gamma}(A,\mathbf{q})-\Delta G_{\beta}(A,\mathbf{q}) \big{)}\frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2}\mp\Delta G_{\delta}(A,\mathbf{q })\Big{|}^{2}.\]
Formulas (4.32) give the most general set of expressions for the relativistic cross sections of coherent scattering of the massive \(\chi\) lepton on the nucleus due to the weak interaction of the form (4.23). Averaged over the initial projections of the \(\chi\)-lepton spin and summed over the final ones, the coherent cross sections are as follows:
\[\frac{d\sigma_{\rm coh}^{s^{\prime}=s}}{dT_{A}} = \cos^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{E_{\chi} ^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{\{}\Big{[}G_{\alpha}(A,\mathbf{q})\Big{(}\sin^{2} \frac{\theta}{2}+\frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+\frac{|\mathbf{k}|^{2}}{ mE_{\chi}}\Big{)} \tag{4.33}\] \[+G_{\delta}(A,\mathbf{q})\Big{(}\frac{E_{p}}{m}-1\Big{)}\sin^{2} \frac{\theta}{2}-\Delta G_{\beta}(A,\mathbf{q})\frac{|\mathbf{k}|}{m}\Big{(}\frac{E_{ p}}{E_{\chi}}+\cos^{2}\frac{\theta}{2}\Big{)}-\Delta G_{\gamma}(A,\mathbf{q}) \frac{|\mathbf{k}|}{m}\sin^{2}\frac{\theta}{2}\Big{]}^{2}\] \[+\Big{[}\Delta G_{\alpha}(A,\mathbf{q})\frac{|\mathbf{k}|^{2}}{mE_{\chi} }\sin^{2}\frac{\theta}{2}+\Delta G_{\delta}(A,\mathbf{q})\Big{(}\frac{E_{p}}{m}+ \frac{|\mathbf{k}|^{2}}{mE_{\chi}}\cos^{2}\frac{\theta}{2}\Big{)}\] \[-G_{\beta}(A,\mathbf{q})\frac{|\mathbf{k}|}{m}\frac{E_{p}-m}{E_{\chi}} \sin^{2}\frac{\theta}{2}-G_{\gamma}(A,\mathbf{q})\frac{|\mathbf{k}|}{m}\big{(}1+\frac{ E_{p}}{E_{\chi}}\cos^{2}\frac{\theta}{2}+\frac{m}{E_{\chi}}\sin^{2}\frac{\theta}{2} \big{)}\Big{]}^{2}\Big{\}};\] \[\frac{d\sigma_{\rm coh}^{s^{\prime}\neq s}}{dT_{A}} = \sin^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi} ^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{\{}\Big{[}G_{\alpha}(A,\mathbf{q})\Big{(}\sin^{2} \frac{\theta}{2}+\frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}\Big{)}-G_{\delta}(A, \mathbf{q})\Big{(}\frac{E_{p}}{m}-1\Big{)}\cos^{2}\frac{\theta}{2}+\] \[+\big{(}\Delta G_{\gamma}(A,\mathbf{q})-\Delta G_{\beta}(A,\mathbf{q}) \big{)}\frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2}\Big{]}^{2}+\big{[}\Delta G _{\delta}(A,\mathbf{q})\big{]}^{2}\Big{\}}.\]
The second formula in (4.33) corresponds to the massive \(\chi\) lepton coherent scattering on the nucleus with lepton helicity (spin) flip. The cross section in this case is proportional to the square of the lepton mass \(m_{\chi}\) and is strongly suppressed when the lepton escapes into the forward hemisphere. The cross section without changing the lepton helicity (the first formula) is proportional to the square of the lepton energy \(E_{\chi}\) and is maximal at the minimum lepton escape angles. For spinless nuclei one can obtain "shorter" formulas from (4.33) by simply discarding terms proportional to \(\Delta G\), since for \(\Delta A_{f}=0\) all \(\Delta G=0\).
The "fully averaged" (averaged over the initial helicities and summed over the final lepton helicities) _coherent_\(\chi A\) cross section is the sum of two expressions from (4.33). For spinless nuclei (\(\Delta A_{f}=0\)), this coherent \(\chi A\) cross section has the form
\[\frac{d\sigma_{\rm coh,w}^{\rm total}}{g_{c}\hat{c}_{A}dT_{A}}= \cos^{2}\frac{\theta}{2}\Big{\{}E_{\chi}^{2}\Big{[}G_{\alpha}(A, \mathbf{q})\big{(}m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}+\frac{| \mathbf{k}|^{2}}{E_{\chi}}\big{)}+G_{\delta}(A,\mathbf{q})(E_{p}-m)\sin^{2}\frac{\theta }{2}\Big{]}^{2} \tag{4.34}\] \[+|\mathbf{k}|^{2}\Big{[}G_{\gamma}(A,\mathbf{q})[m\sin^{2}\frac{\theta}{ 2}+E_{p}\cos^{2}\frac{\theta}{2}+E_{\chi}]+G_{\beta}(A,\mathbf{q})(E_{p}-m)\sin^{2 }\frac{\theta}{2}\Big{]}^{2}\Big{\}}\] \[+\sin^{2}\frac{\theta}{2}m_{\chi}^{2}\Big{[}G_{\alpha}(A,\mathbf{q}) (m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2})-G_{\delta}(A,\mathbf{q}) (E_{p}-m)\cos^{2}\frac{\theta}{2}\Big{]}^{2}.\]
As a "simplifying example" of general formulas (4.28), the nuclear form factors are assumed to be real functions independent of the type of the nucleon, i.e.,
\[F(\mathbf{q})\equiv F_{p}(\mathbf{q})=F_{n}(\mathbf{q}). \tag{4.35}\]
Then they factorize in expressions (4.31), and coherent \(\chi A\) cross sections (4.32) are directly proportional to them
\[\frac{d\sigma^{\mp\mp}_{\rm coh}}{g_{c}F^{2}(\mathbf{q})dT_{A}} = \cos^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{E_{\chi} ^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{|}G_{\alpha}(A)\Big{(}\sin^{2}\frac{\theta}{2}+ \frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+\frac{|\mathbf{k}|^{2}}{mE_{\chi}}\Big{)} \pm\Delta G_{\alpha}(A)\frac{|\mathbf{k}|^{2}}{mE_{\chi}}\sin^{2}\frac{\theta}{2} \tag{4.36}\] \[\mp G_{\beta}(A)\frac{|\mathbf{k}|}{E_{\chi}}\Big{(}\frac{E_{p}}{m}-1 \Big{)}\sin^{2}\frac{\theta}{2}-\Delta G_{\beta}(A)\frac{|\mathbf{k}|}{m}\Big{(} \frac{E_{p}}{E_{\chi}}+\cos^{2}\frac{\theta}{2}\Big{)}\] \[\mp G_{\gamma}(A)\frac{|\mathbf{k}|}{m}\big{(}1+\frac{E_{p}}{E_{\chi} }\cos^{2}\frac{\theta}{2}+\frac{m}{E_{\chi}}\sin^{2}\frac{\theta}{2}\big{)}- \Delta G_{\gamma}(A)\frac{|\mathbf{k}|}{m}\sin^{2}\frac{\theta}{2}\big{|}^{2},\] \[\frac{d\sigma^{\pm\pm}_{\rm coh}}{g_{c}F^{2}(\mathbf{q})dT_{A}} = \sin^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi} ^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{|}\big{(}G_{\alpha}(A)-G_{\delta}(A)\big{)} \frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+G_{\alpha}(A)\sin^{2}\frac{\theta}{2}+ G_{\delta}(A)\cos^{2}\frac{\theta}{2}\] (4.37) \[+\big{(}\Delta G_{\gamma}(A)-\Delta G_{\beta}(A)\big{)}\frac{| \mathbf{k}|}{m}\cos^{2}\frac{\theta}{2}\mp\Delta G_{\delta}(A)\Big{|}^{2}.\]
In (4.36), \(\mathbf{q}\)-independent "enlarged" interaction constants are introduced:
\[G_{\tau}(A)=\tau_{p}A_{p}+\tau_{n}A_{n}\quad\text{and}\quad\Delta G_{\tau}(A) =\tau_{p}\Delta A_{p}+\tau_{n}\Delta A_{n},\quad\text{where}\quad\tau=\alpha, \beta,\gamma,\delta. \tag{4.38}\]
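To see the typical size of the constants (4.38), one can insert sample couplings. In the sketch below, Standard-Model-like vector couplings and a xenon-like nucleon content are used purely for illustration; the text keeps \(\alpha_{f},\beta_{f},\gamma_{f},\delta_{f}\) generic, so all numbers here are assumptions.

```python
# Illustrative evaluation of the "enlarged" constants (4.38).
sin2_w = 0.231                         # weak mixing angle (illustrative)
alpha_p = 0.5 - 2.0 * sin2_w           # SM-like proton vector coupling ~ +0.04
alpha_n = -0.5                         # SM-like neutron vector coupling

A_p, A_n = 54, 77                      # protons/neutrons of a 131Xe-like nucleus
dA_p, dA_n = 0, 1                      # one unpaired neutron (assumption)

G_alpha = alpha_p * A_p + alpha_n * A_n        # ~ -36.4: neutrons dominate
dG_alpha = alpha_p * dA_p + alpha_n * dA_n     # ~ -0.5
print(G_alpha, dG_alpha)
```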
_Incoherent cross sections._ Relativistic incoherent cross sections of \(\chi A\) scattering due to weak currents (4.23) can be obtained from (4.14) after calculating the following values:
\[Q^{s^{\prime}s}_{\pm}\equiv S^{s^{\prime}s}_{+}\pm S^{s^{\prime}s}_{-},\quad\text{where}\quad S^{s^{\prime}s}_{\pm}\equiv\sum_{r^{\prime}=\pm}|(l_{s^{\prime}s}\,h^{f}_{r^{\prime}\pm})|^{2}=|(l_{s^{\prime}s}\,h^{f}_{+\pm})|^{2}+|(l_{s^{\prime}s}\,h^{f}_{-\pm})|^{2}.\]
With the help of squared scalar products, represented as expansions in effective coupling constants \(\alpha,\beta,\gamma,\delta\) (page 15), eight \(Q^{s^{\prime}s}_{\pm,w}\) values are computed. They are as follows6:
Footnote 6: Hereinafter, for simplicity, the nucleon \(f\)-index is omitted in \(\alpha,\beta,\gamma,\delta\).
\[\frac{Q^{\mp\mp}_{+,w}}{16} = \alpha^{2}F^{+}_{\alpha^{2}}(\theta)+\beta^{2}F^{+}_{\beta^{2}}(\theta)+\gamma^{2}F^{+}_{\gamma^{2}}(\theta)+\delta^{2}F^{+}_{\delta^{2}}(\theta)+\alpha\delta F^{+}_{\alpha\delta}(\theta)+\beta\gamma F^{+}_{\beta\gamma}(\theta)\] \[\mp\{\alpha\beta F^{+}_{\alpha\beta}(\theta)+\alpha\gamma F^{+}_{\alpha\gamma}(\theta)+\beta\delta F^{+}_{\beta\delta}(\theta)+\gamma\delta F^{+}_{\gamma\delta}(\theta)\},\] \[\frac{Q^{\mp\mp}_{-,w}}{16} = \mp\{+\alpha^{2}F^{-}_{\alpha^{2}}(\theta)+\beta^{2}F^{-}_{\beta^{2}}(\theta)+\gamma^{2}F^{-}_{\gamma^{2}}(\theta)+\delta^{2}F^{-}_{\delta^{2}}(\theta)-\alpha\delta F^{-}_{\alpha\delta}(\theta)-\beta\gamma F^{-}_{\beta\gamma}(\theta)\}\] \[+\alpha\gamma F^{-}_{\alpha\gamma}(\theta)-\alpha\beta F^{-}_{\alpha\beta}(\theta)+\beta\delta F^{-}_{\beta\delta}(\theta)-\gamma\delta F^{-}_{\gamma\delta}(\theta);\] \[\frac{Q^{\mp\pm}_{+,w}}{16} = +\alpha^{2}G^{+}_{\alpha^{2}}(\theta)+\delta^{2}G^{+}_{\delta^{2}}(\theta)+\alpha\delta G^{+}_{\alpha\delta}(\theta)+(\gamma-\beta)^{2}G^{+}_{\beta\gamma}(\theta)\mp(\gamma-\beta)\delta G^{+}_{\beta\delta}(\theta),\] \[\frac{Q^{\mp\pm}_{-,w}}{16} = \pm\delta^{2}G^{-}_{\delta^{2}}(\theta)\pm\alpha\delta G^{-}_{\alpha\delta}(\theta)+\alpha(\gamma-\beta)G^{-}_{\alpha\gamma}(\theta)+\delta(\beta-\gamma)G^{-}_{\beta\delta}(\theta).\]
Substituting \(Q^{s^{\prime}s}_{\pm,w}\) into (4.14), one obtains the incoherent cross sections in the following form:
\[\frac{d\sigma^{\mp\mp}_{\rm inc}}{g_{\rm i}\hat{c}_{A}dT_{A}} = \frac{1}{2}\!\!\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[} \frac{\Delta A_{f}}{A_{f}}\big{\{}\alpha\gamma F^{-}_{\alpha\gamma}(\theta)- \alpha\beta F^{-}_{\alpha\beta}(\theta)+\beta\delta F^{-}_{\beta\delta}(\theta )-\gamma\delta F^{-}_{\gamma\delta}(\theta)\big{\}} \tag{4.39}\] \[+\alpha^{2}F^{+}_{\alpha^{2}}(\theta)+\beta^{2}F^{+}_{\beta^{2}} (\theta)+\gamma^{2}F^{+}_{\gamma^{2}}(\theta)+\delta^{2}F^{+}_{\delta^{2}}( \theta)+\alpha\delta F^{+}_{\alpha\delta}(\theta)+\beta\gamma F^{+}_{\beta \gamma}(\theta)\] \[\mp\frac{\Delta A_{f}}{A_{f}}\{\alpha^{2}F^{-}_{\alpha^{2}}( \theta)+\beta^{2}F^{-}_{\beta^{2}}(\theta)+\gamma^{2}F^{-}_{\gamma^{2}}( \theta)+\delta^{2}F^{-}_{\delta^{2}}(\theta)-\alpha\delta F^{-}_{\alpha\delta }(\theta)-\beta\gamma F^{-}_{\beta\gamma}(\theta)\}\] \[\qquad\mp\{\alpha\beta F^{+}_{\alpha\beta}(\theta)+\alpha\gamma F ^{+}_{\alpha\gamma}(\theta)+\beta\delta F^{+}_{\beta\delta}(\theta)+\gamma \delta F^{+}_{\gamma\delta}(\theta)\}\Big{]},\] \[\frac{d\sigma^{\mp\pm}_{\rm inc}}{g_{\rm i}\hat{c}_{A}dT_{A}} = \frac{1}{2}\!\!\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[} \alpha^{2}G^{+}_{\alpha^{2}}(\theta)+\delta^{2}G^{+}_{\delta^{2}}(\theta)+ \alpha\delta G^{+}_{\alpha\delta}(\theta)+(\gamma-\beta)^{2}G^{+}_{\beta \gamma}(\theta)\mp(\gamma-\beta)\delta G^{+}_{\beta\delta}(\theta)\] \[+\frac{\Delta A_{f}}{A_{f}}\{\pm\delta^{2}G^{-}_{\delta^{2}}( \theta)\pm\alpha\delta G^{-}_{\alpha\delta}(\theta)+\alpha(\gamma-\beta)G^{-} _{\alpha\gamma}(\theta)+\delta(\beta-\gamma)G^{-}_{\beta\delta}(\theta)\} \Big{]}.\]
Formulas (4.39) are general expressions for the relativistic incoherent cross sections of \(\chi A\) scattering due to weak currents. The nucleon \(F\)- and \(G\)-form-factor functions included in (4.39) are obtained using functions from (4.25). Expressed in terms of the \(\chi\)-lepton scattering angle and the energy variables (4.17), they are
\[\frac{F^{-}_{\alpha^{2}}(\theta)}{2} = -2\sin^{2}\frac{\theta}{2}|\mathbf{k}|^{2}(|\mathbf{k}|^{2}+E_{p}E_{\chi}\cos^{2}\frac{\theta}{2}),\quad\frac{F^{-}_{\delta^{2}}(\theta)}{2}=-2\sin^{2}\frac{\theta}{2}E_{p}E_{\chi}(E_{p}E_{\chi}+|\mathbf{k}|^{2}\cos^{2}\frac{\theta}{2}),\] \[\frac{F^{-}_{\beta^{2}}(\theta)}{2} = -2\sin^{2}\frac{\theta}{2}|\mathbf{k}|^{2}(E_{p}^{2}+E_{p}E_{\chi}\cos^{2}\frac{\theta}{2}),\quad\frac{F^{-}_{\gamma^{2}}(\theta)}{2}=-2\sin^{2}\frac{\theta}{2}|\mathbf{k}|^{2}(E_{\chi}^{2}+E_{p}E_{\chi}\cos^{2}\frac{\theta}{2});\] \[\frac{F^{+}_{\alpha^{2}}(\theta)}{2} = |\mathbf{k}|^{4}(1+\sin^{4}\frac{\theta}{2})+\cos^{2}\frac{\theta}{2}[|\mathbf{k}|^{2}(E_{\chi}^{2}\cos^{2}\frac{\theta}{2}+2E_{p}E_{\chi})+m^{2}E_{\chi}^{2}], \tag{4.40}\] \[\frac{F^{+}_{\delta^{2}}(\theta)}{2} = E_{\chi}^{2}E_{p}^{2}(1+\sin^{4}\frac{\theta}{2})+\cos^{2}\frac{\theta}{2}[|\mathbf{k}|^{2}(|\mathbf{k}|^{2}\cos^{2}\frac{\theta}{2}+2E_{\chi}E_{p})+m^{2}E_{\chi}^{2}\sin^{2}\frac{\theta}{2}],\] \[\frac{F^{+}_{\beta^{2}}(\theta)}{2|\mathbf{k}|^{2}} = (E_{p}+E_{\chi}\cos^{2}\frac{\theta}{2})^{2}+\sin^{2}\frac{\theta}{2}[m^{2}\cos^{2}\frac{\theta}{2}+E_{p}^{2}(\sin^{2}\frac{\theta}{2}-\cos^{2}\frac{\theta}{2})],\] \[\frac{F^{+}_{\gamma^{2}}(\theta)}{2|\mathbf{k}|^{2}} = (E_{p}\cos^{2}\frac{\theta}{2}+E_{\chi})^{2}+\sin^{2}\frac{\theta}{2}[m^{2}\cos^{2}\frac{\theta}{2}+E_{\chi}^{2}(\sin^{2}\frac{\theta}{2}-\cos^{2}\frac{\theta}{2})].\]
\[\frac{F^{+}_{\alpha\beta}(\theta)}{4} = 2\sin^{2}\frac{\theta}{2}|\mathbf{k}|^{3}(E_{p}+E_{\chi}\cos^{2} \frac{\theta}{2}),\quad\frac{F^{+}_{\alpha\delta}(\theta)}{4}=\sin^{2}\frac{ \theta}{2}|\mathbf{k}|^{2}[(E_{\chi}^{2}+|\mathbf{k}|^{2})\cos^{2}\frac{\theta}{2}+2E_{ \chi}E_{p}],\] \[\frac{F^{+}_{\gamma\delta}(\theta)}{4} = 2\sin^{2}\frac{\theta}{2}E_{\chi}|\mathbf{k}|(E_{\chi}E_{p}+|\mathbf{k}|^{ 2}\cos^{2}\frac{\theta}{2}),\quad\frac{F^{+}_{\beta\gamma}(\theta)}{4}=\sin^{2} \frac{\theta}{2}|\mathbf{k}|^{2}[(E_{\chi}^{2}+|\mathbf{k}|^{2})\cos^{2}\frac{\theta}{2}+2E _{\chi}E_{p}],\] \[\frac{F^{-}_{\alpha\delta}(\theta)}{4} = 2E_{\chi}E_{p}|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+(E_{\chi}E_{p}+| \mathbf{k}|^{2})^{2}\cos^{2}\frac{\theta}{2},\] \[\frac{F^{-}_{\beta\gamma}(\theta)}{4} = 2E_{\chi}E_{p}|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+|\mathbf{k}|^{2}(E_{ \chi}+E_{p})^{2}\cos^{2}\frac{\theta}{2},\] \[\frac{F^{-}_{\beta\delta}(\theta)}{4} = -\sin^{2}\frac{\theta}{2}|\mathbf{k}|[E_{p}(E_{\chi}^{2}+|\mathbf{k}|^{ 2})\cos^{2}\frac{\theta}{2}+2E_{\chi}E_{p}^{2}],\] \[\frac{F^{-}_{\alpha\gamma}(\theta)}{4} = -\sin^{2}\frac{\theta}{2}|\mathbf{k}|[E_{p}(E_{\chi}^{2}+|\mathbf{k}|^{ 2})\cos^{2}\frac{\theta}{2}+2|\mathbf{k}|^{2}E_{\chi}],\]
\[\frac{F^{-}_{\alpha\beta}(\theta)}{4|\mathbf{k}|} = |\mathbf{k}|^{2}E_{p}(1+\sin^{4}\frac{\theta}{2})+\cos^{2}\frac{\theta }{2}[E_{\chi}^{2}E_{p}\cos^{2}\frac{\theta}{2}+E_{\chi}(|\mathbf{k}|^{2}+E_{p}^{2})],\] \[\frac{F^{-}_{\gamma\delta}(\theta)}{4|\mathbf{k}|} = E_{\chi}^{2}E_{p}(1+\sin^{4}\frac{\theta}{2})+\cos^{2}\frac{ \theta}{2}[|\mathbf{k}|^{2}(E_{p}\cos^{2}\frac{\theta}{2}+E_{\chi})+E_{\chi}E_{p}^ {2}],\] \[\frac{F^{+}_{\alpha\gamma}(\theta)}{4|\mathbf{k}|} = (E_{\chi}^{2}+E_{\chi}E_{p}+|\mathbf{k}|^{2})E_{p}\cos^{2}\frac{\theta }{2}+|\mathbf{k}|^{2}E_{\chi}(2\sin^{4}\frac{\theta}{2}+\cos^{2}\frac{\theta}{2}),\] \[\frac{F^{+}_{\beta\delta}(\theta)}{4|\mathbf{k}|} = (E_{\chi}^{2}+E_{\chi}E_{p}+|\mathbf{k}|^{2})E_{p}\cos^{2}\frac{ \theta}{2}+|\mathbf{k}|^{2}E_{\chi}(2\sin^{4}\frac{\theta}{2}+\cos^{2}\frac{ \theta}{2})+2m^{2}E_{\chi}\sin^{2}\frac{\theta}{2}.\]
\[\frac{G^{-}_{\alpha\delta}(\theta)}{2m_{\chi}^{2}} = -2m^{2}\sin^{2}\frac{\theta}{2},\qquad\frac{G^{-}_{\delta^{2}}( \theta)}{2m_{\chi}^{2}}=-2m^{2}\cos^{2}\frac{\theta}{2},\qquad\frac{G^{+}_{ \beta\gamma}(\theta)}{2m_{\chi}^{2}}=\lambda_{+}^{2}\lambda_{-}^{2}\cos^{4} \frac{\theta}{2}=|\mathbf{k}|^{2}\cos^{4}\frac{\theta}{2},\] \[\frac{G^{+}_{\alpha^{2}}(\theta)}{2m_{\chi}^{2}} = (m^{2}+|\mathbf{k}|^{2}\cos^{2}\frac{\theta}{2})\sin^{2}\frac{\theta }{2},\qquad\frac{G^{+}_{\delta^{2}}(\theta)}{2m_{\chi}^{2}}=m^{2}(1+\cos^{2} \frac{\theta}{2})+|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2},\] \[\frac{G^{+}_{\beta\delta}(\theta)}{2m_{\chi}^{2}} = \frac{G^{+}_{\gamma\delta}(\theta)}{2m_{\chi}^{2}}=2m|\mathbf{k}|\cos ^{2}\frac{\theta}{2}(\sin^{2}\frac{\theta}{2}-\cos^{2}\frac{\theta}{2}), \qquad\frac{G^{+}_{\alpha\delta}(\theta)}{2m_{\chi}^{2}}=-2|\mathbf{k}|^{2}\sin^{2} \frac{\theta}{2}\cos^{2}\frac{\theta}{2},\] \[\frac{G^{-}_{\alpha\gamma}(\theta)}{2m_{\chi}^{2}} = 2|\mathbf{k}|\cos^{2}\frac{\theta}{2}[m+2(E_{p}-m)\cos^{2}\frac{ \theta}{2}]\sin^{2}\frac{\theta}{2}, \tag{4.41}\] \[\frac{G^{-}_{\beta\delta}(\theta)}{2m_{\chi}^{2}} = 2|\mathbf{k}|\cos^{2}\frac{\theta}{2}[m+2(E_{p}-m)\sin^{2}\frac{ \theta}{2}]\cos^{2}\frac{\theta}{2}.\]
All \(G\)-form-factor functions (corresponding to the lepton spin flip) are proportional to the square of the lepton mass and disappear at \(m_{\chi}=0\). Given expressions (4.41), the second formula from (4.39) gives the incoherent \(\chi A\) cross sections _with lepton spin flip_ in the general form
\[\frac{d\sigma^{\mp\pm}_{\rm inc}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2 }}\sum_{f=p,n}\!\!A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[}(\alpha^{2}\sin^{2} \frac{\theta}{2}+\delta^{2}(1+\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2} ))+(\alpha-\delta)^{2}\frac{|\mathbf{k}|^{2}}{m^{2}}\cos^{2}\frac{\theta}{2}\sin^{2 }\frac{\theta}{2} \tag{4.42}\] \[+ \Big{[}(\gamma-\beta)\frac{|\mathbf{k}|}{m}\pm\delta\Big{]}^{2}\cos^{ 4}\frac{\theta}{2}\mp 2(\gamma-\beta)\delta\frac{|\mathbf{k}|}{m}\cos^{2}\frac{ \theta}{2}\sin^{2}\frac{\theta}{2}+\frac{2\Delta A_{f}}{A_{f}}\{\,\mp(\delta^{2 }\cos^{2}\frac{\theta}{2}+\alpha\delta\sin^{2}\frac{\theta}{2})\] \[+ (\gamma-\beta)\frac{|\mathbf{k}|}{m}[(\alpha\sin^{2}\frac{\theta}{2}- \delta\cos^{2}\frac{\theta}{2})\cos^{2}\frac{\theta}{2}+2(\alpha-\delta) \Big{(}\frac{E_{p}}{m}-1\Big{)}\cos^{4}\frac{\theta}{2}\sin^{2}\frac{\theta}{2} ]\}\Big{]}.\]
A similar "direct" expansion of the first formula from (4.3) takes up too much space due to more complex \(F\)-form factors (4.3). Nevertheless, by introducing generalized incoherent nucleus-nucleon form-factor combinations that accumulate dependence on weak nucleon coupling constants in the form
\[\Phi^{+}_{ab}(\mathbf{q},A) = \sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q})A_{f}a_{f}b_{f}\quad\text{ and}\quad\Phi^{-}_{ab}(\mathbf{q},A)=\sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q})\Delta A_{f}a_{f}b_{f}, \tag{4.43}\]
where \(a\) and \(b\) independently "run through" the values \(\alpha,\beta,\gamma,\delta\), one can obtain the _most compact_ decomposition of the incoherent \(\chi A\) cross section without \(\chi\)-lepton spin flip in terms of 20 nucleon \(F\)-form factors corresponding to all possible combinations of the weak coupling constants. This decomposition looks like
\[\frac{d\sigma^{\mp\mp}_{\rm inc}}{8g_{i}c_{A}dT_{A}} = \Phi^{+}_{\alpha^{2}}(\mathbf{q})F^{+}_{\alpha^{2}}(\theta)+\Phi^{+}_{ \beta^{2}}(\mathbf{q})F^{+}_{\beta^{2}}(\theta)+\Phi^{+}_{\gamma^{2}}(\mathbf{q})F^{+}_ {\gamma^{2}}(\theta)+\Phi^{+}_{\delta^{2}}(\mathbf{q})F^{+}_{\delta^{2}}(\theta)+ \Phi^{+}_{\alpha\delta}(\mathbf{q})F^{+}_{\alpha\delta}(\theta)\] \[+\Phi^{+}_{\beta\gamma}(\mathbf{q})F^{+}_{\beta\gamma}(\theta)\mp[\Phi ^{+}_{\alpha\beta}(\mathbf{q})F^{+}_{\alpha\beta}(\theta)+\Phi^{+}_{\alpha\gamma}( \mathbf{q})F^{+}_{\alpha\gamma}(\theta)+\Phi^{+}_{\beta\delta}(\mathbf{q})F^{+}_{\beta \delta}(\theta)+\Phi^{+}_{\gamma\delta}(\mathbf{q})F^{+}_{\gamma\delta}(\theta)]\] \[+ \Phi^{-}_{\alpha\gamma}(\mathbf{q})F^{-}_{\alpha\gamma}(\theta)-\Phi^ {-}_{\alpha\beta}(\mathbf{q})F^{-}_{\alpha\beta}(\theta)+\Phi^{-}_{\beta\delta}( \mathbf{q})F^{-}_{\beta\delta}(\theta)-\Phi^{-}_{\gamma\delta}(\mathbf{q})F^{-}_{\gamma \delta}(\theta)\pm\Phi^{-}_{\beta\gamma}(\mathbf{q})F^{-}_{\beta\gamma}(\theta)\] \[\mp\Phi^{-}_{\alpha^{2}}(\mathbf{q})F^{-}_{\alpha^{2}}(\theta)\mp \Phi^{-}_{\beta^{2}}(\mathbf{q})F^{-}_{\beta^{2}}(\theta)\mp\Phi^{-}_{\gamma^{2} }(\mathbf{q})F^{-}_{\gamma^{2}}(\theta)\mp\Phi^{-}_{\delta^{2}}(\mathbf{q})F^{-}_{ \delta^{2}}(\theta)\pm\Phi^{-}_{\alpha\delta}(\mathbf{q})F^{-}_{\alpha\delta}( \theta).\]
By substituting the explicit expressions for the \(F\)-form factors into this formula and performing simple but somewhat tedious transformations, one can obtain the expansion of the _incoherent_ \(\chi A\) cross section _without spin flip_ of the \(\chi\) lepton in terms of the kinematic structures \(|\mathbf{k}|^{2},|\mathbf{k}|^{4},2E_{p}E_{\chi}\), etc. in the following form:
\[\frac{d\sigma^{\mp\mp}_{\rm inc}}{g_{i}\widehat{c}_{A}dT_{A}} = m^{2}m_{\chi}^{2}[\cos^{2}\frac{\theta}{2}(\Phi^{+}_{\alpha^{2} +\delta^{2}}(\mathbf{q})\pm\Phi^{-}_{2\alpha\delta}(\mathbf{q}))+\sin^{2}\frac{\theta} {2}(\Phi^{+}_{2\delta^{2}}(\mathbf{q})\pm\Phi^{-}_{2\delta^{2}}(\mathbf{q}))]\] \[+ m^{2}|\mathbf{k}|^{2}[\cos^{2}\frac{\theta}{2}(\Phi^{+}_{\alpha^{2} +\beta^{2}+\gamma^{2}+\delta^{2}}(\mathbf{q})\pm\Phi^{-}_{2(\alpha\delta+\beta \gamma)}(\mathbf{q}))+\sin^{2}\frac{\theta}{2}(\Phi^{+}_{2(\beta^{2}+\delta^{2})}( \mathbf{q})\pm\Phi^{-}_{2(\beta^{2}+\delta^{2})}(\mathbf{q}))]\] \[+ m^{2}_{\chi}|\mathbf{k}|^{2}[\cos^{2}\frac{\theta}{2}(\Phi^{+}_{ \alpha^{2}+\beta^{2}+\gamma^{2}+\delta^{2}}(\mathbf{q})\pm\Phi^{-}_{2(\alpha \delta+\beta\gamma)}(\mathbf{q}))+\sin^{2}\frac{\theta}{2}(\Phi^{+}_{2(\gamma^{2} +\delta^{2})}(\mathbf{q})\pm\Phi^{-}_{2(\gamma^{2}+\delta^{2})}(\mathbf{q}))\] \[-\sin^{2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2}\Phi^{+}_{( \alpha-\delta)^{2}+(\beta-\gamma)^{2}}(\mathbf{q})]\] \[\mp m^{2}_{\chi}|\mathbf{k}|E_{p}[\cos^{2}\frac{\theta}{2}(\Phi^{+}_{2( \alpha\gamma+\beta\delta)}(\mathbf{q})\pm\Phi^{-}_{2(\alpha\beta+\gamma\delta)}( \mathbf{q}))+\sin^{2}\frac{\theta}{2}(\Phi^{+}_{4\gamma\delta}(\mathbf{q})\pm\Phi^{-}_ {4\gamma\delta}(\mathbf{q}))\] \[\pm\sin^{2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2}\Phi^{-}_{2( \alpha-\delta)(\gamma-\beta)}(\mathbf{q})]\] \[\mp m^{2}|\mathbf{k}|E_{\chi}[\cos^{2}\frac{\theta}{2}(\Phi^{+}_{2( \alpha\gamma+\beta\delta)}(\mathbf{q})\pm\Phi^{-}_{2(\alpha\beta+\gamma\delta)}( \mathbf{q}))+\sin^{2}\frac{\theta}{2}(\Phi^{+}_{4\beta\delta}(\mathbf{q})\pm\Phi^{-}_{4 \beta\delta}(\mathbf{q}))]\] \[+ 2E_{p}E_{\chi}|\mathbf{k}|^{2}[\Phi^{+}_{\alpha^{2}+\beta^{2}+\gamma^ {2}+\delta^{2}}(\mathbf{q})\pm\Phi^{-}_{2(\alpha\delta+\beta\gamma)}(\mathbf{q})\] \[-\sin^{2}\frac{\theta}{2}(\Phi^{+}_{(\alpha-\delta)^{2}+(\beta- \gamma)^{2}}(\mathbf{q})\mp\cos^{2}\frac{\theta}{2}\Phi^{-}_{(\alpha-\delta)^{2}+( \beta-\gamma)^{2}}(\mathbf{q}))]\] \[+ 2|\mathbf{k}|^{4}[\Phi^{+}_{\alpha^{2}+\beta^{2}+\gamma^{2}+\delta^{ 2}}(\mathbf{q})\pm\Phi^{-}_{\alpha^{2}+\beta^{2}+\gamma^{2}+\delta^{2}}(\mathbf{q})\] \[-\cos^{2}\frac{\theta}{2}(\sin^{2}\frac{\theta}{2}\Phi^{+}_{( \alpha-\delta)^{2}+(\beta-\gamma)^{2}}(\mathbf{q})\pm\Phi^{-}_{(\alpha-\delta)^{2}+( \beta-\gamma)^{2}}(\mathbf{q}))]\] \[\mp 2|\mathbf{k}|^{3}E_{p}[\Phi^{+}_{2(\alpha\gamma+\beta\delta)}(\mathbf{q} )\pm\Phi^{-}_{2(\alpha\beta+\gamma\delta)}(\mathbf{q})+\sin^{2}\frac{\theta}{2}( \Phi^{+}_{2(\alpha-\delta)(\beta-\gamma)}(\mathbf{q})\mp\cos^{2}\frac{\theta}{2} \Phi^{-}_{2(\alpha-\delta)(\beta-\gamma)}(\mathbf{q}))]\] \[\mp 2|\mathbf{k}|^{3}E_{\chi}[\Phi^{+}_{2(\alpha\gamma+\beta\delta)}(\bm {q})\pm\Phi^{-}_{2(\alpha\beta+\gamma\delta)}(\mathbf{q})+\sin^{2}\frac{\theta}{2}( \cos^{2}\frac{\theta}{2}\Phi^{+}_{2(\alpha-\delta)(\beta-\gamma)}(\mathbf{q})\mp \Phi^{-}_{2(\alpha-\delta)(\beta-\gamma)}(\mathbf{q}))].\]
To shorten the notation, sums of the following type, built from expressions (4.43), are introduced in (4.45):
\[\Phi^{+}_{\alpha^{2}\pm\delta^{2}}(\mathbf{q}) = \Phi^{+}_{\alpha^{2}}(\mathbf{q},A)\pm\Phi^{+}_{\delta^{2}}(\mathbf{q},A)= \sum_{f=p,n}\widehat{F}^{2}_{f}(\mathbf{q})A_{f}(\alpha^{2}_{f}\pm\delta^{2}_{f}),\] \[\Phi^{-}_{\alpha\delta\pm\beta\gamma}(\mathbf{q}) = \Phi^{-}_{\alpha\delta}(\mathbf{q},A)\pm\Phi^{-}_{\beta\gamma}(\mathbf{q},A)= \sum_{f=p,n}\widehat{F}^{2}_{f}(\mathbf{q})\Delta A_{f}(\alpha_{f}\delta_{f}\pm \beta_{f}\gamma_{f}).\]
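For illustration, the combinations (4.43) and the shorthand sums above are straightforward to tabulate numerically once the per-nucleon form factors and effective couplings are specified. A minimal Python sketch (all numerical inputs below are illustrative placeholders, not values from the text):

```python
# Minimal sketch of the generalized incoherent form-factor combinations
# Phi^{+}_{ab} and Phi^{-}_{ab} from (4.43). All inputs are illustrative.

def phi_plus(Fhat2, A, a, b):
    """Phi^+_{ab}(q) = sum_{f=p,n} Fhat_f^2(q) A_f a_f b_f."""
    return sum(Fhat2[f] * A[f] * a[f] * b[f] for f in ("p", "n"))

def phi_minus(Fhat2, dA, a, b):
    """Phi^-_{ab}(q) = sum_{f=p,n} Fhat_f^2(q) DeltaA_f a_f b_f."""
    return sum(Fhat2[f] * dA[f] * a[f] * b[f] for f in ("p", "n"))

# Illustrative nucleus: one unpaired proton, equal p/n form factors.
Fhat2 = {"p": 0.9, "n": 0.9}        # Fhat_f^2(q) at a fixed q
A     = {"p": 9,   "n": 10}         # A_p, A_n
dA    = {"p": 1,   "n": 0}          # DeltaA_p, DeltaA_n
alpha = {"p": 0.04, "n": -0.50}     # illustrative effective couplings
delta = {"p": 0.63, "n": -0.63}

# e.g. the shorthand Phi^+_{alpha^2 + delta^2}(q) used in the text:
print(phi_plus(Fhat2, A, alpha, alpha) + phi_plus(Fhat2, A, delta, delta))
print(phi_minus(Fhat2, dA, alpha, delta))   # Phi^-_{alpha delta}(q)
```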
Formulas (4.44) and (4.45) give two different representations of the incoherent cross section of \(\chi A\) scattering _without lepton spin flip_. For a spinless nucleus (\(\Delta A_{f}=0\) and all \(\Phi^{-}=0\)) formula (4.45) takes the form
\[\frac{d\sigma^{\mp\mp}_{\rm inc,0}}{g_{i}\hat{c}_{A}dT_{A}} = m^{2}m_{\chi}^{2}[\cos^{2}\frac{\theta}{2}\Phi^{+}_{\alpha^{2}+ \delta^{2}}(\mathbf{q})+2\sin^{2}\frac{\theta}{2}\Phi^{+}_{\delta^{2}}(\mathbf{q})]+m^ {2}|\mathbf{k}|^{2}[\cos^{2}\frac{\theta}{2}\Phi^{+}_{\alpha^{2}+\beta^{2}+\gamma^{2 }+\delta^{2}}(\mathbf{q})+2\sin^{2}\frac{\theta}{2}\Phi^{+}_{\beta^{2}+\delta^{2}}( \mathbf{q})] \tag{4.46}\] \[+m_{\chi}^{2}|\mathbf{k}|^{2}[\cos^{2}\frac{\theta}{2}\Phi^{+}_{ \alpha^{2}+\beta^{2}+\gamma^{2}+\delta^{2}}(\mathbf{q})+2\sin^{2}\frac{\theta}{2} \Phi^{+}_{\gamma^{2}+\delta^{2}}(\mathbf{q})-\sin^{2}\frac{\theta}{2}\cos^{2} \frac{\theta}{2}\Phi^{+}_{(\alpha-\delta)^{2}+(\beta-\gamma)^{2}}(\mathbf{q})]\] \[+2E_{p}E_{\chi}|\mathbf{k}|^{2}[\cos^{2}\frac{\theta}{2}\Phi^{+}_{ \alpha^{2}+\beta^{2}+\gamma^{2}+\delta^{2}}(\mathbf{q})+2\sin^{2}\frac{\theta}{2} \Phi^{+}_{\alpha\delta+\beta\gamma}(\mathbf{q})]\] \[+2|\mathbf{k}|^{4}[\Phi^{+}_{\alpha^{2}+\beta^{2}+\gamma^{2}+\delta^ {2}}(\mathbf{q})-\sin^{2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2}\Phi^{+}_{(\alpha -\delta)^{2}+(\beta-\gamma)^{2}}(\mathbf{q})]\] \[\mp 2m_{\chi}^{2}|\mathbf{k}|E_{p}[\cos^{2}\frac{\theta}{2}\Phi^{+}_{ \alpha\gamma+\beta\delta}(\mathbf{q})+\sin^{2}\frac{\theta}{2}\Phi^{+}_{2\gamma \delta}(\mathbf{q})]\mp 2m^{2}|\mathbf{k}|E_{\chi}[\cos^{2}\frac{\theta}{2}\Phi^{+}_{\alpha \gamma+\beta\delta}(\mathbf{q})+\sin^{2}\frac{\theta}{2}\Phi^{+}_{2\beta\delta}( \mathbf{q})]\] \[\mp 4|\mathbf{k}|^{3}E_{\chi}[\Phi^{+}_{\alpha\gamma+\beta\delta}(\mathbf{ q})+\Phi^{+}_{(\alpha-\delta)(\beta-\gamma)}(\mathbf{q})\sin^{2}\frac{\theta}{2} \cos^{2}\frac{\theta}{2}]\] \[\mp 4|\mathbf{k}|^{3}E_{p}[\Phi^{+}_{\alpha\gamma+\beta\delta}(\mathbf{q})+ \Phi^{+}_{(\alpha-\delta)(\beta-\gamma)}(\mathbf{q})\sin^{2}\frac{\theta}{2}].\]
From formulas (4.46) and (4.45) one can obtain limiting cases that serve as consistency checks. First, when \(m_{\chi}^{2}\to 0\) (and \(E_{\chi}\to|\mathbf{k}|\)), formula (4.46) takes the form
\[\frac{d\sigma^{\mp\mp}_{\rm inc,0}}{|\mathbf{k}|^{2}g_{i}\hat{c}_{A} dT_{A}} = s\Phi^{+}_{(\alpha\mp\gamma)^{2}+(\beta\mp\delta)^{2}}(\mathbf{q})-m^{2}\sin^{2} \frac{\theta}{2}\Phi^{+}_{(\alpha\mp\gamma)^{2}-(\beta\mp\delta)^{2}}(\mathbf{q})\] \[-2|\mathbf{k}|\sin^{2}\frac{\theta}{2}\Phi^{+}_{((\alpha-\delta)\pm( \beta-\gamma))^{2}}(\mathbf{q})(E_{p}+|\mathbf{k}|\cos^{2}\frac{\theta}{2}),\]
from which, when \(m_{\chi}^{2}=0\) and \(\Delta A_{f}=0\), one obtains the following explicit expression for the incoherent \(\chi A\) cross section without spin flip:
\[\frac{d\sigma^{\mp\mp}_{\rm inc,00}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{|\mathbf{k}|^{2}}{|\mathbf{k}_{\chi}|^{2} }\sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q})A_{f}\Big{[}\frac{s}{m^{2}}[(\alpha\mp \gamma)^{2}+(\beta\mp\delta)^{2}]-\sin^{2}\frac{\theta}{2}[(\alpha\mp\gamma)^{ 2}-(\beta\mp\delta)^{2}] \tag{4.47}\] \[-2\frac{|\mathbf{k}|}{m}\sin^{2}\frac{\theta}{2}((\alpha\mp\gamma)\pm (\beta\mp\delta))^{2}(\frac{E_{p}}{m}+\frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{ 2})\Big{]}.\]
This expression will be used as a cross-check. Next, when \(|\mathbf{k}|\to 0\), from (4.46) one gets the simple expression
\[\frac{d\sigma^{\mp\mp}_{\rm inc}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2} }\big{[}\Phi^{+}_{\alpha^{2}}(\mathbf{q})\cos^{2}\frac{\theta}{2}+\Phi^{+}_{\delta ^{2}}(\mathbf{q})(1+\sin^{2}\frac{\theta}{2})\big{]},\]
which coincides with that obtained earlier in the nonrelativistic limit [4].
In terms of \(\Phi\)-form-factor functions from (4.43), expression (4.42) for the incoherent \(\chi A\) scattering _with lepton spin flip_ takes the form
\[\frac{d\sigma^{\mp\pm}_{\rm inc}}{8g_{i}c_{A}dT_{A}} = \Phi^{+}_{\alpha^{2}}(\mathbf{q})G^{+}_{\alpha^{2}}(\theta)+\Phi^{+}_{ \delta^{2}}(\mathbf{q})G^{+}_{\delta^{2}}(\theta)+\Phi^{+}_{\alpha\delta}(\mathbf{q})G^ {+}_{\alpha\delta}(\theta)+\Phi^{+}_{(\gamma-\beta)^{2}}(\mathbf{q})G^{+}_{\beta \gamma}(\theta)\mp\Phi^{+}_{(\gamma-\beta)\delta}(\mathbf{q})G^{+}_{\beta\delta}( \theta)\] \[\pm\Phi^{-}_{\delta^{2}}(\mathbf{q})G^{-}_{\delta^{2}}(\theta)\pm\Phi^ {-}_{\alpha\delta}(\mathbf{q})G^{-}_{\alpha\delta}(\theta)+\Phi^{-}_{\alpha( \gamma-\beta)}(\mathbf{q})G^{-}_{\alpha\gamma}(\theta)+\Phi^{-}_{\delta(\beta-\gamma)}( \mathbf{q})G^{-}_{\beta\delta}(\theta).\]
After "expanding" \(G\)-form-factors, this expression becomes the following:
\[\frac{d\sigma_{\rm inc}^{\mp\pm}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{\{}\Phi^{+}_{\alpha^{2}+\delta^{2}}(\mathbf{q})+\cos^{2}\frac{\theta}{2}\Phi^{+}_{\delta^{2}-\alpha^{2}}(\mathbf{q})\] \[+\frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2}[\frac{|\mathbf{k}|}{m}(\sin^{2}\frac{\theta}{2}\Phi^{+}_{(\alpha-\delta)^{2}}(\mathbf{q})+\cos^{2}\frac{\theta}{2}\Phi^{+}_{(\gamma-\beta)^{2}}(\mathbf{q}))\pm 2\Phi^{+}_{(\gamma-\beta)\delta}(\mathbf{q})(\cos^{2}\frac{\theta}{2}-\sin^{2}\frac{\theta}{2})]\] \[+2\frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2}[\Phi^{-}_{(\alpha+\delta)(\beta-\gamma)}(\mathbf{q})\cos^{2}\frac{\theta}{2}-\Phi^{-}_{\alpha(\beta-\gamma)}(\mathbf{q})]\] \[+4\frac{(E_{p}-m)}{m}\frac{|\mathbf{k}|}{m}\cos^{4}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}\Phi^{-}_{(\delta-\alpha)(\beta-\gamma)}(\mathbf{q})\mp 2[\Phi^{-}_{\alpha\delta}(\mathbf{q})+\Phi^{-}_{\delta^{2}-\alpha\delta}(\mathbf{q})\cos^{2}\frac{\theta}{2}]\Big{\}}.\]
Fully averaged over the initial and summed over the final \(\chi\)-particle helicity, the _averaged incoherent_ weak \(\chi A\) interaction cross section has the general form
\[\frac{d\sigma_{\rm inc}^{\rm total}}{g_{i}c_{A}dT_{A}} \equiv \frac{1}{2}\sum_{s^{\prime}s}\frac{d\sigma_{\rm inc}^{s^{\prime}s} }{g_{i}c_{A}dT_{A}}=\frac{1}{2}\sum_{f=p,n}\frac{A_{f}}{2}\widehat{F}_{f}^{2}( \mathbf{q})16\Big{[}\sum_{s^{\prime}s}\frac{Q_{+,w}^{s^{\prime}s}}{16}+\frac{\Delta A _{f}}{A_{f}}\sum_{s^{\prime}s}\frac{Q_{-,w}^{s^{\prime}s}}{16}\Big{]}. \tag{4.49}\]
It can be calculated "directly" by taking the sum of all individual incoherent cross sections from (4.3) over all spin projections \(s,s^{\prime}\) of the initial and final \(\chi\) lepton. For example, when \(m_{\chi}=0\) and \(\Delta A_{f}=0\), the total \(\chi A\) cross section (4.49) is the half-sum of only two expressions from (4.47):
\[\frac{1}{2}\Big{\{}\frac{d\sigma_{\rm inc,00}^{--}}{dT_{A}}+\frac{ d\sigma_{\rm inc,00}^{++}}{dT_{A}}\Big{\}}=\frac{G_{F}^{2}m_{A}}{8\pi}\frac{|\mathbf{k}|^{2 }}{|\mathbf{k}_{\chi}|^{2}}\sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q})A_{f}\Big{[} \tag{4.50}\] \[+\frac{s}{m^{2}}[(\alpha-\gamma)^{2}+(\beta-\delta)^{2}]-\sin^{2 }\frac{\theta}{2}[(\alpha-\gamma)^{2}-(\beta-\delta)^{2}]-\frac{2|\mathbf{k}|}{m} \sin^{2}\frac{\theta}{2}(\alpha-\gamma+\beta-\delta)^{2}(\frac{E_{p}}{m}+ \frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2})\] \[+\frac{s}{m^{2}}[(\alpha+\gamma)^{2}+(\beta+\delta)^{2}]-\sin^{2 }\frac{\theta}{2}[(\alpha+\gamma)^{2}-(\beta+\delta)^{2}]-\frac{2|\mathbf{k}|}{m} \sin^{2}\frac{\theta}{2}(\alpha+\gamma-\beta-\delta)^{2}(\frac{E_{p}}{m}+ \frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2})\Big{]}.\]
The second line here corresponds to the negative helicity of the massless \(\chi\) lepton and completely vanishes for the \(V\!+\!A\) current. The bottom line corresponds to the positive helicity of the massless \(\chi\) lepton and vanishes for the \(V\!-\!A\) current\({}^{7}\). This "property" of expression (4.50) speaks in favor of its correctness. After a slight transformation, formula (4.50) takes the form (recall that \(m_{\chi}=0\) and \(\Delta A_{f}=0\))
Footnote 7: For \(V\!\mp\!A\) currents one has \((\alpha\mp\gamma)_{V\mp A}=2g_{V},(\delta\mp\beta)_{V\mp A}=\pm 2g_{A},( \alpha\mp\gamma)_{V\pm A}=(\delta\mp\beta)_{V\pm A}=0\).
\[\frac{d\sigma_{\rm inc,00}^{\rm total}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{|\mathbf{k}|^{2}}{|\mathbf{k}_{\chi}|^{2}} \sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q})A_{f}\Big{[}\frac{s}{m^{2}}(\alpha^{2}+ \delta^{2}+\beta^{2}+\gamma^{2})-\sin^{2}\frac{\theta}{2}(\alpha^{2}-\delta^{2 }+\gamma^{2}-\beta^{2}) \tag{4.51}\] \[-\frac{2|\mathbf{k}|}{m}\sin^{2}\frac{\theta}{2}\Big{(}\frac{E_{p}}{m }+\frac{|\mathbf{k}|}{m}\cos^{2}\frac{\theta}{2}\Big{)}[(\alpha-\delta)^{2}+( \beta-\gamma)^{2}]\Big{]}.\]
Another way to find the total cross section (4.49) is to use the results of calculating the sums \(\sum Q_{\pm,w}^{s^{\prime}s}\) from the right side of formula (4.49), which have the form
\[\sum_{s^{\prime}s}\frac{Q_{+,w}^{s^{\prime}s}}{2^{5}} = \alpha^{2}S_{\alpha^{2}}^{+}(\theta)+\beta^{2}S_{\beta^{2}}^{+}( \theta)+\gamma^{2}S_{\gamma^{2}}^{+}(\theta)+\delta^{2}S_{\delta^{2}}^{+}(\theta )+\alpha\delta S_{\alpha\delta}^{+}(\theta)+\beta\gamma S_{\beta\gamma}^{+}( \theta),\] \[\sum_{s^{\prime}s}\frac{Q_{-,w}^{s^{\prime}s}}{2^{5}} = \alpha\gamma S_{\alpha\gamma}^{-}(\theta)+\beta\delta S_{\beta\delta}^ {-}(\theta)-\alpha\beta S_{\alpha\beta}^{-}(\theta)-\gamma\delta S_{\gamma \delta}^{-}(\theta). \tag{4.52}\]
The structural coefficients multiplying combinations of coupling constants are given below:
\[\frac{S^{+}_{\alpha^{2}}(\theta)}{2} = \frac{F^{+}_{\alpha^{2}}(\theta)}{2}+\frac{G^{+}_{\alpha^{2}}(\theta)}{2}=m^{2}m^{2}_{\chi}+|\mathbf{k}|^{2}[s\cos^{2}\frac{\theta}{2}+2|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}],\] \[\frac{S^{+}_{\delta^{2}}(\theta)}{2} = \frac{F^{+}_{\delta^{2}}(\theta)}{2}+\frac{G^{+}_{\delta^{2}}(\theta)}{2}=3m^{2}m^{2}_{\chi}+|\mathbf{k}|^{2}[s\cos^{2}\frac{\theta}{2}+2|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+2(m^{2}+m^{2}_{\chi})\sin^{2}\frac{\theta}{2}],\] \[\frac{S^{+}_{\alpha\delta}(\theta)}{2} = \frac{F^{+}_{\alpha\delta}(\theta)}{2}+\frac{G^{+}_{\alpha\delta}(\theta)}{2}=2\sin^{2}\frac{\theta}{2}|\mathbf{k}|^{2}[s-m^{2}-m^{2}_{\chi}-2|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}], \tag{4.53}\] \[\frac{S^{+}_{\gamma^{2}}(\theta)}{2} = \frac{F^{+}_{\gamma^{2}}(\theta)}{2}+\frac{G^{+}_{\beta\gamma}(\theta)}{2}=|\mathbf{k}|^{2}[(s+m^{2}_{\chi})\cos^{2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2}E^{2}_{\chi}],\] \[\frac{S^{+}_{\beta^{2}}(\theta)}{2} = \frac{F^{+}_{\beta^{2}}(\theta)}{2}+\frac{G^{+}_{\beta\gamma}(\theta)}{2}=|\mathbf{k}|^{2}[(s+m^{2}_{\chi})\cos^{2}\frac{\theta}{2}+2(m^{2}-m^{2}_{\chi})\sin^{2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2}+2E^{2}_{p}\sin^{4}\frac{\theta}{2}],\] \[\frac{S^{+}_{\beta\gamma}(\theta)}{2} = \frac{F^{+}_{\beta\gamma}(\theta)}{2}-2\frac{G^{+}_{\beta\gamma}(\theta)}{2}=2|\mathbf{k}|^{2}[(s-m^{2}+m^{2}_{\chi})\sin^{2}\frac{\theta}{2}-m^{2}_{\chi}\cos^{2}\frac{\theta}{2}-2E^{2}_{\chi}\sin^{4}\frac{\theta}{2}].\]
\[\frac{S^{-}_{\alpha\gamma}(\theta)}{4|\mathbf{k}|} = \frac{G^{-}_{\alpha\gamma}(\theta)}{2}+\frac{F^{-}_{\alpha\gamma }(\theta)}{2}=-\sin^{2}\frac{\theta}{2}[m^{2}_{\chi}(E_{p}-m)\cos^{2}\frac{ \theta}{2}(1-2\cos^{2}\frac{\theta}{2})+2|\mathbf{k}|^{2}(E_{p}\cos^{2}\frac{ \theta}{2}+E_{\chi})],\] \[\frac{S^{-}_{\beta\delta}(\theta)}{4|\mathbf{k}|} = \frac{G^{-}_{\beta\delta}(\theta)}{2}+\frac{F^{-}_{\beta\delta}( \theta)}{2}= \tag{4.54}\] \[= m^{2}_{\chi}(m\cos^{2}\frac{\theta}{2}+E_{p}\sin^{2}\frac{ \theta}{2})\cos^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2})-2\sin^{2} \frac{\theta}{2}E_{p}(E_{\chi}E_{p}+\cos^{2}\frac{\theta}{2}|\mathbf{k}|^{2}),\] \[\frac{S^{-}_{\gamma\delta}(\theta)}{4|\mathbf{k}|} = \frac{F^{-}_{\gamma\delta}(\theta)}{2}+\frac{G^{-}_{\beta\delta}( \theta)}{2}=\] \[= 2E^{2}_{\chi}E_{p}(\sin^{2}\frac{\theta}{2}+\cos^{4}\frac{ \theta}{2})-m^{2}_{\chi}(E_{p}-m)\cos^{4}\frac{\theta}{2}(1-2\sin^{2}\frac{ \theta}{2})+E_{\chi}(m^{2}+2|\mathbf{k}|^{2})\cos^{2}\frac{\theta}{2},\] \[\frac{S^{-}_{\alpha\beta}(\theta)}{4|\mathbf{k}|} = \frac{F^{-}_{\alpha\beta}(\theta)}{2}+\frac{G^{-}_{\alpha\gamma}( \theta)}{2}=\] \[= \cos^{2}\frac{\theta}{2}[m^{2}_{\chi}\cos^{2}\frac{\theta}{2}(E_{p }-m)(1+2\sin^{2}\frac{\theta}{2})+(m^{2}_{\chi}+E_{\chi}m)m+2|\mathbf{k}|^{2} \sqrt{s}]+2|\mathbf{k}|^{2}E_{p}\sin^{4}\frac{\theta}{2}.\]
Substituting expressions (4.53) and (4.54) into formulas (4.52), one gets explicit forms
\[\sum\frac{Q^{s^{\prime}s}_{+,w}}{2^{6}} = (\alpha^{2}+3\delta^{2})m^{2}m^{2}_{\chi}+2|\mathbf{k}|^{4}[(\alpha- \delta)^{2}+(\beta-\gamma)^{2}]\sin^{4}\frac{\theta}{2}+|\mathbf{k}|^{2}s[2(\alpha \delta+\beta\gamma) \tag{4.55}\] \[+((\alpha-\delta)^{2}+(\beta-\gamma)^{2})\cos^{2}\frac{\theta}{2} ]+2|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}[m^{2}(\beta-\gamma)\beta+(m^{2}+m^{2}_{ \chi})(\delta-\alpha)\delta]\] \[+|\mathbf{k}|^{2}m^{2}_{\chi}[(\beta-\gamma)^{2}(\sin^{4}\frac{ \theta}{2}+\cos^{4}\frac{\theta}{2})+(\gamma^{2}-\beta^{2})\sin^{2}\frac{ \theta}{2}],\] \[\sum\frac{Q^{s^{\prime}s}_{-,w}}{2^{7}|\mathbf{k}|} = (\alpha\gamma+\beta\delta)[m^{2}_{\chi}E_{p}\cos^{2}\frac{\theta}{2 }\sin^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2})-2|\mathbf{k}|^{2}(E_{p}\cos^{ 2}\frac{\theta}{2}+E_{\chi})\sin^{2}\frac{\theta}{2}]\] \[-(\alpha\beta+\gamma\delta)[m^{2}_{\chi}E_{p}\cos^{4}\frac{\theta}{2 }(1+2\sin^{2}\frac{\theta}{2})+2|\mathbf{k}|^{2}E_{p}(1-\cos^{2}\frac{\theta}{2} \sin^{2}\frac{\theta}{2})\] \[+(2|\mathbf{k}|^{2}+m^{2})E_{\chi}\cos^{2}\frac{\theta}{2}]-2\delta( \beta m^{2}E_{\chi}+\gamma m^{2}_{\chi}E_{p})\sin^{2}\frac{\theta}{2}\] \[+(\alpha\sin^{2}\frac{\theta}{2}-\delta\cos^{2}\frac{\theta}{2})( \gamma-\beta)mm^{2}_{\chi}\cos^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2}).\]
Using these sums and formula (4.49), one can calculate the fully averaged incoherent \(\chi A\) interaction cross section. By analogy with the coherent cross section (4.34), we present only the completely averaged _incoherent_ \(\chi A\) cross section for a _spinless_ nucleus
\[\frac{d\sigma^{\rm total}_{\rm inc}}{g_{i}dT_{A}} = \hat{c}_{A}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[}( \alpha^{2}+3\delta^{2})m^{2}m_{\chi}^{2}+2|\mathbf{k}|^{4}[(\alpha-\delta)^{2}+( \beta-\gamma)^{2}]\sin^{4}\frac{\theta}{2}\] \[\quad+|\mathbf{k}|^{2}s[((\alpha-\delta)^{2}+(\beta-\gamma)^{2})\cos^ {2}\frac{\theta}{2}+2(\alpha\delta+\beta\gamma)]\] \[\quad+2|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}[m^{2}(\beta-\gamma) \beta+(m^{2}+m_{\chi}^{2})(\delta-\alpha)\delta]\] \[\quad+|\mathbf{k}|^{2}m_{\chi}^{2}[(\beta-\gamma)^{2}(\sin^{4}\frac{ \theta}{2}+\cos^{4}\frac{\theta}{2})+(\gamma^{2}-\beta^{2})\sin^{2}\frac{ \theta}{2}]\Big{]}.\]
In terms of \(\Phi\)-functions from (4.43) this formula looks like\({}^{8}\)
Footnote 8: Because \(\Phi^{+}_{(\alpha-\delta)^{2}+(\beta-\gamma)^{2}}(\mathbf{q})\cos^{2}\frac{\theta} {2}+\Phi^{+}_{2(\alpha\delta+\beta\gamma)}(\mathbf{q})=\Phi^{+}_{\alpha^{2}+\delta ^{2}+\beta^{2}+\gamma^{2}}(\mathbf{q})\cos^{2}\frac{\theta}{2}+\Phi^{+}_{2(\alpha \delta+2\beta\gamma)}(\mathbf{q})\sin^{2}\frac{\theta}{2}\).
\[\frac{d\sigma^{\rm total}_{\rm inc}}{g_{i}\hat{c}_{A}dT_{A}} = m_{\chi}^{2}m^{2}\Phi^{+}_{\alpha^{2}+3\delta^{2}}(\mathbf{q})+m_{ \chi}^{2}|\mathbf{k}|^{2}[\Phi^{+}_{(\beta-\gamma)^{2}}(\mathbf{q})(\sin^{4}\frac{ \theta}{2}+\cos^{4}\frac{\theta}{2})+\Phi^{+}_{(\gamma^{2}-\beta^{2})}(\mathbf{q} )\sin^{2}\frac{\theta}{2}] \tag{4.57}\] \[+2|\mathbf{k}|^{4}\Phi^{+}_{(\alpha-\delta)^{2}+(\beta-\gamma)^{2}}( \mathbf{q})\sin^{4}\frac{\theta}{2}+|\mathbf{k}|^{2}s[\Phi^{+}_{(\alpha-\delta)^{2}+( \beta-\gamma)^{2}}(\mathbf{q})\cos^{2}\frac{\theta}{2}+\Phi^{+}_{2(\alpha\delta+ \beta\gamma)}(\mathbf{q})]\] \[+2|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}[m^{2}\Phi^{+}_{(\beta- \gamma)\beta}(\mathbf{q})+(m^{2}+m_{\chi}^{2})\Phi^{+}_{(\delta-\alpha)\delta}( \mathbf{q})].\]
Formulas (4.56) and (4.57) give the _incoherent_ cross sections of the weak \(\chi A\) interaction, completely averaged over the initial and summed over the final helicities of the \(\chi\) particle, in the case of a spinless nucleus.
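As a numerical illustration of formula (4.56), the following Python sketch evaluates the fully averaged incoherent cross section for a spinless nucleus. The overall factor \(\hat{c}_{A}=G_{F}^{2}m_{A}/(4\pi m^{2}|\mathbf{k}_{\chi}|^{2})\) used below is inferred from the \(|\mathbf{k}|\to 0\) limit quoted further down and should be checked against the definition in (4.12); all numerical inputs are illustrative (arbitrary units):

```python
import math

def dsigma_inc_total_spinless(GF, mA, m, mchi, k, s, theta, kchi2, nucleons):
    """Sketch of the fully averaged incoherent cross section (4.56) for a
    spinless nucleus. c_hat_A = GF^2 mA / (4 pi m^2 |k_chi|^2) is inferred
    from the |k| -> 0 limit and should be checked against (4.12)."""
    sh2 = math.sin(theta / 2) ** 2
    ch2 = 1.0 - sh2
    c_hat = GF**2 * mA / (4 * math.pi * m**2 * kchi2)
    total = 0.0
    for (Af, Fhat2, al, be, ga, de) in nucleons:     # per-nucleon data
        diff2 = (al - de) ** 2 + (be - ga) ** 2
        term = ((al**2 + 3 * de**2) * m**2 * mchi**2
                + 2 * k**4 * diff2 * sh2**2
                + k**2 * s * (diff2 * ch2 + 2 * (al * de + be * ga))
                + 2 * k**2 * sh2 * (m**2 * (be - ga) * be
                                    + (m**2 + mchi**2) * (de - al) * de)
                + k**2 * mchi**2 * ((be - ga) ** 2 * (sh2**2 + ch2**2)
                                    + (ga**2 - be**2) * sh2))
        total += Af * Fhat2 * term
    return c_hat * total

# Illustrative call (arbitrary units):
print(dsigma_inc_total_spinless(
    GF=1.0, mA=120.0, m=0.938, mchi=0.05, k=0.2, s=1.3,
    theta=0.8, kchi2=0.09,
    nucleons=[(9, 0.9, 0.04, -0.63, -0.04, 0.63),
              (10, 0.9, -0.50, 0.63, 0.50, -0.63)]))
```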
In the ultrarelativistic ("neutrino") limit (\(m_{\chi}\to 0\), \(|\mathbf{k}|=E_{\chi}\)), the sums (4.55) are
\[\sum\frac{Q^{s^{\prime}s}_{+,w}}{2^{6}|\mathbf{k}|^{2}} = 2|\mathbf{k}|^{2}[(\alpha-\delta)^{2}+(\beta-\gamma)^{2}]\sin^{4} \frac{\theta}{2}+s[((\alpha-\delta)^{2}+(\beta-\gamma)^{2})\cos^{2}\frac{ \theta}{2}+2(\alpha\delta+\beta\gamma)] \tag{4.58}\] \[+2m^{2}\sin^{2}\frac{\theta}{2}[(\beta-\gamma)\beta+(\delta- \alpha)\delta];\] \[\sum\frac{Q^{s^{\prime}s}_{-,w}}{2^{7}|\mathbf{k}|^{2}} = -2\delta\beta m^{2}\sin^{2}\frac{\theta}{2}-2(\alpha\gamma+ \beta\delta)|\mathbf{k}|(E_{p}\cos^{2}\frac{\theta}{2}+|\mathbf{k}|)\sin^{2}\frac{ \theta}{2}\] \[-(\alpha\beta+\gamma\delta)\big{[}2|\mathbf{k}|E_{p}(1-\cos^{2}\frac{ \theta}{2}\sin^{2}\frac{\theta}{2})+(2|\mathbf{k}|^{2}+m^{2})\cos^{2}\frac{\theta} {2}\big{]}.\]
Then the fully averaged incoherent \(\chi A\) cross section (4.49) takes the form
\[\frac{d\sigma^{\rm total}_{\rm inc,0}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{E_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2} }\!\!\sum_{f=p,n}\!A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[}\frac{2E_{\chi}^{2}} {m^{2}}[(\alpha-\delta)^{2}+(\beta-\gamma)^{2}]\sin^{4}\frac{\theta}{2}+s[( \alpha^{2}+\delta^{2}+\beta^{2}+\gamma^{2})\cos^{2}\frac{\theta}{2}\] \[+2(\alpha\delta+\beta\gamma)\sin^{2}\frac{\theta}{2}]+2\sin^{2} \frac{\theta}{2}[\beta(\beta-\gamma)+\delta(\delta-\alpha)]\] \[+ \frac{2\Delta A_{f}}{A_{f}}\Big{\{}-2(\alpha\gamma+\beta\delta) \frac{E_{\chi}}{m}\big{(}\frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+\frac{E_{\chi} }{m}\big{)}\sin^{2}\frac{\theta}{2}\] \[-(\alpha\beta+\gamma\delta)\big{[}\frac{2E_{\chi}E_{p}}{m^{2}} \big{(}1-\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}\big{)}+\big{(}2\frac {E_{\chi}^{2}}{m^{2}}+1\big{)}\cos^{2}\frac{\theta}{2}\big{]}-2\delta\beta\sin ^{2}\frac{\theta}{2}\Big{\}}\Big{]}.\]
This expression for the spinless nucleus yields
\[\frac{d\sigma^{\rm total}_{\rm inc,00}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{|\mathbf{k}|^{2}}{|\mathbf{k}_{\chi}|^{2}} \underset{f=p,n}{\sum}A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[}\frac{2|\mathbf{k}|^{2} }{m^{2}}[(\alpha-\delta)^{2}+(\beta-\gamma)^{2}]\sin^{4}\frac{\theta}{2}\] \[+s[(\alpha^{2}+\delta^{2}+\beta^{2}+\gamma^{2})\cos^{2}\frac{\theta }{2}+2(\alpha\delta+\beta\gamma)\sin^{2}\frac{\theta}{2}]+2\sin^{2}\frac{\theta }{2}[\beta(\beta-\gamma)+\delta(\delta-\alpha)]\Big{]}.\]
Since \(m_{\chi}=0\), formula (4.5) for the _total_ cross section should contain no contribution from the \(\chi\)-lepton spin-flip cross sections; this expression is therefore the sum of only two terms, arising from the massless \(\chi\) lepton incident on the nucleus with positive and negative (conserved) helicity. Hence expression (4.5) must coincide with the direct sum of these contributions, i.e. with formula (4.5). In the nonrelativistic limit, when \(|\mathbf{k}|\ll m_{\chi}\) (or \(|\mathbf{k}|\to 0\)), one gets
\[\sum\frac{Q_{+,w}^{s^{\prime}s}}{2^{6}}=(\alpha^{2}+3\delta^{2})m^{2}m_{\chi}^{2},\qquad\sum\frac{Q_{-,w}^{s^{\prime}s}}{2^{7}}\propto|\mathbf{k}|\simeq 0.\]
Hence the averaged (total) incoherent \(\chi A\) cross section (4.49) acquires a "simple form"
\[\frac{d\sigma^{\rm total}_{\rm inc}}{g_{i}dT_{A}}=\frac{G_{F}^{2}m_{A}}{4\pi} \frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}( \mathbf{q})(\alpha^{2}+3\delta^{2}),\]
which was obtained earlier in [4].
_The total cross section of the weak \(\chi A\) interaction_ is the sum of the fully averaged coherent and incoherent terms given in the general form by formulas (4.33) and (4.49). Let us present, as an example, the expressions for this sum in the case of a _spinless_ (\(\Delta A_{f}=0\)) nucleus\({}^{9}\):
Footnote 9: The ratio (4.20), the expression for \(\hat{c}_{A}\) from (4.12) and \(g_{\rm c}\simeq g_{\rm i}\simeq 1\) are taken into account here.
\[\frac{d\sigma^{\rm total}_{0}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\Bigg{[}\cos^{2}\frac{\theta}{2}\frac{ E_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\Big{\{}\Big{[}G_{\alpha}(A,\mathbf{q})\big{(} \frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+\sin^{2}\frac{\theta}{2}+\frac{|\mathbf{k} |^{2}}{mE_{\chi}}\big{)}+G_{\delta}(A,\mathbf{q})(\frac{E_{p}}{m}-1)\sin^{2}\frac{ \theta}{2}\Big{]}^{2}\] \[+\frac{|\mathbf{k}|^{2}}{m^{2}}\Big{[}G_{\beta}(A,\mathbf{q})\frac{E_{p}- m}{E_{\chi}}\sin^{2}\frac{\theta}{2}+G_{\gamma}(A,\mathbf{q})\big{(}1+\frac{E_{p}}{E_{ \chi}}\cos^{2}\frac{\theta}{2}+\frac{m}{E_{\chi}}\sin^{2}\frac{\theta}{2} \big{)}\Big{]}^{2}\Big{\}}\] \[+\sin^{2}\frac{\theta}{2}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}} \Big{[}G_{\alpha}(A,\mathbf{q})(\frac{E_{p}}{m}\cos^{2}\frac{\theta}{2}+\sin^{2} \frac{\theta}{2})-G_{\delta}(A,\mathbf{q})(\frac{E_{p}}{m}-1)\cos^{2}\frac{\theta }{2}\Big{]}^{2}+\] \[+\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\Phi^{+}_{\alpha^{2}+3 \delta^{2}}(\mathbf{q})+2\frac{|\mathbf{k}|^{2}}{s}\Phi^{+}_{(\alpha-\delta)^{2}+( \beta-\gamma)^{2}}(\mathbf{q})\sin^{4}\frac{\theta}{2}+\Phi^{+}_{(\alpha-\delta)^ {2}+(\beta-\gamma)^{2}}(\mathbf{q})\cos^{2}\frac{\theta}{2}+\Phi^{+}_{2(\alpha \delta+\beta\gamma)}(\mathbf{q})\] \[+2\frac{m^{2}}{s}\sin^{2}\frac{\theta}{2}[\Phi^{+}_{(\beta- \gamma)\beta}(\mathbf{q})+(1+\frac{m_{\chi}^{2}}{m^{2}})\Phi^{+}_{(\delta-\alpha) \delta}(\mathbf{q})]\] \[+\frac{m_{\chi}^{2}}{s}[\Phi^{+}_{(\beta-\gamma)^{2}}(\mathbf{q})( \sin^{4}\frac{\theta}{2}+\cos^{4}\frac{\theta}{2})+\Phi^{+}_{(\gamma^{2}- \beta^{2})}(\mathbf{q})\sin^{2}\frac{\theta}{2}]\Bigg{]}.\]
Using this formula, we recall the algorithm for calculating the cross sections. The "external fixed" parameters that define the nature of the interaction are the weak Fermi constant \(G_{F}\),
the (common) nucleon mass \(m\), and two sets of effective proton and neutron weak interaction constants, \(\alpha_{p/n},\beta_{p/n},\gamma_{p/n},\delta_{p/n}\). The "external static" parameters are the mass \(m_{A}\) of the target nucleus and the mass \(m_{\chi}\) of the lepton. The "external dynamic" parameters are the initial kinetic energy \(T_{0}\) (or momentum \(\mathbf{k}_{\chi}\)) of the lepton incident on the nucleus at rest and the recoil energy of this nucleus \(T_{A}\). Furthermore, the scattering angle \(\theta\) is defined in terms of them using expression (4.21), and the other quantities \(E_{p}\), \(E_{\chi}\), \(\mathbf{k}\), etc. are defined by means of formulas (4.17).
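A minimal sketch of this computational chain in Python (natural units; the relation used for \(\sin^{2}\frac{\theta}{2}\) is standard two-body elastic kinematics, taken here as a stand-in for the paper's expression (4.21)):

```python
import math

def kinematics(T0, TA, m, mchi, mA):
    """Sketch of the 'external dynamic' step: from the incident kinetic
    energy T0 and the nuclear recoil TA to the invariant quantities of
    (4.64). The formula for sin^2(theta/2) is standard two-body elastic
    kinematics, standing in for the paper's (4.21)."""
    E_lab = mchi + T0                      # lab energy of the chi lepton
    kchi2 = E_lab**2 - mchi**2             # |k_chi|^2
    s = m**2 + mchi**2 + 2 * m * E_lab     # invariant s (chi + nucleon)
    lam = math.sqrt((s - (m + mchi)**2) * (s - (m - mchi)**2))
    k = lam / (2 * math.sqrt(s))           # CM momentum |k|
    E_chi = (s + mchi**2 - m**2) / (2 * math.sqrt(s))
    E_p = (s + m**2 - mchi**2) / (2 * math.sqrt(s))
    # Maximum nuclear recoil and scattering angle (stand-in for (4.21)):
    TA_max = 2 * mA * kchi2 / (mA**2 + mchi**2 + 2 * mA * E_lab)
    sin2_half = TA / TA_max                # sin^2(theta/2) ~ TA / TA_max
    return dict(s=s, k=k, E_chi=E_chi, E_p=E_p,
                kchi2=kchi2, sin2_half=sin2_half)

# Illustrative call (GeV-like units, placeholder values):
print(kinematics(T0=0.05, TA=2e-5, m=0.938, mchi=0.01, mA=120.0))
```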
### Massive (anti)neutrinos and \(V\mp\!A\) interaction
Consider an important case of the scattering of the massive neutral \(\chi\) lepton on the nucleus due to the "standard" \(V\mp\!A\) weak interaction.
Since calculation [16] of the scalar products was carried out when the eigenstates of the \(\chi\)-lepton spin were the states of its helicity (projections of the spin onto the direction of its momentum), the neutrino (whose helicity is negative) corresponds to the cross sections with the "\(--\)"-indices of the \(\chi\) lepton, i.e., coherent \(\dfrac{d\sigma^{--}_{\rm coh}}{dT_{A}}\) and incoherent \(\dfrac{d\sigma^{--}_{\rm inc}}{dT_{A}}\) cross sections. It is only due to the non-zero mass \(m_{\chi}\) that the processes with \(\chi\)-lepton spin flip whose cross sections are given in the second line of formulas (4.28) are possible.
First, from general formulas (4.28) we get expressions for the cross sections for coherent scattering on the nuclei of massive neutrinos from the Standard Model. Let us use formulas (4.36), corresponding to the assumption that the nuclear proton and neutron form factors are equal (4.35). For the \(V\!-\!A\) weak interaction of neutrinos with nucleons, the latter have the following effective coupling constants from (4.23):
\[\alpha_{f}=+g^{f}_{V},\quad\beta_{f}=-g^{f}_{A},\quad\gamma_{f}=-g^{f}_{V}, \quad\delta_{f}=+g^{f}_{A}. \tag{4.61}\]
Then the \(\mathbf{q}\)-independent notations (4.37) are simplified and take the form
\[G_{\alpha}(A) = g^{p}_{V}A_{p}+g^{n}_{V}A_{n}\equiv G_{V}(A), G_{\gamma}(A)=-(g^{p}_{V}A_{p}+g^{n}_{V}A_{n})=-G_{V}(A),\] \[G_{\delta}(A) = g^{p}_{A}A_{p}+g^{n}_{A}A_{n}\equiv G_{A}(A), G_{\beta}(A)=-(g^{p}_{A}A_{p}+g^{n}_{A}A_{n})=-G_{A}(A);\] \[\Delta G_{\alpha}(A) = g^{p}_{V}\Delta A_{p}+g^{n}_{V}\Delta A_{n}\equiv\Delta G_{V}(A),\quad\Delta G_{\gamma}(A)=-(g^{p}_{V}\Delta A_{p}+g^{n}_{V}\Delta A_{n})=- \Delta G_{V}(A),\] \[\Delta G_{\delta}(A) = g^{p}_{A}\Delta A_{p}+g^{n}_{A}\Delta A_{n}\equiv\Delta G_{A}(A ),\quad\Delta G_{\beta}(A)=-(g^{p}_{A}\Delta A_{p}+g^{n}_{A}\Delta A_{n})=- \Delta G_{A}(A).\]
With their help one obtains from (4.36) a general set of \(\chi A\) cross sections of coherent scattering of the massive \(\chi\) lepton due to \(V\)-\(A\) interaction in the form
\[\dfrac{d\sigma^{\mp\mp}_{\rm coh,V\text{-}A}}{g_{\rm c}\hat{c}_{ A}dT_{A}} = \cos^{2}\dfrac{\theta}{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)[f_{\alpha+} (\theta)\pm f_{\gamma+}(\theta)]+G_{A}(A)[f_{\delta-}(\theta)\pm f_{\beta-}( \theta)]\] \[\qquad\qquad+\Delta G_{V}(A)[f_{\gamma-}(\theta)\pm f_{\alpha-}( \theta)]+\Delta G_{A}(A)[f_{\beta+}(\theta)\pm f_{\delta+}(\theta)]\Big{|}^{2},\] \[\dfrac{d\sigma^{\mp\pm}_{\rm coh,V\text{-}A}}{g_{\rm c}\hat{c}_{ A}dT_{A}} = \sin^{2}\dfrac{\theta}{2}m^{2}_{\chi}F^{2}(\mathbf{q})\Big{|}G_{V}(A)(m +\lambda^{2}_{-}\cos^{2}\dfrac{\theta}{2})-G_{A}(A)\lambda^{2}_{-}\cos^{2} \dfrac{\theta}{2}\] \[\qquad\qquad-\Delta G_{V}(A)\lambda_{+}\lambda_{-}\cos^{2}\dfrac{ \theta}{2}+\Delta G_{A}(A)[\lambda_{+}\lambda_{-}\cos^{2}\dfrac{\theta}{2}\mp m ]\Big{|}^{2}.\]
For negative initial helicity (\(s=-1\), upper right index) of the massive \(\chi\) lepton corresponding to the neutrino with \(V\)-\(A\) weak interaction, formulas (4.62) take the explicit form
\[\frac{d\sigma^{--}_{\rm coh,V\text{-}A}}{g_{\rm c}\hat{c}_{A}dT_{A}} = \cos^{2}\frac{\theta}{2}F^{2}(\mathbf{q})(|\mathbf{k}|+E_{\chi})^{2}\Big{|} G_{V}(A)(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}+|\mathbf{k}|)+ \Delta G_{V}(A)|\mathbf{k}|\sin^{2}\frac{\theta}{2}\] \[+G_{A}(A)(E_{p}-m)\sin^{2}\frac{\theta}{2}+\Delta G_{A}(A)(E_{p}+ |\mathbf{k}|\cos^{2}\frac{\theta}{2})\Big{|}^{2},\] \[\frac{d\sigma^{+-}_{\rm coh,V\text{-}A}}{g_{\rm c}\hat{c}_{A}dT_{A }} = m_{\chi}^{2}\sin^{2}\frac{\theta}{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)(m\sin^{2} \frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2})-\Delta G_{V}(A)|\mathbf{k}|\cos^{2 }\frac{\theta}{2}\] \[-G_{A}(A)(E_{p}-m)\cos^{2}\frac{\theta}{2}+\Delta G_{A}(A)(|\mathbf{k }|\cos^{2}\frac{\theta}{2}+m)\Big{|}^{2}.\]
It is taken into account that for sums of \(f\)-form factors from (4.62) one has
\[f_{\alpha+}(\theta)+f_{\gamma+}(\theta) = (|\mathbf{k}|+E_{\chi})(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{ \theta}{2}+|\mathbf{k}|),\ f_{\alpha-}(\theta)+f_{\gamma-}(\theta)=(|\mathbf{k}|+E_{ \chi})|\mathbf{k}|\sin^{2}\frac{\theta}{2},\] \[f_{\beta-}(\theta)+f_{\delta-}(\theta) = (|\mathbf{k}|+E_{\chi})(E_{p}-m)\sin^{2}\frac{\theta}{2},\ f_{\beta+} (\theta)+f_{\delta+}(\theta)=(|\mathbf{k}|+E_{\chi})(E_{p}+|\mathbf{k}|\cos^{2}\frac{ \theta}{2}).\]
Recall that in the invariant variables the following relations hold
\[E_{\chi/p}=\frac{s\pm m_{\chi}^{2}\mp m^{2}}{2\sqrt{s}},\ \ \ \ |\mathbf{k}|=\frac{\lambda(s,m^{2},m_{\chi}^{2})}{2 \sqrt{s}},\ \ \ [\mathbf{k}_{\chi}^{l}]^{2}=\frac{\lambda^{2}(s,m^{2},m_{\chi}^{2})}{4m^{2}},\ \ \text{giving}\] \[4^{2}c_{A}(E_{\chi}+|\mathbf{k}|)^{2} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{(E_{\chi}+|\mathbf{k}|)^{2}}{m^{2}| \mathbf{k}_{\chi}^{l}|^{2}}=\frac{G_{F}^{2}m_{A}}{4\pi}\frac{\varkappa(s,m^{2},m_ {\chi}^{2})}{s},\ \ \ \text{where}\] \[\varkappa(s,m_{\chi}^{2}) \equiv \varkappa(s,m^{2},m_{\chi}^{2})=\Big{[}1+\frac{s+m_{\chi}^{2}-m^ {2}}{\lambda(s,m^{2},m_{\chi}^{2})}\Big{]}^{2},\ \ \ \text{as well} \tag{4.64}\] \[4^{2}c_{A}m_{\chi}^{2} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{m^{2}|\mathbf{k}_{ \chi}^{l}|^{2}}=\frac{G_{F}^{2}m_{A}}{\pi}\frac{m_{\chi}^{2}}{\lambda^{2}(s,m ^{2},m_{\chi}^{2})}.\]
Then for the massive \(\chi\) lepton incident on the nucleus with negative initial helicity and interacting with nucleons via the \(V\)-\(A\) weak current, coherent \(\chi A\) cross sections (4.63) can be _finally_ written in the form
\[\frac{d\sigma^{--}_{\rm coh,V\text{-}A}}{g_{\rm c}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\varkappa(s,m_{\chi}^{2})\cos^{2} \frac{\theta}{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)\Big{(}\frac{m}{\sqrt{s}}\sin^{2} \frac{\theta}{2}+\frac{E_{p}}{\sqrt{s}}\cos^{2}\frac{\theta}{2}+\frac{|\mathbf{k}| }{\sqrt{s}}\Big{)}\] \[+\Delta G_{V}(A)\frac{|\mathbf{k}|}{\sqrt{s}}\sin^{2}\frac{\theta}{2} +G_{A}(A)\frac{(E_{p}-m)}{\sqrt{s}}\sin^{2}\frac{\theta}{2}+\Delta G_{A}(A) \Big{(}\frac{E_{p}}{\sqrt{s}}+\frac{|\mathbf{k}|}{\sqrt{s}}\cos^{2}\frac{\theta}{2} \Big{)}\Big{|}^{2},\] \[\frac{d\sigma^{+-}_{\rm coh,V\text{-}A}}{g_{\rm c}dT_{A}} = \frac{G_{F}^{2}m_{A}}{\pi}\frac{sm_{\chi}^{2}}{\lambda^{2}(s,m_{ \chi}^{2})}\sin^{2}\frac{\theta}{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)\Big{(}\frac{m}{ \sqrt{s}}\sin^{2}\frac{\theta}{2}+\frac{E_{p}}{\sqrt{s}}\cos^{2}\frac{\theta}{2 }\Big{)}-\Delta G_{V}(A)\frac{|\mathbf{k}|}{\sqrt{s}}\cos^{2}\frac{\theta}{2}\] \[-G_{A}(A)\frac{(E_{p}-m)}{\sqrt{s}}\cos^{2}\frac{\theta}{2}+ \Delta G_{A}(A)\Big{(}\frac{|\mathbf{k}|}{\sqrt{s}}\cos^{2}\frac{\theta}{2}+\frac{m} {\sqrt{s}}\Big{)}\Big{|}^{2}.\]
Formulas (4.65) give the main result for the coherent scattering cross section of the massive neutrino on the nucleus due to the weak \(V\)-\(A\) interaction\({}^{10}\). In invariant variables, the coherent cross sections of the \(V\)-\(A\) weak interaction of the massive neutrino incident on the _spinless_ (\(\Delta A_{f}=0\)) nucleus take the form
Footnote 10: If one wants to ”restore” dependence on the nucleon type in the nuclear form factors \(F^{2}(\mathbf{q})\), one should ”hide them back” into the parameters \(G_{V}(A,\mathbf{q})\) and \(G_{A}(A,\mathbf{q})\).
\[\frac{d\sigma^{--}_{\rm coh,V\!-\!A}}{g_{\rm c}dT_{A}} = \Big{[}1-\frac{T_{A}}{T_{A}^{\rm max}}\Big{]}\Big{[}\frac{G_{F}^{ 2}m_{A}}{4\pi}\Big{]}F^{2}(\mathbf{q})\varkappa(s,m_{\chi}^{2})\Big{\{}G_{A}(A) \frac{(\sqrt{s}-m)^{2}-m_{\chi}^{2}}{2s}\sin^{2}\frac{\theta}{2} \tag{4.66}\] \[\qquad\qquad+G_{V}(A)\Big{[}\frac{m}{\sqrt{s}}+\frac{(\sqrt{s}-m )^{2}-m_{\chi}^{2}}{2s}\cos^{2}\frac{\theta}{2}+\frac{\lambda(s,m^{2},m_{\chi }^{2})}{2s}\Big{]}\Big{\}}^{2},\] \[\frac{d\sigma^{+-}_{\rm coh,V\!-\!A}}{g_{\rm c}dT_{A}} = \sin^{2}\frac{\theta}{2}\Big{[}\frac{G_{F}^{2}m_{A}}{4\pi} \Big{]}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\frac{s}{m^{2}}F^{2}(\mathbf{q}) \Big{\{}-G_{A}(A)\frac{(\sqrt{s}-m)^{2}-m_{\chi}^{2}}{2s}\cos^{2}\frac{\theta} {2}\] \[\qquad\qquad+G_{V}(A)\Big{[}\frac{m}{\sqrt{s}}+\frac{(\sqrt{s}-m )^{2}-m_{\chi}^{2}}{2s}\cos^{2}\frac{\theta}{2}\Big{]}\Big{\}}^{2},\quad{\rm where }\quad\sin^{2}\frac{\theta}{2}\simeq\frac{T_{A}}{T_{A}^{\rm max}}.\]
There are two comments about these formulas. First, in the case of a massless Standard Model neutrino, the second formula must vanish, which is clearly seen due to its proportionality to the square of the neutrino mass \(m_{\chi}^{2}\). Second, for \(m_{\chi}=0\) one gets \(\lambda(s,m^{2},m_{\chi}^{2}=0)=(s-m^{2})\). Hence it follows that
\[\varkappa(s,m_{\chi}^{2}=0)=4\quad{\rm and}\quad\frac{m}{\sqrt{s}}+\frac{(s-m ^{2})}{2s}+\frac{(\sqrt{s}-m)^{2}}{2s}\cos^{2}\frac{\theta}{2}=1-\frac{(\sqrt{s }-m)^{2}}{2s}\sin^{2}\frac{\theta}{2}.\]
As a result, the first formula reduces to the expression
\[\frac{d\sigma^{--}_{\rm coh}}{dT_{A}}=\frac{G_{F}^{2}m_{A}}{\pi}\Big{[}1-\frac {T_{A}}{T_{A}^{\rm max}}\Big{]}F^{2}(\mathbf{q})\Big{[}G_{V}(A)\big{[}1-\frac{( \sqrt{s}-m)^{2}}{2s}\sin^{2}\frac{\theta}{2}\big{]}+G_{A}(A)\frac{(\sqrt{s}-m )^{2}}{2s}\sin^{2}\frac{\theta}{2}\Big{]}^{2}, \tag{4.67}\]
which coincides with the corresponding formula from [3]. In other words, formula (4.66) for the coherent scattering cross section of massive neutrinos on (spinless) nuclei at \(m_{\chi}\to 0\) reduces exactly to the well-known formula for the massless neutrino [3].
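This limiting behavior is easy to verify numerically. The sketch below implements the first formula of (4.66) and formula (4.67) and compares them for a tiny \(m_{\chi}\); all numerical values are illustrative (arbitrary units), not benchmarks from the text:

```python
import math

def sigma_coh_mm(GF, mA, F2, GV, GA, s, mchi, m, sin2h, TA_ratio):
    """First formula of (4.66): coherent '--' cross section on a spinless
    nucleus for the V-A current (up to the factor g_c dT_A)."""
    lam = math.sqrt((s - (m + mchi)**2) * (s - (m - mchi)**2))
    kappa = (1 + (s + mchi**2 - m**2) / lam) ** 2
    cos2h = 1 - sin2h
    rs = math.sqrt(s)
    bracket = (GA * ((rs - m)**2 - mchi**2) / (2*s) * sin2h
               + GV * (m/rs + ((rs - m)**2 - mchi**2)/(2*s) * cos2h
                       + lam/(2*s)))
    return (1 - TA_ratio) * GF**2 * mA / (4*math.pi) * F2 * kappa * bracket**2

def sigma_coh_massless(GF, mA, F2, GV, GA, s, m, sin2h, TA_ratio):
    """Massless-neutrino formula (4.67) for comparison."""
    rs = math.sqrt(s)
    x = (rs - m)**2 / (2*s) * sin2h
    return (GF**2 * mA / math.pi) * (1 - TA_ratio) * F2 * (GV*(1 - x) + GA*x)**2

# Check that (4.66) -> (4.67) as mchi -> 0 (illustrative inputs):
args = dict(GF=1.0, mA=120.0, F2=0.8, GV=-15.0, GA=0.6,
            s=1.2, m=0.938, sin2h=0.3, TA_ratio=0.3)
print(sigma_coh_mm(mchi=1e-6, **args))   # should agree with the next line
print(sigma_coh_massless(**args))
```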
Regarding the second formula (4.66), note the following. This cross section (with lepton spin flip) has an appreciable value only for an "appreciable" mass of this lepton; moreover, the angular distribution, proportional to \(\sin^{2}\frac{\theta}{2}\), is very different from the case without the lepton spin flip (the first formula, with a \(\cos^{2}\frac{\theta}{2}\) distribution). The latter means that the lepton spin flip (due to the conservation of the total angular momentum of the system) is possible only if the spin-changing lepton flies out of the interaction zone almost exactly in the opposite direction. On the other hand, for (extremely) small momenta \(|\mathbf{k}_{\chi}|^{2}\ll m_{\chi}^{2}\) of the lepton incident on the nucleus, first, the momentum \(\mathbf{q}\) transferred to the nucleus is also very small, and hence the nuclear form factor \(F^{2}(\mathbf{q})\simeq 1\) plays no role; second, \(\sqrt{s}\simeq m+m_{\chi}\), and then the coherent \(\chi A\) cross section with lepton spin flip takes the form
\[\frac{d\sigma^{+-}_{\rm coh}(\chi A)}{dT_{A}} = \sin^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi} ^{2}}{|\mathbf{k}_{\chi}|^{2}}G_{V}^{2}(A). \tag{4.68}\]
It can be seen that if \(\frac{|\mathbf{k}_{\chi}|^{2}}{m_{\chi}^{2}}\equiv\Theta\ll 1\), then the coherent \(\chi A\) cross section with spin flip is enhanced not only by the \(A^{2}\) nuclear factor hidden in the parameter \(G_{V}^{2}(A)\), but also by a sufficiently large "particle flux" factor \(\Theta^{-1}\). This argument remains valid if \(|\mathbf{k}_{\chi}|^{2}\simeq m_{\chi}^{2}\ll m^{2}\): in that case \(s\simeq m^{2}\) and the second formula (4.66) reduces to expression (4.68).
Note further that the formula for the \(V\)-\(A\) cross sections (4.62), while maintaining the positive helicity of the \(\chi\) lepton (right superscript \(+\)), includes the following combinations of nucleon \(f\)-form factors from (4.27):
\[f_{\alpha+}(\theta)-f_{\gamma+}(\theta) = (E_{\chi}-|\mathbf{k}|)(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2} \frac{\theta}{2}-|\mathbf{k}|),\ \ f_{\gamma-}(\theta)-f_{\alpha-}(\theta)=(E_{\chi}-|\mathbf{k}|)|\mathbf{k}|\sin^{2 }\frac{\theta}{2},\] \[f_{\delta-}(\theta)-f_{\beta-}(\theta) = (E_{\chi}-|\mathbf{k}|)(E_{p}-m)\sin^{2}\frac{\theta}{2},\quad f_{ \beta+}(\theta)-f_{\delta+}(\theta)=(E_{\chi}-|\mathbf{k}|)(|\mathbf{k}|\cos^{2}\frac {\theta}{2}-E_{p}).\]
All these quantities for "\(++\)"-helicity are proportional to the difference \((E_{\chi}-|\mathbf{k}|)\), and not to the sum \((E_{\chi}+|\mathbf{k}|)\), as for "\(--\)"-helicity. This means that \(V\)-\(A\)-couplings from (4.61) strongly "suppress" (proportionally to \((E_{\chi}-|\mathbf{k}|)^{2}\)) the coherent cross sections of interaction between the nucleus and a massive antineutrino-analogue, and upon passing to the massless limit, these cross sections simply vanish. Thus, for the "\(++\)"-helicity of the massive analogue of the antineutrino, it is necessary to set the coupling constants according to the \(V\)+\(A\) character of the weak interaction corresponding specifically to the antineutrino. Indeed, in the case of antineutrinos (due to the \(V\)+\(A\) current), the nucleon weak coupling constants look like
\[\alpha_{f}^{V+A}\equiv\alpha_{f}^{\bar{\nu}}=+g_{V}^{f},\quad\beta_{f}^{\bar{ \nu}}=-g_{A}^{f},\quad\gamma_{f}^{\bar{\nu}}=+g_{V}^{f},\quad\delta_{f}^{\bar {\nu}}=-g_{A}^{f}. \tag{4.69}\]
Then \(\mathbf{q}\)-independent factors (4.37) for \(V\)+\(A\) cross sections are
\[G_{\alpha}^{\bar{\nu}}(A) = g_{V}^{p}A_{p}+g_{V}^{n}A_{n}\equiv G_{V}(A),\qquad\Delta G_{ \alpha}^{\bar{\nu}}(A)=g_{V}^{p}\Delta A_{p}+g_{V}^{n}\Delta A_{n}\equiv \Delta G_{V}(A),\] \[G_{\gamma}^{\bar{\nu}}(A) = g_{V}^{p}A_{p}+g_{V}^{n}A_{n}=G_{V}(A),\qquad\Delta G_{\gamma} ^{\bar{\nu}}(A)=g_{V}^{p}\Delta A_{p}+g_{V}^{n}\Delta A_{n}=\Delta G_{V}(A),\] \[G_{\delta}^{\bar{\nu}}(A) = -(g_{A}^{p}A_{p}+g_{A}^{n}A_{n})\equiv-G_{A}(A),\quad\Delta G_{ \beta}^{\bar{\nu}}(A)=-(g_{A}^{p}\Delta A_{p}+g_{A}^{n}\Delta A_{n})=-\Delta G _{A}(A),\] \[G_{\beta}^{\bar{\nu}}(A) = -(g_{A}^{p}A_{p}+g_{A}^{n}A_{n})=-G_{A}(A),\quad\Delta G_{\delta} ^{\bar{\nu}}(A)=-(g_{A}^{p}\Delta A_{p}+g_{A}^{n}\Delta A_{n})\equiv-\Delta G _{A}(A).\]
After substituting these factors into formulas (4.30), expressions are obtained for the coherent cross sections of the massive lepton scattering on the nucleus due to the \(V\)+\(A\) interaction
\[\frac{d\sigma_{\rm coh,V+A}^{\mp\mp}(\mathbf{q})}{g_{c}\hat{c}_{A}dT_{A}} = \cos^{2}\frac{\theta}{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)[f_{\alpha+} (\theta)\mp f_{\gamma+}(\theta)]-G_{A}(A)[f_{\delta-}(\theta)\mp f_{\beta-}( \theta)] \tag{4.70}\] \[\qquad\qquad-\Delta G_{V}(A)[f_{\gamma-}(\theta)\mp f_{\alpha-}( \theta)]+\Delta G_{A}(A)[f_{\beta+}(\theta)\mp f_{\delta+}(\theta)]\Big{|}^{2};\] \[\frac{d\sigma_{\rm coh,V+A}^{\mp\pm}(\mathbf{q})}{g_{c}\hat{c}_{A}dT_{A}} = \sin^{2}\frac{\theta}{2}m_{\chi}^{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)(m +\lambda_{-}^{2}\cos^{2}\frac{\theta}{2})+G_{A}(A)\lambda_{-}^{2}\cos^{2} \frac{\theta}{2}\pm\Delta G_{A}(A)m+\] \[\qquad+[\Delta G_{V}(A)+\Delta G_{A}(A)]\lambda_{+}\lambda_{-}\cos^ {2}\frac{\theta}{2}\Big{|}^{2}.\]
It can be seen that these formulas "correspond" to antineutrinos (the helicity is positive for the weak \(V\)+\(A\) current). The "++"-helicity states correspond to the sum \(f_{\alpha+}(\theta)+f_{\gamma+}(\theta)\propto(E_{\chi}+|\mathbf{k}|)\), which does not vanish in the massless and nonrelativistic cases when \(E_{\chi}\simeq|\mathbf{k}|\). If a \(\chi\) particle with the "\(--\)"-helicity "wants" to interact via the weak \(V\)+\(A\) current (like a neutrino), then its coherent cross section (first formula of (4.70)) will be suppressed by the factor \((E_{\chi}-|\mathbf{k}|)^{2}\) and vanishes when \(E_{\chi}\simeq|\mathbf{k}|\).
Therefore, coherent \(\chi A\) cross sections for the "antineutrino-like" (++, \(-+\) and \(V\)+\(A\) indices) and "neutrino-like" (\(--\), \(+-\) and \(V\)\(-A\) indices) interactions of the massive \(\chi\) lepton with the nucleus have the form
\[\frac{d\sigma^{\mp\mp}_{\rm coh,V\mp A}}{g_{c}\hat{c}_{A}dT_{A}} = \cos^{2}\frac{\theta}{2}(|\mathbf{k}|+E_{\chi})^{2}F^{2}(\mathbf{q}) \Big{|}G_{V}(A)(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}+|\mathbf{ k}|)\pm G_{A}(A)(E_{p}-m)\sin^{2}\frac{\theta}{2}\] \[\pm\Delta G_{V}(A)|\mathbf{k}|\sin^{2}\frac{\theta}{2}+\Delta G_{A}( A)(E_{p}+|\mathbf{k}|\cos^{2}\frac{\theta}{2})\Big{|}^{2}; \tag{4.1}\] \[\frac{d\sigma^{\pm\mp}_{\rm coh,V\mp A}}{g_{c}\hat{c}_{A}dT_{A}} = m_{\chi}^{2}\sin^{2}\frac{\theta}{2}F^{2}(\mathbf{q})\Big{|}G_{V}(A)(m \sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2})\mp G_{A}(A)(E_{p}-m) \cos^{2}\frac{\theta}{2}\] \[\mp\Delta G_{V}(A)|\mathbf{k}|\cos^{2}\frac{\theta}{2}+\Delta G_{A}( A)(m+|\mathbf{k}|\cos^{2}\frac{\theta}{2})\Big{|}^{2}.\]
It should be emphasized that each of formulas (4.1) contains only two possible expressions, when only superscripts or only subscripts are _simultaneously_ taken for the cross section (spin indices are \(\mp\mp\)) and for the current (\(\mp\)).
Note that, as previously for neutrinos, at (extremely) small momenta \(|\mathbf{k}_{\chi}|^{2}\ll m_{\chi}^{2}\) of the (right-handed) lepton incident on the nucleus, one has \(F^{2}(\mathbf{q})\simeq 1\) and \(\sqrt{s}\simeq m+m_{\chi}\), \(|\mathbf{k}|\simeq 0\), \(E_{p}\simeq m\); hence the coherent \(\chi A\) cross sections with spin flip of the (left- and right-handed) lepton _are the same_:
\[\frac{d\sigma^{\pm\mp}_{\rm coh,V\mp A}(\mathbf{q})}{g_{c}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2} }\sin^{2}\frac{\theta}{2}\Big{|}G_{V}(A)+\Delta G_{A}(A)\Big{|}^{2}. \tag{4.2}\]
Thus, if \(\frac{|\mathbf{k}_{\chi}|^{2}}{m_{\chi}^{2}}\equiv\Theta\ll 1\), the coherent \(\chi A\) cross sections with spin flip (of both the left- and right-handed massive lepton) are enhanced not only by the influence of the nucleus, but also by a sufficiently large "particle flux" factor \(\Theta^{-1}\). These observations _look new_ and are probably important for the detection of relic massive (anti)neutrinos with extremely low energies (\(5\div 6\times 10^{-4}\) eV). Despite the rather enhanced cross section, the old question of _what we are going to measure_ remains open.
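For orientation, a rough numerical illustration (the neutrino mass used here is an assumed value, not taken from the text): for \(m_{\chi}=0.1\) eV and a relic kinetic energy \(T_{0}=5\times 10^{-4}\) eV one has, nonrelativistically,

\[\Theta=\frac{|\mathbf{k}_{\chi}|^{2}}{m_{\chi}^{2}}\simeq\frac{2T_{0}}{m_{\chi}}=\frac{2\times 5\times 10^{-4}\ \text{eV}}{0.1\ \text{eV}}=10^{-2},\qquad\Theta^{-1}\simeq 10^{2},\]

so the spin-flip cross section above would be enhanced by roughly two orders of magnitude on top of the coherent nuclear enhancement.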
From the general formula (4.3) one can obtain the fully averaged coherent \(\chi A\) cross section _on the spinless nucleus_ due to the \(V\mp A\) interaction, i.e. when the nucleon coupling constants (4.61) and (4.69) are
\[\alpha_{V\mp A}=+g_{V},\ \beta_{V\mp A}=-g_{A},\ \gamma_{V\mp A}=\mp g_{V},\ \delta_{V\mp A}=\pm g_{A}. \tag{4.3}\]
The \(\mathbf{q}\)-dependent effective coupling constants from (4.31) take the "\(V\mp A\)" form
\[G_{\alpha_{\mp}}(A,\mathbf{q})= \sum(+g_{V}^{f})A_{f}F_{f}(\mathbf{q})\equiv+G_{V}(A,\mathbf{q}),\ \ G_{\gamma_{\mp}}(A,\mathbf{q})=\sum(\mp g_{V}^{f})A_{f}F_{f}(\mathbf{q})\equiv\mp G_{V }(A,\mathbf{q}), \tag{4.74}\] \[G_{\beta_{\mp}}(A,\mathbf{q})= \sum(-g_{A}^{f})A_{f}F_{f}(\mathbf{q})=-G_{A}(A,\mathbf{q}),\ \ G_{\delta_{\mp}}(A,\mathbf{q})=\sum(\pm g_{A}^{f})A_{f}F_{f}(\mathbf{q})\equiv\pm G_{ A}(A,\mathbf{q}).\]
Similar to (4.74), spin-dependent factors are
\[\Delta G_{\alpha_{\mp}}(A,\mathbf{q}) = \sum(+g_{V}^{f})\Delta A_{f}F_{f}(\mathbf{q})\equiv+\Delta G_{V}(A, \mathbf{q}),\] \[\Delta G_{\gamma_{\mp}}(A,\mathbf{q}) = \sum(\mp g_{V}^{f})\Delta A_{f}F_{f}(\mathbf{q})\equiv\mp\Delta G_{V }(A,\mathbf{q}),\] \[\Delta G_{\beta_{\mp}}(A,\mathbf{q}) = \sum(-g_{A}^{f})\Delta A_{f}F_{f}(\mathbf{q})=-\Delta G_{A}(A,\mathbf{q}),\] \[\Delta G_{\delta_{\mp}}(A,\mathbf{q}) = \sum(\pm g_{A}^{f})\Delta A_{f}F_{f}(\mathbf{q})\equiv\pm\Delta G_{A} (A,\mathbf{q}).\]
Substitution of (4.74) in (4.34) gives _the fully averaged coherent \(V\mp A\) cross section_ for the scattering of the massive lepton on the spinless nucleus \((\Delta A_{f}=0)\):
\[\frac{d\sigma^{\rm total}_{\rm coh,V\mp A,0}}{g_{c}\hat{c}_{A}dT_{ A}} = \cos^{2}\frac{\theta}{2}\Big{[}G_{V}(A,\mathbf{q})E_{\chi}\big{(}E_{ p}\cos^{2}\frac{\theta}{2}+m\sin^{2}\frac{\theta}{2}+\frac{|\mathbf{k}|^{2}}{E_{ \chi}}\big{)}\pm G_{A}(A,\mathbf{q})E_{\chi}(E_{p}-m)\sin^{2}\frac{\theta}{2}\Big{]} ^{2} \tag{4.75}\] \[+ \cos^{2}\frac{\theta}{2}\Big{[}G_{V}(A,\mathbf{q})|\mathbf{k}|(E_{p}\cos^ {2}\frac{\theta}{2}+m\sin^{2}\frac{\theta}{2}+E_{\chi})\pm G_{A}(A,\mathbf{q})|\bm {k}|(E_{p}-m)\sin^{2}\frac{\theta}{2}\Big{]}^{2}\] \[+ m_{\chi}^{2}\sin^{2}\frac{\theta}{2}\Big{[}G_{V}(A,\mathbf{q})(E_{p} \cos^{2}\frac{\theta}{2}+m\sin^{2}\frac{\theta}{2})\mp G_{A}(A,\mathbf{q})(E_{p}-m )\cos^{2}\frac{\theta}{2}\Big{]}^{2}.\]
This coherent \(V\mp A\) cross section can be expanded in \(\mathbf{q}\)-dependent effective constants
\[\frac{d\sigma^{\rm total}_{\rm coh,V\mp A,0}}{g_{c}dT_{A}}=\frac{G_{F}^{2}m_{A }}{4\pi m^{2}|\mathbf{k}_{\chi}|^{2}}\big{[}G_{V}^{2}(A,\mathbf{q})T_{V}(\theta)+G_{A} ^{2}(A,\mathbf{q})T_{A}(\theta)\pm 2G_{V}(A,\mathbf{q})G_{A}(A,\mathbf{q})T_{M}(\theta)\big{]}, \tag{4.76}\]
where the coefficients of this expansion are
\[T_{A} = (E_{p}-m)^{2}\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}(2| \mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}+m_{\chi}^{2}),\] \[T_{M} = 2(E_{p}-m)\sin^{2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2}|\mathbf{k} |^{2}(E_{\chi}+E_{p}\cos^{2}\frac{\theta}{2}+m\sin^{2}\frac{\theta}{2}),\] \[T_{V} = (E_{p}\cos^{2}\frac{\theta}{2}+m\sin^{2}\frac{\theta}{2})^{2}(2| \mathbf{k}|^{2}\cos^{2}\frac{\theta}{2}+m_{\chi}^{2})+\cos^{2}\frac{\theta}{2}|\bm {k}|^{2}\big{[}m_{\chi}^{2}+2|\mathbf{k}|^{2}+4E_{\chi}(E_{p}\cos^{2}\frac{\theta}{2 }+m\sin^{2}\frac{\theta}{2})\big{]}.\]
Formula (4.76) gives the coherent \(\chi A\) cross section (\(\Delta A_{f}=0\)) for \(V\mp A\) variants of the weak interaction of the massive \(\chi\) lepton with the nucleus, fully summed over the helicities of the final lepton and averaged over the helicities of the initial lepton.
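A compact numerical sketch of the expansion (4.76), with the coefficients \(T_{V}\), \(T_{A}\), \(T_{M}\) transcribed from the expressions above (here \(G_{V}\), \(G_{A}\) stand for the \(\mathbf{q}\)-dependent \(G_{V}(A,\mathbf{q})\), \(G_{A}(A,\mathbf{q})\); all numerical inputs are illustrative and assumed to come from a kinematics routine like the one sketched earlier):

```python
import math

def dsigma_coh_total_VA(GF, mA, GV, GA, m, mchi, k, E_p, E_chi,
                        theta, kchi2, current="V-A"):
    """Sketch of the fully averaged coherent cross section (4.76) on a
    spinless nucleus, using the expansion coefficients T_V, T_A, T_M."""
    sh2 = math.sin(theta / 2) ** 2
    ch2 = 1.0 - sh2
    T_A = (E_p - m)**2 * ch2 * sh2 * (2 * k**2 * sh2 + mchi**2)
    T_M = 2 * (E_p - m) * sh2 * ch2 * k**2 * (E_chi + E_p * ch2 + m * sh2)
    T_V = ((E_p * ch2 + m * sh2)**2 * (2 * k**2 * ch2 + mchi**2)
           + ch2 * k**2 * (mchi**2 + 2 * k**2
                           + 4 * E_chi * (E_p * ch2 + m * sh2)))
    sign = +1.0 if current == "V-A" else -1.0   # sign of the interference
    pref = GF**2 * mA / (4 * math.pi * m**2 * kchi2)
    return pref * (GV**2 * T_V + GA**2 * T_A + sign * 2 * GV * GA * T_M)

# Illustrative evaluation (arbitrary units):
print(dsigma_coh_total_VA(GF=1.0, mA=120.0, GV=-15.0, GA=0.6,
                          m=0.938, mchi=0.05, k=0.056, E_p=0.944,
                          E_chi=0.075, theta=0.8, kchi2=0.0035))
```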
To verify the calculations, let us consider a transition of the general formulas of coherent scattering to the limit of \(V\mp A\) currents. To this end, we substitute \(V\mp A\) parameters from (4.74) into the general formulas for the cross sections (4.30) in the following form:
\[G_{\alpha_{\mp}}(A,\mathbf{q}) = [+G_{V}(\mathbf{q})],\qquad G_{\gamma_{\mp}}(A,\mathbf{q})=[\mp G_{V}(\mathbf{ q})],\] \[G_{\beta_{\mp}}(A,\mathbf{q}) = [-G_{A}(\mathbf{q})],\qquad G_{\delta_{\mp}}(A,\mathbf{q})=[\pm G_{A}(\mathbf{ q})];\] \[\Delta G_{\alpha_{\mp}}(A,\mathbf{q}) = [+\Delta G_{V}(\mathbf{q})],\quad\Delta G_{\gamma_{\mp}}(A,\mathbf{q})=[ \mp\Delta G_{V}(\mathbf{q})],\] \[\Delta G_{\beta_{\mp}}(A,\mathbf{q}) = [-\Delta G_{A}(\mathbf{q})],\quad\Delta G_{\delta_{\mp}}(A,\mathbf{q})=[ \pm\Delta G_{A}(\mathbf{q})].\]
The result is _all_ coherent \(\chi A\) cross sections in the \(V\mp A\) approximation
\[\frac{d\sigma^{\mp\mp}_{\text{coh},\mathbf{V}-\mathbf{A}}}{\cos^{2}\frac{\theta}{2}g_{\text{c}}\hat{c}_{A}dT_{A}} = \Big{|}(f_{\alpha+}(\theta)\pm f_{\gamma+}(\theta))G_{V}(\mathbf{q})+(f_{\delta-}(\theta)\pm f_{\beta-}(\theta))G_{A}(\mathbf{q})\] \[+(f_{\gamma-}(\theta)\pm f_{\alpha-}(\theta))\Delta G_{V}(\mathbf{q})+(f_{\beta+}(\theta)\pm f_{\delta+}(\theta))\Delta G_{A}(\mathbf{q})\Big{|}^{2},\] \[\frac{d\sigma^{\mp\pm}_{\text{coh},\mathbf{V}-\mathbf{A}}}{\sin^{2}\frac{\theta}{2}g_{\text{c}}\hat{c}_{A}dT_{A}} = \Big{|}G_{V}(\mathbf{q})\hat{f}_{\alpha}(\theta)-G_{A}(\mathbf{q})\hat{f}_{\delta-}(\theta)-\Delta G_{V}(\mathbf{q})\hat{f}_{\beta\gamma}(\theta)+(\hat{f}_{\beta\gamma}(\theta)\mp\hat{f}_{\delta+}(\theta))\Delta G_{A}(\mathbf{q})\Big{|}^{2};\] \[\frac{d\sigma^{\mp\mp}_{\text{coh},\mathbf{V}+\mathbf{A}}}{\cos^{2}\frac{\theta}{2}g_{\text{c}}\hat{c}_{A}dT_{A}} = \Big{|}(f_{\alpha+}(\theta)\mp f_{\gamma+}(\theta))G_{V}(\mathbf{q})-(f_{\delta-}(\theta)\mp f_{\beta-}(\theta))G_{A}(\mathbf{q})\] \[-(f_{\gamma-}(\theta)\mp f_{\alpha-}(\theta))\Delta G_{V}(\mathbf{q})+(f_{\beta+}(\theta)\mp f_{\delta+}(\theta))\Delta G_{A}(\mathbf{q})\Big{|}^{2},\] \[\frac{d\sigma^{\mp\pm}_{\text{coh},\mathbf{V}+\mathbf{A}}}{\sin^{2}\frac{\theta}{2}g_{\text{c}}\hat{c}_{A}dT_{A}} = \Big{|}G_{V}(\mathbf{q})\hat{f}_{\alpha}(\theta)+G_{A}(\mathbf{q})\hat{f}_{\delta-}(\theta)+\Delta G_{V}(\mathbf{q})\hat{f}_{\beta\gamma}(\theta)+(\hat{f}_{\beta\gamma}(\theta)\pm\hat{f}_{\delta+}(\theta))\Delta G_{A}(\mathbf{q})\Big{|}^{2}.\]
Since for \(f\)-form factors from (4.62) for \(V\mp A\) we have
\[f_{\alpha+}(\theta)\pm f_{\gamma+}(\theta) = (E_{\chi}\pm|\mathbf{k}|)(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2} \frac{\theta}{2}\pm|\mathbf{k}|),\ \ f_{\gamma-}(\theta)\pm f_{\alpha-}(\theta)=(E_{\chi}\pm|\mathbf{k}|)|\mathbf{k}| \sin^{2}\frac{\theta}{2},\] \[f_{\delta-}(\theta)\pm f_{\beta-}(\theta) = (E_{\chi}\pm|\mathbf{k}|)(E_{p}-m)\sin^{2}\frac{\theta}{2},\ \ f_{\beta+}(\theta)\pm f_{\delta+}(\theta)=(E_{\chi}\pm|\mathbf{k}|)(|\mathbf{k}| \cos^{2}\frac{\theta}{2}\pm E_{p}),\]
and according to (4.27)
\[\hat{f}_{\alpha}(\theta)=m_{\chi}(m+(E_{p}-m)\cos^{2}\frac{\theta}{2}),\,\hat {f}_{\beta\gamma}(\theta)=m_{\chi}|\mathbf{k}|\cos^{2}\frac{\theta}{2},\,\hat{f}_ {\delta-}(\theta)=m_{\chi}(E_{p}-m)\cos^{2}\frac{\theta}{2},\,\hat{f}_{\delta+ }(\theta)=m_{\chi}m,\]
then _the complete set_ of coherent \(\chi A\) cross sections for \(V-A\)- and \(V+A\)-weak currents is
\[\frac{d\sigma^{\mp\mp}_{\text{coh},V-A}}{g_{\text{c}}\hat{c}_{A} dT_{A}} = \cos^{2}\frac{\theta}{2}(E_{\chi}\pm|\mathbf{k}|)^{2}\Big{|}(m\sin^{2} \frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}\pm|\mathbf{k}|)G_{V}(\mathbf{q})\] \[+(E_{p}-m)\sin^{2}\frac{\theta}{2}G_{A}(\mathbf{q})+|\mathbf{k}|\sin^{2} \frac{\theta}{2}\Delta G_{V}(\mathbf{q})+(|\mathbf{k}|\cos^{2}\frac{\theta}{2}\pm E_{p} )\Delta G_{A}(\mathbf{q})\Big{|}^{2},\] \[\frac{d\sigma^{\mp\mp}_{\text{coh},V+A}}{g_{\text{c}}\hat{c}_{A} dT_{A}} = \cos^{2}\frac{\theta}{2}(E_{\chi}\mp|\mathbf{k}|)^{2}\Big{|}(m\sin^{2} \frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}\mp|\mathbf{k}|)G_{V}(\mathbf{q})-(E_{p}- m)\sin^{2}\frac{\theta}{2}G_{A}(\mathbf{q}) \tag{4.77}\] \[-|\mathbf{k}|\sin^{2}\frac{\theta}{2}\Delta G_{V}(\mathbf{q})+(|\mathbf{k}| \cos^{2}\frac{\theta}{2}\mp E_{p})\Delta G_{A}(\mathbf{q})\Big{|}^{2};\] \[\frac{d\sigma^{\mp\pm}_{\text{coh},V-A}}{g_{\text{c}}\hat{c}_{A} dT_{A}} = \sin^{2}\frac{\theta}{2}m_{\chi}^{2}\Big{|}m[G_{V}(\mathbf{q})\mp\Delta G _{A}(\mathbf{q})]+[G_{V}(\mathbf{q})-G_{A}(\mathbf{q})](E_{p}-m)\cos^{2}\frac{\theta}{2}\] \[+[\Delta G_{A}(\mathbf{q})-\Delta G_{V}(\mathbf{q})]|\mathbf{k}|\cos^{2}\frac{ \theta}{2}\Big{|}^{2},\] \[\frac{d\sigma^{\mp\pm}_{\text{coh},V+A}}{g_{\text{c}}\hat{c}_{A} dT_{A}} = \sin^{2}\frac{\theta}{2}m_{\chi}^{2}\Big{|}m[G_{V}(\mathbf{q})\pm\Delta G _{A}(\mathbf{q})]+[G_{V}(\mathbf{q})+G_{A}(\mathbf{q})](E_{p}-m)\cos^{2}\frac{\theta}{2}\] \[+[\Delta G_{V}(\mathbf{q})+\Delta G_{A}(\mathbf{q})]|\mathbf{k}|\cos^{2}\frac{ \theta}{2}\Big{|}^{2}.\]
The expressions for the cross sections for the spinless nucleus (\(\Delta A_{f}=0\)) can be obtained from (4.77) by discarding terms proportional to \(\Delta G\). They can be used, for example, to directly check the expansion of the fully averaged coherent \(V\mp A\) cross sections on the spinless nucleus (4.76) in effective \(\mathbf{q}\)-dependent coupling constants. Indeed, at \(\Delta A_{f}=0\) the \(V\!-\!A\) interaction has the following expansions in effective \(\mathbf{q}\)-dependent coupling constants:
\[\frac{d\sigma^{\mp\mp}_{\rm coh,V-A,0}}{g_{\rm c}\hat{c}_{A}dT_{ A}} = \cos^{2}\frac{\theta}{2}(E_{\chi}\pm|\mathbf{k}|)^{2}\Big{[}G_{V}^{2 }(\mathbf{q})(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2}\pm|\mathbf{k}|)^ {2}+G_{A}^{2}(\mathbf{q})(E_{p}-m)^{2}\sin^{4}\frac{\theta}{2}\] \[+2G_{V}(\mathbf{q})G_{A}(\mathbf{q})(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^ {2}\frac{\theta}{2}\pm|\mathbf{k}|)(E_{p}-m)\sin^{2}\frac{\theta}{2}\Big{]},\] \[\frac{d\sigma^{\mp\pm}_{\rm coh,V-A,0}}{g_{\rm c}\hat{c}_{A}dT_{ A}} = \sin^{2}\frac{\theta}{2}m_{\chi}^{2}\Big{[}G_{V}^{2}(\mathbf{q})(m \sin^{2}\frac{\theta}{2}+E_{p}\cos^{2}\frac{\theta}{2})^{2}+G_{A}^{2}(\mathbf{q}) (E_{p}-m)^{2}\cos^{4}\frac{\theta}{2}\] \[-2G_{V}(\mathbf{q})G_{A}(\mathbf{q})(m\sin^{2}\frac{\theta}{2}+E_{p}\cos^ {2}\frac{\theta}{2})(E_{p}-m)\cos^{2}\frac{\theta}{2}\Big{]}.\]
Then the averaged coherent "\(V\!-\!A\)" cross section, which is the sum (divided by 2) of the four terms mentioned above, can be written as
\[\frac{d\sigma^{\rm total}_{\rm coh,V-A,0}}{g_{\rm c}\hat{c}_{A}dT_{ A}} \equiv\frac{1}{2}\sum_{s^{\prime}s=\pm}\frac{d\sigma^{s^{\prime}s}_{\rm coh,V-A,0}} {g_{\rm c}\hat{c}_{A}dT_{A}}\equiv G_{V}^{2}(\mathbf{q})C_{V}(\theta)+G_{A}^{2}( \mathbf{q})C_{A}(\theta)+2G_{V}(\mathbf{q})G_{A}(\mathbf{q})C_{M}(\theta).\]
For the averaged coherent \(V\!+\!A\) cross section, exactly the same expansion is obtained but with the minus sign in front of the interference term (see formula (4.76)). As a result, with (4.12), the averaged coherent \(V\mp A\) cross section on the spinless nucleus takes the form
\[\frac{d\sigma^{\rm total}_{\rm coh,V\mp A,0}}{g_{\rm c}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\Big{\{}G_{V}^{2}(\mathbf{q})\frac{C_{V}( \theta)}{|\mathbf{k}|^{2}s}+G_{A}^{2}(\mathbf{q})\frac{C_{A}(\theta)}{|\mathbf{k}|^{2}s} \pm 2G_{V}(\mathbf{q})G_{A}(\mathbf{q})\frac{C_{M}(\theta)}{|\mathbf{k}|^{2}s}\Big{\}}. \tag{4.78}\]
Direct calculations give expressions for the parameters \(C_{A}(\theta)\), \(C_{V}(\theta)\) and \(C_{M}(\theta)\) coinciding with \(T(\theta)\)-parameters from (4.76), although they were obtained in completely different ways.
Finally, in the _massless_ \(\chi\)-lepton limit, when \(m_{\chi}=0\), \(\frac{E_{p}}{\sqrt{s}}=\frac{s+m^{2}}{2s}\), \(\frac{|\mathbf{k}|}{\sqrt{s}}=\frac{E_{\chi}}{\sqrt{s}}=\frac{s-m^{2}}{2s}\), the \(C_{V,A,M}\)-parameters from (4.78) each contain only one term, which corresponds to the conservation of helicity of the neutrino (\(V\!-\!A\)) or antineutrino (\(V\!+\!A\)). They are
\[\frac{C_{A}(\theta)}{|\mathbf{k}|^{2}s} = \frac{1}{2}\cos^{2}\frac{\theta}{2}\hat{C}_{A}(\theta)_{0}=\frac{ 1}{2}\cos^{2}\frac{\theta}{2}\Big{(}1-\frac{m}{\sqrt{s}}\Big{)}^{4}\sin^{4} \frac{\theta}{2},\] \[\frac{C_{M}(\theta)}{|\mathbf{k}|^{2}s} = \frac{1}{2}\cos^{2}\frac{\theta}{2}\hat{C}_{M}(\theta)_{0}=\frac{ 1}{2}\cos^{2}\frac{\theta}{2}\Big{(}1-\frac{m}{\sqrt{s}}\Big{)}^{2}\sin^{2} \frac{\theta}{2}\Big{[}2-\sin^{2}\frac{\theta}{2}\Big{(}1-\frac{m}{\sqrt{s}} \Big{)}^{2}\Big{]},\] \[\frac{C_{V}(\theta)}{|\mathbf{k}|^{2}s} = \frac{1}{2}\cos^{2}\frac{\theta}{2}\hat{C}_{V}(\theta)_{0}=\frac{ 1}{2}\cos^{2}\frac{\theta}{2}\Big{[}2-\sin^{2}\frac{\theta}{2}\Big{(}1-\frac{m }{\sqrt{s}}\Big{)}^{2}\Big{]}^{2}.\]
Then for the spinless nucleus and \(m_{\chi}=0\), the total coherent cross section (4.78) can be written as
\[\frac{d\sigma^{\rm total}_{\rm coh,V\mp A,00}}{g_{\rm c}dT_{A}}= \frac{G_{F}^{2}m_{A}}{8\pi}\cos^{2}\frac{\theta}{2}\Big{\{}G_{V}^{2}(\mathbf{q}) \hat{C}_{V}(\theta)_{0}+G_{A}^{2}(\mathbf{q})\hat{C}_{A}(\theta)_{0}\pm 2G_{V}(\mathbf{q})G_{A}(\mathbf{q}) \hat{C}_{M}(\theta)_{0}\Big{\}}. \tag{4.79}\]
In fact, for the massless lepton, the "completely averaged" coherent cross section contains only one term from each formula (4.77). For the \(V\!-\!A\) current, it is _half_ the cross section with "\(--\)" helicity ("\(-\)" superscripts); for the \(V\!+\!A\) current, it is _half_ the cross section with "\(++\)" helicity ("\(+\)" superscripts) from (4.77):
\[\frac{d\sigma^{\rm total}_{\rm coh,V\mp A,00}}{g_{\rm c}dT_{A}}\equiv\frac{1} {2}\sum_{s^{\prime}s}\frac{d\sigma^{s^{\prime}s}_{\rm coh}}{g_{\rm c}dT_{A}}= \frac{1}{2}\frac{d\sigma^{\mp\mp}_{\rm coh,V\mp A,00}}{g_{\rm c}dT_{A}}=\frac{ 1}{2}\cos^{2}\frac{\theta}{2}\frac{G_{F}^{2}m_{A}}{\pi}\Big{[}G_{V}(\mathbf{q})T_{ V,0}(\theta)\pm G_{A}(\mathbf{q})T_{A,0}\Big{]}^{2}.\]
Parameters \(T_{V,0}(\theta)\) and \(T_{A,0}(\theta)\) for \(m_{\chi}=0\) take the form
\[T_{V,0}(\theta) \equiv \frac{m}{\sqrt{s}}\sin^{2}\frac{\theta}{2}+\frac{E_{p}}{\sqrt{s} }\cos^{2}\frac{\theta}{2}+\frac{|\mathbf{k}|}{\sqrt{s}}\to 1-\Big{(}1-\frac{m}{ \sqrt{s}}\Big{)}^{2}\frac{\sin^{2}\frac{\theta}{2}}{2},\] \[T_{A,0}(\theta) \equiv \Big{(}\frac{E_{p}}{\sqrt{s}}-\frac{m}{\sqrt{s}}\Big{)}\sin^{2} \frac{\theta}{2}\to\Big{(}1-\frac{m}{\sqrt{s}}\Big{)}^{2}\frac{\sin^{2}\frac{ \theta}{2}}{2}.\]
The parameters \(T_{V,0}\) and \(T_{A,0}\) completely correspond to the expression for the coherent cross section of the interaction of the massless neutrino with the nucleus from [3]. The latter means that the averaged coherent cross section of the \(V\!-\!A\) interaction of the massless left-handed lepton with the spinless nucleus (4.79) is exactly half of the purely "neutrino" cross section from [3].
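The equivalence of these two presentations is easy to verify numerically. The following Python sketch is illustrative only (the normalization factor \(g_{\rm c}\) is set to unity and the common prefactor \(G_{F}^{2}m_{A}/\pi\) is dropped); it evaluates the bracket form (4.79) and the squared \(T\)-parameter form above and confirms that they agree for arbitrary \(G_{V}\), \(G_{A}\) and both sign choices:

```python
def coherent_479(GV, GA, x, r, sign=+1):
    """Massless, spinless-nucleus coherent form (4.79), in units of G_F^2 m_A / pi.
    x = sin^2(theta/2), r = m/sqrt(s); sign = +1 for V-A, -1 for V+A."""
    X = (1.0 - r) ** 2 * x      # recurring combination (1 - m/sqrt(s))^2 sin^2(theta/2)
    C_V, C_A, C_M = (2.0 - X) ** 2, X ** 2, X * (2.0 - X)
    return (1.0 - x) / 8.0 * (GV**2 * C_V + GA**2 * C_A + sign * 2.0 * GV * GA * C_M)

def coherent_T(GV, GA, x, r, sign=+1):
    """Equivalent squared form (1/2) cos^2(theta/2) [GV*T_V0 +- GA*T_A0]^2, same units."""
    T_A0 = 0.5 * (1.0 - r) ** 2 * x
    T_V0 = 1.0 - T_A0
    return 0.5 * (1.0 - x) * (GV * T_V0 + sign * GA * T_A0) ** 2

# the two presentations agree for arbitrary form-factor values and both currents
for x in (0.1, 0.5, 0.9):
    for r in (0.0, 0.3, 0.8):
        for sign in (+1, -1):
            assert abs(coherent_479(1.2, -0.7, x, r, sign)
                       - coherent_T(1.2, -0.7, x, r, sign)) < 1e-12
```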
Incoherent \(\chi A\) cross sections for \(V\!\mp\!A\) weak currents. Recall that the nucleon constants in the case of \(V\!\mp\!A\) weak currents have the form
\[\alpha_{V\mp\!A}\equiv\alpha_{\mp}=+g_{V},\ \beta_{\mp}=-g_{A},\ \gamma_{\mp}=\mp g_{V},\ \delta_{\mp}=\pm g_{A}.\]
Then the general expansions of the incoherent \(\chi A\) cross sections go into the following "\(V\mp A\)" expressions for _scattering without lepton spin flip_:
\[\frac{d\sigma^{\mp\mp}_{\rm inc,V\!-\!A}}{g_{i}\hat{c}_{A}dT_{A}} = \frac{1}{2}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}\Big[g_{V}^{2}\widetilde{V}_{\pm}^{+}(\theta)+g_{A}^{2}\widetilde{A}_{\pm}^{+}(\theta)+g_{V}g_{A}\widetilde{M}_{\pm}^{+}(\theta) \tag{4.80}\] \[\qquad\pm\frac{\Delta A_{f}}{A_{f}}\big\{g_{V}g_{A}\widetilde{M}_{\pm}^{-}(\theta)-g_{V}^{2}\widetilde{V}_{\pm}^{-}(\theta)-g_{A}^{2}\widetilde{A}_{\pm}^{-}(\theta)\big\}\Big],\] \[\frac{d\sigma^{\mp\mp}_{\rm inc,V\!+\!A}}{g_{i}\hat{c}_{A}dT_{A}} = \frac{1}{2}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}\Big[g_{V}^{2}\widetilde{V}_{\mp}^{+}(\theta)+g_{A}^{2}\widetilde{A}_{\mp}^{+}(\theta)-g_{V}g_{A}\widetilde{M}_{\mp}^{+}(\theta)\] \[\qquad\mp\frac{\Delta A_{f}}{A_{f}}\big\{g_{V}g_{A}\widetilde{M}_{\mp}^{-}(\theta)+g_{V}^{2}\widetilde{V}_{\mp}^{-}(\theta)+g_{A}^{2}\widetilde{A}_{\mp}^{-}(\theta)\big\}\Big].\]
In (4.80), the following notations are introduced:
\[\frac{\widetilde{V}^{-}_{\pm}(\theta)}{2} = \frac{F^{-}_{\alpha^{2}}(\theta)}{2}+\frac{F^{-}_{\gamma^{2}}(\theta)}{2}\pm\frac{F^{-}_{\alpha\gamma}(\theta)}{2}=-2(E_{\chi}\pm|\mathbf{k}|)^{2}(|\mathbf{k}|\pm E_{p}\cos^{2}\frac{\theta}{2})|\mathbf{k}|\sin^{2}\frac{\theta}{2},\] \[\frac{\widetilde{A}^{-}_{\pm}(\theta)}{2} = \frac{F^{-}_{\beta^{2}}(\theta)}{2}+\frac{F^{-}_{\delta^{2}}(\theta)}{2}\pm\frac{F^{-}_{\beta\delta}(\theta)}{2}=-2(E_{\chi}\pm|\mathbf{k}|)^{2}(E_{p}\pm|\mathbf{k}|\cos^{2}\frac{\theta}{2})E_{p}\sin^{2}\frac{\theta}{2},\] \[\frac{\widetilde{M}^{-}_{\pm}(\theta)}{4} = \frac{F^{-}_{\alpha\delta}(\theta)}{4}+\frac{F^{-}_{\beta\gamma}(\theta)}{4}\pm\Big[\frac{F^{-}_{\alpha\beta}(\theta)}{4}+\frac{F^{-}_{\gamma\delta}(\theta)}{4}\Big]=(E_{\chi}\pm|\mathbf{k}|)^{2}\big[(E_{p}\pm|\mathbf{k}|)^{2}\cos^{2}\frac{\theta}{2}\pm 2E_{p}|\mathbf{k}|\sin^{4}\frac{\theta}{2}\big],\] \[\frac{\widetilde{M}^{+}_{\pm}(\theta)}{4} = \frac{F^{+}_{\alpha\delta}(\theta)}{4}+\frac{F^{+}_{\beta\gamma}(\theta)}{4}\pm\Big[\frac{F^{+}_{\alpha\beta}(\theta)}{4}+\frac{F^{+}_{\gamma\delta}(\theta)}{4}\Big]=2(E_{\chi}\pm|\mathbf{k}|)^{2}(|\mathbf{k}|\cos^{2}\frac{\theta}{2}\pm E_{p})|\mathbf{k}|\sin^{2}\frac{\theta}{2}, \tag{4.81}\] \[\frac{\widetilde{A}^{+}_{\pm}(\theta)}{2} = \frac{F^{+}_{\beta^{2}}(\theta)}{2}+\frac{F^{+}_{\delta^{2}}(\theta)}{2}\pm\frac{F^{+}_{\beta\delta}(\theta)}{2}=(E_{\chi}\pm|\mathbf{k}|)^{2}[(E_{p}\pm|\mathbf{k}|)^{2}\cos^{2}\frac{\theta}{2}+2(E_{p}^{2}-|\mathbf{k}|^{2}\cos^{2}\frac{\theta}{2})\sin^{2}\frac{\theta}{2}],\] \[\frac{\widetilde{V}^{+}_{\pm}(\theta)}{2} = \frac{F^{+}_{\alpha^{2}}(\theta)}{2}+\frac{F^{+}_{\gamma^{2}}(\theta)}{2}\pm\frac{F^{+}_{\alpha\gamma}(\theta)}{2}=(E_{\chi}\pm|\mathbf{k}|)^{2}\big[2|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+(E_{p}\pm|\mathbf{k}|)^{2}\cos^{2}\frac{\theta}{2}\big].\]
Form-factor expressions (4.81) completely determine the incoherent \(\chi A\) cross sections _without spin flip_ of the massive \(\chi\) lepton (4.80) in the case of the \(V\mp A\) interaction. It can be seen that all parameters with the subscript "\(+\)" contain the common factor \((E_{\chi}+|\mathbf{k}|)^{2}\), while all parameters with the subscript "\(-\)" are proportional to the common factor \((E_{\chi}-|\mathbf{k}|)^{2}\), which makes the corresponding cross sections vanish in the relativistic limit, when \(E_{\chi}\simeq|\mathbf{k}|\). Since
\[2^{4}c_{A}(E_{\chi}\pm|\mathbf{k}|)^{2}=\frac{G_{F}^{2}m_{A}}{4\pi}\frac{(E_{\chi} \pm|\mathbf{k}|)^{2}}{m^{2}|\mathbf{k}_{\chi}|^{2}}=\frac{G_{F}^{2}m_{A}}{4\pi}\frac{ \varkappa_{\pm}(s,m_{\chi}^{2})}{s},\]
where, by analogy with (4.64), the following notation is introduced:
\[\varkappa(s,m_{\chi}^{2})_{\pm}\equiv\varkappa(s,m^{2},m_{\chi}^{2})_{\pm}= \Big{[}1\pm\frac{s+m_{\chi}^{2}-m^{2}}{\lambda(s,m^{2},m_{\chi}^{2})}\Big{]}^{ 2}, \tag{4.82}\]
the set of incoherent "\(V\mp A\)" cross sections without spin flip of the massive \(\chi\) lepton can _finally_ be written as follows:
\[\frac{d\sigma^{\mp\mp}_{\rm inc,V-A}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\varkappa_{\pm}(s,m_{\chi}^{2})\sum A _{f}\widehat{F}_{f}^{2}\Big{[}g_{V}^{2}V^{+}_{\pm}(\theta)+g_{A}^{2}A^{+}_{\pm}( \theta)+g_{V}g_{A}M^{+}_{\pm}(\theta) \tag{4.83}\] \[\pm\frac{\Delta A_{f}}{A_{f}}\big{\{}g_{V}g_{A}M^{-}_{\pm}( \theta)-g_{V}^{2}V^{-}_{\pm}(\theta)-g_{A}^{2}A^{-}_{\pm}(\theta)\big{\}}\Big{]},\] \[\frac{d\sigma^{\mp\mp}_{\rm inc,V+A}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\varkappa_{\mp}(s,m_{\chi}^{2})\sum A _{f}\widehat{F}_{f}^{2}\Big{[}g_{V}^{2}V^{+}_{\mp}(\theta)+g_{A}^{2}A^{+}_{\mp}( \theta)-g_{V}g_{A}M^{+}_{\mp}(\theta)\] \[\mp\frac{\Delta A_{f}}{A_{f}}\big{\{}g_{V}g_{A}M^{-}_{\mp}( \theta)+g_{V}^{2}V^{-}_{\mp}(\theta)+g_{A}^{2}A^{-}_{\mp}(\theta)\big{\}}\Big{]}.\]
In formulas (4.83) parameters (4.81) are redefined as dimensionless quantities
\[V^{-}_{\pm}(\theta)\equiv\frac{\widetilde{V}^{-}_{\pm}(\theta)}{2s(E _{\chi}\pm|\mathbf{k}|)^{2}} = -\frac{|\mathbf{k}|}{\sqrt{s}}\Big{(}\frac{|\mathbf{k}|}{\sqrt{s}}\pm\frac{ E_{p}}{\sqrt{s}}\cos^{2}\frac{\theta}{2}\Big{)}\sin^{2}\frac{\theta}{2},\] \[A^{-}_{\pm}(\theta)=\frac{\widetilde{A}^{-}_{\pm}(\theta)}{2s(E _{\chi}\pm|\mathbf{k}|)^{2}} = -2\Big{(}\frac{E_{p}^{2}}{s}\pm\frac{|\mathbf{k}|}{\sqrt{s}}\frac{E_{p }}{\sqrt{s}}\cos^{2}\frac{\theta}{2}\Big{)}\sin^{2}\frac{\theta}{2},\] \[M^{-}_{\pm}(\theta)\equiv\frac{\widetilde{M}^{-}_{\pm}(\theta)}{2 s(E_{\chi}\pm|\mathbf{k}|)^{2}} = 2\frac{(E_{p}\pm|\mathbf{k}|)^{2}}{s}\cos^{2}\frac{\theta}{2}\pm 4\frac{|\mathbf{k} |}{\sqrt{s}}\frac{E_{p}}{\sqrt{s}}\sin^{4}\frac{\theta}{2},\] \[M^{+}_{\pm}(\theta)\equiv\frac{\widetilde{M}^{+}_{\pm}(\theta)}{ 2s(E_{\chi}\pm|\mathbf{k}|)^{2}} = 4\frac{|\mathbf{k}|}{\sqrt{s}}\Big{(}\frac{|\mathbf{k}|}{\sqrt{s}}\cos^{ 2}\frac{\theta}{2}\pm\frac{E_{p}}{\sqrt{s}}\Big{)}\sin^{2}\frac{\theta}{2}, \tag{4.84}\] \[A^{+}_{\pm}(\theta)\equiv\frac{\widetilde{A}^{+}_{\pm}(\theta)}{ 2s(E_{\chi}\pm|\mathbf{k}|)^{2}} = \frac{(E_{p}\pm|\mathbf{k}|)^{2}}{s}\cos^{2}\frac{\theta}{2}+2\Big{(} \frac{E_{p}^{2}}{s}-\frac{|\mathbf{k}|^{2}}{s}\cos^{2}\frac{\theta}{2}\Big{)}\sin ^{2}\frac{\theta}{2},\] \[V^{+}_{\pm}(\theta)\equiv\frac{\widetilde{V}^{+}_{\pm}(\theta)}{ 2s(E_{\chi}\pm|\mathbf{k}|)^{2}} = 2\frac{|\mathbf{k}|^{2}}{s}\sin^{4}\frac{\theta}{2}+\frac{(E_{p}\pm| \mathbf{k}|)^{2}}{s}\cos^{2}\frac{\theta}{2}.\]
Formulas (4.83) give the incoherent cross sections for massive \(\chi\)-lepton scattering on the nucleus _without spin flip_ simultaneously for the \(V\)-\(A\) and \(V\)+\(A\) currents. Note that in the \(V\)-\(A\) case the surviving massless-limit contribution is that of the lepton with negative helicity (the neutrino analogue), while in the \(V\)+\(A\) case it is that of the lepton with positive helicity (the antineutrino analogue).
If one substitutes parameters from (4.84) in formulas (4.83) and introduces another notation
\[\xi=\frac{|\mathbf{k}|}{E_{p}}=\frac{\lambda(s,m^{2},m_{\chi}^{2})}{s-m_{\chi}^{2} +m^{2}},\]
then one can obtain "more explicit" expressions for "\(V\mp A\)" incoherent \(\chi A\) cross sections in the following form:
\[\frac{d\sigma^{\mp\mp}_{\rm inc,V\!-\!A}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{E_{p}^{2}}{s}\varkappa_{\pm}(s, m_{\chi}^{2})\sum A_{f}\widehat{F}_{f}^{2}\Big{[}(g_{V}^{2}+g_{A}^{2})(1\pm\xi)^{2} \cos^{2}\frac{\theta}{2}+2(g_{V}\xi\pm g_{A})^{2}\sin^{2}\frac{\theta}{2} \tag{4.85}\] \[-2(g_{V}-g_{A})^{2}\xi^{2}\cos^{2}\frac{\theta}{2}\sin^{2}\frac{ \theta}{2}\pm\frac{\Delta A_{f}}{A_{f}}\big{\{}2g_{V}g_{A}(1\pm\xi)^{2}\cos^{2 }\frac{\theta}{2}+2(g_{V}\xi\pm g_{A})^{2}\sin^{2}\frac{\theta}{2}\] \[\pm 2(g_{V}-g_{A})^{2}\xi\cos^{2}\frac{\theta}{2}\sin^{2}\frac{ \theta}{2}-g_{V}^{2}\xi\sin^{2}\frac{\theta}{2}(\xi\pm\cos^{2}\frac{\theta}{2} )\big{\}}\Big{]},\] \[\frac{d\sigma^{\mp\mp}_{\rm inc,V\!+\!A}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{E_{p}^{2}}{s}\varkappa_{\mp}(s, m_{\chi}^{2})\sum A_{f}\widehat{F}_{f}^{2}\Big{[}(g_{V}^{2}+g_{A}^{2})(1\mp\xi)^{2} \cos^{2}\frac{\theta}{2}+2\big{(}g_{V}\xi\pm g_{A}\big{)}^{2}\sin^{2}\frac{ \theta}{2}\] \[-2(g_{V}+g_{A})^{2}\xi^{2}\cos^{2}\frac{\theta}{2}\sin^{2}\frac{ \theta}{2}\mp\frac{\Delta A_{f}}{A_{f}}\big{\{}2g_{V}g_{A}(1\mp\xi)^{2}\cos^{2 }\frac{\theta}{2}-2(g_{V}\xi\pm g_{A})^{2}\sin^{2}\frac{\theta}{2}\] \[\pm 2(g_{V}+g_{A})^{2}\xi\cos^{2}\frac{\theta}{2}\sin^{2}\frac{ \theta}{2}+g_{V}^{2}\xi\sin^{2}\frac{\theta}{2}(\xi\mp\cos^{2}\frac{\theta}{2}) \big{\}}\Big{]}.\]
This completes the discussion of the representations of the incoherent cross sections for the weak \(V\mp A\) interaction of the massive \(\chi\) lepton with the nucleus without lepton spin flip.
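For numerical work with (4.82)-(4.85) it is convenient to package the center-of-mass kinematics. The sketch below is a minimal helper, under the assumption (consistent with the massless limits quoted just below) that \(\lambda(s,m^{2},m_{\chi}^{2})\) denotes the square root of the Källén function, so that \(2\sqrt{s}\,E_{p}=s+m^{2}-m_{\chi}^{2}\), \(2\sqrt{s}\,E_{\chi}=s+m_{\chi}^{2}-m^{2}\) and \(2\sqrt{s}\,|\mathbf{k}|=\lambda\); all function names are illustrative:

```python
import math

def cm_kinematics(s, m, m_chi):
    """CM quantities for chi(m_chi) + nucleon(m) at invariant s; lam denotes
    the square root of the Kallen function lambda(s, m^2, m_chi^2)."""
    lam = math.sqrt((s - (m + m_chi)**2) * (s - (m - m_chi)**2))
    sqs = math.sqrt(s)
    E_p = (s + m**2 - m_chi**2) / (2.0 * sqs)        # nucleon CM energy
    E_chi = (s + m_chi**2 - m**2) / (2.0 * sqs)      # lepton CM energy
    k = lam / (2.0 * sqs)                            # CM momentum |k|
    xi = lam / (s - m_chi**2 + m**2)                 # xi = |k|/E_p
    # kappa_{+-} of (4.82); equivalently (E_chi +- |k|)^2 / |k|^2
    kappa_plus = (1.0 + (s + m_chi**2 - m**2) / lam) ** 2
    kappa_minus = (1.0 - (s + m_chi**2 - m**2) / lam) ** 2
    return dict(E_p=E_p, E_chi=E_chi, k=k, xi=xi,
                kappa_plus=kappa_plus, kappa_minus=kappa_minus)

# massless-limit sanity checks: kappa_+ -> 4, kappa_- -> 0, xi -> (s-m^2)/(s+m^2)
kin = cm_kinematics(s=2.0, m=1.0, m_chi=0.0)
assert abs(kin["kappa_plus"] - 4.0) < 1e-12 and kin["kappa_minus"] < 1e-12
assert abs(kin["xi"] - 1.0 / 3.0) < 1e-12
```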
In the massless limit, when
\[m_{\chi}=0,\quad\frac{E_{p}}{\sqrt{s}}=\frac{s+m^{2}}{2s},\quad\xi=\frac{s-m^{2}}{ s+m^{2}},\quad\varkappa(s,0)_{+}=4,\quad\varkappa(s,0)_{-}=0,\]
the cross sections (4.85) for leptons with the "wrong helicity" vanish:
\[\frac{d\sigma^{++}_{\rm inc,V\!-\!A,0}}{dT_{A}}\propto\varkappa_{-}(s,0)=0, \qquad\frac{d\sigma^{--}_{\rm inc,V\!+\!A,0}}{dT_{A}}\propto\varkappa_{-}(s,0)=0.\]
For massless leptons with "correct helicity", formulas (4.85) turn into
\[\frac{d\sigma^{\mp\mp}_{\rm inc,V\mp A,0}}{g_{\rm i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{4E_{p}^{2}}{s}\sum A_{f}\widehat{F}_{f}^{2}\Big[(g_{V}^{2}+g_{A}^{2})(1+\xi)^{2}\cos^{2}\frac{\theta}{2}+2(g_{V}\xi\pm g_{A})^{2}\sin^{2}\frac{\theta}{2}\] \[-2(g_{V}\mp g_{A})^{2}\xi^{2}\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}+\frac{\Delta A_{f}}{A_{f}}\Big\{2g_{V}g_{A}(1+\xi)^{2}\cos^{2}\frac{\theta}{2}\] \[\pm 2(g_{V}\xi\pm g_{A})^{2}\sin^{2}\frac{\theta}{2}\pm 2(g_{V}\mp g_{A})^{2}\xi\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}\mp g_{V}^{2}\xi\sin^{2}\frac{\theta}{2}(\xi+\cos^{2}\frac{\theta}{2})\Big\}\Big]. \tag{4.86}\]
This formula corresponds to the incoherent scattering of neutrinos and antineutrinos on the nucleus, which was discussed in [3]. It is a direct consequence of the general expansion in chiral constants (4.83), whose "\(V\)-\(A\)" formula for a massless neutrino takes the form\({}^{11}\) matching the following expression from [3]:
Footnote 11: Here, the \(f\)-index is omitted in \(g_{V}^{f}\) and \(g_{A}^{f}\) and, as in [3], it is assumed that \(g_{\rm i}\simeq 1\) and \(y\equiv\sin^{2}\frac{\theta}{2}\Big{(}1-\frac{m^{2}}{s}\Big{)}\).
\[\frac{d\sigma^{--}_{\rm inc,V\!-\!A,0}}{dT_{A}}= \frac{G_{F}^{2}m_{A}}{2\pi}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2} \Bigg{[}g_{A}^{2}\Big{[}2\Big{(}1-\frac{ys}{s-m^{2}}\Big{)}+y^{2}+y\frac{4m^{2 }}{s-m^{2}}\Big{]}+2g_{V}g_{A}y(2-y)\] \[+g_{V}^{2}\Big{[}2\Big{(}1-\frac{ys}{s-m^{2}}\Big{)}+y^{2}\Big{]} +\frac{2\Delta A_{f}}{A_{f}}\Big{\{}g_{V}^{2}y\Big{(}1-\frac{y}{2}\frac{s+m^{2 }}{s-m^{2}}\Big{)}+g_{A}^{2}y\Big{(}1-\frac{y}{2}\Big{)}\frac{s+m^{2}}{s-m^{2}}\] \[+g_{V}g_{A}\Big{[}2\Big{(}1-\frac{ys}{s-m^{2}}\Big{)}+y^{2}\frac{ s+m^{2}}{s-m^{2}}\Big{]}\Big{\}}\Bigg{]}. \tag{4.87}\]
Finally, let us see what the incoherent \(\chi A\) cross section (4.47), corresponding to the scattering of the massless lepton on the spinless nucleus, reduces to in the \(V\!\pm\!A\) approximation when
\[(\alpha\mp\gamma)_{V\mp A}=2g_{V},\quad(\beta\mp\delta)_{V\mp A}=-2g_{A}, \quad(\alpha\mp\gamma)_{V\pm A}=(\delta\mp\beta)_{V\pm A}=0.\]
Substituting these relations into formula (4.47) and taking into account (4.20), one obtains:
\[\frac{d\sigma^{\mp\mp}_{\rm inc,V\mp A,00}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{2\pi}\sum_{f=p,n}\widehat{F}_{f}^{2}(\mathbf{q} )A_{f}\Big{[}g_{A}^{2}\Big{(}2+y^{2}+2y\frac{2m^{2}-s}{s-m^{2}}\Big{)}\pm 2g_{V}g_{A}y(2 -y)\] \[+g_{V}^{2}\Big{(}2+y^{2}-\frac{2ys}{s-m^{2}}\Big{)}\Big{]};\] \[\frac{d\sigma^{\mp\mp}_{\rm inc,V\pm A,00}}{dT_{A}} = 0.\]
As expected, the first formula, which is the incoherent cross section for scattering of the massless neutrino (with negative helicity) and massless antineutrino (with positive helicity) on the nucleus, coincides (in its spinless part) with the "neutrino formula" (4.87). The second formula, which is the cross section of the massless neutrino scattering via the \(V\!+\!A\) current and massless antineutrino scattering via the \(V\!-\!A\) current, is equal to zero.
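The helicity structure just described can also be checked numerically. The self-contained sketch below evaluates only the \(\Delta A_{f}\)-independent (spinless-nucleus) part of the \(V\!-\!A\) bracket in (4.85), together with its \(\varkappa_{\pm}\) prefactor; the overall factor \(G_{F}^{2}m_{A}E_{p}^{2}/(4\pi s)\sum A_{f}\widehat{F}_{f}^{2}\) is dropped and the names are illustrative. At \(m_{\chi}=0\) the "wrong-helicity" rate vanishes identically while the "correct-helicity" one survives:

```python
import math

def vma_no_flip_spinless(gV, gA, x, s, m, m_chi, upper=True):
    """Delta-A_f-independent V-A bracket of (4.85) times kappa_{+-} of (4.82);
    x = sin^2(theta/2). upper=True picks the upper ('--' helicity) signs,
    upper=False the lower ('++') ones. Overall constants are dropped."""
    lam = math.sqrt((s - (m + m_chi)**2) * (s - (m - m_chi)**2))  # sqrt of Kallen fn
    xi = lam / (s - m_chi**2 + m**2)                              # xi = |k|/E_p
    sgn = 1.0 if upper else -1.0
    kappa = (1.0 + sgn * (s + m_chi**2 - m**2) / lam) ** 2        # kappa_{+-}
    c2 = 1.0 - x                                                  # cos^2(theta/2)
    bracket = ((gV**2 + gA**2) * (1.0 + sgn * xi) ** 2 * c2
               + 2.0 * (gV * xi + sgn * gA) ** 2 * x
               - 2.0 * (gV - gA) ** 2 * xi**2 * c2 * x)
    return kappa * bracket

# at m_chi = 0 the 'wrong-helicity' (++) V-A rate vanishes, the '--' one survives
assert vma_no_flip_spinless(1.0, 1.27, x=0.4, s=3.0, m=1.0, m_chi=0.0, upper=False) <= 1e-12
assert vma_no_flip_spinless(1.0, 1.27, x=0.4, s=3.0, m=1.0, m_chi=0.0, upper=True) > 0.0
```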
\(V\!\mp\!A\)-interaction with a spin flip of the massive lepton. In this case, the general expansions of the incoherent \(\chi A\) cross sections (4.39) go into the following expressions:
\[\frac{d\sigma^{\mp\pm}_{\rm inc,V\!-\!A}}{g_{\rm i}\hat{c}_{A}dT_{ A}} = \frac{1}{2}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}\Big{[}g_{V}^{2} \widehat{V}^{+}(\theta)+g_{A}^{2}\widehat{A}_{\mp}^{+}(\theta)+g_{V}g_{A} \widehat{M}_{\pm}^{+}(\theta)\] \[\qquad\qquad+\frac{\Delta A_{f}}{A_{f}}\big{\{}g_{V}g_{A} \widehat{M}_{\pm}^{-}(\theta)-g_{V}^{2}\widehat{V}^{-}(\theta)-g_{A}^{2} \widehat{A}_{\mp}^{-}(\theta)\big{\}}\Big{]}, \tag{4.88}\] \[\frac{d\sigma^{\mp\pm}_{\rm inc,V\!+\!A}}{g_{\rm i}\hat{c}_{A}dT_ {A}} = \frac{1}{2}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}\Big{[}g_{V}^{2} \widehat{V}^{+}(\theta)+g_{A}^{2}\widehat{A}_{\pm}^{+}(\theta)-g_{V}g_{A} \widehat{M}_{\mp}^{+}(\theta)\] \[\qquad\qquad+\frac{\Delta A_{f}}{A_{f}}\big{\{}g_{V}g_{A} \widehat{M}_{\mp}^{-}(\theta)+g_{V}^{2}\widehat{V}^{-}(\theta)+g_{A}^{2} \widehat{A}_{\pm}^{-}(\theta)\big{\}}\Big{]}.\]
Here, all form-factor combinations below, which define the incoherent \(\chi A\) cross sections in the case of \(V\!\mp\!A\) interaction _with the change of the projection of the massive lepton spin_, depend on \(G\)-form-factors from (4.41) and are proportional to the common factor \(2m_{\chi}^{2}\):
\[\frac{\widehat{V}^{-}(\theta)}{2m_{\chi}^{2}} = \frac{G_{\alpha\gamma}^{-}(\theta)}{2m_{\chi}^{2}}=2|\mathbf{k}|[m+2( E_{p}-m)\cos^{2}\frac{\theta}{2}]\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2},\] \[\frac{\widehat{V}^{+}(\theta)}{2m_{\chi}^{2}} = \frac{G_{\alpha\beta}^{+}(\theta)}{2m_{\chi}^{2}}+\frac{G_{\beta \gamma}^{+}(\theta)}{2m_{\chi}^{2}}=m^{2}\sin^{2}\frac{\theta}{2}+|\mathbf{k}|^{2 }\cos^{2}\frac{\theta}{2}=E_{p}^{2}\cos^{2}\frac{\theta}{2}+m^{2}(\sin^{2} \frac{\theta}{2}-\cos^{2}\frac{\theta}{2}),\] \[\frac{\widehat{A}_{\pm}^{-}(\theta)}{2m_{\chi}^{2}} = \frac{G_{\beta\delta}^{-}(\theta)}{2m_{\chi}^{2}}\pm\frac{G_{ \delta^{2}}^{-}(\theta)}{2m_{\chi}^{2}}=2\cos^{2}\frac{\theta}{2}\{|\mathbf{k}|[m +2(E_{p}-m)\sin^{2}\frac{\theta}{2}]\cos^{2}\frac{\theta}{2}\mp m^{2}\},\] \[\frac{\widehat{M}_{\pm}^{-}(\theta)}{2m_{\chi}^{2}} = \frac{G_{\alpha\gamma}^{-}(\theta)}{2m_{\chi}^{2}}+\frac{G_{\beta \delta}^{-}(\theta)}{2m_{\chi}^{2}}\pm\frac{G_{\alpha\delta}^{-}(\theta)}{2m_ {\chi}^{2}}=2|\mathbf{k}|\cos^{2}\frac{\theta}{2}\{m+4(E_{p}-m)\sin^{2}\frac{ \theta}{2}\cos^{2}\frac{\theta}{2}\}\mp 2m^{2}\sin^{2}\frac{\theta}{2},\] \[\frac{\widehat{A}_{\pm}^{+}(\theta)}{2m_{\chi}^{2}} = \frac{G_{\delta^{2}}^{+}(\theta)}{2m_{\chi}^{2}}+\frac{G_{\beta \gamma}^{+}(\theta)}{2m_{\chi}^{2}}\pm\frac{G_{\beta\delta}^{+}(\theta)}{2m_ {\chi}^{2}}=m^{2}+(m-|\mathbf{k}|)^{2}\cos^{2}\frac{\theta}{2}\mp 4m|\mathbf{k}|\cos^{4} \frac{\theta}{2},\] \[\frac{\widehat{M}_{\pm}^{+}(\theta)}{2m_{\chi}^{2}} = \frac{G_{\alpha\delta}^{+}(\theta)}{2m_{\chi}^{2}}-2\frac{G_{\beta \gamma}^{+}(\theta)}{2m_{\chi}^{2}}\pm\frac{G_{\beta\delta}^{+}(\theta)}{2m_ {\chi}^{2}}=-2|\mathbf{k}|\cos^{2}\frac{\theta}{2}[|\mathbf{k}|\mp m(\sin^{2}\frac{ \theta}{2}-\cos^{2}\frac{\theta}{2})].\]
Given (4.12), incoherent \(\chi A\) cross sections of scattering due to \(V\!\mp\!A\) currents with massive lepton spin flip can be written with dimensionless factors of the chiral constants \(g_{A}\), \(g_{V}\) in
the following final form:
\[\frac{d\sigma^{\mp\pm}_{\rm inc,V-A}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\sum A_{f}\widehat{F}_{f}^{2}\Big[(g_{V}^{2}+g_{A}^{2})+(g_{A}^{2}-g_{V}^{2})\cos^{2}\frac{\theta}{2}\] \[+\epsilon^{2}(g_{V}-g_{A})^{2}\cos^{2}\frac{\theta}{2}-2g_{A}\epsilon[(g_{A}\mp g_{V})\mp 2(g_{A}-g_{V})\cos^{2}\frac{\theta}{2}]\cos^{2}\frac{\theta}{2}\] \[+\frac{2\Delta A_{f}}{A_{f}}\Big\{\epsilon(g_{A}-g_{V})[g_{A}\cos^{2}\frac{\theta}{2}(1+2\sin^{2}\frac{\theta}{2})+g_{V}(1+\cos^{2}\frac{\theta}{2}-2\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2})]\cos^{2}\frac{\theta}{2}\] \[\mp g_{A}(g_{V}\sin^{2}\frac{\theta}{2}+g_{A}\cos^{2}\frac{\theta}{2})-2\epsilon\sqrt{1+\epsilon^{2}}(g_{V}-g_{A})^{2}\cos^{4}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}\Big\}\Big], \tag{4.89}\] \[\frac{d\sigma^{\mp\pm}_{\rm inc,V+A}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\sum A_{f}\widehat{F}_{f}^{2}\Big[(g_{V}^{2}+g_{A}^{2})+(g_{A}^{2}-g_{V}^{2})\cos^{2}\frac{\theta}{2}\] \[+\epsilon^{2}(g_{V}+g_{A})^{2}\cos^{2}\frac{\theta}{2}-2g_{A}\epsilon[(g_{A}\mp g_{V})\pm 2(g_{A}+g_{V})\cos^{2}\frac{\theta}{2}]\cos^{2}\frac{\theta}{2}\] \[+\frac{2\Delta A_{f}}{A_{f}}\Big\{\ldots\] \[\pm g_{A}[g_{V}\sin^{2}\frac{\theta}{2}-g_{A}\cos^{2}\frac{\theta}{2}]+2\epsilon\sqrt{1+\epsilon^{2}}(g_{V}+g_{A})^{2}\cos^{4}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}\Big\}\Big].\]
Here the auxiliary notation \(\frac{|\mathbf{k}|E_{p}}{m^{2}}=\epsilon\sqrt{1+\epsilon^{2}}\) is introduced, where \(\epsilon\equiv\frac{|\mathbf{k}|}{m}\). It is immediately clear that for \(m_{\chi}\to 0\) both formulas (4.89) vanish, as expected.

The limit \(\epsilon\to 0\) corresponds to the case where the momentum of the incident \(\chi\) lepton \(|\mathbf{k}_{\chi}|\ll m\), including \(|\mathbf{k}_{\chi}|\leq m_{\chi}\ll m\). Then in this limit the following approximation is obtained from formulas (4.89):
\[\frac{d\sigma^{\mp\pm}_{\rm inc,V-A}}{dT_{A}}=\frac{G_{F}^{2}m_{A }}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\sum A_{f}\widehat{F}_{f}^{2} \Big{[}(g_{V}^{2}+g_{A}^{2})+(g_{A}^{2}-g_{V}^{2})\cos^{2}\frac{\theta}{2}\mp \frac{2\Delta A_{f}}{A_{f}}g_{A}\Big{\{}g_{V}\sin^{2}\frac{\theta}{2}+g_{A} \cos^{2}\frac{\theta}{2}\Big{\}}\Big{]},\] \[\frac{d\sigma^{\mp\pm}_{\rm inc,V+A}}{dT_{A}}=\frac{G_{F}^{2}m_{A }}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}}\sum A_{f}\widehat{F}_{f}^{2} \Big{[}(g_{V}^{2}+g_{A}^{2})+(g_{A}^{2}-g_{V}^{2})\cos^{2}\frac{\theta}{2}\pm \frac{2\Delta A_{f}}{A_{f}}g_{A}\Big{\{}g_{V}\sin^{2}\frac{\theta}{2}-g_{A} \cos^{2}\frac{\theta}{2}\Big{\}}\Big{]}.\]
This shows that for a nucleus with spin 0 (\(\Delta A_{f}=0\)) and \(\epsilon\simeq 0\) these cross sections coincide:
\[\frac{d\sigma^{\mp\pm}_{\rm inc,V-A}}{dT_{A}}=\frac{d\sigma^{\mp\pm}_{\rm inc, V+A}}{dT_{A}}=\frac{G_{F}^{2}m_{A}}{4\pi}\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}|^{2}} \sum A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big{[}(g_{V}^{2}+g_{A}^{2})+(g_{A}^{2}-g_{ V}^{2})\cos^{2}\frac{\theta}{2}\Big{]}.\]
However, this observation is of no practical use, since at such a small momentum of the incident lepton the transferred momentum is \(\mathbf{q}\simeq 0\), which results in \(\widehat{F}_{f}^{2}(\mathbf{q})\simeq 0\). In other words, for sufficiently small transferred momenta one cannot speak of any noticeable role of inelastic (incoherent) scattering processes.
Fully averaged incoherent "\(\mathbf{V\mp A}\)" cross section. Recall that the fully averaged (averaged over the initial and summed over the final helicities of the \(\chi\) particle) _incoherent_ \(\chi A\)-interaction cross section was obtained above in general form. In the \(V\mp A\) approximation, when \(\alpha_{\mp}=+g_{V},\ \beta_{\mp}=-g_{A},\ \gamma_{\mp}=\mp g_{V},\ \delta_{\mp}=\pm g_{A}\), one has
\[\alpha_{\mp}^{2}=\gamma_{\mp}^{2} = g_{V}^{2},\quad\beta_{\mp}^{2}=\delta_{\mp}^{2}=g_{A}^{2},\quad(\gamma^{2}-\beta^{2})_{\mp}=(g_{V}^{2}-g_{A}^{2}),\] \[(\alpha-\delta)_{\mp} = (g_{V}\mp g_{A}),\quad(\beta-\gamma)_{\mp}=\pm(g_{V}\mp g_{A}),\] \[(\alpha-\delta)\delta_{\mp} = \pm(g_{V}\mp g_{A})g_{A},\quad(\beta-\gamma)\beta_{\mp}=\mp(g_{V}\mp g_{A})g_{A}, \tag{4.90}\] \[(\alpha\gamma+\beta\delta)_{\mp} = \mp(g_{V}^{2}+g_{A}^{2}),\quad(\alpha\delta+\beta\gamma)_{\mp}=\pm 2g_{V}g_{A},\quad(\alpha\beta+\gamma\delta)_{\mp}=-2g_{V}g_{A}.\]
Then the sums \(Q^{s^{\prime}s}_{\pm}\) defining the completely averaged incoherent \(\chi A\) cross sections go into the following "\(V\mp A\)" expressions:
\[\sum_{s^{\prime},s}\frac{Q^{s^{\prime}s}_{+,V\mp A}}{2^{6}} = (g_{V}^{2}+3g_{A}^{2})m^{2}m_{\chi}^{2}\pm 4g_{V}g_{A}|\mathbf{k}|^{2}s+4g_{A}(g_{A}\mp g_{V})m^{2}|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}\] \[+(g_{V}\mp g_{A})^{2}|\mathbf{k}|^{2}[4|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+2s\cos^{2}\frac{\theta}{2}+m_{\chi}^{2}(\sin^{2}\frac{\theta}{2}+\sin^{4}\frac{\theta}{2}+\cos^{4}\frac{\theta}{2})], \tag{4.91}\] \[\sum_{s^{\prime},s}\frac{Q^{s^{\prime}s}_{-,V\mp A}}{2^{7}|\mathbf{k}|} = \mp(g_{V}\mp g_{A})^{2}[m_{\chi}^{2}E_{p}\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2})-2|\mathbf{k}|^{2}(E_{p}\cos^{2}\frac{\theta}{2}+E_{\chi})\sin^{2}\frac{\theta}{2}]\] \[\mp(g_{V}^{2}-g_{A}^{2})m_{\chi}^{2}m\cos^{2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2})+g_{A}^{2}[mm_{\chi}^{2}\cos^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2})\pm 2m^{2}E_{\chi}\sin^{2}\frac{\theta}{2}]\] \[+g_{V}g_{A}[mm_{\chi}^{2}\cos^{2}\frac{\theta}{2}(1-2\sin^{2}\frac{\theta}{2})+2m_{\chi}^{2}E_{p}+4|\mathbf{k}|^{2}(E_{\chi}+E_{p})+2m^{2}E_{\chi}\cos^{2}\frac{\theta}{2}].\]
As a result, the fully averaged _incoherent_ \(\chi A\) interaction cross section, with allowance for the sums from (4.91), takes the "\(V\mp A\)" form
\[\frac{d\sigma^{\rm total}_{\rm inc,V\mp A}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\frac{1}{m^{2}|\mathbf{k}_{\chi}|^{2}}\sum_{f=p,n}A_{f}\widehat{F}_{f}^{2}(\mathbf{q})\Big[\sum_{s^{\prime}s}\frac{Q^{s^{\prime}s}_{+,V\mp A}}{2^{6}}+\frac{2\Delta A_{f}}{A_{f}}|\mathbf{k}|\sum_{s^{\prime}s}\frac{Q^{s^{\prime}s}_{-,V\mp A}}{2^{7}|\mathbf{k}|}\Big].\]
Further, since the \(V\mp A\) limit satisfies relations (4.90), the \(\Phi\)-parameters introduced earlier take the following form:
\[\Phi^{+}_{\alpha^{2}+3\delta^{2}}(\mathbf{q}) = \Phi^{+}_{g_{V}^{2}}+3\Phi^{+}_{g_{A}^{2}}(\mathbf{q}),\quad\Phi^{+} _{\gamma^{2}-\beta^{2}}(\mathbf{q})=\Phi^{+}_{g_{V}^{2}}(\mathbf{q})-\Phi^{+}_{g_{A}^{2} }(\mathbf{q}),\] \[\Phi^{+}_{(\beta-\gamma)^{2}}(\mathbf{q}) = \Phi^{+}_{(g_{V}\mp g_{A})^{2}}(\mathbf{q}),\quad\Phi^{+}_{(\alpha- \delta)^{2}+(\beta-\gamma)^{2}}(\mathbf{q})=2\Phi^{+}_{(g_{V}\mp g_{A})^{2}}(\mathbf{q}),\] \[\Phi^{+}_{2(\alpha\delta+\beta\gamma)}(\mathbf{q}) = \pm 4\Phi^{+}_{g_{V}g_{A}}(\mathbf{q}),\quad\Phi^{+}_{(\beta-\gamma)\beta} (\mathbf{q})=\Phi^{+}_{(\delta-\alpha)\delta}(\mathbf{q})=\mp\Phi^{+}_{g_{V}g_{A}\mp g _{A}^{2}}(\mathbf{q}).\]
After substituting them into the general formula, for the spinless nucleus one obtains
\[\frac{d\sigma^{\rm total}_{\rm inc,V\mp A,0}}{g_{i}\hat{c}_{A}dT_{A}} = [\Phi^{+}_{g_{V}^{2}}(\mathbf{q})+3\Phi^{+}_{g_{A}^{2}}(\mathbf{q})]m_{\chi}^{2}m^{2}\pm 4\Phi^{+}_{g_{V}g_{A}}(\mathbf{q})|\mathbf{k}|^{2}s+4\Phi^{+}_{(g_{A}\mp g_{V})g_{A}}(\mathbf{q})m^{2}|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}\] \[+\Phi^{+}_{(g_{V}\mp g_{A})^{2}}(\mathbf{q})|\mathbf{k}|^{2}\big[4|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+2s\cos^{2}\frac{\theta}{2}+m_{\chi}^{2}(\cos^{2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2})\big].\]
Here, by analogy with the definitions from (4.43), the following notations are given:
\[\Phi^{+}_{V^{2}}(\mathbf{q}) \equiv\Phi^{+}_{g_{V}^{2}}(\mathbf{q})=\sum_{f=p,n}\widehat{F}^{2}_{f}(\mathbf{q})A_{f}[g^{f}_{V}]^{2},\quad\Phi^{+}_{VA}(\mathbf{q})\equiv\Phi^{+}_{g_{V}g_{A}}(\mathbf{q})=\sum_{f=p,n}\widehat{F}^{2}_{f}(\mathbf{q})A_{f}g^{f}_{A}g^{f}_{V},\] \[\Phi^{+}_{A^{2}}(\mathbf{q})\equiv\Phi^{+}_{g_{A}^{2}}(\mathbf{q})=\sum_{f=p,n}\widehat{F}^{2}_{f}(\mathbf{q})A_{f}[g^{f}_{A}]^{2}. \tag{4.92}\]
With these notations, the fully averaged incoherent \(\chi A\) cross section on the _spinless_ nucleus takes the form of an expansion in the chiral coupling constants \(g_{V}\) and \(g_{A}\):
\[\frac{d\sigma^{\rm total}_{\rm inc,V\mp A,0}}{g_{i}dT_{A}} \equiv \frac{G_{F}^{2}m_{A}}{4\pi m^{2}|\mathbf{k}_{\chi}|^{2}}\Big\{\Phi^{+}_{V^{2}}(\mathbf{q})S_{V^{2}}(\theta)+\Phi^{+}_{A^{2}}(\mathbf{q})S_{A^{2}}(\theta)\pm 2\Phi^{+}_{VA}(\mathbf{q})S_{VA}(\theta)\Big\},\quad\text{where} \tag{4.93}\] \[S_{V^{2}}(\theta) \equiv m_{\chi}^{2}m^{2}+4|\mathbf{k}|^{4}\sin^{4}\frac{\theta}{2}+2s|\mathbf{k}|^{2}\cos^{2}\frac{\theta}{2}+m_{\chi}^{2}|\mathbf{k}|^{2}(\cos^{2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2}),\] \[S_{A^{2}}(\theta) \equiv 3m_{\chi}^{2}m^{2}+4m^{2}|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2}+|\mathbf{k}|^{2}[4|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+2s\cos^{2}\frac{\theta}{2}+m_{\chi}^{2}(\cos^{2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2})],\] \[S_{VA}(\theta) \equiv |\mathbf{k}|^{2}\big[2s-2m^{2}\sin^{2}\frac{\theta}{2}-[4|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+2s\cos^{2}\frac{\theta}{2}+m_{\chi}^{2}(\cos^{2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2})]\big].\]
For the _massless_\(\chi\) lepton (\(m_{\chi}=0\)), summation over all lepton helicities (4.91) gives the following result:
\[\sum\frac{Q^{s^{\prime}s}_{+,V\mp A,0}}{2^{7}|\mathbf{k}|^{2}} = +2g_{A}^{2}m^{2}\sin^{2}\frac{\theta}{2}\pm 2g_{V}g_{A}(s-m^{2}\sin^{2}\frac{\theta}{2})+(g_{V}\mp g_{A})^{2}(2|\mathbf{k}|^{2}\sin^{4}\frac{\theta}{2}+s\cos^{2}\frac{\theta}{2}),\] \[\sum\frac{Q^{s^{\prime}s}_{-,V\mp A,0}}{2^{7}|\mathbf{k}|^{2}} = \pm 2g_{A}^{2}m^{2}\sin^{2}\frac{\theta}{2}+2g_{V}g_{A}(s-m^{2}\sin^{2}\frac{\theta}{2})\] \[\pm(g_{V}\mp g_{A})^{2}((s-m^{2})\cos^{2}\frac{\theta}{2}+2|\mathbf{k}|^{2}\sin^{2}\frac{\theta}{2})\sin^{2}\frac{\theta}{2}. \tag{4.94}\]
Then, for \(m_{\chi}=0\) in the \(V\mp A\) variant, the fully averaged incoherent \(\chi A\) cross section is
\[\frac{d\sigma^{\rm total}_{\rm inc,V\mp A,0}}{g_{i}dT_{A}} = \frac{G_{F}^{2}m_{A}}{2\pi}\sum_{f=p,n}A_{f}\widehat{F}^{2}_{f}( \mathbf{q})\Big{\{}2g_{A}^{2}\frac{m^{2}}{s}\sin^{2}\frac{\theta}{2}+(g_{V}\mp g_{ A})^{2}\big{[}\frac{1}{2}\big{(}1-\frac{m^{2}}{s}\big{)}^{2}\sin^{4}\frac{\theta}{2}+ \cos^{2}\frac{\theta}{2}\big{]} \tag{4.95}\] \[\pm 2g_{V}g_{A}\big{(}1-\frac{m^{2}}{s}\sin^{2}\frac{\theta}{2} \big{)}+\frac{\Delta A_{f}}{A_{f}}\Big{[}2g_{V}g_{A}\big{(}1-\frac{m^{2}}{s} \sin^{2}\frac{\theta}{2}\big{)}\pm 2g_{A}^{2}\frac{m^{2}}{s}\sin^{2}\frac{\theta}{2}\] \[\pm(g_{V}\mp g_{A})^{2}\big{(}1-\frac{m^{2}}{s}\big{)}\sin^{2} \frac{\theta}{2}\big{[}\frac{1}{2}\big{(}1-\frac{m^{2}}{s}\big{)}\sin^{2}\frac{ \theta}{2}+\cos^{2}\frac{\theta}{2}\big{]}\Big{]}\Big{\}}.\]
This \(\chi A\) cross section, like each expression (4.94) corresponding to the _massless_ lepton, despite the presence of a sum over lepton helicities, generally speaking, contains only one term. For the \(V\!-\!A\) variant, it is the only contribution of the lepton with the negative _conserved_ helicity (neutrino, same as formula (4.87)). For the \(V\!+\!A\) variant, it is the only contribution of the lepton with positive _conserved_ helicity (antineutrino). This expression coincides with the formulas for (anti)neutrinos from [3].
Total averaged \(\chi A\) cross section for the massive lepton and \(V\mp A\) currents. It is the sum of two terms: the fully averaged coherent (4.78) and the fully averaged incoherent (4.93) \(\chi A\) interaction cross sections due to \(V\mp A\) currents. Let us present this expression for the spinless nucleus, taking into account (4.20), in the form of an expansion in the effective \(V\mp A\) constants (assuming that \(g_{\rm i}\simeq g_{\rm c}\simeq 1\))
\[\frac{d\sigma^{\rm total}_{V\mp A,0}}{dT_{A}} = \frac{G_{F}^{2}m_{A}}{4\pi}\Big\{\Phi^{+}_{V^{2}}(\mathbf{q})S_{V^{2}}(\theta)+\Phi^{+}_{A^{2}}(\mathbf{q})S_{A^{2}}(\theta)\pm 2\Phi^{+}_{VA}(\mathbf{q})S_{VA}(\theta)\] \[\qquad+G_{V}^{2}(\mathbf{q})C_{V}(\theta)+G_{A}^{2}(\mathbf{q})C_{A}(\theta)\pm 2G_{V}(\mathbf{q})G_{A}(\mathbf{q})C_{M}(\theta)\Big\}. \tag{4.96}\]
Here the coefficients in front of the effective \(V\mp A\) constants are redefined by including into them the common factor \((|\mathbf{k}|^{2}s)^{-1}\), i.e., \(S_{V^{2}}(\theta)\equiv\frac{S_{V^{2}}(\theta)}{|\mathbf{k}|^{2}s}\), etc. They have the following form:
\[S_{V^{2}}(\theta) = \frac{m_{\chi}^{2}m^{2}}{|\mathbf{k}|^{2}s}+2\cos^{2}\frac{\theta}{2 }+\frac{4|\mathbf{k}|^{2}}{s}\sin^{4}\frac{\theta}{2}+\frac{m_{\chi}^{2}}{s}\Big{(} \cos^{2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2}\Big{)},\] \[S_{A^{2}}(\theta) = \frac{3m_{\chi}^{2}m^{2}}{|\mathbf{k}|^{2}s}+\frac{4m^{2}}{s}\sin^{2 }\frac{\theta}{2}+2\cos^{2}\frac{\theta}{2}+\frac{4|\mathbf{k}|^{2}}{s}\sin^{4} \frac{\theta}{2}+\frac{m_{\chi}^{2}}{s}\Big{(}\cos^{2}\frac{\theta}{2}+2\sin^ {4}\frac{\theta}{2}\Big{)},\] \[S_{VA}(\theta) = 2\sin^{2}\frac{\theta}{2}\Big{(}1-\frac{m^{2}}{s}\Big{)}-\frac{ 4|\mathbf{k}|^{2}}{s}\sin^{4}\frac{\theta}{2}-\frac{m_{\chi}^{2}}{s}\Big{(}\cos^{ 2}\frac{\theta}{2}+2\sin^{4}\frac{\theta}{2}\Big{)};\] \[C_{A}(\theta) = \Big{(}\frac{E_{p}}{\sqrt{s}}-\frac{m}{\sqrt{s}}\Big{)}^{2}\sin^{ 2}\frac{\theta}{2}\cos^{2}\frac{\theta}{2}\Big{(}\frac{m_{\chi}^{2}}{|\mathbf{k}| ^{2}}+2\sin^{2}\frac{\theta}{2}\Big{)},\] \[C_{M}(\theta) = 2\Big{(}\frac{E_{p}}{\sqrt{s}}-\frac{m}{\sqrt{s}}\Big{)}\cos^{ 2}\frac{\theta}{2}\sin^{2}\frac{\theta}{2}\Big{(}\frac{E_{\chi}}{\sqrt{s}}+ \frac{m}{\sqrt{s}}\sin^{2}\frac{\theta}{2}+\frac{E_{p}}{\sqrt{s}}\cos^{2} \frac{\theta}{2}\Big{)},\] \[C_{V}(\theta) = \Big{(}\frac{m}{\sqrt{s}}\sin^{2}\frac{\theta}{2}+\frac{E_{p}}{ \sqrt{s}}\cos^{2}\frac{\theta}{2}\Big{)}^{2}\Big{(}\frac{m_{\chi}^{2}}{|\mathbf{k }|^{2}}+2\cos^{2}\frac{\theta}{2}\Big{)}\] \[+\cos^{2}\frac{\theta}{2}\Big{(}\frac{m_{\chi}^{2}}{s}+\frac{2| \mathbf{k}|^{2}}{s}+\frac{4E_{\chi}}{\sqrt{s}}\Big{(}\frac{m}{\sqrt{s}}\sin^{2} \frac{\theta}{2}+\frac{E_{p}}{\sqrt{s}}\cos^{2}\frac{\theta}{2}\Big{)}\Big{)}.\]
Formula (4.96) can also be obtained by the corresponding substitutions into the general expression (4.60), which holds for the total cross section of the weak interaction of the massive lepton and the spinless nucleus. The other variables included in expression (4.96) are defined in the text of this chapter by formulas (4.74) and (4.92).
_Note_ that expression (4.96) can be considered _as a new prescription_ for computing the cross sections of (massive) lepton scattering on the nucleus via the weak \(V\mp A\) interaction at energies below hundreds of MeV. For example, modern software products such as Achilles [17], which claim to provide a precision description of weak processes, including those considered in this paper, should apparently take the results obtained here into account.
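As an illustration of this prescription, the following Python sketch assembles the right-hand side of (4.96) from the \(S\)- and \(C\)-coefficients listed above. It is a minimal, schematic implementation, not part of any existing package: the effective \(\mathbf{q}\)-dependent combinations \(\Phi\) of (4.92) and \(G\) of (4.74) must be evaluated by the caller at the momentum transfer of interest and passed in as plain numbers, and all names are illustrative:

```python
import math

def coefficients_496(x, s, m, m_chi):
    """S- and C-coefficients of (4.96); x = sin^2(theta/2). Here lam is taken
    to be the square root of the Kallen function lambda(s, m^2, m_chi^2), an
    assumption consistent with the massless limits quoted in the text."""
    lam = math.sqrt((s - (m + m_chi)**2) * (s - (m - m_chi)**2))
    sqs = math.sqrt(s)
    E_p = (s + m**2 - m_chi**2) / (2.0 * sqs)    # nucleon CM energy
    E_chi = (s + m_chi**2 - m**2) / (2.0 * sqs)  # lepton CM energy
    k = lam / (2.0 * sqs)                        # CM momentum |k|
    c2 = 1.0 - x                                 # cos^2(theta/2)
    tail = (m_chi**2 / s) * (c2 + 2.0 * x * x)   # m_chi^2/s (cos^2 + 2 sin^4)
    S_V2 = m_chi**2 * m**2 / (k**2 * s) + 2.0 * c2 + (4.0 * k**2 / s) * x * x + tail
    S_A2 = (3.0 * m_chi**2 * m**2 / (k**2 * s) + (4.0 * m**2 / s) * x
            + 2.0 * c2 + (4.0 * k**2 / s) * x * x + tail)
    S_VA = 2.0 * x * (1.0 - m**2 / s) - (4.0 * k**2 / s) * x * x - tail
    ep, ec, mu = E_p / sqs, E_chi / sqs, m / sqs
    C_A = (ep - mu)**2 * x * c2 * (m_chi**2 / k**2 + 2.0 * x)
    C_M = 2.0 * (ep - mu) * c2 * x * (ec + mu * x + ep * c2)
    C_V = ((mu * x + ep * c2)**2 * (m_chi**2 / k**2 + 2.0 * c2)
           + c2 * (m_chi**2 / s + 2.0 * k**2 / s + 4.0 * ec * (mu * x + ep * c2)))
    return S_V2, S_A2, S_VA, C_V, C_A, C_M

def dsigma_total_496(x, s, m, m_chi, m_A, Phi_V2, Phi_A2, Phi_VA, G_V, G_A,
                     G_F=1.0, VmA=True):
    """Total averaged cross section (4.96) on a spinless nucleus; the Phi and G
    form-factor combinations are supplied by the caller as plain numbers."""
    S_V2, S_A2, S_VA, C_V, C_A, C_M = coefficients_496(x, s, m, m_chi)
    sgn = 1.0 if VmA else -1.0                   # upper sign: V-A, lower: V+A
    return G_F**2 * m_A / (4.0 * math.pi) * (
        Phi_V2 * S_V2 + Phi_A2 * S_A2 + sgn * 2.0 * Phi_VA * S_VA
        + G_V**2 * C_V + G_A**2 * C_A + sgn * 2.0 * G_V * G_A * C_M)
```

With the \(\Phi\) and \(G\) combinations precomputed from the nucleon-nucleus form factors, such a routine makes it straightforward to compare the incoherent (\(S\)) and coherent (\(C\)) contributions as functions of the scattering angle.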
## 5 Conclusions
This paper outlines all elements of the theoretical approach to the description of the scattering of a _massive_ (neutral) lepton on a nucleus as an object, the internal structure of which is
due to mutually interacting nucleons. This is a natural generalization of the new concept in the theory of (anti)neutrino scattering on the nucleus as a composite system proposed by D.V. Naumov and the author in [1; 2; 3; 9]. The boundary condition for the applicability of the approach is the requirement to preserve integrity of the nucleus after interaction. The nucleus as a complex quantum system can be "excited", can pass from one of its quantum states to another, but in the canonical version of this approach, the nucleus should not "fall apart". This requirement imposes restrictions on the magnitude of the momentum transferred by the lepton to the nucleus, which should not be so large as to completely "destroy the nucleus". In addition, the applicability of the approach is, strictly speaking, also limited by the approximations made, which, in turn, are explicitly formulated in the text, look quite natural, and, if necessary, can be specially investigated and taken into account as, for example, correction factors or contributions.
The approach constructively uses the so-called condition for the completeness of the quantum state of the nucleus (or the probability conservation condition), which, being based on the summation over all possible initial and final states of the nucleus, allows one to explicitly separate the elastic process from all other inelastic processes capable of contributing to the total observable \(\chi A\) scattering cross section. The approach further relies on the microscopic description of the nucleus as a bound state of its constituent nucleons on the basis of the many-particle wave function of the nucleus. The _weak_ interaction of a point-like \(\chi\) lepton is considered to be between structureless protons and neutrons of the nucleus. This interaction is parameterized in the form of an expansion of the scalar product of the lepton and nucleon currents in four effective coupling constants reflecting the (axial) vector nature of the weak interaction
\[(l^{w}_{s^{\prime}s},h^{w,f}_{r^{\prime}r})=\alpha_{f}(l^{v}_{s^{\prime}s}\,h^ {v}_{r^{\prime}r})+\beta_{f}(l^{v}_{s^{\prime}s}\,h^{a}_{r^{\prime}r})+\gamma _{f}(l^{a}_{s^{\prime}s}\,h^{v}_{r^{\prime}r})+\delta_{f}(l^{a}_{s^{\prime}s} \,h^{a}_{r^{\prime}r}).\]
_Relativistic_ expressions for the cross sections of the massive (neutral) \(\chi\) lepton scattering on the nucleus are obtained in a general form. It is shown that the _observable_ cross section of the process \(\chi A\to\chi A^{(*)}\) (4.60) includes the elastic (or coherent) contribution (4.32), when the nucleus remains in its original quantum state, and the inelastic (incoherent) contribution (4.39), when the nucleus goes into another (excited) quantum state. The transition from the elastic scattering regime to the inelastic scattering regime is _automatically_ determined by the dependence of the nucleon-nucleus form factors \(F_{p/n}(\mathbf{q})\) on the momentum \(\mathbf{q}\) transferred to the nucleus. At small \(\mathbf{q}\) the elastic scattering dominates; as \(\mathbf{q}\) increases, the contribution of the inelastic scattering grows, and the latter dominates at sufficiently large \(\mathbf{q}\). This _automatic behavior_ makes this approach fundamentally different from Freedman's concept of coherence [6; 7], within which, before using formulas for the coherent scattering of the (anti)neutrino on the nucleus, one has to make sure in advance that this can be done, i.e., that the product of the characteristic radius of the target nucleus and some characteristic momentum transferred to the nucleus is noticeably less than unity (\(|\mathbf{q}|R\ll 1\)). In our approach, this question does not arise at all.
As an important application of the general formulas, the scattering of _massive_ (anti)neutrinos interacting with nucleons through the \(V{\mp}A\) current of the Standard Model is considered in detail. The weak \(V{-}A\) interaction with the nucleus corresponds to the massive analogue of the neutrino (negative helicity), and the case of the weak \(V{+}A\) interaction is the massive analogue of the antineutrino (positive helicity). A complete set of expressions for the corresponding cross sections is obtained (e.g., formulas (4.77) and (4.80)). The transition of these formulas to the formulas from [3] corresponding to the scattering of _massless_ (anti)neutrinos of the Standard Model on the nucleus is demonstrated.
Owing to the nonzero mass of the (anti)neutrino, an additional channel for the elastic (coherent) and inelastic (incoherent) scattering of (anti)neutrinos on nuclei arises from the possibility of a helicity flip of the massive (anti)neutrino. For example, despite the smallness of the neutrino mass, at (anti)neutrino kinetic energies much lower than this mass (for example, for relic neutrinos), the cross section of their interaction with the nucleus turns out to be enhanced many times over by the "nucleus coherence effect".
The resulting expressions can be used, for example, in the analysis of results of experiments on the direct detection of neutral massive weakly interacting relativistic and non-relativistic particles of dark matter, since, unlike the generally accepted case, both the elastic and inelastic interactions of such particles with the target nucleus are simultaneously taken into account. In this case, the presence of the "inelastic signal" with its characteristic signature in the form of \(\gamma\)-quanta from deexcitation of the nucleus may be the only registered evidence [3; 4; 9] of the interaction of the dark matter particle.
## Acknowledgments
The author is grateful to V.A. Kuzmin, D.V. Naumov, E.A. Yakushev and other colleagues for the important comments and discussions.
## 6 Appendix
Wave functions. Recall [1; 2; 3] that the state of the nucleus is denoted by the symbol \(|P_{l}\rangle\), which means that the nucleus has a 4-momentum \(P_{l}\), is in some \(l\)th internal quantum state (\(l=n,m\)) and the \(|P_{l}\rangle\) is a superposition of free nucleons \(|\{p\}\rangle\) multiplied by the wave function of the bound state \(\widetilde{\psi}^{\prime}_{n}(\{p\})\). The latter is a product of the wave function \(\widetilde{\psi}_{n}(\{p^{\star}\})\), which describes the internal structure of the nucleus in its rest frame (the corresponding momenta are marked with index \(\star\)), and the wave function \(\Phi_{n}(p)\) responsible for the motion of the nucleus as a whole with the momentum \(\mathbf{p}=\sum_{i=1}^{A}\mathbf{p}_{i}\) and the projection of the nuclear spin \(s\)
\[\widetilde{\psi}^{\prime}_{n}(\{p\})=\widetilde{\psi}_{n}(\{p^{\star}\})\Phi_{n}(p),\quad\mbox{where}\quad p=(\mathbf{p},s). \tag{6.1}\]
The function \(\Phi_{n}(p)\) depends on \(A-1\) terms of 3-momenta, since one combination of \(A\) 3-momenta is used to describe the motion of the nucleus as a whole. Taking into account
(6.1) for the state \(|P_{n}\rangle\), which completely characterizes the nucleus \(A\), we will use the (antisymmetrized) expression (3.2) from the main text of the paper.
For nuclear states \(|n\rangle\) describing the nucleus at rest in the \(n\)th internal quantum state (\(n\)th level), the normalization condition is adopted in the form
\[\langle m|n\rangle\equiv\int\Big{(}\prod_{i}^{A}\frac{d\mathbf{p}_{i}^{\star}}{(2 \pi)^{3}}\Big{)}\widetilde{\psi}_{n}(\{p^{\star}\})\widetilde{\psi}_{m}^{\ast} (\{p^{\star}\})(2\pi)^{3}\delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{\star})=\delta_{ mn}. \tag{6.2}\]
For nuclear states \(|P_{n}\rangle\) from (3.2) it gives the normalization condition
\[\langle P_{m}^{\prime}|P_{n}\rangle=(2\pi)^{3}2P_{n}^{0}\delta^{3}(\mathbf{P}-\bm {P}^{\prime})\delta_{nm}. \tag{6.3}\]
A formal definition of the \(|n\rangle\)-state satisfying (6.2) can be given as follows:
\[|n\rangle=\int\Big{(}\prod_{i=1}^{A}d\overline{\mathbf{p}_{i}^{\star}}\Big{)} \frac{\widetilde{\psi}_{n}(\{p^{\star}\})}{\sqrt{A!}}\Big{[}(2\pi)^{3}\delta^ {3}\Big{(}\sum_{i=1}^{A}\mathbf{p}_{i}^{\star}\Big{)}\Big{]}^{1/2}|\{p^{\star}\}\rangle. \tag{6.4}\]
Formula for the hadronic current \(\mathbf{h}_{mn}^{\mu}(\mathbf{q})\). The definition of the matrix element in terms of the effective Lagrangian \({\cal L}_{\rm int}(x)=\frac{G_{\rm F}}{\sqrt{2}}H^{\mu}(x)\,L_{\mu}(x)\) looks as follows:
\[i(2\pi)^{4}\delta^{4}(\sum p_{i}-\sum p_{f}){\cal M}\equiv\langle{\rm f}|i{ \int}d^{4}x{\cal L}_{\rm int}(x)|{\rm i}\rangle=\langle{\rm f}|\frac{iG_{\rm F }}{\sqrt{2}}{\int}d^{4}x\!:\!H^{\mu}(x)L_{\mu}(x)\!:\!|{\rm i}\rangle. \tag{6.5}\]
In the case of \(\chi A\to\chi A^{(\ast)}\) scattering, the initial and final states are
\[|{\rm i}\rangle = |\chi(k,s),A(P_{n})\rangle=(2\pi)^{3/2}\sqrt{2E_{\chi}}\,a_{\chi} ^{+}(\mathbf{k},s)|0\rangle|P_{n}\rangle, \tag{6.6}\] \[\langle{\rm f}| = \langle\chi(k^{\prime},s^{\prime}),A^{(\ast)}(P_{m}^{\prime})|=(2 \pi)^{3/2}\sqrt{2E_{\chi^{\prime}}}\,\langle 0|a_{\chi}(\mathbf{k}^{\prime},s^{ \prime})\langle P_{m}^{\prime}|.\]
Therefore, expression (6.5) for the process \(\chi A\to\chi A^{(\ast)}\) becomes
\[(2\pi)^{4}\delta^{4}(k+P_{n}-k^{\prime}-P_{m}^{\prime})i{\cal M}_ {mn} = \frac{iG_{\rm F}}{\sqrt{2}}\int d^{4}x\,H_{nm}^{\mu}(x)\,L_{\mu}^{ \chi}(x),\quad\mbox{where} \tag{6.7}\] \[H_{nm}^{\mu}(x) \equiv \langle P_{m}^{\prime}|:\!\hat{H}^{\mu}(x)\!:|P_{n}\rangle,\] \[L_{\mu}^{\chi}(x) \equiv (2\pi)^{3}\sqrt{2E_{\chi^{\prime}}2E_{\chi}}\langle 0|a_{\chi}( \mathbf{k}^{\prime},s^{\prime}):\!\hat{L}_{\mu}(x)\!:a_{\chi}^{+}(\mathbf{k},s)|0\rangle.\]
Using the fermionic quantum field operators of the \(\chi\) particle and the nucleon
\[\overline{\hat{\psi}}(x)=\int\frac{d\mathbf{y}^{\prime}e^{iy^{\prime}x}}{\sqrt{(2 \pi)^{3}2E_{\mathbf{y}^{\prime}}}}\sum_{s=1,2}a_{s}^{+}(\mathbf{y}^{\prime})\overline{ u}(\mathbf{y}^{\prime},s),\ \ \hat{\psi}(x)=\int\frac{d\mathbf{y}e^{-iyx}}{\sqrt{(2\pi)^{3}2E_{\mathbf{y}}}}\sum_{r=1,2}a _{r}(\mathbf{y})u(\mathbf{y},r), \tag{6.8}\]
one can write the nucleon \(\hat{H}^{\mu}(x)\) and the lepton \(\hat{L}_{\mu}(x)\) operators in the following way:
\[\hat{H}^{\mu}(x)\equiv\sum_{k}^{A}\overline{\hat{\psi}}_{k}(x)\,O_{k}^{\mu}\, \hat{\psi}_{k}(x)\quad\mbox{and}\quad\hat{L}_{\mu}(x)\equiv\overline{\hat{\psi }}_{\chi}(x)\,O_{\mu}\,\hat{\psi}_{\chi}(x),\]
where \(O^{\mu}\) is some combination of \(\gamma\)-matrices corresponding to the specific Lorentz interaction structure. Then the lepton element from formula (6.7) is transformed as follows:
\[L_{\mu}^{\chi}(x) = \sqrt{E_{k^{\prime}}E_{k}}\int\frac{d\mathbf{y}^{\prime}d\mathbf{y}e^{ix(y^ {\prime}-y)}}{\sqrt{E_{\mathbf{y}^{\prime}}E_{\mathbf{y}}}}\sum_{r,r^{\prime}=1,2} \langle 0|a_{s^{\prime}}(\mathbf{k}^{\prime})\!:\!a_{r^{\prime}}^{+}(\mathbf{y}^{ \prime})\overline{u}(\mathbf{y}^{\prime},r^{\prime})O_{\mu}a_{r}(\mathbf{y})u(\mathbf{y},r )\!:\!a_{s}^{+}(\mathbf{k})|0\rangle.\]
Further, since \(a_{s^{\prime}}(\mathbf{k}^{\prime})a_{r^{\prime}}^{+}(\mathbf{y}^{\prime})=\delta_{s^{ \prime},r^{\prime}}\delta(\mathbf{k}^{\prime}-\mathbf{y}^{\prime})\) and \(a_{r}(\mathbf{y})a_{s}^{+}(\mathbf{k})=\delta_{s,r}\delta(\mathbf{k}-\mathbf{y})\), one gets
\[L_{\mu}^{\chi}(x)=e^{ix(k^{\prime}-k)}\overline{u}_{\chi}(\mathbf{k}^{\prime},s^{ \prime})O_{\mu}u_{\chi}(\mathbf{k},s)\equiv e^{-ixq}\,\overline{u}_{\chi}(\mathbf{k}^ {\prime},s^{\prime})O_{\mu}u_{\chi}(\mathbf{k},s). \tag{6.9}\]
According to (3.2), the initial and final wave functions of the nucleus are given by the formulas
\[\langle P^{\prime}_{m}|= \int\Bigl{(}\prod_{j}^{A}d\tilde{\mathbf{p}^{\prime}}_{j}^{\star} \Bigr{)}\frac{\tilde{\psi}_{m}^{*}(\{p^{\prime}{}^{\star}\})}{\sqrt{A!}}\Phi_ {m}^{*}(p^{\prime})\langle\{p^{\prime}{}^{\star}\}|,\ \ |P_{n}\rangle= \int\Bigl{(}\prod_{i}^{A}d\tilde{\mathbf{p}^{\star}_{i}}\Bigr{)}\frac{ \tilde{\psi}_{n}(\{p^{*}\})}{\sqrt{A!}}\Phi_{n}(p)|\{p^{*}\}\rangle. \tag{6.10}\]
In this case, the product of the "external" wave functions is
\[\Phi_{m}^{*}(p^{\prime})\Phi_{n}(p) = (2\pi)^{3}\sqrt{2P_{m}^{0^{\prime}}}\delta^{3}(\mathbf{p}^{\prime}- \mathbf{P}^{\prime}_{m})\cdot(2\pi)^{3}\sqrt{2P_{n}^{0}}\delta^{3}(\mathbf{p}-\mathbf{P}_ {n}), \tag{6.11}\]
where \(\mathbf{P}_{n}\) and \(\mathbf{P}^{\prime}_{m}\) are the 3-momenta of motion of the entire nucleus as a whole, and \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\) are the arguments of the wave functions that have the meaning \(\sum_{i}^{A}\mathbf{p}_{i}\) and \(\sum_{i}^{A}\mathbf{p}^{\prime}_{i}\), respectively. By virtue of (6.10), the nucleon matrix element from formula (6.7) with allowance for the summation over all \(A\) nucleons takes the form
\[H_{mn}^{\mu}(x) = \sum_{k}^{A}\sum_{s_{k},r_{k}}^{1,2}\int\frac{d\mathbf{y}^{\prime}_{k}d\mathbf{y}_{k}e^{ix(y^{\prime}_{k}-y_{k})}}{(2\pi)^{3}\sqrt{2E_{\mathbf{y}_{k}}2E_{\mathbf{y}^{\prime}_{k}}}}\int\Big(\prod_{j}^{A}d\tilde{\mathbf{p}}^{\prime\star}_{j}\Big)\frac{\tilde{\psi}_{m}^{*}(\{p^{\prime\star}\})}{\sqrt{A!}}\Phi_{m}^{*}(p^{\prime})\times\] \[\times\int\Big(\prod_{i}^{A}d\tilde{\mathbf{p}}^{\star}_{i}\Big)\frac{\tilde{\psi}_{n}(\{p^{\star}\})}{\sqrt{A!}}\Phi_{n}(p)\langle\{p^{\prime\star}\}|a_{s_{k}}^{+}(\mathbf{y}^{\prime}_{k})\big\{\overline{u}(\mathbf{y}^{\prime}_{k},s_{k})O_{k}^{\mu}u(\mathbf{y}_{k},r_{k})\big\}a_{r_{k}}(\mathbf{y}_{k})|\{p^{\star}\}\rangle. \tag{6.12}\]
To further transform expression (6.12), it is necessary to calculate the matrix element of the single-particle (\(k\)th) nucleon current
\[h_{k}^{\mu}(\mathbf{y}^{\prime}_{k},r^{\prime}_{k},\mathbf{y}_{k},r_{k})\equiv \overline{u}(\mathbf{y}^{\prime}_{k},r^{\prime}_{k})O_{k}^{\mu}u(\mathbf{y}_{k},r_{k}) \tag{6.13}\]
from the many-particle state of \(A\) free nucleons
\[w_{A}^{k}\equiv\langle\{p^{\prime}{}^{\star}\}|a_{r^{\prime}_{k}}^{+}(\mathbf{y}^{ \prime}_{k})\big{\{}h_{k}^{\mu}(\mathbf{y}^{\prime}_{k},r^{\prime}_{k},\mathbf{y}_{k},r _{k})\big{\}}a_{r_{k}}(\mathbf{y}_{k})|\{p^{*}\}\rangle. \tag{6.14}\]
Here and below, the renaming \(r^{\prime}_{k}\equiv s_{k}\) is used. The states \(\langle\{p^{\prime}{}^{\star}\}|\) and \(|\{p^{*}\}\rangle\) can be written by isolating the active \(k\)th nucleon, for example, as
\[{}_{A}\langle\{p^{\prime}{}^{\star}\}|=\langle 0|(..,p^{{}^{\prime}{}^{ \star}}_{k},..);..,a_{s_{p^{\prime}}}(\mathbf{p^{\prime}{}^{\star}_{k}}),..|,\ \ \ |\{p^{*}\}\rangle_{A}=|..,a_{r_{p}}^{+}(\mathbf{p^{*} }_{k}),..;(..,p^{*}_{k},..)|0\rangle.\]
Then expression (6.14) can be represented as follows:
\[w^{k}_{A}={}_{A-1}\langle\{p^{\prime}{}^{\star}\}||\{p^{\star}\} \rangle_{A-1}\times w^{k}_{1},\quad\text{where} \tag{6.15}\] \[w^{k}_{1}\equiv\langle\mathbf{p}^{{}^{\prime}\star}_{k},s_{p^{\prime} }|a^{+}_{r^{\prime}_{k}}(\mathbf{y}^{\prime}_{k})\big{\{}\overline{u}(\mathbf{y}^{ \prime}_{k},r^{\prime}_{k})O^{\mu}_{k}u(\mathbf{y}_{k},r_{k})\big{\}}a_{r_{k}}(\bm {y}_{k})|\mathbf{p}^{\star}_{k},r_{p}\rangle \tag{6.16}\]
is the matrix element of the one-particle current (6.13) in the one-nucleon (\(k\)th) state, and
\[{}_{A-1}\langle\{p^{\prime}{}^{\star}\}||\{p^{\star}\}\rangle_{A-1} \equiv \langle 0|(..,p^{{}^{\prime}\star}_{k-1},p^{{}^{\prime}\star}_{k+ 1},..)|(..,p^{\star}_{k-1},p^{\star}_{k+1},..)|0\rangle=\] \[= (A-1)!\Big{(}\prod_{l\neq k}^{A-1}(2\pi)^{3}2E_{\mathbf{p}^{\prime}_{ l}}\delta^{3}(\mathbf{p}^{\star}_{l}-\mathbf{p}^{{}^{\prime}\star}_{l})\delta_{r_{l},r^{ \prime}_{l}}\Big{)}\]
is the normalization condition for the state of \((A-1)\) free nucleons.
Considering the normalization factor \((2\pi)^{3}\sqrt{2E_{p^{\star}_{k}}2E_{p^{\prime}_{k}}}\) for the single-particle initial and final \(k\)th nucleon state, commutation conditions for creation and annihilation operators
\[a_{s_{p^{\prime}}}(\mathbf{p}^{{}^{\prime}\star}_{k})a^{+}_{r^{\prime}_{k}}(\mathbf{y} ^{\prime}_{k})=\delta^{3}(\mathbf{p}^{{}^{\prime}\star}_{k}-\mathbf{y}^{\prime}_{k}) \delta_{r^{\prime}_{k},s_{p^{\prime}}},\quad a_{r_{k}}(\mathbf{y}_{k})a^{+}_{r_{p} }(\mathbf{p}^{\star}_{k})=\delta^{3}(\mathbf{p}^{\star}_{k}-\mathbf{y}_{k})\delta_{r_{k}, r_{p}}\]
as well as the vacuum normalization \(\langle 0|0\rangle=1\), the single-particle matrix element \(w^{k}_{1}\) (6.16) is transformed as follows
\[w^{k}_{1}=(2\pi)^{3}\sqrt{2E_{p^{\star}_{k}}2E_{p^{\prime\star}_{k}}}\,\delta^{3}(\mathbf{p}^{\prime\star}_{k}-\mathbf{y}^{\prime}_{k})\delta_{r^{\prime}_{k},s_{p^{\prime}}}\delta^{3}(\mathbf{p}^{\star}_{k}-\mathbf{y}_{k})\delta_{r_{k},r_{p}}\big\{\overline{u}(\mathbf{y}^{\prime}_{k},r^{\prime}_{k})O^{\mu}_{k}u(\mathbf{y}_{k},r_{k})\big\}.\]
With this expression for \(w^{k}_{1}\), the matrix element of the (\(k\)th) nucleon current in the state of \(A\) free nucleons (6.15) takes the form
\[w^{k}_{A}= (A-1)!\Big(\prod_{l\neq k}^{A-1}(2\pi)^{3}2E_{\mathbf{p}^{\prime}_{l}}\delta^{3}(\mathbf{p}^{\star}_{l}-\mathbf{p}^{\prime\star}_{l})\delta_{r_{l},r^{\prime}_{l}}\Big)\big\{\overline{u}(\mathbf{y}^{\prime}_{k},r^{\prime}_{k})O^{\mu}_{k}u(\mathbf{y}_{k},r_{k})\big\}\] \[\times(2\pi)^{3}\sqrt{2E_{p^{\star}_{k}}2E_{p^{\prime\star}_{k}}}\,\delta^{3}(\mathbf{p}^{\prime\star}_{k}-\mathbf{y}^{\prime}_{k})\delta_{r^{\prime}_{k},s_{p^{\prime}}}\delta^{3}(\mathbf{p}^{\star}_{k}-\mathbf{y}_{k})\delta_{r_{k},r_{p}}.\]
Substituting this expression into the formula for the hadronic current (6.12), and taking into account that the active \(k\)th nucleon can be in any of the \(A\) places in the nucleus, which gives the factor \(A\), one gets
\[H^{\mu}_{mn}(x) = \sum_{k}^{A}\int\Big[\prod_{j,i}^{A}\frac{d\mathbf{p}^{\prime\star}_{j}\ d\mathbf{p}^{\star}_{i}}{(2\pi)^{6}\sqrt{4E_{\mathbf{p}^{\prime\star}_{j}}E_{\mathbf{p}^{\star}_{i}}}}\Big]\Big[\prod_{l\neq k}^{A-1}(2\pi)^{3}2E_{\mathbf{p}^{\star}_{l}}\delta^{3}(\mathbf{p}^{\star}_{l}-\mathbf{p}^{\prime\star}_{l})\delta_{r_{l},r^{\prime}_{l}}\Big]\widetilde{\psi}^{*}_{m}(\{p^{\prime\star}\})\widetilde{\psi}_{n}(\{p^{\star}\})\Phi^{*}_{m}(p^{\prime})\Phi_{n}(p)\] \[\times\int\frac{d\mathbf{y}^{\prime}_{k}d\mathbf{y}_{k}e^{ix(y^{\prime}_{k}-y_{k})}\delta^{3}(\mathbf{p}^{\prime\star}_{k}-\mathbf{y}^{\prime}_{k})\delta^{3}(\mathbf{p}^{\star}_{k}-\mathbf{y}_{k})}{\sqrt{4E_{\mathbf{y}_{k}}E_{\mathbf{y}^{\prime}_{k}}}}\sqrt{4E_{\mathbf{p}^{\star}_{k}}E_{\mathbf{p}^{\prime\star}_{k}}}\,[\overline{u}(\mathbf{y}^{\prime}_{k},r^{\prime}_{k})O_{k}^{\mu}u(\mathbf{y}_{k},r_{k})].\]
Integration in this expression over \(d\mathbf{y}^{\prime}_{k}d\mathbf{y}_{k}\), summation over the coinciding spin indices of the spectator nucleons, and separating out the integration over \(\mathbf{p}^{\star}_{k}\) and \(\mathbf{p}^{\prime\star}_{k}\) give the expression
\[H^{\mu}_{mn}(x)= \sum_{k}^{A}\int\Big[\prod_{j,i\neq k}^{A}\frac{d\mathbf{p}^{\prime\star}_{j}\,d\mathbf{p}^{\star}_{i}\,e^{ix(p^{\prime\star}_{j}-p^{\star}_{i})}\delta^{3}(\mathbf{p}^{\star}_{i}-\mathbf{p}^{\prime\star}_{j})}{(2\pi)^{3}\sqrt{E_{\mathbf{p}^{\prime\star}_{j}}E_{\mathbf{p}^{\star}_{i}}}}E_{\mathbf{p}^{\star}_{i}}\Big]\big\{\overline{u}(\mathbf{p}^{\prime\star}_{k},r_{p^{\prime}_{k}})O^{\mu}_{k}u(\mathbf{p}^{\star}_{k},r_{p_{k}})\big\}\times\] \[\times\widetilde{\psi}^{*}_{m}(\{p^{\prime\star}\})\widetilde{\psi}_{n}(\{p^{\star}\})[\Phi^{*}_{m}(p^{\prime})\Phi_{n}(p)]\frac{d\mathbf{p}^{\prime\star}_{k}d\mathbf{p}^{\star}_{k}}{(2\pi)^{6}\sqrt{4E_{\mathbf{p}^{\prime\star}_{k}}E_{\mathbf{p}^{\star}_{k}}}},\]
where after integration over \(d\mathbf{p}_{j}^{\,\star}\) (except for \(\mathbf{p}_{k}^{\,\star}\)), one gets
\[H_{mn}^{\mu}(x) = \big[\sqrt{4P_{m}^{0^{\prime}}P_{n}^{0}}\delta^{3}(\mathbf{q}+\mathbf{P}_{n}-\mathbf{P}_{m}^{\prime})\delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{\star})\big]\sum_{k}^{A}\int\Big[\prod_{i\neq k}^{A}\frac{E_{\mathbf{p}_{i}^{\star}}\ d\mathbf{p}_{i}^{\star}}{(2\pi)^{3}\sqrt{E_{\mathbf{p}_{i}^{\star}}E_{\mathbf{p}_{i}^{\star}}}}\Big]\times\] \[\times \widetilde{\psi}_{m}^{\star}(\{p_{\star}^{(k)}\})\widetilde{\psi}_{n}(\{p^{\star}\})\big\{\overline{u}(\mathbf{p}_{k}^{\prime\star},r_{p_{k}^{\prime}})O_{k}^{\mu}u(\mathbf{p}_{k}^{\star},r_{p_{k}})\big\}\frac{d\mathbf{p}_{k}^{\prime\star}d\mathbf{p}_{k}^{\star}e^{ix(p_{k}^{\prime\star}-p_{k}^{\star})}}{\sqrt{4E_{\mathbf{p}_{k}^{\star}}E_{\mathbf{p}_{k}^{\prime\star}}}}.\]
It is used here that the product of functions (6.11) can be written as [1; 3]
\[\Phi_{m}^{\star}(p^{\prime})\Phi_{n}(p)=(2\pi)^{6}\sqrt{4P_{m}^{0^{\prime}}P_ {n}^{0}}\delta^{3}(\mathbf{q}+\mathbf{P}_{n}-\mathbf{P}_{m}^{\prime})\delta^{3}(\sum_{i=1}^ {A}\mathbf{p}_{i}^{\star}), \tag{6.17}\]
and the notation \(\widetilde{\psi}_{m}^{\star}(\{p_{\star}^{(k)}\})\equiv\widetilde{\psi}_{m}^{ \star}(\{p^{\star}\},\mathbf{p}_{k}^{{}^{\prime}\star}\neq\mathbf{p}_{k}^{\star},r_{p_{ k}^{\prime}}\neq r_{p_{k}})\), is introduced, where the argument \(\{p_{\star}^{(k)}\}\) is the same as \(\{p^{\star}\}\), except that at the \(k\)th place in \(\{p_{\star}^{(k)}\}\) there is not the pair \((\mathbf{p}_{k}^{\star},r_{p_{k}})\) but another pair \((\mathbf{p}_{k}^{{}^{\prime}\star},r_{p_{k}^{\prime}})\neq(\mathbf{p}_{k}^{\star},r_{p_ {k}})\).
The next step is the integration of expression (6.7) over \(x\), which\({}^{12}\), after one more integration over \(d\mathbf{p}_{k}^{{}^{\prime}\star}\) due to \(\delta^{3}(\mathbf{p}_{k}^{\star}+\mathbf{q}-\mathbf{p}_{k}^{{}^{\prime}\star})\), reduces the integral \(\int d^{4}x\,H_{nm}^{\mu}(x)\,L_{\mu}^{\chi}(x)\) to the following form:
Footnote 12: The relation \(\int d^{4}x\,e^{ix(p_{k}^{\prime\star}-p_{k}^{\star}-q)}=(2\pi)^{4}\delta^{4}(p_{k}^{\star}+q-p_{k}^{\prime\star})\) was used.
\[(2\pi)^{4}\big[\sqrt{4P_{m}^{0^{\prime}}P_{n}^{0}}\,\delta^{3}(\mathbf{q}+\mathbf{P}_{n}-\mathbf{P}_{m}^{\prime})\,\delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{\star})\big]\overline{u}_{\chi}(\mathbf{k}^{\prime},s^{\prime})O_{\mu}u_{\chi}(\mathbf{k},s)(2\pi)^{3}\sum_{k}^{A}\int\Big(\prod_{i}^{A}\frac{d\mathbf{p}_{i}^{\star}}{(2\pi)^{3}}\Big)\times\] \[\times\delta(p_{0,k}^{\star}+q_{0}-p_{0,k}^{\prime\star})\,\frac{\overline{u}(\mathbf{p}_{k}^{\prime\star}\!=\!\mathbf{p}_{k}^{\star}+\mathbf{q},r_{p_{k}^{\prime}})\,O_{k}^{\mu}\,u(\mathbf{p}_{k}^{\star},r_{p_{k}})}{\sqrt{4E_{\mathbf{p}_{k}^{\star}+\mathbf{q}}\,E_{\mathbf{p}_{k}^{\star}}}}\,\widetilde{\psi}_{m}^{*}(\{p_{\star}^{(k)}\},\mathbf{p}_{k}^{\prime\star}\!=\!\mathbf{p}_{k}^{\star}+\mathbf{q})\,\widetilde{\psi}_{n}(\{p^{\star}\}).\]
Equating this integral to the definition of matrix element (6.7), using the equality \(\delta^{4}(k+P_{n}-k^{\prime}-P_{m}^{\prime})=\delta^{3}(\mathbf{q}+\mathbf{P}_{n}-\mathbf{P}_{m}^{\prime})\delta(q_{0}+P_{0,n}-P_{0,m}^{\prime})\) with \(q=k-k^{\prime}\), and introducing the notation \(l_{\mu}(k^{\prime},k,s^{\prime},s)\equiv\overline{u}_{\chi}(\mathbf{k}^{\prime},s^{\prime})O_{\mu}u_{\chi}(\mathbf{k},s)\) for the lepton current, one gets the matrix element in the form
\[i\mathcal{M}_{mn}\delta(q_{0}\!+\!P_{0,n}\!-\!P_{0,m}^{\prime})\! =\!\frac{iG_{\rm F}}{\sqrt{2}}l_{\mu}(k^{\prime},k,s^{\prime},s)\sqrt{4P_{m}^{0^ {\prime}}P_{n}^{0}}\sum_{k}^{A}\int\Big{[}\prod_{i}^{A}\frac{d\mathbf{p}_{i}^{ \star}}{(2\pi)^{3}}\Big{]}(2\pi)^{3}\delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{ \star})\times\] \[\times\delta(q_{0}+p_{0,k}^{\star}-p_{0,k}^{{}^{\prime}\star}) \frac{\overline{u}(\mathbf{p}_{k}^{\star}+\mathbf{q},r_{p_{k}^{\prime}})O_{k}^{\mu}u( \mathbf{p}_{k}^{\star},r_{p_{k}})}{\sqrt{4E_{\mathbf{p}_{k}^{\star}}\!\!+\!\mathbf{q}\,E_{ \mathbf{p}_{k}^{\star}}}}\widetilde{\psi}_{m}^{*}(\{p_{\star}^{(k)}\},\mathbf{p}_{k}^{ \star}+\mathbf{q})\widetilde{\psi}_{n}(\{p^{\star}\}).\]
The delta functions (with common argument \(q_{0}\)) on the right and left sides of this relation cancel each other only if the nuclear integrity condition (3.6) is met
\[P_{0,n}\!-\!P_{0,m}^{\prime}\equiv-T_{A}-\Delta\varepsilon_{mn}=p_{0,k}^{\star}-p_{0,k}^{\prime\star}\equiv\sqrt{m^{2}+\mathbf{p}_{k}^{\star 2}}-\sqrt{m^{2}+(\mathbf{p}_{k}^{\star}+\mathbf{q})^{2}}.\]
Then integration over \(\mathbf{p}_{k}^{\star}\) in the hadronic current \(h_{mn}^{\mu}\) (see (3.3)) can also be removed using the delta function of the form
\[\delta(-T_{A}-\Delta\varepsilon_{mn}+\sqrt{m^{2}+\mathbf{p}_{k}^{\star 2}}-\sqrt{m^{2}+(\mathbf{p}_{k}^{\star}+\mathbf{q})^{2}})\equiv\delta\big{(}f(\mathbf{p}_{k}^{\star})\big{)}. \tag{6.18}\]
This allows one to take the single-particle hadronic current at \(\mathbf{\bar{p}_{k}^{\star}}(\mathbf{q})\) outside the integral sign, where \(\mathbf{\bar{p}_{k}^{\star}}(\mathbf{q})\) is the \(\mathbf{q}\)_-dependent_ solution of the equation \(f(\mathbf{\bar{p}_{k}^{\star}})=0\), and finally get
\[h_{mn}^{\mu}=\sum_{k}^{A}\frac{\overline{u}(\mathbf{\bar{p}_{k}^{ \star}}+\mathbf{q},s_{p_{k}})O_{k}^{\mu}u(\mathbf{\bar{p}_{k}^{\star}},r_{p_{k}})}{ \sqrt{4E_{\mathbf{\bar{p}_{k}^{\star}}}E_{\mathbf{\bar{p}_{k}^{\star}}+\mathbf{q}}}}\int \prod_{i\neq k}^{A}\frac{d\mathbf{p}_{i}^{\star}}{(2\pi)^{3}}\widetilde{\psi}_{m}^ {\star}(\{p_{\star}^{(k)}\},\mathbf{\bar{p}_{k}^{\star}}+\mathbf{q})\widetilde{\psi}_{ n}(\{p_{\star}^{(k)}\},\mathbf{\bar{p}_{k}^{\star}})(2\pi)^{3}\delta^{3}(\sum_{i=1}^{A} \mathbf{p}_{i}^{\star}).\]
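This step rests on the standard composition rule for the delta function; recall its one-dimensional form (assuming \(f\) has only simple roots \(\bar p\)):

\[\delta\big(f(p)\big)=\sum_{\bar p:\,f(\bar p)=0}\frac{\delta(p-\bar p)}{|f^{\prime}(\bar p)|},\]

so the delta function pins the smooth single-nucleon spinor factor to the solution \(\mathbf{\bar{p}_{k}^{\star}}(\mathbf{q})\) of \(f=0\); the delta function itself, together with the associated Jacobian-type factor, remains inside the integral, as in (6.19) below.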
Formally, one can restore the integration over \(A\) momenta in the form
\[h_{mn}^{\mu} = \sum_{k}^{A}\frac{\overline{u}(\mathbf{\bar{p}_{k}^{\star}}+\mathbf{q},s_ {p_{k}})O_{k}^{\mu}u(\mathbf{\bar{p}_{k}^{\star}},r_{p_{k}})}{\sqrt{4E_{\mathbf{\bar{p }_{k}^{\star}}}E_{\mathbf{\bar{p}_{k}^{\star}}+\mathbf{q}}}}\times \tag{6.19}\] \[\times \int\prod_{i=1}^{A}\frac{d\mathbf{p}_{i}^{\star}\delta\big{(}f(\mathbf{p}_ {k}^{\star})\big{)}}{(2\pi)^{3}}\widetilde{\psi}_{m}^{\star}(\{p_{\star}^{(k)} \},\mathbf{p}_{k}^{\star}+\mathbf{q})\widetilde{\psi}_{n}(\{p^{\star}\})(2\pi)^{3} \delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{\star}).\]
Formula for the hadronic structure \(f_{mn}^{k}(\mathbf{q})\). Let us demonstrate that the multidimensional integral in formula (6.19) is the matrix element \(\langle m|e^{i\mathbf{q}\hat{X}_{k}}|n\rangle\). To this end, we note that the operator \(e^{i\mathbf{q}\hat{X}}\) increases the 3-momentum \(\mathbf{p}\) of the state \(|\mathbf{p}\rangle\) by momentum \(\mathbf{q}\), transforming the state \(|\mathbf{p}\rangle\) into the state \(|\mathbf{p}+\mathbf{q}\rangle\) by the operation13
Footnote 13: Justification of formula (6.20) can be found, for example, in [3].
\[e^{i\mathbf{q}\hat{X}}|\mathbf{p}\rangle=\frac{\sqrt{2E_{\mathbf{p}}}}{\sqrt{2E_{\mathbf{p}+ \mathbf{q}}}}|\mathbf{p}+\mathbf{q}\rangle. \tag{6.20}\]
From (6.20) and the normalization condition of the single-particle states \(\langle\mathbf{k}|\mathbf{p}\rangle=(2\pi)^{3}2E_{\mathbf{p}}\delta^{3}(\mathbf{p}-\mathbf{k})\) one gets
\[\langle\mathbf{k}|e^{i\mathbf{q}\hat{X}}|\mathbf{p}\rangle=\frac{\sqrt{2E_{\mathbf{p}}}}{ \sqrt{2E_{\mathbf{p}+\mathbf{q}}}}\langle\mathbf{k}|\mathbf{p}+\mathbf{q}\rangle=(2\pi)^{3}\sqrt{2E _{\mathbf{p}}2E_{\mathbf{p}+\mathbf{q}}}\delta^{3}(\mathbf{p}+\mathbf{q}-\mathbf{k}). \tag{6.21}\]
By definition, the matrix element \(\langle m|e^{i\mathbf{q}\hat{X}_{k}}|n\rangle\) should be calculated in the nuclear rest frame, where the nuclear state wave function \(|n\rangle\) is given by (6.4). Let us write these states in the form
\[\langle m|=\!\int\!\phi_{m}^{\star}(0)\Big(\prod_{j}^{A}d\tilde{\mathbf{p}}_{j}^{\prime\star}\Big)\frac{\tilde{\psi}_{m}^{\star}(\{p_{\star}^{\prime}\})}{\sqrt{A!}}\langle\{p_{\star}^{\prime}\}|\ \ \text{and}\ \ |n\rangle=\int\!\phi_{n}(0)\Big(\prod_{i}^{A}d\tilde{\mathbf{p}}_{i}^{\star}\Big)\frac{\tilde{\psi}_{n}(\{p_{\star}\})}{\sqrt{A!}}|\{p_{\star}\}\rangle. \tag{6.22}\]
With (6.22), the matrix element \(f_{mn}^{k}\equiv\langle m|e^{i\mathbf{q}\hat{X}_{k}}|n\rangle\) becomes
\[f_{mn}^{k}(\mathbf{q}) = \int C\ \Big{(}\prod_{j,i}^{A}\frac{d\mathbf{p}_{j}^{\prime\star}d\mathbf{p} _{i}^{\star}}{(2\pi)^{6}\sqrt{2E_{\mathbf{p}_{j}^{\prime\star}}2E_{\mathbf{p}_{i}^{ \star}}}}\Big{)}\frac{\tilde{\psi}_{m}^{\star}(\{p_{\star}^{\prime}\})\tilde{ \psi}_{n}(\{p^{\star}\})}{A!}\ Q_{k}(\mathbf{q}), \tag{6.23}\]
where \(C\equiv\phi_{m}^{\star}(0)\phi_{n}(0)\equiv(2\pi)^{3}\delta^{3}(\sum_{j}^{A} \mathbf{p}_{j}^{\star})\), and the matrix element of the \(k\)th nucleon shift operator \(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\) with respect to the many-particle state of free nucleons is introduced
\[Q_{k}(\mathbf{q})\equiv\langle\{p_{\star}^{\prime}\}|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|\{p _{\star}\}\rangle=\langle...,p_{k}^{{}^{\prime}\star},...|e^{i\mathbf{q}\hat{\mathbf{X}} _{k}}|...,p_{k}^{\star},...\rangle. \tag{6.24}\]
It can be presented as a product of the overlap of the \((A-1)\) non-interacting spectator nucleons,
\[\langle...,p_{k-1}^{{}^{\prime}\star},p_{k+1}^{{}^{\prime}\star},...|...,p_{k- 1}^{\star},p_{k+1}^{\star},...\rangle=(A-1)!\Big{[}\prod_{l\neq k}^{A-1}(2\pi )^{3}2E_{\mathbf{p}_{l}^{\prime}}\delta^{3}(\mathbf{p}_{l}^{\star}-\mathbf{p}_{l}^{{}^{ \prime}\star})\delta_{r_{l},r_{l}^{\prime}}\Big{]},\]
and the matrix element of the action of the operator \(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\) on the \(k\)th nucleon
\[Q_{k}(\mathbf{q})=\langle...,p_{k-1}^{{}^{\prime}\star},p_{k+1}^{{}^{\prime}\star },...|...,p_{k-1}^{\star},p_{k+1}^{\star},...\rangle\langle\mathbf{p}_{k}^{{}^{ \prime}\star}|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|\mathbf{p}_{k}^{\star}\rangle\delta_{r_{ k},r_{k}^{\prime}}. \tag{6.25}\]
From formula (6.21) for the single-particle nucleon state shift operator one obtains
\[\langle\mathbf{p}_{k}^{{}^{\prime}\star}|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|\mathbf{p}_{k}^{ \star}\rangle=(2\pi)^{3}\sqrt{2E_{\mathbf{p}_{k}^{\star}}2E_{\mathbf{p}_{k}^{\star}+ \mathbf{q}}}\delta^{3}(\mathbf{p}_{k}^{\star}+\mathbf{q}-\mathbf{p}_{k}^{{}^{\prime}\star}). \tag{6.26}\]
Then formula (6.25) becomes
\[Q_{k}(\mathbf{q})=A!\Big{[}\prod_{l\neq k}^{A-1}(2\pi)^{3}2E_{\mathbf{p}_{l}^{\star}} \delta^{3}(\mathbf{p}_{l}^{\star}-\mathbf{p}_{l}^{{}^{\prime}\star})\delta_{r_{l},r_{ l}^{\prime}}\Big{]}\delta_{r_{k},r_{k}^{\prime}}\sqrt{2E_{\mathbf{p}_{k}^{\star}}2E_{ \mathbf{p}_{k}^{\star}+\mathbf{q}}}(2\pi)^{3}\delta^{3}(\mathbf{p}_{k}^{\star}+\mathbf{q}-\mathbf{ p}_{k}^{{}^{\prime}\star}), \tag{6.27}\]
where one takes into account that the \(k\)th nucleon can be located at any of the \(A\) possible positions in the \(A\)-nucleus, which gives an additional permutation factor \(A\). From formula (6.27) it follows that the matrix element \(Q_{k}(\mathbf{q})\) is diagonal in the spin indices and its value does not depend on them.
Further, from the factorization of the spin and momentum dependences (3.8) and the normalization conditions (3.9) it follows that the value (6.23) is nonzero only when the spin wave functions of the initial and final states are the same: the spin structure of the nucleus does not change under the action of the shift operator. As a result, the product of nuclear wave functions enters formula (6.23) only via its "momentum component"
\[\widetilde{\psi}_{m}^{\star}(\{\mathbf{p}_{\star}^{\prime}\})\chi_{m}^{\star}(\{r^ {\prime}\})\,\widetilde{\psi}_{n}(\{\mathbf{p}_{\star}\})\chi_{n}(\{r\})\prod_{l }^{A}\delta_{r_{l},r_{l}^{\prime}}=\widetilde{\psi}_{m}^{\star}(\{\mathbf{p}_{ \star}^{\prime}\})\widetilde{\psi}_{n}(\{\mathbf{p}_{\star}\}). \tag{6.28}\]
Substituting expression (6.25) into (6.23) with allowance for (6.28), after integration over \(d\mathbf{p}_{j}^{{}^{\prime}\star}\) (without \(d\mathbf{p}_{k}^{{}^{\prime}\star}\)) due to \(\delta^{3}(\mathbf{p}_{l}^{\star}-\mathbf{p}_{l}^{{}^{\prime}\star})\) one arrives at the following:
\[f_{mn}^{k}(\mathbf{q})=\int\left[\frac{d\mathbf{p}_{k}^{\star}}{(2\pi)^{3}}\prod_{i \neq k}^{A}\frac{C\,d\mathbf{p}_{i}^{\star}}{(2\pi)^{3}}\right]\!\frac{\langle\mathbf{p }_{k}^{{}^{\prime}\star}|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|\mathbf{p}_{k}^{\star}\rangle }{(2\pi)^{3}\sqrt{4E_{\mathbf{p}_{k}^{\prime}}E_{\mathbf{p}_{k}^{\star}}}}\widetilde{ \psi}_{m}^{\star}(\{\mathbf{p}_{\star}\},\mathbf{p}_{k}^{{}^{\prime}\star}\neq\mathbf{p}_{k} ^{\star})\widetilde{\psi}_{n}(\{\mathbf{p}_{\star}\})d\mathbf{p}_{k}^{{}^{\prime}\star}. \tag{6.29}\]
Taking into account the explicit expression for the one-particle matrix element (6.26), which is proportional to \(\delta(\mathbf{p}_{k}^{\star}-\mathbf{p}_{k}^{{}^{\prime}\star}+\mathbf{q})\), and performing subsequent integration over \(d\mathbf{p}_{k}^{{}^{\prime}\star}\), which leads
to the cancellation of the normalization factors \((2\pi)^{3}\sqrt{4E_{\mathbf{p}_{k}^{\prime\star}}E_{\mathbf{p}_{k}^{\star}}}\) and to the "shift" of the argument of the wave function of the final state of the nucleus, \(\mathbf{p}_{k}^{\star}\to\mathbf{p}_{k}^{\star}+\mathbf{q}\), one gets
\[f_{mn}^{k}(\mathbf{q})=\int\Big{[}\prod_{i}^{A}\frac{d\mathbf{p}_{i}^{\star}}{(2\pi)^{3 }}\Big{]}\widetilde{\psi}_{m}^{*}(\{\mathbf{p}_{\star}^{(k)}\},\mathbf{p}_{k}^{\star}+ \mathbf{q})\widetilde{\psi}_{n}(\{\mathbf{p}_{\star}\})(2\pi)^{3}\delta^{3}(\sum_{i=1}^ {A}\mathbf{p}_{i}^{\star}). \tag{111}\]
Let us return to the condition of simultaneous conservation of energy and integrity of the nucleus. The need to apply this condition to the expression \(\langle m|e^{i\mathbf{q}\widehat{\mathbf{X}}_{k}}|n\rangle\) follows from the implicit assumption that the action of the operator \(e^{i\mathbf{q}\widehat{\mathbf{X}}_{k}}\), implemented as a momentum shift of the \(k\)th nucleon in the initial \(|n\rangle\) state of the nucleus by \(\mathbf{q}\), does not lead to the disintegration of the nucleus (the nucleus retains its integrity). Otherwise, the use of the final state of the nucleus in the form of the wave function \(\langle m|\) becomes meaningless. Roughly speaking, something from the outside strikes the \(k\)th nucleon of the nucleus in the \(|n\rangle\) state, increasing the momenta of both this nucleon and the entire nucleus by \(\mathbf{q}\), and as a result the nucleus, while maintaining its integrity, passes into the \(\langle m|\) state. The condition ensuring exactly this course of events is relation (109). Then integration over \(\mathbf{p}_{k}^{\star}\) in expression (111) can be removed by the delta function \(\delta\big{(}f(\mathbf{p}_{k}^{\star})\big{)}\), and it can be rewritten as
\[\bar{f}_{mn}^{k}(\mathbf{q})=\int\Big{[}\prod_{i}^{A}\frac{d\mathbf{p}_{i}^{\star}}{(2 \pi)^{3}}\delta\big{(}f(\mathbf{p}_{k}^{\star})\big{)}\Big{]}\widetilde{\psi}_{m}^ {*}(\{\mathbf{p}_{\star}^{(k)}\},\mathbf{p}_{k}^{\star}+\mathbf{q})\widetilde{\psi}_{n}(\{ \mathbf{p}^{\star}\})(2\pi)^{3}\delta^{3}(\sum_{i=1}^{A}\mathbf{p}_{i}^{\star}). \tag{112}\]
Note that expression (112) for \(\mathbf{q}=0\) turns into the condition for the normalization of nuclear states in the rest frame of the nucleus (104). Indeed, at \(\mathbf{q}=0\) one has \(q_{0}=0\), and from the condition \(\delta(q_{0}+P_{0,n}-P_{0,m}^{\prime})=1\) follows \(P_{0,n}-P_{0,m}^{\prime}=-T_{A}-\Delta\varepsilon_{mn}=0\), i.e.,
\[f(\mathbf{p}_{k}^{\star})=-T_{A}-\Delta\varepsilon_{mn}+\sqrt{m^{2}+\mathbf{p}_{k}^{\star 2}}-\sqrt{m^{2}+\mathbf{p}_{k}^{\star 2}}\equiv-T_{A}-\Delta\varepsilon_{mn}=0\quad\text{for any }\ \mathbf{p}_{k}^{\star}.\]
So \(\delta(f(\mathbf{p}_{k}^{\star}))\equiv 1\) can be taken out of the integral over the variable \(\mathbf{p}_{k}^{\star}\), and formula (112) becomes
\[\bar{f}_{mn}^{k}(\mathbf{0})=\langle m|e^{i\mathbf{0}\widehat{\mathbf{X}}_{k}}|n\rangle= \int\Big{[}\prod_{i}^{A}\frac{d\mathbf{p}_{i}^{\star}}{(2\pi)^{3}}\Big{]} \widetilde{\psi}_{m}^{*}(\{\mathbf{p}_{\star}^{(k)}\},\mathbf{p}_{k}^{\star}) \widetilde{\psi}_{n}(\{\mathbf{p}^{\star}\})(2\pi)^{3}\delta^{3}(\sum_{i=1}^{A}\bm {p}_{i}^{\star})\equiv\langle m|n\rangle=\delta_{mn}.\]
Therefore, formula (112) has the same meaning as expression (111), which lacks the delta function.
Using formula (112) for \(\bar{f}_{mn}^{k}(\mathbf{q})\), one can write the hadronic current (108) as
\[\bar{h}_{mn}^{\mu}(\mathbf{q})=\sum_{k}^{A}\frac{\overline{u}(\mathbf{\bar{p}}_{k}^{\star}+\mathbf{q},r_{k}^{\prime})O_{k}^{\mu}\,u(\mathbf{\bar{p}}_{k}^{\star},r_{k})}{\sqrt{4E_{\mathbf{\bar{p}}_{k}^{\star}}E_{\mathbf{\bar{p}}_{k}^{\star}+\mathbf{q}}}}\bar{f}_{mn}^{k}(\mathbf{q})\,\lambda^{mn}(r^{\prime},r), \tag{113}\]
where \(\mathbf{\bar{p}}_{k}^{\star}\) is the solution of equation (109). It is worth comparing expression (113) with a similar formula from [1, 3] that has the form
\[h_{mn}^{\mu}(\mathbf{q})=\sum_{k=1}^{A}\frac{\bar{u}(\mathbf{\bar{p}}+\mathbf{q},r_{k}^{ \prime})O_{k}^{\mu}u(\mathbf{\bar{p}},r_{k})}{\sqrt{4E_{\mathbf{\bar{p}}}E_{\mathbf{\bar{p} }+\mathbf{q}}}}f_{mn}^{k}(\mathbf{q})\lambda^{mn}(r^{\prime},r),\quad\text{where}\]
\[f^{k}_{mn}(\mathbf{q})\equiv\int\Big{[}\prod_{j=1}^{A}\frac{d\mathbf{p}_{j}^{*}}{(2\pi)^{3}}\Big{]}\widetilde{\psi}^{*}_{m}(\{\mathbf{p}_{*}^{(k)}\})\widetilde{\psi}_{n}(\{\mathbf{p}_{*}\})(2\pi)^{3}\delta^{3}(\sum_{l=1}^{A}\mathbf{p}_{l}^{*}). \tag{111}\]
To obtain _this_ expression, a _special assumption_ was made in [1; 3] about the possibility of taking the single-particle matrix element, \(\frac{\bar{u}(\bar{\mathbf{p}}+\mathbf{q},r_{k}^{\prime})O_{k}^{\mu}u(\bar{\mathbf{p}},r_{k})}{\sqrt{4E_{\bar{\mathbf{p}}}E_{\bar{\mathbf{p}}+\mathbf{q}}}}\), out of the integration right at the momentum \(\bar{\mathbf{p}}\), which is the solution of equation (107). In formula (113) this factor is obtained automatically, but one has to slightly redefine the form-factor function, i.e., instead of \(f^{k}_{mn}(\mathbf{q})\) from (111) one takes \(\bar{f}^{k}_{mn}(\mathbf{q})\) from (112). Nevertheless, this "redefinition" does not play any role, since the "physical" nuclear form factors of protons and neutrons \(F_{p/n}(\mathbf{q})\) are defined in terms of the functions \(f^{k}_{mn}(\mathbf{q})\) only formally (see, for example, formula (100)), i.e., _without explicit calculation_ of the above-mentioned multidimensional integrals.
Based on the definition of \(f^{k}_{mn}(\mathbf{q})\), for example in the form of (111), and on the symmetry properties of the nuclear wave function, one can conclude [1; 3] that the structure factor \(f^{k}_{mn}\) does not depend on the number \(k\), and depends only on whether this \(k\) corresponds to the proton or the neutron. This property is used below.
Coherent and incoherent terms of the cross section \(\chi A\to\chi A^{(*)}\).The measurable differential cross section can be obtained by averaging (over all possible initial nucleus states \(|n\rangle\)) and summing (over all possible final states \(|m\rangle\)) of the differential cross section, which corresponds to the nuclear transition from the initial state \(|n\rangle\) to the final state \(|m\rangle\) due to the interaction with the \(\chi\) particle (12)
\[\frac{d\sigma}{dT_{A}}(\chi A\to\chi A^{(*)})=\sum_{n,m}\omega_{n}\frac{|i \mathcal{M}_{mn}|^{2}}{2^{5}\pi|\mathbf{k}_{\chi}^{\dagger}|^{2}m_{A}}C_{mn},\quad \text{where}\quad\sum_{n}\omega_{n}=1. \tag{112}\]
With the matrix element from (12) summed over the nucleon spin indices [1; 2; 3; 4]
\[i\mathcal{M}^{s^{\prime}s}_{mn}=i\frac{G_{F}}{\sqrt{2}}\frac{m_{A}}{m_{n}}C_{ 1,mn}^{1/2}\sum_{k=1}^{A}\sum_{r^{\prime}r}f^{k}_{mn}\lambda^{mn}(r^{\prime}, r)(l_{s^{\prime}s}\,h^{k}_{r^{\prime}r}), \tag{113}\]
the differential cross section (112) can be written as a sum of two terms from (102)
\[T^{s^{\prime}s}_{m=n} \equiv g_{\rm c}\sum_{k,j}^{A}\sum_{n}\omega_{n}\Big{[}f^{k}_{nn}f^{j*}_{ nn}\sum_{r}(l_{s^{\prime}s}\,h^{k}_{rr})\sum_{x}(l_{s^{\prime}s}\,h^{j}_{xx})^{* }\Big{]}\ \ \text{and} \tag{114}\] \[T^{s^{\prime}s}_{m\neq n} \equiv g_{\rm i}\sum_{k,j}^{A}\sum_{n}\omega_{n}\Big{[}\sum_{m\neq n}f^{k }_{mn}f^{j*}_{mn}\sum_{r^{\prime}r}\lambda^{mn}_{r^{\prime}r}(l_{s^{\prime}s} \,h^{k}_{r^{\prime}r})\Big{(}\sum_{x^{\prime}x}\lambda^{mn}_{x^{\prime}x}(l_{s^ {\prime}s}\,h^{j}_{x^{\prime}x})\Big{)}^{\dagger}\Big{]}. \tag{115}\]
Here, correction factors are introduced in the form
\[g_{\rm i/c}\equiv C_{1,mn}C_{mn}\simeq\frac{\Big{(}1+\frac{\varepsilon_{n}}{m_ {A}}\Big{)}\Big{(}1+\frac{\varepsilon_{m}+T_{A}}{m_{A}}\Big{)}}{\sqrt{1+\frac{ \bar{\mathbf{p}}^{2}}{m^{2}}}\Big{(}\sqrt{1+\frac{\bar{\mathbf{p}}^{2}}{m^{2}}}+\frac{ T_{A}+\Delta\varepsilon_{mn}}{m}\Big{)}}\frac{\Big{(}1+\frac{T_{A}}{m_{A}} \Big{)}/\Big{(}1+\frac{T_{A}+\varepsilon_{m}}{m_{A}}\Big{)}}{1+\frac{ \varepsilon_{n}}{m_{A}}\Big{(}1+\frac{m_{\chi}^{2}}{|\mathbf{k}_{\chi}^{\dagger}|^ {2}}\Big{)}}. \tag{116}\]
The factors differ from 1 at a level of \(O(10^{-3})\), and with the same accuracy they do not depend on the nuclear indices \(m,n\) and the recoil energy \(T_{A}\).14 For this reason, the summation over the index \(n\) in formula (6.36) can be interpreted as the appearance of the form factors averaged over all possible initial states of the nucleus, which are given by formula (4.4) from the main text. This allows expression (6.36) to be separated into the contributions of protons and neutrons as follows:
Footnote 14: Since in the accepted approximation one has \(\ \frac{\varepsilon_{m}+T_{A}}{m_{A}}\leq 10^{-3}\), \(\frac{\Delta\varepsilon_{mn}+T_{A}}{m}\leq 10^{-3}\) and \(\frac{|\bar{\mathbf{p}}|^{2}}{m^{2}}\leq 0.01\).
\[\frac{T_{m=n}^{s^{\prime}s}}{g_{\rm c}} = \sum_{k,j}^{Z}F_{p}\sum_{r=1}^{2}(l_{s^{\prime}s}\,h_{rr}^{p})F_{ p}^{*}\sum_{x=1}^{2}(l_{s^{\prime}s}\,h_{xx}^{p})^{*}+\sum_{k,j}^{N}F_{n}\sum_{r= 1}^{2}(l_{s^{\prime}s}\,h_{rr}^{n})F_{n}^{*}\sum_{x=1}^{2}(l_{s^{\prime}s}\,h _{xx}^{n})^{*}+\] \[+\sum_{k}^{Z}\sum_{j}^{N}F_{p}\sum_{r=1}^{2}(l_{s^{\prime}s}\,h_ {rr}^{p})F_{n}^{*}\sum_{x=1}^{2}(l_{s^{\prime}s}\,h_{xx}^{n})^{*}+\sum_{k}^{N} \sum_{j}^{Z}F_{n}\sum_{r=1}^{2}(l_{s^{\prime}s}\,h_{rr}^{n})F_{p}^{*}\sum_{x=1 }^{2}(l_{s^{\prime}s}\,h_{xx}^{p})^{*}.\]
The formula can be presented as the squared modulus of the sum over proton and neutron contributions:
\[T_{m=n}^{s^{\prime}s}=g_{\rm c}\Big{|}\sum_{k}^{Z}\sum_{r}(l_{s^{\prime}s}\,h_ {rr}^{p})F_{p}+\sum_{j}^{N}\sum_{r}(l_{s^{\prime}s}\,h_{rr}^{n})F_{n}\Big{|}^{ 2}=g_{\rm c}\Big{|}\sum_{f=p,n}\sum_{k=1}^{A_{f}}\sum_{r}(l_{s^{\prime}s}\,h_{ rr}^{f})F_{f}\Big{|}^{2}. \tag{6.39}\]
The second term from (4.1), or expression (6.37), contains summation over both indices \(m,n\). To carry out this summation and to continue transformations of the term \(T_{m\neq n}\), certain assumptions should be made about the unknown behavior of \(\lambda_{r^{\prime}r}^{mn}\). Recall that the spin functions of the nucleus are normalized by condition (3.9)
\[\chi_{m}^{*}(\{r\})\chi_{n}(\{r\})=\delta_{nm},\quad\chi_{n}^{*}(\{r^{\prime}\})\chi_{n}(\{r\})=\delta_{\{r^{\prime}\}\{r\}}.\]
The definition (3.10) was adopted for the products of the spin functions. Since formula (3.10) is valid for any value of \(k\), the index \(k\) of \(r^{\prime}_{k}\) and \(r_{k}\) was omitted. If one keeps the index \(k\), expression (3.10) takes the form
\[\chi_{m}^{*}(\{r^{(k)}\},r^{\prime}_{k})\chi_{n}(\{r\},r_{k})\equiv\lambda^{mn }(r^{\prime}_{k},r_{k})=\delta_{mn}\delta_{r^{\prime}_{k}r_{k}}+(1-\delta_{mn} )\lambda_{r^{\prime}_{k}r_{k}}^{mn}.\]
Here \(\{r\}\equiv\{r_{1},r_{2},..,r_{k},..,r_{A}\}\) is the set of all nucleon spin projections in the initial nucleus \(A\), and \(\{r^{(k)}\}\) is the similar set for the final nucleus, differing only in that the \(k\)th place in the set \(\{r^{(k)}\}\) is occupied by another number \(r^{\prime}_{k}\) not equal to \(r_{k}\) from \(\{r\}\). All other projection values are the same. In other words, \(\{r^{(k)}\}\equiv\{r_{1},r_{2},..,r^{\prime}_{k},..,r_{A}\}\). It can be seen that for \(m=n\) one has
\[\chi_{n}^{*}(\{r^{(k)}\},r^{\prime}_{k})\chi_{n}(\{r\},r_{k})=\delta_{r^{\prime }_{k}r_{k}}, \tag{6.40}\]
i.e., all values in two sets of spin index variables \(\{r^{(k)}\}\) and \(\{r\}\) must fully coincide if the spin functions refer to the same (\(n\)th) internal quantum state of the nucleus. In other words, if
the nucleus has not changed, then all values of the spin indices of the nucleus \(\{r\}\) should also remain unchanged.
Next, consider the product of spin functions included in the squared matrix element (6.37) _for the same_ active \(k\)th nucleon when \(m\neq n\). It is
\[\lambda^{mn}_{r^{\prime}_{k}r_{k}}[\lambda^{mn}_{x^{\prime}_{k}x_{k}}]^{*}= \chi^{*}_{m}(\{r^{(k)}\},r^{\prime}_{k})\chi_{n}(\{r\},r_{k})\cdot[\chi^{*}_{m }(\{x^{(k)}\},x^{\prime}_{k})\chi_{n}(\{x\},x_{k})]^{*}.\]
Performing complex conjugation and rearranging the functions, one gets
\[\lambda^{mn}_{r^{\prime}_{k},r_{k}}[\lambda^{mn}_{x^{\prime}_{k},x_{k}}]^{*}= \big{[}\chi^{*}_{m}(\{r^{(k)}\},r^{\prime}_{k})\chi_{m}(\{x^{(k)}\},x^{\prime }_{k})\big{]}\big{[}\chi_{n}(\{r\},r_{k})\chi^{*}_{n}(\{x\},x_{k})\big{]}.\]
Then, according to (6.40), for each of these two products corresponding to the same state of the nucleus (\(m\) and \(n\), respectively) to be different from zero, the sets of spin indices in each pair of products must completely coincide, i.e., \(\{r^{(k)}\}=\{x^{(k)}\}\) and \(\{r\}=\{x\}\). In other words, _for any_ \(k\) the following must hold:
\[\chi^{*}_{m}(\{r^{(k)}\},r^{\prime}_{k})\chi_{m}(\{x^{(k)}\},x^{\prime}_{k})= \delta_{r^{\prime}_{k},x^{\prime}_{k}}\quad\text{and}\quad\chi_{n}(\{r\},r_{ k})\chi^{*}_{n}(\{x\},x_{k})=\delta_{r_{k},x_{k}}.\]
As a result, it turns out that
\[[\lambda^{mn}_{r^{\prime}_{k}r_{k}}][\lambda^{mn}_{x^{\prime}_{k}x_{k}}]^{*}= \delta_{r^{\prime}_{k},x^{\prime}_{k}}\delta_{r_{k},x_{k}},\quad\text{and} \quad|\lambda^{mn}_{r^{\prime}_{k}r_{k}}|^{2}=1. \tag{6.41}\]
Here the index \(k\) explicitly takes into account the invariance of the type and number of the active nucleon. The last equality in (6.41) means that in the case of scattering _on the same active nucleon_, regardless of its number \(k\) and of the change in the state of the nucleus \(|n\rangle\to|m\rangle\), any final orientation of the spin of this nucleon (index \(r^{\prime}_{k}\)) is equally possible for any initial orientation of the spin of the active nucleon (index \(r_{k}\)). Formally, this result is a consequence of the normalization conditions for the spin wave functions (6.40). There is also no spin-changing operator _left standing_ between these spin functions that would prevent their normalization condition from applying. Indeed, if the _interaction did not occur_, then the initial \(|n\rangle\) and final \(|m\rangle\) states are completely independent. In these states there should be no correlation between the spin directions of the "active" nucleon (a concept which is undefined in such a situation). When _the interaction_ occurred, transferring the nucleus from the \(|n\rangle\) state to the \(|m\rangle\) state, a correlation arises between the \(r\)- and \(r^{\prime}\)-spins of the active nucleon, _the intensity_ of which (possibly vanishing) is regulated by the scalar product \((l_{s^{\prime}s}\,h_{r^{\prime}r})\).
Thus, _it can be accepted_ that the products of \(\lambda^{mn}_{r^{\prime}r}\) corresponding to the scattering on the same active nucleon do not depend on the indices \(m\) and \(n\) and are equal to some averaged
values for protons and neutrons15
Footnote 15: When \(x_{k}=r_{k}\) and \(x^{\prime}_{k}=r^{\prime}_{k}\), the physical meaning of the square \(|\lambda^{mn}_{r^{\prime}_{k}r_{k}}|^{2}\equiv\lambda^{mn}_{r^{\prime}_{k}r_{k}}[\lambda^{mn}_{x^{\prime}_{k}=r^{\prime}_{k},x_{k}=r_{k}}]^{*}\) is clear. It is the probability for the \(k\)th nucleon, having spin projection \(r_{k}\), to meet the lepton and to emerge from the interaction with spin projection \(r^{\prime}_{k}\), provided that the nucleus passes from the \(|n\rangle\) state to the \(|m\rangle\) state. The meaning of the product \(\lambda^{mn}_{+-}[\lambda^{mn}_{++}]^{*}\), where the initial spin index \(r_{k}=-1\) does not coincide with the initial spin index \(x_{k}=+1\), is unclear, _if one considers the same active nucleon_. Such a situation can apparently arise when \(\lambda^{mn}_{+-}\) refers to one nucleon, and \([\lambda^{mn}_{++}]^{*}\) refers _to another_ one, i.e., when _two different active nucleons_ participate in the scattering process. As shown below, such a scenario contributes negligibly.
\[\lambda^{mn}_{r^{\prime}r}\simeq\lambda^{p/n}_{r^{\prime}r}. \tag{100}\]
Therefore, due to (100), the spin factors \(\lambda^{mn}_{r^{\prime}r}\equiv\lambda^{I}_{r^{\prime}r}\) (together with scalar products) can be taken out of the sum over \(m,n\).
Using conditions (101) and (100), one can remove part of the summation over the indices \(x^{\prime},x\) (when \(k=j\)) in formula (101), "separate" protons from neutrons, and get the expression
\[\frac{T^{s^{\prime}s}_{m\neq n}}{g_{\rm i}} = \sum_{k=j}^{Z}\sum_{n}\omega_{n}\sum_{m\neq n}f^{p}_{mn}f^{p*}_{ mn}\sum_{r^{\prime}_{k}r_{k}}|(l_{s^{\prime}s},h^{p}_{r^{\prime}_{k}r_{k}})| ^{2}+\sum_{k=j}^{N}\sum_{n}\omega_{n}\sum_{m\neq n}f^{n}_{mn}f^{n*}_{mn}\sum_{r ^{\prime}_{k}r_{k}}|(l_{s^{\prime}s},h^{n}_{r^{\prime}_{k}r_{k}})|^{2}\] \[+ \sum_{k\neq j}^{Z}\sum_{n}\omega_{n}\sum_{m\neq n}f^{k}_{mn}f^{j* }_{mn}\sum_{r^{\prime}_{k}r_{k}}\sum_{x^{\prime}_{j}x_{j}}\lambda^{p}_{r^{ \prime}_{k}r_{k}}[\lambda^{p}_{x^{\prime}_{j}x_{j}}]^{*}(l_{s^{\prime}s},h^{p} _{r^{\prime}_{k}r_{k}})(l_{s^{\prime}s},h^{p}_{x^{\prime}_{j}x_{j}})^{\dagger}+\] \[+ \sum_{k\neq j}^{N}\sum_{n}\omega_{n}\sum_{m\neq n}f^{k}_{mn}f^{j* }_{mn}\sum_{r^{\prime}_{k}r_{k}}\sum_{x^{\prime}_{j}x_{j}}\lambda^{n}_{r^{ \prime}_{k}r_{k}}[\lambda^{n}_{x^{\prime}_{j}x_{j}}]^{*}(l_{s^{\prime}s},h^{n} _{r^{\prime}_{k}r_{k}})(l_{s^{\prime}s},h^{n}_{x^{\prime}_{j}x_{j}})^{\dagger}+\] \[+ \sum_{k}^{Z}\sum_{j}^{N}\sum_{n}\omega_{n}\sum_{m\neq n}f^{p}_{ mn}f^{n*}_{mn}\sum_{r^{\prime}r}\sum_{x^{\prime}x}\lambda^{p}_{r^{\prime}r}[ \lambda^{n}_{x^{\prime}x}]^{*}(l_{s^{\prime}s},h^{p}_{r^{\prime}r})(l_{s^{ \prime}s},h^{n}_{x^{\prime}x})^{\dagger},\]
where the summation over only one index remains in the first two terms, since both indices \(k\) and \(j=k\) "point" to the same nucleon. In the first term it is the proton, and in the second it is the neutron. In the third and fourth terms, the indices \(k\) and \(j\) also point to nucleons of the same type, but these nucleons are different, \(k\neq j\). In the last line, the index \(k\) indicates a proton, and the index \(j\) indicates a neutron.
Given condition (100), calculation of sums in (100) should start with the first line, where the indices \(k\) and \(j\) "point" to the same proton. Since \(k=j\), then, taking into account the condition of completeness of nuclear states \(\sum_{m}|m\rangle\langle m|=\hat{I}\), the condition for conservation of probabilities \(\sum_{n}\omega_{n}=1\), the condition for normalizing nuclear states \(\langle m|n\rangle=\delta_{mn}\), function definitions \(f^{k}_{mn}(\mathbf{q})=\langle m|e^{i\mathbf{q}\mathbf{\bar{X}}_{k}}|n\rangle\) and expressions for their squares (100), one can sum over \(n\)
and \(m\) in the first term of formula (6.43) and obtain the following sequence of expressions:
\[\sum_{n}\omega_{n}\sum_{m\neq n}f^{k}_{mn}f^{k*}_{mn} = \sum_{n}\omega_{n}\Big{[}\sum_{m}f^{k}_{mn}f^{k*}_{mn}-f^{k}_{nn}f ^{k*}_{nn}\Big{]}=\] \[= \sum_{n}\omega_{n}\Big{[}\langle n|e^{i\mathbf{q}\mathbf{X}_{k}}\sum_{m}|m \rangle\langle m|e^{-i\mathbf{q}\mathbf{X}_{k}}|n\rangle\Big{]}-|F_{p}(\mathbf{q})|^{2}=1-| F_{p}(\mathbf{q})|^{2}.\]
For the second term in formula (6.43), the similarly obtained expression is \(1-|F_{n}(\mathbf{q})|^{2}\). For the second (proton) line in (6.43) one has \(k\neq j\), i.e., these indices indicate _different_ protons. Then, taking into account all above-mentioned conditions as in the case with the derivation of expression (6.44), for protons (index \(p\)) one can write the following:
\[\sum_{n}\omega_{n}\sum_{m\neq n}f^{k}_{mn}f^{j*}_{mn}=\langle\text{cov}(e^{i \mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}})\rangle_{p}. \tag{6.45}\]
Here the covariance of the shift operators \(e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}\) and \(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\) over \(|n\rangle\) is introduced in the form
\[\text{cov}_{nn}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}) \equiv\langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}\,e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}|n \rangle-\langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|n\rangle\langle n|e^{-i\mathbf{q} \hat{\mathbf{X}}_{j}}|n\rangle, \tag{6.46}\]
and summation over initial states with factors \(\omega_{n}\) is taken into account as follows:
\[\langle\text{cov}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}) \rangle_{p}\equiv\sum_{n}\omega_{n}\text{cov}_{nn}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k} },e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}}). \tag{6.47}\]
In deriving formulas (6.44) and (6.45), use was made of the fact that the "incomplete" sum over \(m\), i.e., \(\sum_{m\neq n}f^{k}_{mn}f^{j*}_{mn}\), can be "completed" and represented as \(\sum_{m}f^{k}_{mn}f^{j*}_{mn}-\sum_{m=n}f^{k}_{mn}f^{j*}_{mn}\). Using the completeness condition \(\sum_{m}|m\rangle\langle m|=\hat{I}\), one arrives at expression (6.46)
\[\sum_{m\neq n}f^{k}_{mn}f^{j*}_{mn} = \sum_{m}\langle n|e^{i\mathbf{q}\mathbf{X}_{k}}|m\rangle\langle m|e^{-i \mathbf{q}\mathbf{X}_{j}}|n\rangle-\sum_{m=n}\langle n|e^{i\mathbf{q}\mathbf{X}_{k}}|m \rangle\langle m|e^{-i\mathbf{q}\mathbf{X}_{j}}|n\rangle=\] \[= \langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}} |n\rangle-\langle n|e^{i\mathbf{q}\hat{\mathbf{X}}_{k}}|n\rangle\langle n|e^{-i\mathbf{q} \hat{\mathbf{X}}_{j}}|n\rangle.\]
Expression (6.47) vanishes both for small transferred momenta \(\mathbf{q}\to 0\) and for large \(\mathbf{q}\to\infty\), i.e., \(\lim_{\mathbf{q}\to 0}\langle\text{cov}(e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}},e^{i\mathbf{q}\hat{\bm {X}}_{k}})\rangle_{p}=0\) and \(\lim_{\mathbf{q}\to\infty}\langle\text{cov}(e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}},e^{i\mathbf{q }\hat{\mathbf{X}}_{k}})\rangle_{p}=0.\) A similar consideration is carried out for _different_ neutrons (the 3rd line in formulas (6.43)) and is generalized to the joint case of protons and neutrons (the last line in formula (6.43)). Finally, the "incoherent" term (6.37) appears as expression (4.10) in the main text.
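The vanishing at \(\mathbf{q}\to 0\) can be verified directly from definition (6.46) together with the normalization \(\langle n|n\rangle=1\):

\[\lim_{\mathbf{q}\to 0}\text{cov}_{nn}(e^{i\mathbf{q}\hat{\mathbf{X}}_{k}},e^{-i\mathbf{q}\hat{\mathbf{X}}_{j}})=\langle n|n\rangle-\langle n|n\rangle\langle n|n\rangle=1-1=0,\]

while for \(\mathbf{q}\to\infty\) both terms of (6.46) vanish separately, since the rapidly oscillating exponentials average to zero against the smooth bound-nucleon wave functions.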
|
2306.03887 | Fast Context Adaptation in Cost-Aware Continual Learning | In the past few years, DRL has become a valuable solution to automatically
learn efficient resource management strategies in complex networks with
time-varying statistics. However, the increased complexity of 5G and Beyond
networks requires correspondingly more complex learning agents and the learning
process itself might end up competing with users for communication and
computational resources. This creates friction: on the one hand, the learning
process needs resources to quickly converge to an effective strategy; on the
other hand, the learning process needs to be efficient, i.e., take as few
resources as possible from the user's data plane, so as not to throttle users'
QoS. In this paper, we investigate this trade-off and propose a dynamic
strategy to balance the resources assigned to the data plane and those reserved
for learning. With the proposed approach, a learning agent can quickly converge
to an efficient resource allocation strategy and adapt to changes in the
environment as for the CL paradigm, while minimizing the impact on the users'
QoS. Simulation results show that the proposed method outperforms static
allocation methods with minimal learning overhead, almost reaching the
performance of an ideal out-of-band CL solution. | Seyyidahmed Lahmer, Federico Mason, Federico Chiariotti, Andrea Zanella | 2023-06-06T17:46:48Z | http://arxiv.org/abs/2306.03887v1 | # Fast Context Adaptation in Cost-Aware Continual Learning
###### Abstract
In the past few years, DRL has become a valuable solution to automatically learn efficient resource management strategies in complex networks with time-varying statistics. However, the increased complexity of 5G and Beyond networks requires correspondingly more complex learning agents and the learning process itself might end up competing with users for communication and computational resources. This creates friction: on the one hand, the learning process needs resources to quickly converge to an _effective_ strategy; on the other hand, the learning process needs to be _efficient_, i.e., take as few resources as possible from the user's data plane, so as not to throttle users' QoS.
In this paper, we investigate this trade-off and propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning. With the proposed approach, a learning agent can quickly converge to an efficient resource allocation strategy and adapt to changes in the environment as for the CL paradigm, while minimizing the impact on the users' QoS. Simulation results show that the proposed method outperforms static allocation methods with minimal learning overhead, almost reaching the performance of an ideal out-of-band CL solution.
Resource allocation, Reinforcement learning, Cost of learning, Continual learning, Meta-learning, Mobile Edge Computing.
## 1 Introduction
The role of Artificial Intelligence (AI) in communication networks has become more and more central with the transition from 4G to 5G, and learning is at the core of the 6G standardization process [1]. Mobile networks are no longer designed as rigid entities that the final users have to adapt to, but are rather becoming customizable services evolving according to the users' needs [2]. The Network Slicing (NS) paradigm supports this approach by enabling the definition of multiple logical networks overlaying the same physical infrastructure [3], with each _slice_ devoted to a specific class of service. This allows applications with very different requirements to coexist and share spectrum resources. However, managing NS, as well as other advanced application scenarios, requires judicious allocation of both transmission and computational resources to users, according to their Quality of Service (QoS) targets, in a fast-paced scenario [4], which is expected to become even more straining with 6G.
Hand-designed resource allocation strategies may not be up to this challenge, so that growing attention has been dedicated to machine-learning approaches. In particular, Deep Reinforcement Learning (DRL) is considered a promising framework for deriving adaptable and robust strategies for network orchestration [4] and resource allocation [5].
DRL's _effectiveness_ in dealing with complex scenarios is indeed well-established: with proper training, DRL agents can find foresighted policies aiming for long-term objectives [6], significantly improving network performance. Such promising results, however, have typically been obtained in _stationary_ environments: if this assumption is not satisfied, the performance of pre-trained DRL agents may dramatically decrease when the network dynamics shift away from the training environment.
Approaches based on the Continual Learning (CL) paradigm [7] are designed to deal with non-stationary systems. CL enables the adaptation of a learning agent to a series of subsequent tasks that, in a network scenario, may represent different network configurations. We observe that combining CL and DRL for managing network resources in non-stationary scenarios has a non-negligible cost in terms of energy, computation, and communication resources [8]. These resources are necessarily subtracted from the _data plane_, i.e., the part of the system that is responsible for transmitting, processing, and forwarding user data packets. Therefore, supporting the learning represents an overhead for the system, which can negatively impact users' QoS. We use the term _cost of learning_ to indicate the impact that the learning process can have on users' performance due to the resources it requires. Considering such a cost implies that the learning framework, in addition to being effective, must also be _efficient_, that is, require as few resources as possible to achieve its goals. The cost of learning problem is particularly critical considering the ever-larger size of most recent DRL neural networks, and the growing demand for efficient systems, as for the green networking paradigm [9]. We observe that Mobile Edge Computing (MEC) [10] solutions do not solve the problem, but merely move it to the network edge. In fact, while MEC allows computationally-expensive tasks (such as the training of DRL algorithms) to be carried
out directly in dedicated edge nodes physically close to the data sources, the limited transmission, computational, and energy resources of such nodes still have to be shared between the data and learning planes.
Finding a balance between the amount of resources to be used for improving the system's reactivity to variations and those to be allocated for serving the users may be a very difficult task. This is particularly critical in the case of CL systems, in which agents must constantly adapt to new working conditions. As the very same network resources are also used for the training, a trade-off arises between the capability of the DRL agent to learn new tasks and its performance during the current task.
Note that, although at first sight this problem may recall the well-known exploration-exploitation problem in learning systems, it is actually fundamentally different. In fact, the exploration-exploitation problem involves finding a balance between exploring new strategies, with the risk of temporarily losing some performance, and exploiting the currently learned strategy that, however, may be globally suboptimal, thus wasting part of the system capacity. In this setting, the resources required by the learning process are typically ignored: whatever action is taken, the outcome is assumed to be available to the learner, enriching its experience, without any cost. In this paper, instead, we look at the resources needed to transfer such information to the learner and turn it into experience, irrespective of whether it originates from an exploration or exploitation action. From a theoretical perspective, the scenario we look at is hence a Meta Learning (MeL) one, in which the agent's actions determine the efficiency of the learning data aggregation and processing.
The cost of learning is therefore a fundamental aspect to be considered in modern network design, and recent works have proposed learning-based frameworks that are computation-aware [11]. Despite the high interest of the scientific community in this field, the cost of learning for DRL models is still a relatively unexplored subject in the networking literature, and even the most recent works on resource allocation and NS ignore the true cost of combining DRL and CL in modern networks [12], making the effectiveness of state-of-the-art DRL solutions questionable.
In this work, we analyze the trade-off between _effectiveness_ and _efficiency_ in CL, formally defining the resource allocation problem and presenting a heuristic solution to allocate resources to the data and learning planes. The proposed scheme effectively controls training in a CL framework, maximizing the efficiency of the training (i.e., reducing the learning plane overhead) while still achieving effective resource allocation (i.e., the same QoS as the ideal approach that assumes learning does not consume users' resources) in a reasonable time. Although we applied our optimization framework to a networking scenario, it is actually more general and can be adapted to any learning-based allocation problem in which the allocated resources are also required for the agent training, such as MEC job scheduling.
The major contributions of our work are the following:
* We define a theoretical model to analyze the trade-off between _effectiveness_ and _efficiency_ of learning-based resource allocation schemes;
* We propose a CL strategy to enable the resource allocation scheme to adapt to sudden changes in traffic dynamics;
* We test the proposed model in a NS use case, in which the learning agent and the system users compete for the same network resources;
* We compare the benefits and drawbacks of the proposed approach against a static resource-sharing scheme between data and learning planes, and an ideal strategy that considers out-of-band resources for the agent training (or, equivalently, assumes the learning agent does not consume any user-plane resources). Our simulation results show that the proposed heuristic performs closely to the ideal (out-of-band) approach, minimizing the impact of learning plane traffic during the training.
A partial and preliminary version of this work was presented in [13]. In this paper, we extend that work by introducing the CL approach, providing a much richer set of results, and deepening the discussion and analysis of our observations.
The rest of this paper is organized as follows: first, Sec. 2 reports the most significant related work. We then present the model for optimizing data and learning plane in Sec. 3 and the NS use case definition in Sec. 4. Successively, Sec. 5 presents the simulation results. Finally, Sec. 6 concludes the paper and discusses some possible avenues for future work.
## 2 Related Work
While the latest advances in AI have made it possible to reach stunning performance levels in multiple fields, there is still a large gap between human cognition and AI models in terms of adaptation. Most of the current learning models need to be retrained from scratch every time a new task has to be accomplished, with a high cost in terms of computational power and time. For this purpose, the scientific community has recently leveraged the CL paradigm, which focuses on learning a series of subsequent data, associated with different tasks, without _catastrophically forgetting_ the past knowledge [14]. Therefore, in CL scenarios the goal is to adapt to a time-varying environment, working on one task at a time and assuming that future information is inaccessible. This model appeals to the resource allocation problem considered in this manuscript, since in realistic networks the type, number and requirements of the users keep changing over time, making the system non-stationary (though stationarity can be assumed during the _coherence intervals_, i.e., the time periods during which the main system parameters do not change).
A baseline CL solution may involve a pre-trained model that is iteratively adapted to new tasks (or to changes in the environment), e.g., taking advantage of curriculum learning, as done in [15]. Replay-based methods form a more recent class of CL algorithms, which store past experience in memory or exploit a generative model to reproduce it, using this information as model input while training on new tasks [16, 17]. Regularization-based methods, which introduce a penalty term in the model's loss function with the goal of avoiding performance degradation in past tasks [18, 19], form another class of solutions. An extension of the
aforementioned class is proposed in [20], where the authors estimate the importance of each learned parameter and prevent the modifications of such parameters that most affect performance in past tasks. Finally, architecture-based methods define an additional branch of the model for each task, freezing the previously learned parameters when training the model on new scenarios [21, 22]. An example is provided in [23], where the authors developed a model organized into two blocks: the first is retrained every time a new task arises, while the latter distills the knowledge acquired for future reuse.
From a different perspective, CL algorithms aim at defining a strategy to detect the best settings for training a learning model in new scenarios. This concept falls within the MeL paradigm, which focuses on convergence speed and stability [24]. MeL methods differ in the output of the learning optimization, which may include the weight initialization strategy [25], the optimizer algorithm [26], the loss function [27], the dimensions of the learning architecture [28], and other hyper-parameters.
The methodology used for the MeL optimization task may be based on gradient descent [29], evolutionary algorithms [30], or learning-based approaches. For instance, the authors of [31] develop a CL model in which a DRL agent has to define the optimal settings of the task-specific block, balancing between validation accuracy and model complexity. Besides, MeL methods differ in terms of optimization goals, which may either be fully based on the model performance on a validation set or consider more specific aspects, such as the adaptability of the solution to multiple tasks [32], the greater importance of fast adaptation than asymptotic performance [33], and the difference between online and offline learning scenarios [34].
In particular, the use of MeL methods for optimizing DRL models is a relatively new field. In this scenario, the combination of CL and MeL avoids the need for a centralized agent that can handle each possible state-action pair and enables the definition of multiple and more straightforward policies. The authors of [35] show the benefits of MeL in a real-world scenario, analyzing a DRL robotic system that exploits a recurrent module to preserve past knowledge and speed up the training process. Interestingly, when applying MeL combined with DRL, a fundamental parameter is the choice of the exploration policy by which the agent interacts with the environment, an aspect that is absent in classification tasks. For instance, the authors of [36] develop a MeL model where prior information is used to define agnostic exploration policies enabling a better agent adaptation to multiple learning problems.
In the context of CL and MeL for 5G and 6G network management, drift detection is a critical task: if the environment changes abruptly, the MeL paradigm requires the learner to first become aware of the change, and delayed detection may lead to a violation of the service requirements [37]. Drifts can be classified as abrupt or gradual depending on the period during which the system performance degrades. The literature presents several drift detection algorithms, usually considering the prediction error of the learning model for estimating environment changes. It is possible to consider both heuristic [38] or learning-based strategies [39], with different advantages and drawbacks in terms of, e.g., false alarm probability and assumptions on the drift characteristics.
Despite the numerous works investigating CL and MeL in supervised and reinforcement learning scenarios, to the best of our knowledge none of them considers the trade-off between efficiency and effectiveness proposed in this manuscript. In the following sections, we will take advantage of solutions inspired by the literature, analyzing their impact in terms of both training efficiency and user performance. We will consider a baseline CL system where the models trained for previous tasks are stored in a central memory, similarly to what is done in [15]. Besides, we will consider a simple drift detection algorithm to monitor the environment statistics and trigger the retraining of the learning model. We chose relatively simple techniques for both drift detection and CL in order to focus on the main aspect of our analysis, which is the trade-off between efficiency and effectiveness in resource allocation problems, as represented by the cost of learning.
## 3 System Model
Let us consider a generic resource allocation problem, which is modeled as an infinite horizon Markov Decision Process (MDP) defined by the tuple \((\mathcal{S},\mathcal{A},\mathbf{P},R,\gamma)\): \(\mathcal{S}\) represents the state space, \(\mathcal{A}\) is the action space (which is potentially different for each state), \(\mathbf{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition probability matrix, which depends on the current state, the action chosen by the agent, and the landing state, \(R:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function, and \(\gamma\in[0,1)\) is the discount factor. Time is divided in slots, and the slot index is denoted by \(t\in\mathbb{Z}^{+}\). The ultimate objective of a DRL agent is to find the optimal policy \(\pi^{*}:\mathcal{S}\rightarrow\mathcal{A}\), which maximizes the expected long-term reward:
\[\pi^{*}=\operatorname*{arg\,max}_{\pi:\mathcal{S}\rightarrow\mathcal{A}} \mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}R(\mathbf{s}_{t},\pi(\mathbf{s}_ {t}),\mathbf{s}_{t+1})\right]. \tag{1}\]
Let us assume that, in each time slot \(t\), the system can allocate \(N\) resource blocks, which may represent communication bandwidth, computational cycles, or energy units, depending on the specific application: the type of resource may affect the definition of the specific MDP, but is immaterial for our reasoning. In the following we hence generally refer to a _request_, which can be a packet to be transmitted, a computing job to be executed, or an action to be taken, and we assume that each request requires exactly one resource block of some kind (transmission capacity, computational power, or energy).
The system resources are assumed to be partitioned into \(M\) different _slices_, where a slice may serve a single user, or a group of users with the same features. The action space then contains all possible resource allocation vectors that split the \(N\) resources among the \(M\) slices:
\[\mathcal{A}=\left\{\mathbf{a}\in\{0,\ldots,N\}^{M}:\sum_{m=1}^{M}a_{m}=N \right\}. \tag{2}\]
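To get a feel for the size of this action space, the following sketch (ours, not from any reference implementation) enumerates all allocation vectors splitting \(N\) resources among \(M\) slices; their number is given by the stars-and-bars count \(\binom{N+M-1}{M-1}\):

```python
from itertools import combinations
from math import comb

def allocation_vectors(n_resources: int, n_slices: int):
    """Enumerate all vectors a in {0,...,N}^M with sum(a) == N (stars and bars)."""
    # Choose the positions of M-1 "bars" among N+M-1 slots; the gap sizes
    # between consecutive bars are the per-slice allocations.
    for bars in combinations(range(n_resources + n_slices - 1), n_slices - 1):
        fences = (-1,) + bars + (n_resources + n_slices - 1,)
        yield tuple(fences[i + 1] - fences[i] - 1 for i in range(n_slices))

actions = list(allocation_vectors(n_resources=10, n_slices=3))
assert len(actions) == comb(10 + 3 - 1, 3 - 1)  # 66 possible allocations
assert all(sum(a) == 10 for a in actions)
```

The combinatorial growth of \(|\mathcal{A}|\) with \(N\) and \(M\) is one of the reasons a learning agent, rather than an exhaustive search, is used to select the allocation.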
Furthermore, we assume that each slice is associated with a First-In First-Out (FIFO) queue of requests: each queue has a limited size \(Q\), after which the system starts dropping older requests for that slice to make room for newer ones.
In this work, we focus on Key Performance Indicators (KPIs) tied to the latency with which the requests of the different slices are served. However, the approach can be generalized to consider other metrics.
We hence indicate by \(T_{m,i}\) the latency of the \(i\)-th request from slice \(m\), which depends on the time it spends in the queue before being assigned a resource. Dropped or rejected requests have an infinite latency by definition. The \(i\)-th request from slice \(m\) is generated at time \(t_{m,i}\), and its age \(\Delta_{m,i}(t)\) is defined as:
\[\Delta_{m,i}(t)=t-t_{m,i}. \tag{3}\]
We can then define the reward function:
\[R(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})=\sum_{m=1}^{M}\sum_{i=1}^{a_{m}} f_{m}\left(\Delta_{q_{m}(i)}\right), \tag{4}\]
where \(\Delta_{q_{m}(i)}\) is the age of the packet in position \(i\) of the \(m\)-th queue at the current time \(t\), and \(f_{m}:\mathbb{N}\rightarrow[0,1]\) is a function mapping the latency of each request to slice \(m\)'s resulting QoS. With a slight abuse of notation, we define \(f(\varnothing)=0\), where \(\varnothing\) indicates that there is no packet in that position in the queue. We can distinguish between slices with _hard_ timing requirements, for which the QoS of a request is 1 if it is served within a maximum latency, and 0 if it exceeds that deadline; and _soft_ timing requirements, for which the QoS is a generic monotonically decreasing function of the latency.
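As an illustration only (our sketch; the specific \(f_{m}\) shapes below are example choices, not prescribed by the model), the per-slot reward (4) can be computed from the queue ages as follows, with a hard slice modeled by a step function and a soft slice by an exponential decay:

```python
import math

def hard_qos(deadline: int):
    """Hard timing requirement: QoS is 1 within the deadline, 0 beyond it."""
    return lambda age: 1.0 if age <= deadline else 0.0

def soft_qos(decay: float):
    """Soft timing requirement: QoS decreases monotonically with latency."""
    return lambda age: math.exp(-decay * age)

def reward(queues, action, qos_funcs):
    """Reward (4): total QoS of the requests served in this slot.

    queues[m] lists the ages of slice m's queued requests (oldest first),
    action[m] is the number of resources granted to slice m, and empty
    queue positions contribute f(emptyset) = 0 by convention.
    """
    total = 0.0
    for m, (queue, a_m) in enumerate(zip(queues, action)):
        for i in range(min(a_m, len(queue))):  # serve the a_m oldest requests
            total += qos_funcs[m](queue[i])
    return total

queues = [[5, 3, 1], [8, 2]]        # request ages per slice, oldest first
action = (2, 1)                     # resource allocation in this slot
qos = [hard_qos(deadline=6), soft_qos(decay=0.1)]
print(reward(queues, action, qos))  # 1 + 1 + exp(-0.8) ~= 2.449
```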
We can then distinguish between _rejected_ and _dropped_ packets, the first being packets that find a full queue and are immediately discarded, the second referring to queued packets whose age is higher than the deadline and, in case of hard timing requirements, are hence dropped before service since they would not contribute to the QoS of the slice and just waste resources. We remark that only slices with hard timing requirements can experience dropped packets, while packet rejection can occur in any slice.
It should also be noted that dropped or rejected requests do not generate any rewards, as they are never included in the sum. The state of the system is then represented by the age of each request contained in each queue, so that in the most general case, \(\mathcal{S}=\left(\left\{\varnothing\right\}\cup\mathbb{N}\right)^{M\times Q}\).
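A possible per-slot bookkeeping consistent with these definitions is sketched below (ours; here overflow arrivals are rejected, while the drop-oldest variant mentioned above would be an equally valid choice):

```python
def step_queue(queue, arrivals, max_size, deadline=None):
    """Age the queued requests, drop expired ones (hard slices only),
    enqueue new arrivals, and reject those that find the queue full.

    queue is a list of request ages, oldest first; returns the updated
    queue together with the numbers of dropped and rejected requests.
    """
    aged = [age + 1 for age in queue]          # every queued request ages by one slot
    if deadline is not None:                   # hard slice: expired requests are useless
        kept = [age for age in aged if age <= deadline]
    else:                                      # soft slice: requests are never dropped
        kept = aged
    dropped = len(aged) - len(kept)
    rejected = 0
    for _ in range(arrivals):                  # new requests enter with age 0
        if len(kept) < max_size:
            kept.append(0)
        else:
            rejected += 1                      # full queue: request is rejected
    return kept, dropped, rejected
```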
The objective of the learning agent is then to learn how to allocate resources among users, so as to maximize their QoS parameters; it should also be aware of the slices that have a higher risk of violating hard timing requirements and schedule resources to avoid missing deadlines. However, learning is also a computational process, and the DRL agent may take up some of the same resources that may be allocated to the users in order to improve its policy. As we highlighted in our previous work [11], considering the cost of learning can lead to significantly different choices, limiting the amount and type of experience samples that are selected for training: this is also true regardless of the type of resource the learning requires.
However, even that work only considered static policies, which set up a separate virtual channel (either divided in time or in frequency) for the learning data, strictly separating the learning and data planes. Equivalently, an agent learning how to schedule tasks in an edge server could reserve a certain percentage of computation time for self-improvement, but the amount was decided in advance. This is clearly suboptimal: intuitively, the relative returns from policy self-improvement decrease over time, as the agent gradually converges to the optimal policy. After convergence, and as long as the environment statistics are stable, the value of further improvements to the policy is zero by definition. A dynamic policy for adapting the allocation between requests and learning should then take this into account.
Furthermore, the current state of the system also needs to be taken into account: if delaying the queued requests further does not have a large impact on the QoS, the
Fig. 1: Schematic of the learning control loop in a communication scenario: nodes on the left-hand side represent users, belonging to two different slices (blue and magenta). The connection between the base station and the Internet is used to carry both the users’ traffic (blue and magenta streams) and the data needed to train the learner in the cloud (green stream), according to the allocation scheme graphically shown above the link.
system can take away resources from the slices in order to improve the resource allocation policy, but if the impact is big, e.g., if some requests from a slice with hard timing requirements are already close to the deadline, they need to be prioritized, choosing immediate gains over potential future improvements.
This is particularly important for non-stationary environments, in which the coherence time of the MDP statistics is finite: in this kind of system, the learning agent needs to adapt the allocation to the changing statistics of the environment, and cannot rely on offline training, but must keep learning from experience and adapt to the changes proactively.
### _Learning Plane Resource Allocation_
One of the problems of including the learning plane in the resource allocation policy is the circularity of the policy: in order to learn when to allocate resources to policy improvement, a DRL agent needs to first learn when learning is important. As the policy evolves over time and learning becomes less of a priority, the reward that the agent perceives depends on the agent's own reduced resource demand, making the learning more difficult.
In order to avoid this problem, we set an external rule to regulate learning, so that the environment that the agent sees is stationary. We define a generic resource allocation vector space \(\mathcal{Z}\) as follows:
\[\mathcal{Z}=\left\{\mathbf{z}\in\{0,\ldots,N\}^{M}:\sum_{m=1}^{M}z_{m}\leq N \right\}. \tag{5}\]
We remark here that \(\mathcal{Z}\) is not the action space, but rather a superset of it, i.e., \(\mathcal{A}\subseteq\mathcal{Z}\): the definition of the action space in (2) indeed only considers allocations that assign all of the available resources to the users' slices, while \(\mathcal{Z}\) also includes actions that allocate only part of the resources to the users: if \(\sum_{m=1}^{M}z_{m}<N\), the remaining resources are allocated to the learning.
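To make the distinction between \(\mathcal{Z}\) and \(\mathcal{A}\) concrete, the following sketch enumerates both sets for small instances (a toy illustration; the function names are ours, not part of the system model):

```python
from itertools import product

def allocation_space(M: int, N: int):
    """The set Z: all z in {0,...,N}^M with sum(z) <= N, as in (5)."""
    return [z for z in product(range(N + 1), repeat=M) if sum(z) <= N]

def action_space(M: int, N: int):
    """The DRL action space A: allocations that use all N resources."""
    return [z for z in allocation_space(M, N) if sum(z) == N]

# With M=2 slices and N=3 resources, Z has 10 elements and A has 4;
# the allocations in Z \ A leave some resources to the learning plane.
print(len(allocation_space(2, 3)), len(action_space(2, 3)))  # 10 4
```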
We can then divide the time slots into two categories, which we name _DRL_ and _learning_: in DRL slots, all the resources are allocated to the users' slices according to the action chosen by the DRL agent, while in learning slots the resources are divided between the learning process and the users' slices, according to a simple, empirical strategy. We also remark that learning slots are not considered as experience samples for the DRL training, as the action \(\mathbf{z}\) in these slots might not belong to the action space \(\mathcal{A}\) considered in the DRL slots.
Fig. 1 shows a basic schematic of the process in the communication use case: the two classes of users, corresponding to Internet of Things (IoT) (magenta) and human communications (blue), transmit over a shared link, and the resources in each time slot (which correspond to bandwidth and time resources in the uplink to the Cloud) are allocated following a dynamic division. Slots 3 and 6 in the figure are learning slots: a significant portion of the resources is allocated to the learning plane (green line). We remind the reader that this networking example, while being the main motivation for our study, is not the only application of the model, which may also be used to allocate scarce computational or energy resources to different users in a dynamic scenario.
For the sake of simplicity, we assume that each slot will be used as a learning slot with probability \(\rho(t)\), which decreases linearly over time as the learned policy becomes more stable. The actual shape of \(\rho(t)\) (i.e., the learning curve) shall be defined based on the coherence time of the scenario, i.e., the number of slots over which the statistics of the environment are approximately stationary. As detailed later (see (14)), here we choose a linear function, but other choices are possible. We now need to define an allocation strategy for the learning slots.
### _Greedy Allocation Strategy_
To define a strategy for splitting resources in the learning slots, we consider two contrasting objectives: minimizing the loss of QoS for the users, and maximizing the number of experience samples that can be transferred to the learner.
To capture the first aspect, we define a function \(\hat{R}:\mathcal{S}\times\mathcal{Z}\rightarrow\mathbb{R}\) that represents an approximation of the instantaneous reward for each resource allocation, considering only the information available in the current state. If the QoS functions \(\{f_{m}(\cdot)\}\) are known, we can consider the following function:
\[\hat{R}(\mathbf{s},\mathbf{z})=\sum_{m=1}^{M}\left[\sum_{i=1}^{z_{m}}f_{m}\left(\Delta_{q_{m}(i)}\right)-\sum_{j=z_{m}+1}^{Q}f_{m}\left(\Delta_{q_{m}(j)}+1\right)\right]. \tag{6}\]
Note that maximizing (6) may lead to suboptimal resource allocations, since \(\hat{R}(\mathbf{s},\mathbf{z})\) does not account for the long-term reward.
To model the second aspect, we consider that each DRL slot generates an experience sample, which requires \(\ell\) packets (and as many resources) to be transferred to the learner. Due to memory limitations, we assume that the number of samples that can be buffered cannot exceed \(E\). We hence define a second function \(S(\mathbf{z},e)\) that captures the effectiveness of the allocation \(\mathbf{z}\) in transferring the \(e\in\{0,\ldots,E\}\) samples in the experience queue. In this paper, we define \(S\) in terms of the number of experience samples that we manage to transmit in the learning slot, defined by (17) in our use case, but, once again, this is a reasonable but arbitrary choice and other options might be more suitable for other scenarios.
Fig. 2: Schematic of the learning plane resource allocation policy.
The greedy strategy is then the solution to the following optimization problem:
\[\mathbf{z}^{*}(\mathbf{s},e)=\operatorname*{arg\,max}_{\mathbf{z}\in\mathcal{Z}}\left(M^{-1}\hat{R}(\mathbf{s},\mathbf{z})+E^{-1}S(\mathbf{z},e)\right). \tag{7}\]
If \(f_{m}\) is concave for all slices with a soft timing requirement, we can exploit the FIFO nature of the queue to provide a simple iterative solution, starting from the empty assignment and gradually assigning resources to either one of the slices or the learning process, depending on the value of the utility function. Fig. 2 shows a schematic of the full learning plane resource allocation strategy in a simple case with \(M=2\): at each time step, the node randomly selects either the DRL agent or (with probability \(\rho(t)\)) the greedy allocation, which reserves some resources for the learning plane.
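A minimal sketch of this iterative assignment is given below; it assumes that the marginal utility of serving one more queued packet, and of granting one more resource block to the learning plane, can be queried cheaply (all helper names are ours, not from the system model). Since the per-slice utilities are concave, the marginal gains are non-increasing, so the resource-by-resource greedy choice is optimal for (7), provided \(S\) is likewise concave in the number of learning resources.

```python
def greedy_split(queue_lens, n_resources, marginal_qos, marginal_learning):
    """Assign resources one at a time to the option with the largest marginal gain.

    queue_lens:            number of queued packets per slice (FIFO order)
    marginal_qos(m, i):    gain of also serving the packet in position i of slice m
    marginal_learning(k):  gain of a (k+1)-th resource block for the learning plane
    """
    M = len(queue_lens)
    z = [0] * M      # resources assigned to each slice
    k = 0            # resources assigned to the learning plane
    for _ in range(n_resources):
        gains = [marginal_qos(m, z[m]) if z[m] < queue_lens[m] else float("-inf")
                 for m in range(M)]
        g_learn = marginal_learning(k)
        best = max(range(M), key=gains.__getitem__)
        if g_learn >= gains[best] and g_learn > float("-inf"):
            k += 1
        elif gains[best] > float("-inf"):
            z[best] += 1
        else:
            break    # no remaining assignment improves the objective
    return z, k
```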
### _Continual Learning_
We assume the environment can be characterized by a set of parameters \(\mathbf{\omega}\), which determine its stochastic dynamics. From time to time, these parameters can change instantaneously, making the environment non-stationary (in the DRL jargon, changing the task) and requiring the strategy to be updated in order to pursue the new task.
To cope with such a non-stationary context, we propose a CL strategy similar to the work in [15], but including considerations on the cost of learning. When a context change is detected, say from \(\mathbf{\omega}\) to \(\mathbf{\omega}^{\prime}\), we assess its significance by means of a distance function \(\eta(\mathbf{\omega},\mathbf{\omega}^{\prime})\): if it is larger than a threshold \(\eta_{\text{thr}}\), we consider the environment to be novel enough to warrant a retraining. On the other hand, if the change is smaller than the threshold, we implicitly assume that the policy is close enough to the optimum that maintaining it is cheaper than running a new training phase.
We define the environment index \(k\in\mathbb{N}\), which starts from 0 and is incremented at every significant change in the environment. Our solution maintains a record of the past environments and the respective learned policies, so that as the environment shifts into the new context \(\mathbf{\omega}_{k+1}=\mathbf{\omega}^{\prime}\), we can find the closest past environment:
\[j^{*}=\operatorname*{arg\,min}_{j\in\{0,\dots,k\}}\eta(\mathbf{\omega}_{j},\mathbf{ \omega}^{\prime}). \tag{8}\]
If the previously experienced environment is close enough to the new one, i.e., \(\eta(\mathbf{\omega}_{j^{*}},\mathbf{\omega}^{\prime})<\eta_{\text{thr}}\), we can apply the stored policy directly, relying on a short training phase with an increased exploration rate and training probability \(\rho(t)\) to adapt to the small change. If no environment in the memory is close enough, training needs to begin from scratch, with a slower and more expensive training phase.
## 4 Network Slicing Use Case
To substantiate the approach on a practical but easy-to-analyze use case, we consider the resource allocation problem in a simple network slicing scenario. We assume a common communication link is used to transmit both the data packets generated by the users, which belong to two different network slices, and the pieces of information used to feed the learner. Time is divided into slots of constant duration \(\tau\), and in each slot the transmission channel can carry \(N\) orthogonal and identical resource blocks. The scenario fits the general model presented in the previous section, as the communication resources are shared between the data and learning planes. The full parameters for the scenario, which we will describe in this section, are given in Tab. I.
### _Communication System Model_
We consider two slices, corresponding to the two types of data sources:
* Slice 1 is for bulky file transfer, for which we do not set any strict latency constraints. However, we want the system to have the highest possible reliability to ease the burden on the higher layers. As such, \(f_{1}(T)=1\) for all finite values of \(T\), but the QoS is 0 if \(T\) is infinite (i.e., if the packet is dropped);
* Slice 2 is intended for interactive traffic, such as video conferencing or Virtual Reality (VR) traffic, with a strict latency deadline: packets need to be transmitted with a maximum latency \(T_{\text{soft}}^{(2)}\). For the sake of simplicity, we assume that, after \(T_{\text{soft}}^{(2)}\), the utility decreases linearly with time, dropping to 0 if the latency is higher than \(T_{\max}^{(2)}\geq T_{\text{soft}}^{(2)}\), i.e., \(f_{2}(x)=1\) if \(0\leq x\leq T_{\text{soft}}^{(2)}\), and \(f_{2}(x)=\max\left(0,1-(x-T_{\text{soft}}^{(2)})/(T_{\max}^{(2)}-T_{\text{soft} }^{(2)})\right)\) if \(x>T_{\text{soft}}^{(2)}\).
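In code, the two QoS profiles defined above read as follows (a direct transcription; the parameter names are ours):

```python
import math

def f1(latency: float) -> float:
    """Slice 1 (file transfer): full utility for any finite latency."""
    return 1.0 if math.isfinite(latency) else 0.0

def f2(latency: float, t_soft: float, t_max: float) -> float:
    """Slice 2 (interactive): utility 1 up to t_soft, linear decay to 0 at t_max."""
    if latency <= t_soft:
        return 1.0
    return max(0.0, 1.0 - (latency - t_soft) / (t_max - t_soft))
```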
We remark that, although these QoS functions are reasonable, they may not be the most appropriate to represent the considered slices. Since the purpose of this study is to gain insights on the cost of learning in dynamic systems, rather than to propose a quantitative performance analysis of the use case, we prefer these neatly shaped functions, which allow for a qualitative performance analysis while easing the interpretability of the results.
The number of active users in each slice is variable, making traffic non-deterministic. We consider a maximum number of active users \(U_{m}\in\mathbb{N}\) for each slice \(m\in\{1,2\}\). Each user follows an on-off model, which can be modeled as a Gilbert-Elliott binary Markov chain with transition probability matrix \(\mathbf{O}^{(m)}\). In state 0, the user does not transmit, while in state 1, it transmits packets of size \(L\) with a constant bitrate \(R_{m}\).
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Symbol & Value \\ \hline \multicolumn{3}{c}{**Communication system**} \\ \hline Number of subchannels & \(N\) & 15 \\ Slot time duration & \(\tau\) & 1 ms \\ Packet queue length & \(Q\) & 1500 \\ Packet size & \(L\) & 512 B \\ Link capacity & \(C\) & 7.68 MB/s \\ \hline \multicolumn{3}{c}{**Learning plane**} \\ \hline Discount factor & \(\gamma\) & 0.95 \\ Learning queue length & \(E\) & 1500 \\ Packets required for each sample & \(\ell\) & 3 \\ Initial learning slot probability & \(\rho_{0}\) & 0.2 \\ Final learning slot probability & \(\rho_{f}\) & 0.01 \\ Learning slot probability decay & \(\sigma\) & \(8\times 10^{-4}\) \\ Learning slot decay pace & \(H\) & 1000 \\ Queue pressure parameter & \(\chi_{1}\) & 1400 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Use case and learning parameters.
The aggregate traffic generated by slice \(m\) is then represented by the number \(u_{m}(t)\) of active users at time \(t\), multiplied by \(R_{m}\). We can then define a Markov chain over \(u_{m}\in\{0,\ldots,U_{m}\}\), with the following transition probabilities:
\[P\left(u_{m}(t+1)=v\,\middle|\,u_{m}(t)=u\right)=\sum_{a=\max(0,\,u+v-U_{m})}^{\min(u,v)}\binom{u}{a}\left(O^{(m)}_{1,1}\right)^{a}\left(O^{(m)}_{1,0}\right)^{u-a}\binom{U_{m}-u}{v-a}\left(O^{(m)}_{0,1}\right)^{v-a}\left(O^{(m)}_{0,0}\right)^{U_{m}-u-v+a}, \tag{9}\]
where \(a\) counts the users that are active in both slots and \(O^{(m)}_{s,s^{\prime}}\) is the probability that a user of slice \(m\) moves from state \(s\) to state \(s^{\prime}\).
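The same statistics can also be obtained by directly simulating the \(U_{m}\) independent on-off users, which is a convenient sanity check for the transition probabilities above (a NumPy sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def step_users(states: np.ndarray, O: np.ndarray) -> np.ndarray:
    """Advance the independent Gilbert-Elliott users by one slot.

    states: binary vector, 1 = active; O: 2x2 matrix O^{(m)} with
    O[s, s'] = P(next state is s' | current state is s).
    """
    p_on = O[states, 1]  # per-user probability of being active in the next slot
    return (rng.random(states.size) < p_on).astype(int)

# Example: U_m = 10 sticky on-off users; u_m(t) is the sum of the states.
O = np.array([[0.95, 0.05],
              [0.10, 0.90]])
states = np.zeros(10, dtype=int)
u_trace = []
for t in range(1000):
    states = step_users(states, O)
    u_trace.append(int(states.sum()))
```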
When the buffer is full, the oldest samples in the experience queue are dropped in favor of new experiences, avoiding too many correlated samples filling the queue.
The probability of selecting a slot as a learning slot decays linearly, starting from an initial value \(\rho_{0}\) and gradually decreasing to a value \(\rho_{f}\): every \(H\) steps, the learning slot probability is decreased by a constant value \(\sigma\):
\[\rho(t)=\max\left(\rho_{f},\rho_{0}-\left\lfloor\frac{t}{H}\right\rfloor\sigma \right). \tag{14}\]
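Equation (14) translates directly into code; the defaults below are the Table I values:

```python
def rho(t: int, rho0: float = 0.2, rho_f: float = 0.01,
        sigma: float = 8e-4, H: int = 1000) -> float:
    """Learning-slot probability of (14): sigma is subtracted every H slots,
    with rho_f as the floor."""
    return max(rho_f, rho0 - (t // H) * sigma)
```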
Finally, we consider the greedy allocation in the learning slots. As the first slice has no latency requirements, we consider allocating resources to it greedily only when the number of packets in the queue is higher than a threshold \(\chi_{1}\): in this way, we avoid packet rejections, but also leave more resources for the learning plane and for latency-sensitive packets.
We can then define the following estimated rewards:
\[\hat{R}_{1}(\mathbf{s},\mathbf{z}) =\min(0,z_{1}-\min(q_{1}-\chi_{1},N)); \tag{15}\] \[\hat{R}_{2}(\mathbf{s},\mathbf{z}) =\min(0,z_{2}-\min(\xi_{2},N));\] (16) \[S(\mathbf{z},e) =-\left(\min(e,N)-\left(N-\sum_{m=1}^{2}z_{m}\right)\right). \tag{17}\]
The minimum operation ensures that resources will not be allocated to a slice once the queue pressure is below the limit \(\chi_{1}\) or all packets with a close deadline are served, respectively. We can then define the following problem:
\[\mathbf{z}^{*}(\mathbf{s},e)=\operatorname*{arg\,max}_{\mathbf{z}\in\mathcal{Z}}S(\mathbf{z},e)+\sum_{m=1}^{2}\hat{R}_{m}(\mathbf{s},\mathbf{z}). \tag{18}\]
As the problem can be converted to an integer linear program, it is easily solved through iterative methods.
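With \(N=15\) and two slices, the feasible set contains only 136 allocations, so (18) can also be solved by exhaustive search; a compact reference sketch is given below (here \(q_{1}\) is the slice-1 queue occupancy and \(\xi_{2}\) the number of slice-2 packets close to their deadline, as in (15)-(16)):

```python
def z_star(q1: int, xi2: int, e: int, N: int = 15, chi1: int = 1400):
    """Exhaustive solution of (18) over all allocations with z1 + z2 <= N."""
    def objective(z1: int, z2: int) -> int:
        r1 = min(0, z1 - min(q1 - chi1, N))   # Eq. (15)
        r2 = min(0, z2 - min(xi2, N))         # Eq. (16)
        s = -(min(e, N) - (N - z1 - z2))      # Eq. (17)
        return s + r1 + r2
    return max(((z1, z2) for z1 in range(N + 1) for z2 in range(N + 1 - z1)),
               key=lambda z: objective(*z))
```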
### _Continual Learning_
In our environment, the statistics of the traffic change periodically every 500 seconds: the parameter vector \(\boldsymbol{\omega}\) includes \(\mathbf{O}^{(1)}\), \(\mathbf{O}^{(2)}\), \(U_{1}\), and \(U_{2}\), and as a result, corresponding changes are made to the transition matrix \(\mathbf{P}\). The policy for a given set of environmental parameters \(\boldsymbol{\omega}\) is defined by the set of corresponding trained weight vectors in the DRL neural network, \(\theta_{\boldsymbol{\omega}}\).
In the slicing task, the average traffic for each slice is enough to characterize a new environment, and we can identify each context with the vector \(\boldsymbol{\omega}=\left(\mathbb{E}\left[U_{1}\right],\mathbb{E}\left[U_{2}\right]\right)\). We then use the threshold \(\eta_{\text{thr}}\) to trigger a change event: in other words, we only initiate a change event if the distance between two environments is greater than \(\eta_{\text{thr}}\). While the average may not accurately represent the environment due to potential variance changes with a constant average, in our specific scenario the average proved sufficient to maintain system performance near the ideal one (as described in the following section). The implementation of a robust method for detecting changes in the environment is critical, as the failure to identify a true change or the detection of a false change can lead to a degradation in system performance. However, in this study, we focus on the efficiency-vs-effectiveness tradeoff of the learning and defer the study of advanced and reliable context recognition to future research.
Following the continual learning strategy we outlined in Sec. 3, the weights of the neural network are chosen from the closest environment observed in the past. We can also make an additional consideration: if the offered traffic is decreasing for both slices, the previous policy will still obtain good results, as the new environment is substantially easier than the previous one. We then define a strict minority relation between vectors, so that \(\mathbf{x}\prec\mathbf{y}\) if the two vectors \(\mathbf{x}\) and \(\mathbf{y}\) have the same length and each element of \(\mathbf{x}\) is smaller than the corresponding element of \(\mathbf{y}\):
\[\mathbf{x}\prec\mathbf{y}\Leftrightarrow|\mathbf{x}|=|\mathbf{y}|\wedge x_{i} <y_{i},\;\forall i\in\{1,\ldots,|\mathbf{x}|\}. \tag{19}\]
We also employ the Euclidean distance to define \(\eta(\boldsymbol{\omega},\boldsymbol{\omega}^{\prime})\) between two environments, so the centralized agent is updated as follows:
\[\theta_{\boldsymbol{\omega}_{k+1}}=\begin{cases}\theta_{\boldsymbol{\omega}_{k}},&\text{if }\boldsymbol{\omega}_{k+1}\prec\boldsymbol{\omega}_{k};\\ \theta_{\boldsymbol{\omega}_{j^{*}}},&\text{if }\|\boldsymbol{\omega}_{k+1}-\boldsymbol{\omega}_{j^{*}}\|_{2}<\eta_{\text{thr}};\\ \theta\sim\mathcal{N}(0,0.1),&\text{otherwise},\end{cases} \tag{20}\]
where \(||\mathbf{x}||_{2}\) is the \(\ell_{2}\) norm of vector \(\mathbf{x}\). After the new weight vector is selected, the algorithm temporarily increases both the training slot probability \(\rho(t)\) and the exploration rate of the DRL agent, so that the new policy can be adapted to the new task.
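A compact sketch of the update rule (20), combining the minority relation (19) with the nearest-environment lookup of (8) (NumPy; the function names are ours):

```python
import numpy as np

def select_weights(omega_new, memory, eta_thr, init_fn):
    """Choose the initial DQN weights for a newly detected context.

    memory:  list of (omega, theta) pairs for past environments,
             with memory[-1] being the most recent one.
    init_fn: returns fresh random weights, e.g. theta ~ N(0, 0.1).
    """
    omega_new = np.asarray(omega_new)
    omega_prev, theta_prev = memory[-1]
    # Strict minority (19): the new traffic is lighter on every slice.
    if np.all(omega_new < np.asarray(omega_prev)):
        return theta_prev
    # Nearest past environment, Eq. (8) with the Euclidean distance.
    dists = [np.linalg.norm(omega_new - np.asarray(w)) for w, _ in memory]
    j_star = int(np.argmin(dists))
    if dists[j_star] < eta_thr:
        return memory[j_star][1]
    return init_fn()  # no close match: retrain from scratch
```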
## 5 Simulation Settings and Results
In this section, we present numerical findings that demonstrate the efficacy of the dynamic learning plane resource allocation policy in a non-stationary environment. To assess the proposed framework, we run the resource allocation for \(64000\) seconds, corresponding to 128 coherence periods lasting \(500\) seconds each. As each allocation step corresponds to 1 ms, this means that the environment is statistically stable for \(5\times 10^{5}\) steps, then abruptly transitions to a different behavior. The changes in the environment are produced using Algorithm 1. In the algorithm, we denote the probability of a user belonging to slice \(m\) being active as on\({}_{m}\), and the uniform distribution between \(a\) and \(b\) as \(\mathcal{U}(a,b)\). The full parameters of the simulation model are given in Tables I and II.
We consider four different benchmarks for the proposed scheme:
* _Out-of-band_: this scheme represents the ideal case in which training data is transmitted over a side channel with infinite capacity. This aligns with the common assumption of free training in the literature, and represents an upper bound for performance;
* _Frequency Division Multiple Access (FDMA)_: here we assume 1 resource block in each slot is reserved to the learning plane, while the other 14 resource blocks can be freely allocated to the users' slices;
\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{2}{c}{**Layer size**} & \multicolumn{1}{c}{**Activation function**} \\ Input & Output & \\ \hline
13 & 64 & ReLU \\
64 & 32 & ReLU \\
32 & 3 & Linear \\ \hline \hline \end{tabular}
\end{table} TABLE III: DQN architecture.
* _Time Division Multiple Access (TDMA)_: we consider a time division between the learning and data planes, in which all available resources are allocated to the learning plane once every \(T_{\ell}\) slots. We consider two cases, with \(T_{\ell}=10\) and \(T_{\ell}=100\).
In the following, we will also consider a normalized reward, equal to \(1\) if all packets are delivered with utility \(1\) (i.e., before \(T_{\text{soft}}^{(2)}\) if they belong to slice \(2\)) and \(0\) if all packets are dropped or rejected.
Fig. 3a shows the total number of active flows (i.e., of users transmitting data in the slot) in the first slice during the entire simulation time, while Fig. 3b reports a smoothed time average, along with the tracked number of active flows. We can clearly see that the smoothed average is tracked relatively well, so that any significant drift is detected promptly and dealt with by the CL scheme.
We can also consider the effect of the shared resources on both the learning and data planes: Fig. 4 shows how many experience samples are forwarded to the Cloud during the training process. Following the linear decay of \(\rho(t)\), the number of new experience samples transmitted for training is initially very high, but decreases over about \(70\) seconds to reach the minimum, which is between \(40\) and \(50\) samples per second. This rate is high enough to guarantee that changes in the environment statistics are captured, but does not impact the final performance, as we will show in the following. Furthermore, we can analyze the impact of learning slots on the instantaneous reward by looking at the empirical Cumulative Distribution Function (CDF) of the reward penalty from using the greedy allocation, shown in Fig. 5: the reward loss is \(0\) in \(40\%\) of cases, and below \(0.1\) in \(80\%\) of cases. This means that the greedy allocation can still guarantee good performance in most cases, and as such, is a robust strategy for the learning slots.
We have explored the impact of three different initialization strategies on system performance: the proposed strategy,
Fig. 4: Average number of forwarded learning samples per second.
Fig. 3: Number of active flows over time for each coherence period in the first slice.
which uses the nearest recorded environment, provides a significant boost over keeping the previous environment or randomly selecting a memorized one, as Fig. 6 shows: the nearest environment selection improves CL by starting the DQN with weights that are already close to the correct ones, reducing the mistakes in the initial phases of the retraining (as shown by the limited number of outliers in the boxplot).
We also sampled four different coherence periods for the system (i.e., periods of time during which the context does not change), whose parameters are reported in Tab. II. The reported index corresponds to the time of their appearance in the simulation. In all the selected environments, the load is greater than 0.945, i.e., the offered traffic is very close to the channel capacity, and in env. 12 and 110, the load is around 0.99. Fig. 7 shows the average reward for these environments, along with the packet drop rate for slice 2 (i.e., packets exceeding \(T_{\max}^{(2)}\), which are dropped from the queue as their utility is 0) and the packet rejection rate for slice 1 (i.e., packets which find a full buffer and are discarded directly). The indices of the periods represent the incremental number of different environments seen by the CL agent: the first, with index 0, is the first to be seen, while there are 12 other periods between 0 and 12, and so on. Each period has the same duration, i.e., 500 seconds.
As a first observation, we note that the ideal out-of-band policy, which neglects the cost of learning, clearly outperforms those that reserve some resources to the learning plane, which confirms that the cost of learning is not negligible and needs to be accounted for when designing the resource allocation strategies, as we have done in our "Dynamic" scheme. The bar plots in Fig. 7a-d, in fact, show that our scheme can outperform the static FDMA and TDMA resource allocation strategies, almost reaching the same performance as the ideal out-of-band system. The only cases with an appreciable performance gap between our scheme and the out-of-band system are env. 12 and env. 110: as we remarked above, these are the most challenging ones, with a total load close to or over 99% of the nominal link capacity. In these limit cases, any learning policy that requires resources for the training will unavoidably cause violations of the QoS requirements for some users, which further highlights the importance of the cost of learning in the system design.
The relative simplicity of the system model we considered makes it possible to analyze in depth the choices made by the different schemes. From the bar plots in Fig. 7e and Fig. 7f, we can observe that the FDMA and TDMA schemes tend to drop or reject a significant number of packets in all environments, while the ideal and dynamic ones manage to limit the number of unserved packets for both slices. Interestingly, even the ideal scheme rejects a significant number of packets from slice 1 in env. 0, but performance is still high. We can explain this by considering Fig. 8, which shows the empirical CDF of the latency for packets in slice 2. Fig. 8a clearly shows that almost all packets have utility 1, i.e., are delivered before \(T_{\text{soft}}^{(2)}\): in this case, all schemes tend to privilege slice 2, filling the queue in slice 1 more often. We should also consider that the learning agents start from scratch in environment 0, i.e., they have no pre-trained weights to start from, and we should expect a relatively large number of mistakes.
We can also see that the ideal and dynamic schemes have matching latency profiles in env. 102, as Fig. 8c shows, while the FDMA and TDMA schemes tend to transmit more packets with a latency close to \(T_{\max}^{(2)}\). In environments 12 and 110, shown in Fig. 8b and Fig. 8d, respectively, the dynamic scheme drops more packets than the ideal one, and has a higher overall latency, but still outperforms the static allocation schemes. Interestingly, the two TDMA schemes tend to have better latency performance than the dynamic scheme in env. 12, but cannot improve the utility: the fraction of packets with a latency higher than \(T_{\text{soft}}^{(2)}\) is the same for all three schemes, and the two TDMA ones drop a large number of packets, causing a significant performance difference. In this case, serving most packets from slice 2 as soon as they arrive is not advantageous, as it leads to worse performance overall for TDMA.
## 6 Conclusions and Future Directions
In this work, we have designed a dynamic resource allocation policy, which can mediate between the learning and data planes, controlling the trade-off between effectiveness and efficiency of DRL models. Unlike most works in the learning-based networking literature, we specifically consider the cost of learning, i.e., the resources required by the training process of a DRL agent, and show that our dynamic policy can outperform static schemes and, after a short transition phase, match the performance of an ideal system with an out-of-band learning plane. Furthermore, the adaptability of the scheme is demonstrated by applying it in a CL setting with environment changes, to which the dynamic scheme adapts extremely quickly.
Possible extensions of the work certainly include the adaptation of the scheme to more complex scenarios, with a larger number of resources and slices and more stringent QoS requirements, such as those for Ultra-Reliable Low-Latency Communications (URLLC).
Fig. 5: Empirical CDF of the reward loss during learning slots.
Fig. 6: Distribution of the performance with different initialization strategies after drift is detected.
However, even more interesting would be addressing some theoretical questions, such as the interplay between the cost of learning and active learning, which requires selecting the most valuable samples to be transmitted in order to accelerate the training, particularly when resources in the learning plane are scarce. Furthermore, as mentioned, the detection of the context changes that trigger a retraining of the network is another open challenge. Finally, of particular interest is the design of meta-learning schemes that can learn when the resource allocation scheme needs to be retrained, balancing the potential performance improvement that could be brought about by a retrained policy against the cost to learn it, relative to the expected system coherence time.
|
2304.04749 | Watch the Gap: Making code more intelligible to users without
sacrificing decentralization? | The potential for blockchain technology to eliminate the middleman and
replace the top down hierarchical model of governance with a system of
distributed cooperation has opened up many new opportunities, as well as
dilemmas. Surpassing the level of acceptance by early tech adopters, the market
of smart contracts is now moving towards wider acceptance from regular (non
tech) users. For this to happen however, smart contract development will have
to overcome certain technical and legal obstacles to bring the code and the
user closer. Guided by notions from contract law and consumer protection we
highlight the information gap that exists between users, legal bodies and the
source code. We present a spectrum of low-code to no-code initiatives that aim
at bridging this gap, promising the potential of higher regulatory acceptance.
Nevertheless, this highlights the so called "Pitfall of the Trustless Dream",
because arguably solutions to the information gap tend to make the system more
centralized. In this article, we aim to make a practical contribution of
relevance to the wide-spread adoption of smart contracts and their legal
acceptance by analyzing the evolving practices that bring the user and the code
closer. | Simona Ramos, Morshed Mannan | 2023-03-10T10:50:18Z | http://arxiv.org/abs/2304.04749v1 | # Watch the Gap: Making code more intelligible to users without sacrificing decentralization?
###### Abstract
The potential for blockchain technology to eliminate the middleman and replace the top-down hierarchical model of governance with a system of distributed cooperation has opened up many new opportunities, as well as dilemmas. Surpassing the level of acceptance by early tech adopters, the market of smart contracts is now moving towards wider acceptance from regular (non-tech) users. For this to happen, however, smart contract development will have to overcome certain technical and legal obstacles to bring the code and the user closer. Guided by notions from contract law and consumer protection, we highlight the 'information gap' that exists between users (including judges/legal bodies) and the source code. We present a spectrum of low-code to no-code initiatives that aim at bridging this gap, promising the potential of higher regulatory acceptance. Nevertheless, this highlights the 'Pitfall of the Trustless Dream', because arguably solutions to the information gap tend to make the system more centralized. In this article, we aim to make a practical contribution of relevance to the widespread adoption of smart contracts and their legal acceptance by analyzing the evolving practices that bring the user and the code closer.
blockchain, smart contract, information gap
## I Introduction
A smart contract is an automatable and enforceable agreement between two or more parties. The concept of the smart contract was first introduced by computer scientist and cryptographer Nick Szabo in the late nineties [1]. Szabo described a smart contract as a 'smart' agreement tool that can automatically execute certain pre-programmed steps. Nevertheless, Szabo didn't argue for the superiority of smart contracts over paper contracts, as he noted that they should not be seen as intelligent tools that can phase out traditional contracts - as traditional contracts are designed to be understood by people and smart contracts by machines.
In 2013, Ethereum's implementation of a virtual machine allowed for snippets of code (smart contracts) to be executed in a decentralized way without third-party interference, bringing a whole new spectrum of applications and possibilities. Under this system, parties can coordinate themselves in a peer-to-peer manner, according to a set of protocols and rules incorporated into self-executing smart contract code [2]. This has led some to describe blockchain as a "trustless" or "trust-free" technology [3]. However, although 'disintermediation' has been regarded as one of the most innovative traits of blockchain technology (and the smart contracts relying on it), blockchain-based systems are complex socio-technological assemblages. In other words, these systems are made up not only of code, but also involve a large variety of actors operating at different layers [4]. As such, due to economic and game-theoretic incentives propagated across the system, centralization can occur at different layers: in the concentration of mining pools and mining farms, as well as through online exchanges and blockchain explorers. For example, joining a mining pool is common among miners as a way to make mining rewards more predictable due to the increased difficulty level of mining a block [5].
According to [6], blockchains and smart contracts could lead to strengthening trust among colluders, where blockchain can transform non-cooperative games (collusion) into cooperative ones. Hence, blockchain can be considered a type of algorithmically run "confidence machine", in which users rely on the predictability of the technology but which inevitably involves trusting actors (such as developers, miners, wallet service providers, etc.). In other words, blockchains do not eliminate the need for collaboration and trust but provide reliable records and automation for transparent processes that may facilitate cooperation between agents [7].
Theoretically, smart contracts can be created on top of public, decentralized and distributed ledgers, accessible to everyone willing to enter a contractual relationship of a certain type. However, creating smart contracts requires certain technical knowledge and expertise, and average users are able neither to develop nor to fully understand written smart contract code. Hence, to access smart contracts, an average user has to resort to a trusted party with sufficient technical expertise. This has ultimately limited the speed of expansion and adoption of smart contracts among the general public and created policy dilemmas among regulators. According to [8], the inability of average consumers to understand and interpret smart contracts in intelligible language has been seen as being contrary to consumer protection and the duty of information. We explain the notion of consumer protection and duty of information in the following section.
Arguably, the smart contract governance model focuses on proof-based automation of pre-stated functions run by the system and puts aside relevant legal rules and practices related to consumer protection and duty of information. In other words, this model focuses on providing function-based |
2301.08975 | Expectations for time-delay measurements in active galactic nuclei with
the Vera Rubin Observatory | The Vera Rubin Observatory will provide an unprecedented set of
time-dependent observations of the sky. The planned Legacy Survey of Space and
Time (LSST) operating for 10 years will provide dense lightcurves for thousands
of active galactic nuclei (AGN) in Deep Drilling Fields (DDFs) and less dense
lightcurves for millions of AGN. We model the prospects for measuring time
delays for emission lines with respect to the continuum, using these data. We
model the artificial lightcurves using Timmer-Koenig algorithm, we use the
exemplary cadence to sample them, we supplement lightcurves with the expected
contamination by the strong emission lines (Hbeta, Mg II and CIV as well as
with Fe II pseudo-continuum and the starlight). We choose the suitable
photometric bands appropriate for the redshift and compare the assumed line
time delay with the recovered time delay for 100 statistical realizations of
the light curves. We show that time delays for emission lines can be well
measured from the Main Survey for the bright tail of the quasar distribution
(about 15% of all sources) with the accuracy within 1 sigma error, for DDFs
results for fainter quasars are also reliable when all 10 years of data are
used. There are also some prospects to measure the time delays for the faintest
quasars at the smallest redshifts from the first two years of data, and
eventually even from the first season. The entire quasar population will allow
obtaining results of apparently high accuracy but in our simulations, we see a
systematic offset between the assumed and recovered time delay depending on the
redshift and source luminosity which will not disappear even in the case of
large statistics. Such a problem might affect the slope of the
radius-luminosity relation and cosmological applications of quasars if
simulations correcting for such effects are not performed. | Bozena Czerny, Swayamtrupta Panda, Raj Prince, Vikram Kumar Jaiswal, Michal Zajacek, Mary Loli Martinez Aldama, Szymon Kozlowski, Andjelka B. Kovacevic, Dragana Ilic, Luka C. Popovic, Francisco Pozo Nunez, Sebastian F. Hoenig, William N. Brandt | 2023-01-21T16:56:50Z | http://arxiv.org/abs/2301.08975v2 | # Expectations for time-delay measurements in active galactic nuclei with the Vera Rubin Observatory+
###### Abstract
Context:The Vera Rubin Observatory will provide an unprecedented set of time-dependent observations of the sky. The planned Legacy Survey of Space and Time (LSST) operating for 10 years will provide dense lightcurves for thousands of active galactic nuclei (AGN) in Deep Drilling Fields (DDFs) and less dense lightcurves for millions of AGN.
Aims:In this paper, we model the prospects for measuring the time delays for the AGN emission lines with respect to the continuum, using these data.
Methods:We model the artificial lightcurves using Timmer-Konig algorithm, we use the exemplary cadence to sample them (one for the Main Survey and one for the Deep Drilling Field), we supplement lightcurves with the expected contamination by the strong emission lines (H\(\beta\), Mg II and CIV as well as with Fe II pseudo-continuum and the starlight). We choose the suitable photometric bands appropriate for the redshift and compare the assumed line time delay with the recovered time delay for 100 statistical realizations of the light curves.
Results:We show that time delays for emission lines can be well measured from the Main Survey for the bright tail of the quasar distribution (about 15% of all sources) with the accuracy within 1\(\sigma\) error, for DDFs results for fainter quasars are also reliable when all 10 years of data are used. There are also some prospects to measure the time delays for the faintest quasars at the smallest redshifts from the first two years of data, and eventually even from the first season. The entire quasar population will allow obtaining results of apparently high accuracy but in our simulations, we see a systematic offset between the assumed and recovered time delay depending on the redshift and source luminosity which will not disappear even in the case of large statistics. Such a problem might affect the slope of the radius-luminosity relation and cosmological applications of quasars if simulations correcting for such effects are not performed.
Conclusions:
## 1 Introduction
The Vera C. Rubin Observatory and its Legacy Survey of Space and Time (LSST; Ivezic et al. 2019) will provide an unprecedented amount of data in many fields, thus revolutionizing our view of the Universe. Observations will start relatively soon, most likely in mid-2024, so optimizing the output from these data, particularly from the first stages of its operation, is extremely important. Among numerous results, LSST will provide up to ten million quasars, opening a path to massive reverberation monitoring of active galactic nuclei (AGN) at a range of redshifts from 0 up to 7 (Shen et al., 2020; Kovacevic et al., 2021).
The current description of the Vera Rubin Observatory can be found in the work of Ivezic et al. (2019), general expectations of the LSST discoveries in the field of AGN were discussed by Brandt et al. (2018), and specific predictions for the mapping of AGN accretion disks with LSST were presented by Kovacevic et al. (2022). The prospects for the continuum time delays from accretion disks were also discussed by Yu et al. (2020). In the current paper, we aim to assess the accuracy of measuring emission-line time delays using the four photometric bands of LSST. Broad emission lines are characteristic for almost all bright AGN (see e.g. Krolik, 1999, for a suitable review), and the exact location and geometry of the corresponding region (known as Broad Line Region, BLR) is generally unresolved, apart from most recent observations of three AGN in the infrared domain,
with the use of the GRAVITY Very Large Telescope Interferometer (VLTI) instrument (3C 273, Gravity Collaboration et al. 2018; NGC 3783, GRAVITY Collaboration et al. 2021; and IRAS 09149-6206, GRAVITY Collaboration et al. 2020). For the remaining objects, we rely on reverberation monitoring in order to understand the structure and dynamics of their BLR.
Reverberation mapping is a well-established observational technique that has been extensively used to study the inner structure of Active Galactic Nuclei (AGN) on sub-parsec length scales, which are typically beyond the resolution limits of current telescopes. It effectively replaces the spatial resolution with a time resolution that needs to be adjusted according to the spatial scale of interest (see e.g. Cackett et al. 2021, for a review and references therein). The idea was proposed by Blandford & McKee (1982), who showed that the line emission produced in the broad-line region (BLR) follows the same variability pattern as the continuum emission from the disk but is delayed by the light travel time between the disk and the BLR. The first systematic observational results were published in 1990-1993, presenting the results from the ground-based campaigns as well as from the International Ultraviolet Explorer (IUE) for bright nearby AGN (Maoz et al. 1990; Netzer & Maoz 1990; Peterson & Gaskell 1991; Peterson et al. 1991; Peterson 1993). One of the main results of reverberation mapping is the measured time lag, which can be used to estimate the black hole mass. This technique has successfully measured the BH mass of many low redshift (\(z<0.5\)) AGNs through intense monitoring (Kaspi et al. 2000; Peterson et al. 2004; Bentz et al. 2009; Bentz & Katz 2015; Du et al. 2015, 2016, 2018b). Results for higher redshifts, based on Mg II and CIV lines, came later, as those required longer monitoring (Shen et al. 2016; Grier et al. 2017; Lira et al. 2018; Czerny et al. 2019; Zajacek et al. 2020; Kaspi et al. 2021; Zajacek et al. 2021). The radius of the BLR measured from the time delays and the monochromatic absolute luminosity of reverberated quasars/AGN are significantly correlated, which is referred to as the Radius-Luminosity (RL) relation (Bentz et al. 2013). Recently, the standard RL relation has been proposed for cosmological applications by Watson et al. (2011), Haas et al. (2011), and Czerny et al. (2013). The development has been encouraging although the cosmological constraints are not yet tight due to a relatively small number of studied AGN combined with a large scatter in individual measurements (Martinez-Aldama et al. 2019; Czerny et al. 2021; Zajacek et al. 2021; Khadka et al. 2021, 2022; Cao et al. 2022).
LSST will provide photometric lightcurves for many AGN but conversion from photometry to line time delays is not simple and accurate, and it is important to estimate the reliability of measurements depending on the source luminosity, redshift, and available cadence. We thus created a simulation tool that allows us to optimize the success of the measurements for the whole survey, the first two years and the first year, and we consider both the Main Survey as well as the planned Deep Drilling Fields.
## 2 Model
We test the prospects for the time-delay determination of broad emission lines with respect to the continuum using synthetic lightcurves sampled according to the Operation Simulator (OpSim) provided by the VRO-LSST Data Management team1. For the Wide-Fast-Deep (WFD) survey, we use the OpSim Run: baseline_v2.0_10yrs, and extract the 5\(\sigma\) depth light curves for the 6 bands (ugrizy) determined for a 30-second exposure. We make a random search using a 3.5\(\times\)3.5\({}^{\circ}\) search area and limit the sky coordinates (RA, DEC) within 0.01 dispersion. This criterion allows us to choose roughly the 5\(\sigma\) depth for the same (synthetic) source across the 6 bands. We make a similar search for the Deep-Drilling Field surveys, where we make use of the OpSim Run: ddf_v1.7_nodither_10yrs. We model the continuum variability and the contribution of the emission lines to the photometric bands. This allows us to simulate the accuracy of the broad emission line delays. In this way, we also optimize the future modelling effort by the choice of prospective sources at each stage of the duration of the project. In our simulations, we assume that there is a way to estimate the redshift of the studied source, \(z\). This allows us in principle to derive the absolute value of the monochromatic flux in one of the photometric bands from the measured magnitude. In the current version of the program, we simply assume the value of the absolute monochromatic flux as one of the parameters, since we aim at testing the possibility of the time delay determination, and not at a determination of the cosmological parameters from the measured time delay. If only a photo-z is available, the delay measurement is still possible, but it introduces additional errors (LSST Science Collaboration et al. 2009; Ivezic et al. 2019). We will touch on this issue in the Discussion part.
Footnote 1: The results of the OpSim runs are hosted on the publicly available website: [http://astro-lsst-01.astro.washington.edu:8080](http://astro-lsst-01.astro.washington.edu:8080)
The presented modelling is based on stochastic lightcurves which offer a good representation of AGN variability, and we present the method used in our simulations in Section 2.3. We only stress now that since the created lightcurves have a random character, we always calculate 100 realizations of the full process described in the next subsections for a single set of parameters, to assess the accuracy of the time delay modelling 2.
Footnote 2: A preliminary version of the results from our code can be found in Panda et al. (2019).
### Choice of suitable photometric bands
We consider the \(g\), \(r\), \(i\), and \(z\) bands as particularly suitable for line delay measurements since their efficiency is high. We select only three strong emission lines for the tentative time delay measurements: H\(\beta\), Mg II, and CIV, since they have a record of showing the radius-luminosity relation from time-delay measurements (e.g. Bentz et al. 2013 for H\(\beta\); e.g. Homayouni et al. 2020; Zajacek et al. 2021 for Mg II; and e.g. Cao et al. 2022 for CIV). Since the line should be well within the band edges, we limit the position of the line to 410 - 530 nm (in the \(g\) band), 570 - 670 nm (in the \(r\) band), 710 - 800 nm (in the \(i\) band), and 830 - 910 nm (in the \(z\) band)3. For the line centre in the rest frame, we assume 486.2721 nm (H\(\beta\)), 154.9000 nm (CIV), and the Mg II line is modelled as a doublet with the mean position 0.5(279.635 + 280.353) nm. If any of the redshifted lines fits any of the favoured spectral regions, this line and this photometric band are selected for further modelling as a line-contaminated band. The nearby band is selected as an uncontaminated band for further processing, and all of that is done automatically. If none of the lines fits into one of the bands, then for this redshift, no time delay recovery is performed.
### Artificial spectrum and line contamination of the photometric bands
We construct the artificial spectrum of an AGN in the wavelength range of 100 - 2000 nm, which is enough to model sources with redshift up to 5. The continuum is parameterized with a single slope \(\alpha\) (\(F_{\nu}\propto\nu^{\alpha}\), or \(F_{\lambda}\propto\lambda^{-2-\alpha}\)). We then add the emission lines: H\(\beta\), MgII, and CIV. Line shapes are parameterized as single Gaussians (MgII is treated as two Gaussian components since it is a doublet); the width is assumed to be the same for all three lines and is set by the Full Width at Half Maximum (FWHM). Optionally, it can be represented by a Lorentzian shape. The line intensity is set by assuming the line equivalent widths (EW). As a default, we assume the EWs characteristic of bright quasars (Forster et al. 2001). We also include the broadband contaminants in the form of the FeII pseudo-continuum and the starlight. The FeII continuum is taken from one of the theoretical templates (\(d11-m20-20.5-735.dat\), which fits well the spectra of quasars, e.g. LBQS 2113-4538, Hryniewicz et al. 2014; HE 0435-4312, Zajacek et al. 2021) of Bruhweiler & Verner (2008), and since the model does not contain any kinematic broadening, we apply Gaussian smearing parameterized by the Gaussian width. The FeII pseudo-continuum contributes both to the optical and the UV part of the spectrum. The spectral shape of the starlight is taken from the model of Cid Fernandes et al. (2005), the version developed by Bruzual & Charlot (2003), with parameters adjusted to fit the bright Seyfert galaxies in Sniegowska et al. (2018). Only the normalization is a free parameter of the model.
Next, we calculate the contamination of the three emission lines to all photometric bands for a given redshift and source properties. This is done by folding the spectrum, shifted to the observed frame, with the profiles of the LSST filters. We first calculate the content of the filter when the selected line is included in the AGN spectrum, then we repeat the calculation setting the EW of the selected line to zero, and finally we calculate the ratio. In this way, we obtain the fractional contamination of each line to each filter, which ranges from a few percent up to 15%, depending on the line. Starlight and FeII are always included, so the contamination is measured against the sum of the continuum and pseudo-continua. This allows us to confirm the band selection and informs us about the importance of the selected line in the selected band.
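A minimal sketch of this folding step, assuming wavelength grids in nm and a filter transmission curve that is interpolated onto the observed-frame grid (the function names are ours; constant factors cancel in the ratio):

```python
import numpy as np

def band_flux(wave, flux, transmission):
    """Flux collected in a band: the spectrum folded with the filter curve."""
    return np.trapz(flux * transmission, wave)

def line_contamination(wave_rest, f_cont, f_line, z, wave_filt, T_filt):
    """Fractional contribution of one emission line to one photometric band.

    f_cont: continuum plus pseudo-continua (starlight, FeII); f_line: the
    line-only flux; both are given on the rest-frame grid wave_rest.
    """
    wave_obs = wave_rest * (1.0 + z)
    T = np.interp(wave_obs, wave_filt, T_filt, left=0.0, right=0.0)
    with_line = band_flux(wave_obs, f_cont + f_line, T)
    without_line = band_flux(wave_obs, f_cont, T)
    return (with_line - without_line) / with_line
```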
The exemplary spectra at two values of the redshift are shown in Figure 1, overplotted with the photometric profiles of the LSST filters.
### Construction of a single dense lightcurve
We first construct a single dense stochastic lightcurve. For this aim, we use the algorithm developed by Timmer & Koenig (1995). We usually assume a broken power-law shape for the underlying power spectrum, with two frequency breaks, \(fb_{1}\) and \(fb_{2}\), and three slopes (\(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\)). Such a parametrization is more general than the Damped Random Walk frequently used for modelling AGN lightcurves (Kelly et al. 2009), which would correspond to \(fb_{1}=fb_{2}\), \(\alpha_{1}=0\), and \(\alpha_{3}=2\). The random aspect appears when the power spectrum specified in the frequency domain is Fourier-transformed to the time domain, since the power spectrum specifies only the amplitude of the Fourier transform, but not the phase. This phase is thus adopted as random, which is justified by studies of AGN lightcurves. This leads to an (almost arbitrary) number of lightcurves corresponding to the same power spectrum and thus representing the same set of
\begin{table}
\begin{tabular}{l c r} \hline Parameter & notation & default \\ & & values \\ \hline \hline redshift & z & \\ photometric error & P & 0.001 mag \\ spectral slope \(F_{\nu}\propto\nu^{\alpha}\) & \(\alpha\) & 0 \\ equivalent width of H\(\beta\) & EW(H\(\beta\)) & 150 Å \\ equivalent width of Mg II & EW(MgII) & 47 Å \\ equivalent width of CIV & EW(CIV) & 45 Å \\ equivalent width of Fe II & EW(FeII) & 50 Å \\ line width & FWHM & 4000 km s\({}^{-1}\) \\ dispersion velocity FeII & \(\sigma_{FeII}\) & 900 km s\({}^{-1}\) \\ starlight normalization at 280 nm & Astar & 0.258 \\ number of statistical realizations & N & 100 \\ low frequency power spectrum slope & \(\alpha_{1}\) & 0 \\ low frequency break & \(fb_{1}\) & \(3.7\times 10^{-10}\) Hz \\ mid-frequency slope & \(\alpha_{2}\) & 1.2 \\ high frequency break & \(fb_{2}\) & \(5.8\times 10^{-9}\) Hz \\ high frequency slope & \(\alpha_{3}\) & 2.5 \\ sampling rate & \(\Delta T\) & 1 day \\ total duration of the TK lightcurve & T & \(10^{9}\) s \\ assumed variability level & \(F_{var}\) & 0.3 \\ offset in the R-L relation & \(\beta\) & 1.573 \\ slope in the R-L relation & \(\gamma\) & 0.5 \\ width of the BLR Gaussian response & \(\sigma_{BLR}\) & 0.1 \(\tau\) \\ curve subtraction coefficient & \(\epsilon_{min}\) & 0.85 \\ curve subtraction coefficient & \(\epsilon_{max}\) & 1.15 \\ no of subtraction sampling & \(n_{e}\) & 10 \\ \hline \end{tabular}
\end{table}
Table 1: Model parameters of pairs of photometric lightcurves.
parameters. As advised by Uttley et al. (2005), the stochastic lightcurve is exponentiated, which avoids negative values when fluctuations are large and additionally reproduces the standard rms-flux correlation and the associated log-normal flux distribution seen in accreting sources. The remaining parameters are the time step in the dense lightcurve, \(\Delta T\), and the total duration of the lightcurve, \(T\). The normalization of the curve is provided by the assumed total variance. This lightcurve will later represent the continuum band, which is relatively free from contamination by a strong emission line.
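A minimal NumPy sketch of this construction is shown below: the broken power-law power spectrum with the Table 1 defaults, random Gaussian Fourier amplitudes with random phases following Timmer & Koenig (1995), and the exponentiation advised by Uttley et al. (2005). This is a simplified illustration, not the production code.

```python
import numpy as np

def broken_powerlaw_psd(f, fb1=3.7e-10, fb2=5.8e-9, a1=0.0, a2=1.2, a3=2.5):
    """Three-slope power spectrum with breaks at fb1 and fb2 (Table 1 defaults)."""
    return np.where(f < fb1, (f / fb1) ** -a1,
           np.where(f < fb2, (f / fb1) ** -a2,
                    (fb2 / fb1) ** -a2 * (f / fb2) ** -a3))

def tk_lightcurve(n, dt, rng, fvar=0.3):
    """Timmer & Koenig (1995) lightcurve, exponentiated per Uttley et al. (2005)."""
    f = np.fft.rfftfreq(n, d=dt)[1:]          # positive frequencies, zero excluded
    amp = np.sqrt(broken_powerlaw_psd(f))
    re = rng.normal(size=f.size) * amp        # random Gaussian Fourier coefficients
    im = rng.normal(size=f.size) * amp
    if n % 2 == 0:
        im[-1] = 0.0                          # the Nyquist coefficient must be real
    x = np.fft.irfft(np.concatenate(([0.0], re + 1j * im)), n=n)
    x *= fvar / x.std()                       # impose the assumed variability level
    return np.exp(x)                          # log-normal fluxes, rms-flux relation

rng = np.random.default_rng(42)
curve = tk_lightcurve(n=11574, dt=86400.0, rng=rng)  # ~1e9 s with 1-day sampling
```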
### Delayed dense lightcurve
In the next step, we create a delayed dense lightcurve. This requires the assumption of the time delay expected in a given object. In our modelling, we use the radius-luminosity relation in the form
\[\log(\tau_{\rm obs}\,[{\rm day}])=\beta+\gamma\log L_{3000,44}+\log(1+z), \tag{1}\]
where \(L_{3000,44}\) is the monochromatic absolute luminosity at 3000 Å in units of \(10^{44}\) erg s\({}^{-1}\), \(\tau_{\rm obs}\) is the time delay in the observer's frame, and \(z\) is the source redshift. The values of the coefficients can be taken from the observational studies of the R-L relation (e.g. Kaspi et al., 2000; Peterson et al., 2004; Bentz et al., 2013), and they are slightly different for various lines; in particular, the delay for CIV is shorter than the typical delay for H\(\beta\) and Mg II (e.g. Lira et al., 2018). This was not included in the simulations; we used the same R-L relation for all the lines, with fixed parameters set as default (\(\beta\) = 1.573, \(\gamma\) = 0.5). The choice of the slope was motivated theoretically by the Failed Radiatively Accelerated Dusty Outflow (FRADO) model of the BLR (Czerny & Hryniewicz, 2011; Czerny et al., 2015, 2017; Naddaf et al., 2021; Naddaf & Czerny, 2022), well justified for H\(\beta\) and Mg II, but not for CIV, which belongs to the High Ionization Lines and forms in the dustless line-driven wind, closer to the black hole.
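For reference, Equation 1 with the default coefficients translates into a short delay calculator (a direct transcription):

```python
def tau_obs_days(log_L3000: float, z: float,
                 beta: float = 1.573, gamma: float = 0.5) -> float:
    """Observed-frame BLR time delay in days from the R-L relation, Eq. (1).

    log_L3000: log10 of the monochromatic luminosity at 3000 A in erg/s.
    """
    return 10.0 ** (beta + gamma * (log_L3000 - 44.0)) * (1.0 + z)

# e.g. log L3000 = 44.7 at z = 2 gives ~250 days in the observer's frame
```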
We use the delay given by Equation 1 to shift the original dense lightcurve and create the line lightcurve. Additionally, since the reprocessing in the BLR happens in an extended medium, we should actually convolve the original dense curve with the response function of the BLR. Such response functions have been derived observationally for a few nearby sources (e.g. Grier et al., 2013; Xiao et al., 2018; Du et al., 2018; Horne et al., 2021). Attempts to do that for Mg II are complicated by the presence of the underlying Fe II component (Panda, 2021; Prince et al., 2022). Therefore, in the current paper, we simply assume the response function in the form of a symmetric Gaussian shape, with the time shift set by Equation 1, and the width \(\sigma_{BLR}\) of 10% of the same delay. Exceptionally, we also performed tests using a half-Gaussian shape for this purpose, as done in the simulations of the time delay by Jaiswal et al. (2022).
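A sketch of this convolution step, producing the delayed dense lightcurve from the continuum one with a causal Gaussian response centred at \(\tau\) (a simplified illustration; the first \(\sim\tau\) days of the output are a convolution transient):

```python
import numpy as np

def delayed_curve(flux, dt, tau, sigma_frac=0.1):
    """Convolve a dense lightcurve with a Gaussian BLR response.

    flux: continuum lightcurve sampled every dt days; tau: time delay in days;
    the response width is sigma_frac * tau, as assumed in the text.
    """
    sigma = sigma_frac * tau
    t = np.arange(0.0, tau + 5.0 * sigma, dt)
    kernel = np.exp(-0.5 * ((t - tau) / sigma) ** 2)
    kernel /= kernel.sum()                    # unit-area response function
    # causal convolution: sample i only responds to flux at earlier times
    return np.convolve(flux, kernel)[: len(flux)]
```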
### Cadence in all six photometric bands
As argued, for example, by Malik et al. (2022), general sampling and, in particular, seasonal gaps are critical issues for successful delay recovery. Thus, in order to realistically replicate the actual LSST cadence, we use Operation Simulator (OpSim) results provided by the VRO-LSST Data Management team. For the Wide-Fast-Deep (WFD) survey, we use the OpSim Run: baseline_v2.0_10yrs, and extract the 5\(\sigma\) depth light curves for the 6 bands (ugrizy). We make a random search using a 3.5\({}^{\circ}\)\(\times\)3.5\({}^{\circ}\) search area and limit the sky coordinates (RA, DEC) within 0.01 dispersion. This criterion allows us to choose roughly the 5\(\sigma\) depth for the same (synthetic) source across the 6 bands. We make a similar search for the Deep-Drilling Field surveys, where we use the OpSim Run: ddf_v1.7_nodither_10yrs. We use these two cadences in most of our simulations, referring to them as DDF and Main Survey (MS) for simplicity. The 5\(\sigma\) depth in principle informs us about the photometric accuracy of the measurement, depending on the source's adopted luminosity and redshift, but this is not yet incorporated in the software, and we use a fixed photometric accuracy. However, the typical limit in the g band in the selected field is 24.5 mag, which corresponds to a 5\(\sigma\) detection of an AGN with \(\log{L_{3000}}\) = 43.814 erg s\({}^{-1}\) according to the online AGN calculator (Kozlowski, 2015). Thus a quasar with adopted \(\log{L_{3000}}\) = 44.7 at redshift 2 will be detected with 0.06 mag error and a quasar at \(\log{L_{3000}}\) = 45.7 will be detected roughly with 0.02 mag error. However, some of the exposures are actually repeated 2 - 3 times within 6 hours for the WFD, and 5 - 10 times in DDF within a very short time period of 5 to 10 minutes, and while such multiple observations do not sample the AGN variability in practice, they effectively lower the error. As a default, we use an even much lower error to emphasize the problems directly caused by the red noise character of the lightcurves combined with the planned sampling.
We select two bands for the time delay measurement: one strongly contaminated by one of the broad emission lines, the other free of contamination - closely representing the continuum - and neighboring the selected contaminated band. Since in the future we may wish to also use the photometry from other bands to model the continuum, we currently read all the simulated observational dates from the LSST cadence simulator for a selected specific position on the sky and a specific cadence model. This is at present done externally: the cadence is extracted from the simulated databases of the LSST Operation Simulator, which are processed locally using python and SQL and stored in the form of an ASCII file.

Figure 1: An example of the artificial spectrum of an AGN at z=0 (upper panel) and high redshift z = 2.7 (lower panel), with the LSST filters overplotted.
### Creating two simulated photometric lightcurves
Having the two dense lightcurves representing the continuum and the line emission, as described in Sections 2.3 and 2.4, as well as the simulated dates of the measurements in the two selected bands (Section 2.5), we now construct the modelled lightcurves. Both are constructed by adding the reference curve and the delayed curve, with the delayed curve scaled by the level of line contamination specified in Section 2.2, and by interpolating to the planned cadence. Observations in the two bands are not simultaneous; they simply follow the set LSST cadence for the chosen location in the sky. Only one of the two constructed curves is strongly contaminated by the BLR, as designed, so it contains the delayed signal, typically at the level of a few percent, depending on the redshift and the adopted strength of the lines.
At that stage, we also include the additional noise due to the expected photometric error. This is done by assuming a photometric accuracy \(\mathrm{P}\) in magnitudes and multiplying each data point flux by \((1+\mathrm{P}\,\sigma_{P})\), where \(\sigma_{P}\) is a random number drawn from a Gaussian distribution with zero mean and dispersion 1.
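Schematically, the construction of the two observed light curves, including the multiplicative noise term, can be written as follows; the function name and default values are illustrative, with the contamination level and accuracy playing the roles discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def observed_band_curves(t_dense, cont, line, dates1, dates2,
                         contamination=0.12, accuracy=0.001):
    """F1: continuum band; F2: band contaminated by the delayed line
    at the given fractional level. Each is interpolated to its own
    LSST visit dates and multiplied by (1 + P * sigma_P) noise."""
    f1 = np.interp(dates1, t_dense, cont)
    f2 = np.interp(dates2, t_dense, cont + contamination * line)
    f1 *= 1.0 + accuracy * rng.standard_normal(f1.size)
    f2 *= 1.0 + accuracy * rng.standard_normal(f2.size)
    return f1, f2
```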
In this approach, we neglect the time delays between the nearby continuum bands used in the process, which are generated by the reverberation in the accretion disk. This is an approximation made for simplicity, since we concentrate on the plausibility of recovering the line delays. The intrinsic continuum delays between the continuum bands are much shorter, of the order of a day to a few days, and their modelling involves an assumption about the height of the irradiating source (see e.g. Kammoun et al., 2021, for the delay plots). The measured delays frequently seem longer, but this is quite likely a result of the BLR contamination (e.g. Netzer, 2022). The sparse monitoring of the Main Survey should not be affected by the intrinsic continuum time delays. DDF monitoring can possibly be used to disentangle the intrinsic accretion disk delays and the BLR, but we do not address this issue in the current paper. It is important to note that the BLR time delays scale essentially as the square root of the monochromatic luminosity (see Equation 1, where \(\gamma\sim 0.5\)), and the same scaling is expected for the accretion disk reverberation (e.g. Collier et al., 1999; Cackett et al., 2007), so the intrinsic continuum time delay should always be shorter for all objects.
An example of the two dense lightcurves representing the non-contaminated photometric channel and the convolution representing the contaminated channel, together with the observational points representing the actual cadence, is shown in Figure 2.
### First stage of time delay measurement preparation
We initially tested whether the time delay can be directly measured from the two photometric curves. However, the measurements were very inconclusive since the second light curve contained only a few percent of the delayed line emission. As discussed in previous studies of photometric reverberation mapping (Pozo Nunez et al., 2012, 2015), the varying AGN continuum must be removed before using cross-correlation techniques. This can be achieved by subtracting a fraction of the continuum traced by a band with negligible line contribution. Thus, to improve the chances of the delay measurement, we first subtract the relatively uncontaminated curve \(F1(t)\) from the contaminated curve \(F2(t)\),
\[F22_{i}(t)=F2(t)-\epsilon_{i}F1(t),\,\,\,i=1,\ldots,10 \tag{2}\]
for a selection of 10 values of the coefficient \(\epsilon_{i}\), equally spaced between 0.85 and 1.15. This requires interpolation, since \(F1\) and \(F2\) are not measured at the same moments. An example of such a subtraction is shown in Figure 2 (green points). The time delay is measured for all ten values of \(\epsilon_{i}\), and the final value of \(\epsilon_{i}\) is the one which gives the best-quality time delay fit.
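A compact implementation of Equation 2 might look as follows; this is a sketch, and the interpolation scheme is the simplest possible choice:

```python
import numpy as np

def continuum_subtracted(dates1, f1, dates2, f2, n_eps=10):
    """Equation 2: F22_i = F2 - eps_i * F1 on the dates of the
    contaminated band, for eps_i equally spaced in [0.85, 1.15];
    F1 must be interpolated since the two bands share no dates."""
    f1_on_f2 = np.interp(dates2, dates1, f1)
    eps = np.linspace(0.85, 1.15, n_eps)
    return eps, [f2 - e * f1_on_f2 for e in eps]
```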
### Time delay measurements
Finally, we determine the time delay using one of two methods. The default method used in the current paper is the \(\chi^{2}\) method (for details, see Czerny et al., 2013; Bao et al., 2022). Optionally, we can use the Interpolated Cross-Correlation Function (ICCF), described in detail by Gaskell & Peterson (1987) and Peterson et al. (1998, 2004). The errors in the delay measurement are in both cases set by the dispersion (i.e. the standard deviation) of the delay measurements obtained from 100 statistical realizations of the initial stochastic lightcurve described in Section 2.3.
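For reference, the core of an interpolated cross-correlation can be sketched in a few lines; this illustrates only the basic idea of the ICCF, not the full Gaskell & Peterson or Peterson et al. implementation with centroids and flux randomization:

```python
import numpy as np

def iccf_peak_lag(t1, f1, t2, f2, lags, min_overlap=10):
    """Shift curve 2 back by each trial lag, interpolate curve 1 onto
    the shifted dates, and return the lag maximizing the Pearson r."""
    r = np.full(len(lags), -np.inf)
    for k, lag in enumerate(lags):
        ts = t2 - lag
        ok = (ts >= t1.min()) & (ts <= t1.max())
        if ok.sum() >= min_overlap:
            f1i = np.interp(ts[ok], t1, f1)
            r[k] = np.corrcoef(f1i, f2[ok])[0, 1]
    return lags[np.argmax(r)], r
```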
To measure the time-lags, we have also tested the Javelin code (Zu et al., 2011, 2013). We have run numerous tests on the simulated data, yet Javelin was unable to reliably measure the intrinsic time lags. The output lags had no correspondence to the input ones, given the data cadence and quality.
### Assigning representative parameter values
Figure 2: An example of the artificial dense lightcurve (blue continuous line), its convolution with the BLR response (red continuous line), observational points in the \(i\) band set by the cadence (red circles), and observational points in the \(r\) band set by the cadence, with contamination from the CIV line (blue circles). Green points represent the net contamination, for \(\epsilon_{i}=1.0\) (see Equation 2). The delay is calculated between the green and the blue points. We adopt standard values of parameters from Table 1 and \(z=2.7\).

Since the model has many parameters, in order to easily see the basic trends, we first set the representative parameters, which are collected in Table 1. We use the SDSS DR14 QSO catalogue (Rakshit et al., 2020) to obtain the distributions of the quasar luminosities, line intensities, and line widths. The parent sample from Rakshit et al. (2020) contains spectroscopically measured
parameters - line and continuum luminosities, line widths and equivalent widths, etc., for 526 265 SDSS quasars. We consider the distribution of the relevant parameters that serve as input to our code, i.e., the continuum luminosity at 3000A (or L\({}_{\rm 3000\AA}\)), the line widths (full-width at half-maximum or FWHM) for the prominent broad emission lines (C iv, Mg ii and H\(\beta\)), and their equivalent widths (EWs). We also consider the EW for the optical Fe ii emission integrated between 4434-4684A that is an important contaminant and coolant in the BLR (Boroson & Green 1992; Verner et al. 1999; Shen & Ho 2014; Marinello et al. 2016; Panda et al. 2018; Marziani et al. 2018; Panda 2022). For the L\({}_{\rm 3000\AA}\), the DR14 QSO catalogue provides, in addition to the luminosity and corresponding error for each source, a quality flag to assess the goodness of fit. A quality flag = 0 corresponds to a good-quality measurement while those with a quality flag \(>\) 0 may not be reliable either due to poor S/N or poor spectral decomposition. We, therefore, filter sources with L\({}_{\rm 3000\AA}\)\(>\) 0 and a corresponding quality flag = 0. This leaves us with 405 077 sources. The first panel in Figure 3 shows the distribution of L\({}_{\rm 3000\AA}\) for these sources. We similarly create sub-samples for the FWHMs and EWs of the broad emission lines - their distributions are reported in the other panels of Figure 3. For the FWHMs and the EWs, there are no quality flags, therefore, in addition to filtering for sources with \(value>\) 0 (here \(value\) represents the FWHM or EW for the emission lines of interest), we employ an additional filter: \(e_{\rm value}/value<\) 0.1 (where \(e_{\rm value}\) represents the errors for the corresponding \(value\)). We noticed that these original distributions demonstrated a tail with absurdly high values, of the order of 4-5\(\times\)10\({}^{5}\) for FWHMs, and \(\sim\)10\({}^{8}\) for the EWs. To filter such erroneous fitted cases, we further restrict each of our sub-samples within an upper limit \(\gtrsim\)99\({}^{\rm th}\) percentile. This upper limit is employed uniquely for each case. The final distributions thus obtained are shown in the remaining panels in Figure 3. The overall counts in each sub-sample, the median value, and the respective 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles for each distribution are tabulated in Table 2. We note that this final filtering to restrict the upper limits of the sub-samples has no noticeable effect on the median, 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles per distribution. The plots from this catalogue are shown in Figure 3.
The two exemplary values of the \(\log L_{\rm 3000}\) luminosity used later in most simulations roughly correspond to 1 \(\sigma\) deviation from the mean, i.e. 16 % of quasars are expected to be brighter than \(\sim\) 45.7, and 84 % are brighter than \(\sim\) 44.7.
The line width of \(\sim\) 4000 km s\({}^{-1}\) is quite representative for all lines. Line equivalent widths are of the order of 60 Å in the studied sample. We compared this with the distribution of line equivalent widths from the Large Bright Quasar Survey (Forster et al. 2001). They give EWs of 105 Å for H\(\beta\), 52.7 Å for CIV, and 35.6 Å for the Mg II narrow component; their broad component most likely (partially or mostly) represents the Fe II contamination. The strength of H\(\beta\) is then much higher, and we use a higher value as the default in our simulations, but we later test the sensitivity of the results to the adopted parameters.
The representative LSST quasars will not necessarily have the same statistical properties, since LSST will go considerably deeper (LSST Science Collaboration et al. 2009; Ivezic et al. 2019); in the present study, however, we did not aim to use the luminosity function, as done recently by Shen et al. (2020), to make specific predictions.
### Statistical error of the time delay recovery
As mentioned at the beginning of Section 2, we perform 100 simulations for each parameter set. All 100 simulations represent the same parameters but differ technically in the initialization of the random sequence - in Fourier terms, in the initial choice of the phases. All curves thus have the same statistical properties, but due to the red-noise character each provides a different time delay. We calculate the mean derived time delay and the standard deviation, and we treat this standard deviation as the measurement error for a single quasar. In principle, we should also include the additional error related to the individual assessment of the delay, but in the LSST setup this red-noise error is large and will likely dominate. An even more important result from our 100 light curves is the mean value of the delay, which does not always match the assumed time delay. This offset is a source of potential problems since it has the character of a systematic effect, present even in collective studies of a large number of sources. The number of 100 light curves was selected for practical purposes - a larger number leads to a linear rise in the computational time. The final error of the mean time delay should formally be calculated as the error of the weighted mean from these 100 simulations. However, the individual errors which can be calculated from the \(\chi^{2}\) procedure are usually smaller than the dispersion of the measured time delays, those errors are asymmetric, and the computation of the weighted mean is then not straightforward. Studies of the radius-luminosity relation also show that the errors of individual measurements are in general smaller than the dispersion around the fitted relation (e.g. Bentz et al. 2013; Zajacek et al. 2021). Therefore, we use the standard deviation as a rather conservative estimate of the potential error for an individual source, and the error of the mean is in that case equal to 10 % of the dispersion (i.e. \(\sigma/\sqrt{N}\) with \(N=100\)). Calculating 1000 realizations would be better for assessing the accuracy of the offset, but it is also more time-consuming. An exemplary study presented in the Appendix indicates that such an increase in the number of simulated curves is not necessary. A bigger problem is the presence of secondary peaks in the histogram (see Appendix).
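The bookkeeping of this step is simple; the numbers in the sketch below are placeholders standing in for the 100 recovered delays:

```python
import numpy as np

rng = np.random.default_rng(1)
delays = rng.normal(300.0, 90.0, size=100)   # stand-in for 100 recovered delays

tau_mean = delays.mean()                     # may be offset from the input delay
sigma = delays.std(ddof=1)                   # quoted error for a single source
sigma_mean = sigma / np.sqrt(delays.size)    # error of the mean: sigma/10 for N=100
print(tau_mean, sigma, sigma_mean)
```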
## 3 Results
### Expectation from the Main Survey
In this section we study the prospects of measuring emission line time delays using the data from the Main Survey, which covers 10 years but not very densely. We used locations on the sky centred at (0, -30) for the MS and at (9.45, -44.025) for the DDF.
We follow the steps described in Section 2. To illustrate the modelling further, we selected a source with standard luminosity (see Table 1) located at redshift 2.7. In Figure 2 we show the two photometric bands, one only weakly contaminated (the y band in this case) and the other strongly contaminated (the r band, in this case containing the CIV line). We also show the effect of the subtraction described in Section 2.2. The two photometric lightcurves roughly follow each other since the contamination by the delayed line emission is small, 12 % in this case. Therefore the direct measurement of the delay between the two bands is not effective, but the subtracted curve follows the delayed curve much better, making the delay measurement much more accurate.
Nevertheless, the cadence in the Main Survey is not quite suitable for time-delay measurements of relatively faint AGN.
As we see from the upper panel of Figure 4, the measured delay is always considerably shorter than the assumed delay, since the timescales corresponding to the actual delay are not well probed. However, for brighter sources the expected time delay is longer, so the usual sampling characteristic of the Main Survey is adequate to recover the line delay at least for \(z>1\) (see Figure 4, lower panel). Therefore, in the actual data analysis we should not include the fainter sources, since the measurement will not be reliable for them. We should also be careful about including results from too low redshifts, based on H\(\beta\). Although bright quasars, with monochromatic luminosities above \(\log L_{\rm 3000}=45.7\) erg s\({}^{-1}\), are relatively well measured, we still see an offset between the expected and recovered time delay at redshifts below 1.0.
In our simulations, we adopted the same theoretical time delay for all the lines, while in reality the CIV line delay is usually shorter. This may cause an additional increase in the error at larger redshifts, where the CIV line is used, and in this case the minimum source luminosity should be even higher. The delay based on Mg II is overall comparable to that of H\(\beta\) (Zajacek et al., 2021), so the error in our approach is smaller there.
### Expectations from the Deep Drilling Fields
In the DDF area, the number of observed quasars is orders of magnitude smaller than expected from the Main Survey. However, the DDF provides a much denser sampling of the light curve, increasing its quality. Importantly, such dense coverage allows some time delays to be determined on timescales much shorter than the 10 years. We thus first discuss the results of the simulations for the entire duration of the project, and then for the first two years and the first year of its operation.
In the specific field which we used in the simulations, the number of observing visits is high: 1056 (u), 2239 (g), 4495 (r), 4496 (i), 2330 (z), and 4436 (y). However, at least in this specific field, typically 6 visits fall within a minute of one another, which for AGN is equivalent to a single visit, although with an improved S/N ratio. If we count only visits separated by 1 day or more, the monitoring is limited to 131 (u), 219 (g), 239 (r), 245 (i), 97 (z), and 226 (y) epochs in 10 years; the time separation is frequently of the order of 2 days, with long gaps of the order of a month (apart from the half-year seasonal gaps), averaging to a mean separation of 7 - 9 days in the different colours. We illustrate this in Figure 5, where we plot just the first year of the curve simulated with the DDF cadence. Most points are unresolved, and only well-separated point aggregates show up. This is still much better than the Main Survey, but not as dense as the total number of visits might suggest. In our simulations, we use all the visits as they appear in the cadence. To illustrate this effect over the whole duration of the survey, in the lowest two panels of Fig. 5 we show the histogram of the time separations over the full 10-yr monitoring. We had to use a logarithmic scale for the vertical axis since time separations of less than 2 days outnumber all other separations by orders of magnitude. We stress that in the actual computations no binning was performed; we used the data cadence as provided.
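The merging of quasi-simultaneous visits used in the counting above can be reproduced with a simple greedy pass; this is a sketch, with the 1-day threshold matching the text:

```python
import numpy as np

def effective_epochs(mjd, min_sep=1.0):
    """Collapse visits closer than min_sep days to a single epoch."""
    mjd = np.sort(np.asarray(mjd))
    eff = [mjd[0]]
    for t in mjd[1:]:
        if t - eff[-1] >= min_sep:
            eff.append(t)
    return np.asarray(eff)

# e.g. for the r band one would expect ~239 epochs over 10 years,
# with a mean separation of 7-9 days:
# eff = effective_epochs(r_band_mjd); print(eff.size, np.diff(eff).mean())
```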
The representative results for the whole 10-year monitoring are shown in Figure 6. We see that for bright quasars the results from DDF are not considerably better than from the Main
Survey, apart from some improvement at the smallest redshifts (below 1.0), where the denser coverage allows a better determination of the (in that case shorter) time delay.

\begin{table}
\begin{tabular}{l|c c c c c c c c c}
\hline \hline
 & z & log L\({}_{\rm 3000}\) & FWHM\({}_{\rm CIV}\) & FWHM\({}_{\rm MgII}\) & FWHM\({}_{\rm H\beta}\) & EW\({}_{\rm CIV}\) & EW\({}_{\rm MgII}\) & EW\({}_{\rm H\beta}\) & EW\({}_{\rm FeII}\) \\
 & & [erg s\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [Å] & [Å] & [Å] & [Å] \\
\hline
Count & 526265 & 405077 & 96925 & 93398 & 13979 & 136138 & 116981 & 25298 & 18097 \\
16th & 0.944 & 44.51 & 2882.03 & 3576.09 & 2742.90 & 37.86 & 30.95 & 46.68 & 36.54 \\
Med & 1.8327 & 45.10 & 4466.10 & 4800.51 & 4153.97 & 63.26 & 46.18 & 67.35 & 60.96 \\
84th & 2.593 & 45.66 & 6000.93 & 6533.97 & 6084.53 & 109.40 & 72.83 & 92.94 & 89.76 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: SDSS DR14 QSO catalogue properties as shown in Figure 3.

Figure 3: The distribution of the quasar luminosity \(L_{\rm 3000}\), line widths (FWHM of C iv, Mg ii and H\(\beta\)) and equivalent widths (C iv, Mg ii, H\(\beta\) and optical Fe ii) from the DR14 quasar catalogue (Rakshit et al., 2020). For each distribution, we show the median (dashed lines) and the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles (dotted lines). These statistics are reported in Table 2 for each of these parameters.
The difference is highly significant for the faint quasars. They are not reliably sampled in the Main Survey, but those located in the DDF can be well measured at redshifts above 1.8. This is important since fainter quasars dominate the quasar population, so many faint quasars can be detected in the DDF. In the SDSS (see Fig. 3 and Table 2), about 80 % of quasars are brighter than \(\log L_{\rm 3000}=44.7\) erg s\({}^{-1}\); in the DDF these would be well measured, while in the Main Survey only about 15 % of quasars are bright enough to have time delays long enough to be measured adequately.
### Expectations from the first year and two years of operation
We first take a more conservative approach and analyze the possibility of obtaining interesting results from two years of data. We do not expect any reliable results from the Main Survey, taking into account the available sampling. However, the cadence of the DDF is much denser, so time delays of the order of a year could possibly be measured. And indeed, in the case of faint AGN the expected delays are less than 400 days, so the measurement is possible (see Figure 7, upper panel). The measurements and the predictions agree within the \(1\sigma\) error at all redshifts. We see, however, a systematic offset of the mean values from the expected values at all redshifts, caused by the data sets being too short. The measurements are clearly useful for some statistical studies, although a systematic offset of the order of \(\sim 40\) % should be accounted for. This offset will depend on the exact
source luminosity, so appropriate accompanying simulations would be necessary to improve the quality of the results.

Figure 4: The adopted (black points) and mean recovered (red points) time delay as a function of redshift for faint AGN (\(\log L_{\rm 3000}=44.7\) erg s\({}^{-1}\), upper panel) and for bright AGN (\(\log L_{\rm 3000}=45.7\) erg s\({}^{-1}\), lower panel) from the Main Survey. Green error bars show the standard deviation expected in a single-source measurement, as determined from 100 statistically equivalent simulations. The error of the mean recovered delay is 10 % of the dispersion. Other parameters have the standard values given in Table 1. The redshift gaps correspond to no satisfactory selection of the contaminated and non-contaminated bands.

Figure 5: Upper panel: an example of the artificial lightcurve for the first year of observing with the DDF cadence in the \(i\) band (blue circles) and in the \(r\) band contaminated by the CIV line. Green points represent the net contamination, for \(\epsilon_{i}=1.0\) (see Equation 2). The delay is calculated between the green and the blue points. We adopt the standard values of parameters from Table 1 and z = 3.276. Middle panel: histogram of the time separations between consecutive observation dates in the \(r\) band in the selected DDF field during the whole 10 years. Lower panel: the same for the \(i\) band.
In the case of bright AGN, the expected time delays are so long that only the results for objects at redshifts smaller than 0.3 are potentially useful when only two years of data are available. Otherwise, the recovered delay saturates at the maximum allowed by the code, which is set to the minimum of half the duration of the data set and twice the expected time delay (see Figure 7, lower panel).
If only the first year of data is used, the situation is even more difficult. For bright sources the measurements are unreliable at any redshift, and for faint sources only the results at low redshifts are promising (see Figure 8). In any case, AGN at redshifts higher than 0.7 are beyond reach, and conservative searches should rather use an even smaller redshift limit of \(\sim\) 0.4. Nevertheless, it is encouraging overall that some AGN emission line time delays can actually be measured from such short monitoring with such a highly non-uniform cadence.
### Prediction sensitivity on the adopted parameters
Since the number of parameters in our model is large and the representative parameter choice is well justified, we test the dependence of the results only for a selection of the assumed parameters. Some parameters, like the FWHM, are of no importance apart from a small change in the photometric bands available for time delay measurements.
We test the dependence on the parameters by mostly concentrating on the first year of the LSST data, but the trend generally applies to the full 10-year DDF survey as well as to the Main Survey mode.
We first tested the adopted standard values of the line EWs for the three lines (see Table 1). The assumed standard value for the H\(\beta\) line in particular is much higher than the median value from the SDSS given in Table 2, 150 Å vs. 67.35 Å. We therefore recalculated the predictions for all the lines assuming the median values from Table 2, considering only the case of the faint AGN population, \(\log L_{3000}=44.7\) erg s\({}^{-1}\). The result is shown in Figure 9, upper panel. Comparing the new results to the upper panel of Fig. 8, we see that the change of the line EWs did not affect the results: the delay for redshifts below 0.5 would be well recovered, but higher-redshift sources are beyond the reach of the first-year DDF monitoring, even for faint quasars.
We next tested the effect of the assumed photometric accuracy, replacing the rather unrealistic value of 0.001 with a more conservative 0.02, clearly much larger than the systematic error expected at the level of 0.005 mag (Ivezic et al. 2019). This time we used the values of the EWs as in the previous plot and only changed the expected error. The result (second panel in Figure 9) is not very different from the previous simulations (Figure 8). The delays are now determined with errors larger by up to 20 %, but some low-redshift measurements are still useful.
Figure 6: The adopted and recovered time delay as a function of redshift for faint AGN (\(\log L_{3000}=44.7\) erg s\({}^{-1}\), upper panel) and for bright AGN (\(\log L_{3000}=45.7\) erg s\({}^{-1}\), lower panel) from 10 years of observations in the DDF. Other parameters have the standard values given in Table 1.
Figure 7: The adopted and recovered time delay as a function of redshift for faint AGN (\(\log L_{3000}=44.7\) erg s\({}^{-1}\), upper panel) and for bright AGN (\(\log L_{3000}=45.7\) erg s\({}^{-1}\), lower panel) from 2 years of observations in the DDF. Other parameters have the standard values given in Table 1.
We also checked whether the adopted high- and low-frequency breaks are important for the simulations. We repeated the computations for the standard accuracy of 0.001 but took the high- and low-frequency breaks roughly ten times higher (\(3.66\times 10^{-8}\) Hz and \(3.66\times 10^{-9}\) Hz, respectively). That caused no systematic effects and only a very slight increase of the error, by at most a few percent (see Fig. 9). Finally, we tested the adopted level of quasar variability, but in this case the decrease in the variability level did not affect the results of the simulations (see Figure 9, lowest panel). Thus the parametrization of the variability does not seem essential for the modelling.
Next, we tested the effect of the assumed \(R-L\) relation for the predictions. In our standard simulation setup, we used the same relation for all the lines. In order to see if this assumption is problematic, we performed simulations assuming
\[\log\tau(H\beta)=1.350+0.415(\log L_{\rm 3000}-44), \tag{3}\]
for the H\(\beta\) line (Khadka et al., 2021),
\[\log\tau(MgII)=1.67+0.30(\log L_{\rm 3000}-44), \tag{4}\]
for the Mg II line (Zajacek et al., 2021), and
\[\log\tau(CIV)=1.04+0.42(\log L_{\rm 3000}-44), \tag{5}\]
for the CIV line (Cao et al., 2022). The difference is most significant for the CIV line, suitable for quasars at higher redshifts, so in this case we show the results for bright quasars in the DDF field from the entire 10 years of data, to be compared with Figure 6. The new results are shown in Figure 10. The results at low redshifts practically did not change, as they are based on H\(\beta\) and Mg II, but the much shorter time delay for CIV creates a problem at high redshifts, underpredicting the delay by \(\sim 10\) %.
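The practical consequence of Equations 3-5 is easy to quantify; the sketch below gives rest-frame delays in days:

```python
import numpy as np

rl_relations = {                 # (intercept, slope) of log tau vs log L44
    "Hbeta": (1.350, 0.415),     # Equation 3, Khadka et al. (2021)
    "MgII":  (1.67,  0.30),      # Equation 4, Zajacek et al. (2021)
    "CIV":   (1.04,  0.42),      # Equation 5, Cao et al. (2022)
}

log_L3000 = 45.7                 # bright quasar
for line, (a, b) in rl_relations.items():
    tau = 10.0 ** (a + b * (log_L3000 - 44.0))
    print(f"{line}: {tau:.0f} days")
# CIV comes out a factor of ~2-3 shorter than Hbeta and Mg II
```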
The relations discussed above, representing the time delay as a function of the monochromatic luminosity, are not necessarily universal. There are indications that the time delays are relatively shorter for high-Eddington-ratio sources (e.g. Du et al., 2015, 2016). Knowledge of the Eddington ratio, however, requires knowledge of the black hole mass and the black hole spin. In the case of spectroscopic studies, the problem can be addressed by an additional measurement of the black hole mass from the line profiles, or by introducing a second parameter into the relation, for example the normalization of the Fe II pseudo-continuum, which is related to the Eddington ratio (Du et al., 2018), although this is not a good strategy for a very heterogeneous sample (Khadka et al., 2022). In the case of photometric measurements, it is not clear how to address this issue in an individual object.
The next potentially important assumption was the use of a symmetric Gaussian to model the response of the BLR, while in the few cases when the response function shape was derived from the data, the shape was clearly asymmetric (Horne et al., 2021). Thus, for comparison, we performed simulations with the response function in the form of a half-Gaussian (see e.g. Jaiswal et al., 2022), assuming the shift implied by the formulae and the width as in the standard model, i.e. 10 % of the time delay. In doing so, we kept the line delays different for each emission line, as in the previous case. We observed that the recovery of the delay in this case is less accurate, with departures from the expected values of up to \(\sim 20\) % for the adopted width of the half-Gaussian (see Figure 10, lower panel). This systematic offset is thus a potential problem, although the difference between the recovered and expected time delays is still within the 1 \(\sigma\) error.
Finally, we tested cadences other than the two representative ones for the MS and DDF. We used the location on the sky (0, -30) for the MS, and for the DDF we used the ELAIS-S1 field centred at (9.45, -44.025), but we tested 8 more recently proposed cadences for the MS and 8 for the DDF, listed in Table 3. In order to make the presentation compact, we plot the results without the error bars, as these do not change much. Instead, we show two colour-coded plots referring to the relative difference between the mean derived time delay, \(\tau_{\rm derived}\), and the time delay adopted in the simulations, \(\tau_{\rm adopted}\),
\[\delta=\frac{\tau_{\rm derived}-\tau_{\rm adopted}}{\tau_{\rm adopted}}. \tag{6}\]
This dimensionless quantity depends strongly on the source luminosity, so we plot it as a function of redshift separately for faint (upper panel of Figure 11) and bright AGN (lower panel). We use all 10 years of the simulated cadence. For most of the cadences the results are overall similar to those obtained previously: results for the bright AGN are quite satisfactory, particularly at moderate redshifts, while for redshifts \(z\sim 0.7\) the derived delays are frequently too short. As discussed already by Czerny et al. (2013), for bright quasars we need 5 measurements per year if the coverage is uniform and the data are spectroscopic; for the (more difficult) photometric time delay measurement with non-uniform sampling, twice as many data points are needed, and lightcurves providing fewer than that cannot be used.
Figure 8: The adopted and recovered time delay as a function of redshift for faint AGN (\(\log L_{\rm 3000}=44.7\) erg s\({}^{-1}\), upper panel) and for bright AGN (\(\log L_{\rm 3000}=45.7\) erg s\({}^{-1}\), lower panel) from the first year of observations in the DDF. Other parameters have standard values given in Table 1.
For fainter quasars, again, most of the cadences under-predict the time delays, which are too short to be properly sampled in the MS mode.
In the case of the DDF, the results were qualitatively similar, so with the aim of comparing them quantitatively we calculated two global parameters for each cadence. One is the mean separation of visits after treating observations done within one day as a single exposure. The second is the redshift-averaged value of \(\delta\) (see Equation 6). In Table 3 we give these values for the bright quasars. It is interesting to note that a simple comparison of the number of 'independent' measurements, or of the actual number of visits, is not reflected in the quality. For example, in the cadence S6-DDF the total number of visits is a factor of 2 lower than in the remaining cadences (2542, 1672, 2637, and 2209 in the \(g\), \(r\), \(i\), \(z\) bands, compared with means of 5033, 2853, 5132, and 4356, respectively). However, the worst offset is actually for the cadence S3-DDF, which means that the specific distribution of observing dates is important. This offset remains independent of the number of observed sources, so for accurate results the offset must be corrected through simulations.
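The second metric amounts to averaging Equation 6 over the simulated redshift grid; whether the signed or absolute values are averaged is our assumption in this sketch:

```python
import numpy as np

def redshift_averaged_offset(tau_derived, tau_adopted):
    """Mean of delta (Equation 6) over the redshift grid."""
    tau_derived = np.asarray(tau_derived, float)
    tau_adopted = np.asarray(tau_adopted, float)
    delta = (tau_derived - tau_adopted) / tau_adopted
    return delta.mean()
```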
## 4 Discussion
LSST will provide up to ten million quasar detections in the Main Survey mode and a few thousand higher-quality AGN lightcurves from the DDF. It will be an enormous leap in the
reverberation monitoring of AGN and in the prospects of applying the results to constrain cosmological models. We thus performed simulations to estimate the prospects for results from the entire 10-yr survey, as well as from the first two years, and the first year, of data collection. The large number of expected time-delay measurements will open an efficient way to study AGN properties, as well as to apply them to cosmology. We should also be aware that the large statistics of the measured time delays may reveal a dependence on redshift, apart from the known trends with the Eddington ratio (Du et al., 2018; Panda, 2022; Panda and Marziani, 2022). For example, the R-L relation may be affected by a systematic change of the average viewing angle for selected sub-populations of quasars. Currently, we do not see any such trend with redshift (e.g. Prince et al., 2022; Dainotti et al., 2022), but the accuracy of this statement is low, as the 95% confidence level limit allows a change of the viewing angle from \(\sim 75\) deg (at redshift 0) to \(\sim 30\) deg (at redshift 3), with a corresponding systematic change of the luminosity distance by a factor of 1.8 (see Figure 3 in Prince et al., 2022).

Figure 9: The sensitivity of the simulations to the assumed parameters for the first year of DDF data, faint quasars (\(\log L_{3000}=44.7\) erg s\({}^{-1}\)). Top panel: alternative values of the line EWs; second panel: a larger photometric error; third panel: shorter timescales for the frequency breaks of the power spectrum; lowest panel: a lower variability amplitude (see Sect. 3.4 for details).

Figure 10: The sensitivity of the simulations to the assumed parameters for the 10 years of DDF data, bright quasars (\(\log L_{3000}=45.7\) erg s\({}^{-1}\)), assuming a different radius-luminosity relation for each of the emission lines. Top panel: symmetric Gaussian response; lower panel: half-Gaussian asymmetric response of the BLR.
The expected accuracy of the time delay measurement for a single AGN is of the order of 30 % if the source parameters are appropriate for the measurement. During the first year only the shortest time delays can be measured, which means that only DDF data are useful for that purpose. In addition, only faint AGN at small redshifts have time delays short enough to be measured. Using the SDSS AGN statistics for reference, we can expect only 15 % of AGN to have redshifts below 0.9, and not all of them are faint, so overall some 10 % of the \(\sim 10\) thousand AGN located in the DDFs can potentially allow a time delay measurement. This is still a few hundred measurements, more than available at present, and interesting for statistical studies. For cosmological applications such a sample will still be too small, particularly since the dispersion in measurements of line delays is usually high (e.g. Zajacek et al., 2021; Cao et al., 2022, for the most recent cosmological applications). Thus the reduction of the statistical error by a factor of 10 will not yet lead to high-precision cosmology.
Measurements based on two years of data from the DDF will improve the situation considerably, but the critical improvement will come from the whole 10 years of data. DDF data will allow the measurement of time delays for all bright AGN and for most - even faint - AGN at redshifts above 1.7. This means that at least half of the quasars in the DDFs will have a line-delay determination. A thousand measurements will reduce the statistical error by a factor of 30, almost approaching 1 % accuracy in the overall distance determination. An even more spectacular improvement will come from the Main Survey. There, only bright quasars will have delay measurements, but this means about a million measurements which, at face value, will reduce the statistical error by a factor of 1000, giving formally sub-percent accuracy for cosmological measurements.
However, in this case the dominant source of error will be the systematic error, and our simulations show that such a systematic error is likely to be present. Our simulations for the 10 years of data in the Main Survey for faint AGN (\(\log L_{3000}=44.7\) erg s\({}^{-1}\)) always return an average time delay much shorter than the assumed value (see Figure 4). The smallest error is for redshifts close to 3, but even there the delay is \(\sim 30\%\) smaller than expected. There seem to be no systematic issues for the bright objects (\(\log L_{3000}=45.7\) erg s\({}^{-1}\)) between redshifts 1.7 and 3.0. A slight deviation appears above 3, and this effect should not be present for somewhat fainter AGN, as it is related to time delays that are too long in comparison with the survey duration. A clear discrepancy is present at the smallest redshifts. However, for cosmological applications it is necessary to cover a broad range of redshifts, including the smallest ones. Therefore a further study of the systematics present below redshift 1.7 is necessary. The solution may be to use extensive modelling and appropriate corrections, but these will most likely depend on the specific source luminosity and the actual cadence in the field. Combining the results from the DDF and the Main Survey will also improve the situation, since for the DDFs the results at lower redshifts are more accurate. Also, as mentioned above, additional intrinsic systematic trends in AGN may show up, so separate studies of sub-classes of AGN will be absolutely necessary. Finally, in order to achieve high accuracy of the results, the intrinsic time delays between the continuum bands should be included in the actual
data analysis. The potential errors related to this issue were not estimated in the current study.

\begin{table}
\begin{tabular}{c c c c}
\hline
Cadence & formal name & effective & offset \\
 & & separation & in delay \\
 & & [days] & [\%] \\
\hline \hline
S1-MS & baseline\_v2.0\_10yrs & 13.7 & 11.7 \\
S2-MS & baseline\_v2.1\_10yrs & 12.4 & 10.0 \\
S3-MS & baseline\_v2.2\_10yrs & 16.0 & 12.8 \\
S4-MS & draft\_connected\_v2.99\_10yrs & 16.6 & 10.2 \\
S5-MS & draft\_v2.99\_10yrs & 16.2 & 11.9 \\
S6-MS & light\_roll\_v2.99\_10yrs & 13.3 & 11.1 \\
S7-MS & retro\_baseline\_v2.0\_10yrs & 9.4 & 7.2 \\
S8-MS & roll\_early\_v2.99\_10yrs & 15.8 & 11.1 \\
\hline
S1-DDF & ddf\_accout\_sf0.30\_1sf0.4\_1sr0.5\_v2.1\_10yrs & 5.8 & 8 \\
S2-DDF & ddf\_bright\_s1f0.35\_v2.1\_10yrs & 5.0 & 12 \\
S3-DDF & ddf\_double\_s1f0.35\_v2.1\_10yrs & 3.2 & 16 \\
S4-DDF & ddf\_old\_rot\_s1f0.35\_v2.1\_10yrs & 5.0 & 13 \\
S5-DDF & ddf\_quad\_s1f0.35\_v2.1\_10yrs & 2.7 & 10 \\
S6-DDF & ddf\_quad\_subfilter\_s1f0.35\_v2.1\_10yrs & 3.3 & 10 \\
S7-DDF & ddf\_season\_length\_s1f0.20\_v2.1\_10yrs & 5.9 & 10 \\
S8-DDF & ddf\_season\_length\_s1f0.35\_v2.1\_10yrs & 4.7 & 11 \\
\hline
S2-DDF-equal & & 1.0 & 11.1 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The effective mean separation in the observing dates in the \(r\) band and the redshift-averaged offset of the mean recovered time delay in comparison to the assumed time delay, for bright quasars and 10 years of data.
The cadence adopted in the simulations is one of the main sources of the offset. We checked this by repeating the computations for the faint quasars with two years of data, assuming the same number of observations as implied by the DDF cadence used in Figure 7, but with a 180-day seasonal gap between year 1 and year 2 and roughly equally spaced observations during the remaining 6-month periods. In this case we see much better agreement between the assumed and recovered time delays, although the dispersion in the delay recovery is still roughly the same. However, such a cadence is not likely to be adopted for the DDF monitoring. Otherwise, we do not see a high sensitivity of the line-delay recovery to the tested cadences presently under consideration. The possibility of measuring the time delay depends mostly on the source luminosity and redshift, and not so much on the actual cadence.
This is clearly different from the measurement of the continuum time delay, where dense sampling is essential, as argued already by Brandt et al. (2018). For this reason, Kovacevic et al. (2022) argued that useful measurements can be done only in the DDFs, and some of the cadences were favoured. Simulations performed by Pozo Nunez et al. (2012) also require a 2 - 5 day cadence for continuum delays with LSST, which is likely to be met in the DDF fields.
Our simulations are based on creating the lightcurves using the Timmer & Koenig (1995) algorithm. This is not the only method available, although it is fairly general thanks to the number of parameters entering the parametrization of the power density spectrum. Other methods of creating artificial lightcurves are also used, mostly working directly in the time domain, like the Damped Random Walk (Kelly et al., 2009), the damped harmonic oscillator (Yu et al., 2022), or more complex higher-order CARMA processes (Kelly et al., 2014). These methods were used by Sheng et al. (2022) to simulate quasar light curves in the DDF, aiming at the precise reconstruction of the light curves from the available photometric data. They used the advanced method of a Stochastic Recurrent Neural Network and concluded that the recovery precision is most affected by the seasonal gaps.
More importantly, we assumed in the present simulations that we know the luminosity and the redshift of each source. The redshift is important both for the position of the emission lines and for the luminosity and the estimate of the expected time delay. However, most of the quasars, particularly the fainter sources, will actually be discovered in the course of the survey, which poses a problem for their identification and for the photometric measurement of the redshift. An automatic AGN classifier (see Russell et al., 2022) is already working in the case of the Zwicky Transient Facility, and a similar classifier is under development for LSST. The accuracy of photometric redshifts for AGN is not well established, so we did not attempt to model this effect, but it will likely be a considerable source of errors in the MS, where spectroscopic follow-up of a large fraction of sources is not realistic.
## 5 Conclusions
We show that the recovery of the emission line time delay from the photometric measurements available from LSST is possible for a significant fraction of the quasars. The expected time delays depend on the source luminosity but typically range from \(\sim\) 100 days to over 3 years, so the specific cadence requirements are not as critically important as for continuum time delays. Specifically:
* 15 %, so combining measurements for many quasars will allow studying statistically the trends like radius-luminosity relation,
* For quasars fainter than \(\log L_{3000}=44.7\), the Main Survey is not recommended, and for the intermediate luminosity quasars the redshift limit of practical use will have to be set through simulations
* The line time delays in quasars fainter than \(\log L_{3000}=44.7\) can be successfully measured from the DDF. In that case, even the first two years of data are enough, and the longer data set does not improve the delay measurement at lower redshifts unless the data is sampled in shorter periods or detrended.
* Bright quasars can also be studied with dense sampling when they are located in DDF fields, but this only slightly decreases the individual error (down to below \(\sim\) 30 %)
* Each of the considered cadences leads to some systematic offset between the delay assumed in the setup and the recovered time delay. This offset (of the order of 10 %) will remain
even if numerous quasars are measured, so high-quality results will require correcting for this offset by numerical simulations. This offset will depend on quasar properties as well as on the cadence, photometric errors, and the time delay measurement method.
* Some of the considered cadences are better than others, but for line delay measurements the cadence is apparently not a critical issue.

Figure 11: Colour-coded relative systematic error of the delay determination for faint (upper panel) and bright (lower panel) AGN in 10 years of data, for 8 MS cadences and 8 DDF cadences different from those considered before. Note the difference in the colour scale between the lower and upper panels.
## Acknowledgements
The project was partially supported by the Polish Funding Agency National Science Centre, project 2017/26/A/ST9/00756 (MAESTRO 9), project 2018/31/B/ST9/00334 (OPUS 16), and OPUS-LAP/GACR-LA bilateral project (2021/43/I/ST9/01352/OPUS 22 and GF22-04053L). Partial support came from MNiSW grant DIR/WK/2018/12. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. [951549]). SP acknowledges the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) Fellowship (164753/2020-6). MZ, RP, and BC acknowledge the Czech-Polish mobility program (MSMT J820PL037 and PPN/BCZ/2019/1/00069). ASK acknowledges the National Science Centre grant (OPUS16, 2018/31/B/ST9/00334). BK, DI and LCP acknowledge funding provided by the University of Belgrade, Faculty of Mathematics (contract 451-03-68/2022-14/200104) and by Astronomical Observatory Belgrade (contract 451-03-68/2022-14/200002), through the grants by the Ministry of Education, Science and Technological Development of the Republic of Serbia. FPN gratefully acknowledges the generous and invaluable support of the Klaus Tschira Foundation. MZ acknowledges the financial support from the GACR-LA grant no. GF22-04053L.
|
2302.11974 | LightCTS: A Lightweight Framework for Correlated Time Series Forecasting | Correlated time series (CTS) forecasting plays an essential role in many
practical applications, such as traffic management and server load control.
Many deep learning models have been proposed to improve the accuracy of CTS
forecasting. However, while models have become increasingly complex and
computationally intensive, they struggle to improve accuracy. Pursuing a
different direction, this study aims instead to enable much more efficient,
lightweight models that preserve accuracy while being able to be deployed on
resource-constrained devices. To achieve this goal, we characterize popular CTS
forecasting models and yield two observations that indicate directions for
lightweight CTS forecasting. On this basis, we propose the LightCTS framework
that adopts plain stacking of temporal and spatial operators instead of
alternate stacking that is much more computationally expensive. Moreover,
LightCTS features light temporal and spatial operator modules, called L-TCN and
GL-Former, that offer improved computational efficiency without compromising
their feature extraction capabilities. LightCTS also encompasses a last-shot
compression scheme to reduce redundant temporal features and speed up
subsequent computations. Experiments with single-step and multi-step
forecasting benchmark datasets show that LightCTS is capable of nearly
state-of-the-art accuracy at much reduced computational and storage overheads. | Zhichen Lai, Dalin Zhang, Huan Li, Christian S. Jensen, Hua Lu, Yan Zhao | 2023-02-23T12:46:11Z | http://arxiv.org/abs/2302.11974v2 | # LightCTS: A Lightweight Framework for Correlated Time Series Forecasting
###### Abstract.
Correlated time series (CTS) forecasting plays an essential role in many practical applications, such as traffic management and server load control. Many deep learning models have been proposed to improve the accuracy of CTS forecasting. However, while models have become increasingly complex and computationally intensive, they struggle to improve accuracy. Pursuing a different direction, this study aims instead to enable much more efficient, lightweight models that preserve accuracy while being able to be deployed on resource-constrained devices. To achieve this goal, we characterize popular CTS forecasting models and yield two observations that indicate directions for lightweight CTS forecasting. On this basis, we propose the LightCTS framework that adopts plain stacking of temporal and spatial operators instead of alternate stacking which is much more computationally expensive. Moreover, LightCTS features light temporal and spatial operator modules, called L-TCN and GL-Former, that offer improved computational efficiency without compromising their feature extraction capabilities. LightCTS also encompasses a last-shot compression scheme to reduce redundant temporal features and speed up subsequent computations. Experiments with single-step and multi-step forecasting benchmark datasets show that LightCTS is capable of nearly state-of-the-art accuracy at much reduced computational and storage overheads.
correlated time series forecasting, lightweight neural networks

Footnote: \({}^{\ast}\)Corresponding authors: D. Zhang ([email protected]) and H. Li ([email protected]).
2304.13235 | Anyon Quantum Dimensions from an Arbitrary Ground State Wave Function | Realizing topological orders and topological quantum computation is a central
task of modern physics. An important but notoriously hard question in this
endeavor is how to diagnose topological orders that lack conventional order
parameters. A breakthrough in this problem is the discovery of topological
entanglement entropy, which can be used to detect nontrivial topological order
from a ground state wave function, but is far from enough for fully determining
the topological order. In this work, we take a key step further in this
direction: We propose a simple entanglement-based protocol for extracting the
quantum dimensions of all anyons from a single ground state wave function in
two dimensions. The choice of the space manifold and the ground state is
arbitrary. This protocol is both validated in the continuum and verified on
lattices, and we anticipate it to be realizable in various quantum simulation
platforms. | Shang Liu | 2023-04-26T01:54:21Z | http://arxiv.org/abs/2304.13235v2 | # Anyon Quantum Dimensions from an Arbitrary Ground State Wave Function
###### Abstract
Topological orders and anyons are fascinating phenomena that are both conceptually important and practically useful for quantum computing. However, topological orders lack conventional order parameters and are generically difficult to diagnose. Recent advances in quantum simulations have further emphasized the need for efficient methods for identifying topological orders. A breakthrough in this problem is the discovery of topological entanglement entropy, which can be used to detect nontrivial topological order from a ground state wave function, but is far from enough for fully determining the topological order. In this work, we take a key step further in this direction: We propose a simple entanglement-based protocol for extracting the quantum dimensions of all anyons from a single ground state wave function in two dimensions. The choice of the space manifold and the ground state is arbitrary. This protocol is first validated in the continuum using Chern-Simons field theories, and then analytically verified on lattices using Kitaev's quantum double models. We anticipate that our protocol can be implemented experimentally in various quantum simulation platforms, as well as in numerics.
## I Introduction
Topologically ordered phases of matter exhibit a number of remarkable properties, such as the existence of fractionalized excitations dubbed anyons, and robust ground state degeneracies on topologically nontrivial spaces [1]. From a practical perspective, they are also promising platforms for fault-tolerant quantum computation [2; 3].
Unlike symmetry breaking orders, topological orders (TOs) lack conventional order parameters. They do not even require any symmetry and sometimes support gapped boundaries. Therefore, diagnosing TOs is generically a difficult task. Recent advances in quantum simulating topologically ordered states [4; 5; 6; 7] have further highlighted the need for efficient protocols to identify them. A breakthrough in this problem is the discovery of topological entanglement entropy (EE) [8; 9; 10; 11]. It is shown that the EE of a disk region in a two-dimensional gapped ground state wave function contains a universal term dubbed the topological EE, from which we can read off the so-called total quantum dimension \(\mathcal{D}\) of the system. \(\mathcal{D}=1\) (\(\mathcal{D}>1\)) for a trivial (nontrivial) TO, and hence the topological EE can be used for detecting nontrivial TOs. However, \(\mathcal{D}\) is still far from fully characterizing a TO. In particular, it can not distinguish abelian and nonabelian TOs which have very different properties and applications. Another important quantity, the chiral central charge, can be extracted either from edge thermal transport [12; 13; 14] or the bulk wave function [15; 16; 17; 18], but it again does not distinguish abelian and nonabelian TOs, and vanishes for many TOs supporting gapped boundaries.
If we know the quantum dimensions \(d_{j}\) of all anyon types \(j\), we will be able to tell apart abelian and nonabelian TOs, because the former have \(d_{j}=1\) for all \(j\), while the latter have \(d_{j}>1\) for some \(j\). Intuitively, \(d_{j}\) is the Hilbert space dimension shared by each type-\(j\) anyon in the limit of many anyons. More precisely, in the presence of \(M\) anyons of type \(j\), let \(D_{j}(M)\) be the dimension of the low-lying degenerate subspace. Then in the large \(M\) limit, \(D_{j}(M)/d_{j}^{M}\) is of order 1 [3]. \(\mathcal{D}\) is related to \(d_{j}\)'s by \(\mathcal{D}^{2}=\sum_{j}d_{j}^{2}\). \(d_{j}\)'s impose nontrivial constraints on the fusion rules of anyons, and if the chiral central charge is known, they also constrain the anyon self-statistics [14]. Therefore, \(d_{j}\)'s are important quantitative characterizations of the anyon excitations.
There are existing methods to extract \(d_{j}\)[10; 14; 19; 20; 21; 22; 23], but they are all hard to implement in general: They either require knowing the operators for creating/annihilating and moving anyons, or knowing certain ground state(s) on a torus. In this paper, we overcome all these difficulties, and propose a very simple protocol for extracting the quantum dimensions of all anyons from an arbitrary ground state of a TO on an arbitrary space. We note that it has been proved in Ref. [24] that the information of \(d_{j}\)'s is indeed contained in an arbitrary ground state wave function, but the approach there is again not practical to use: One need to generate and analyze some infinite set of density matrices.
In the rest of the paper, we will first describe our protocol, then justify it with a field-theoretic approach, and finally test it using Kitaev's quantum double models [2].
## II Protocol
Consider a two-dimensional topologically ordered system on an arbitrary space manifold with or without a boundary. Let \(|\psi\rangle\) be any state with no excitations in a large enough region, say a ground state. We will describe and later justify an efficient protocol for extracting the quantum dimensions of all anyons.
Consider a partition of the space as shown in Fig. 1 in a region with no excitations. \(A=\bigcup_{i=1}^{4}A_{i}\) takes an annulus shape, and \(B\) is the rest of the system. Our
protocol consists of the following steps.
* **Step 1:** Obtain the reduced density matrix \(\rho_{A}:=\operatorname{Tr}_{B}\left|\psi\right\rangle\left\langle\psi\right|\) for the annulus region \(A\).
* **Step 2:** Map \(\rho_{A}\) to a pure state in the doubled Hilbert space: Let \(\rho_{A}=\sum_{i,j}M_{ij}\left|i\right\rangle\left\langle j\right|\) where \(\left\{\left|i\right\rangle\right\}\) is an arbitrary real-space tensor product basis for Region \(A\). We define \[\left|\rho_{A}\right\rangle:=\frac{1}{\sqrt{\operatorname{Tr}(\rho_{A}^{2})}} \sum_{i,j}M_{ij}\left|i\right\rangle\left|j\right\rangle.\] (1)
* **Step 3:** Denote the doubled system by \(A\cup A^{\prime}\), and divide \(A^{\prime}\) as \(\bigcup_{i=1}^{4}A^{\prime}_{i}\) in the same way as \(A\). Compute the Renyi mutual information \(I^{(n)}\) (defined later) between \(A_{1}\cup A^{\prime}_{1}\) and \(A_{3}\cup A^{\prime}_{3}\) for several different Renyi indices \(n\), and solve the anyon quantum dimensions \(d_{j}\) according to the following formula. \[I^{(n)}(11^{\prime},33^{\prime})=\frac{1}{n-1}\log\left[\sum_{j}\left(\frac{ d_{j}}{\mathcal{D}}\right)^{4-2n}\right],\] (2) where \(ii^{\prime}\) stands for \(A_{i}\cup A^{\prime}_{i}\), and \(\mathcal{D}:=\sqrt{\sum_{j}d_{j}^{2}}\) is the total quantum dimension.
Here, the Renyi mutual information is defined as usual by \(I^{(n)}(X,Y):=S_{X}^{(n)}+S_{Y}^{(n)}-S_{X\cup Y}^{(n)}\), where \(S_{P}^{(n)}:=(1-n)^{-1}\log\operatorname{Tr}(\rho_{P}^{n})\) is the Renyi entropy.
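To make the protocol concrete, here is a minimal numerical sketch of Steps 1-3 (our own illustration, not part of the protocol's definition); it assumes a qubit system small enough for dense linear algebra, and all function and variable names are ours.

```python
import numpy as np

def reduced_density_matrix(psi, keep, n_qubits):
    """Trace out all qubits not in `keep` from the pure state vector `psi`."""
    psi = psi.reshape([2] * n_qubits)
    traced = [q for q in range(n_qubits) if q not in keep]
    m = np.transpose(psi, list(keep) + traced).reshape(2 ** len(keep), -1)
    return m @ m.conj().T

def renyi_entropy(rho, n):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return np.log(np.sum(p ** n)) / (1 - n)

def protocol_I_n(psi, A_parts, n_qubits, n):
    """Steps 1-3 for a state vector psi; A_parts = [A1, A2, A3, A4] are qubit index lists."""
    A = [q for part in A_parts for q in part]
    rho_A = reduced_density_matrix(psi, A, n_qubits)        # Step 1
    vec = rho_A.flatten() / np.linalg.norm(rho_A)           # Step 2: |rho_A> in A u A'
    m = len(A)
    pos = {q: i for i, q in enumerate(A)}
    def doubled(part):                                      # region A_i u A'_i
        return [pos[q] for q in part] + [m + pos[q] for q in part]
    r1, r3 = doubled(A_parts[0]), doubled(A_parts[2])
    S = lambda r: renyi_entropy(reduced_density_matrix(vec, r, 2 * m), n)
    return S(r1) + S(r3) - S(r1 + r3)                       # Step 3
```

For states given as tensor networks rather than dense vectors, the same three steps apply, with the partial traces and Renyi entropies evaluated by standard contraction techniques.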
Intuitively, \(\left|\rho_{A}\right\rangle\) is a particular ground state of the TO on the torus obtained by gluing \(A\) and \(A^{\prime}\) along their boundaries. We will later carefully justify this picture.
A few comments are in order. In Step 2, the map from \(\rho_{A}\) to \(\left|\rho_{A}\right\rangle\) is basis dependent. If we choose a different real-space tensor product basis, then the new pure state \(\left|\rho_{A}\right\rangle_{\text{new}}\) is related to the old one by a _local_ basis rotation in \(A^{\prime}\). This does not affect the entanglement-based quantity \(I^{(n)}\) that we need. In Step 3, a possible strategy for solving all \(d_{j}\) is as follows: First obtain \(I^{(2)}\), which gives the total number \(t\) of anyon sectors. Then obtain \(I^{(n)}\) for more than \(t\) additional Renyi indices, from which we can uniquely determine all \(d_{j}/\mathcal{D}\). Since we know the smallest quantum dimension is that of the vacuum sector, \(d_{0}=1\), we can subsequently find the values of \(\mathcal{D}\) and all \(d_{j}\). Note that for an abelian TO where \(d_{j}=1\) for all \(j\), \(I^{(n)}=2\log\mathcal{D}\) is \(n\)-independent. Hence, if we have access to only a limited number of Renyi indices, although we are not able to obtain all \(d_{j}\), we can still tell whether the TO is abelian or nonabelian.
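As a toy illustration of this strategy (ours; here the anyon content is taken as known input rather than extracted from a wave function), one can tabulate the right-hand side of Eq. 2 and see the two behaviors:

```python
import numpy as np

def I_n(ds, n):
    """Right-hand side of Eq. 2 for a given list of quantum dimensions ds."""
    D = np.sqrt(sum(d ** 2 for d in ds))
    return np.log(sum((d / D) ** (4 - 2 * n) for d in ds)) / (n - 1)

toric = [1, 1, 1, 1]          # abelian (Z2 toric code): I_n = 2 log 2 for every n
ising = [1, np.sqrt(2), 1]    # nonabelian (Ising TO): I_2 = log 3, and I_n varies
for n in (2, 3, 4):
    print(n, I_n(toric, n), I_n(ising, n))
```

Already at two Renyi indices the flat (abelian) and \(n\)-dependent (nonabelian) behaviors are distinguishable.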
For integer values of \(n\), the quantity \(I^{(n)}(11^{\prime},33^{\prime})\) proposed above can in principle be experimentally measured without using too many copies of the state \(\left|\psi\right\rangle\). We thus expect our protocol to be practically useful for both experiments and numerics.
## III Continuum approach
In this section, we will give a field-theoretic proof of Eq. 2, assuming the underlying TO to be described by a Chern-Simons (CS) theory [25]. The readers need not be familiar with CS theories, and just need to know that (1) a CS theory is a gauge theory with some compact gauge group \(G\), and (2) it is a topological field theory, meaning that the action has no dependence on the spacetime metric and only the spacetime topology matters. As mentioned previously, we require the state \(\left|\psi\right\rangle\) to have no excitations (zero energy density) in a large enough region. We expect that the reduced density matrix of \(\left|\psi\right\rangle\) on a disk deep inside this region has no dependence on the boundary condition or excitations far away [26, 27, 20]. Hence, for simplicity, we assume \(\left|\psi\right\rangle\) to be the unique ground state of the TO on a sphere. In the CS theory, up to a normalization factor, this state can be prepared by performing the path integral in the interior of the sphere, i.e. on a solid ball, as illustrated in Fig. 2a.
Given the path integral representation of \(\left|\psi\right\rangle\), we take a mirror image for \(\left\langle\psi\right|\)[28] as shown in Fig. 2b, and then glue together \(B\) of \(\left|\psi\right\rangle\) and \(B^{\prime}\) of \(\left\langle\psi\right|\) to obtain the path integral for \(\rho_{A}\). The result is shown in Fig. 2c. Up to a normalization, \(\left|\rho_{A}\right\rangle\) has the same path integral representation as \(\rho_{A}\); it is therefore a state living on the torus. The Hilbert space on a torus without anyon excitation is multidimensional. An orthonormal basis of the space, denoted by \(\{\left|R_{j}\right\rangle\}\), corresponds one-to-one to a finite set of representations \(\{R_{j}\}\) of the gauge group, and also one-to-one to the anyon types of the TO. The state \(\left|R_{j}\right\rangle\) can be prepared by performing the path integral on the solid torus (bagel) with a noncontractible Wilson loop [29] carrying the corresponding representation \(R_{j}\) inserted. As shown in Fig. 2c, the path integral for \(\left|\rho_{A}\right\rangle\) has no Wilson loop insertion. The state thus corresponds to the trivial representation, or the trivial anyon sector (vacuum).

Figure 1: The partition of space used in our protocol.
By keeping track of the subregions of \(A\), we observe that \(A_{1}\cup A_{1}^{\prime}\) and \(A_{3}\cup A_{3}^{\prime}\) correspond to two annuli shown in Fig. 2d. We are now supposed to compute the Renyi mutual information between these two regions. To this end, it is convenient to first apply an \(\mathcal{S}\) transformation [25], whose effect is shown in Fig. 2e: The two annuli now wind in the perpendicular direction, and a Wilson loop is inserted in the path integral. This new torus state is given by \(\sum_{j}\mathcal{S}_{0j}\left|R_{j}\right\rangle\), where \(\mathcal{S}_{0j}=d_{j}/\mathcal{D}\) are components of the modular \(\mathcal{S}\) matrix. The desired mutual information \(I^{(n)}\) can now be computed using the replica trick and the surgery method [21; 25; 19]. In fact, this has been done in Appendix B.4 of Ref. [21] (plug in \(\psi_{a}=\mathcal{S}_{0a}\)), and the technique is also pedagogically explained in that paper. We arrive at the result in Eq. 2.
As a fixed-point theory, the CS theory only captures the universal terms in EEs. For a generic gapped field theory or lattice model, the EE of a region also contains nonuniversal terms such as the "area-law" term proportional to the length of the region boundary, and terms due to corners or other sharp features which are inevitable on lattices. We need to discuss whether the quantity \(I^{(n)}\) we consider contains any nonuniversal term. For a general gapped theory, we expect the picture of Fig. 2d still holds, although the theory is now not topological and depends on the spacetime metric. If we assume that nonuniversal terms in the EEs are made of _local_ contributions (which are insensitive to changes far away) near the partition interfaces [10; 11], then we see that all such terms have been canceled in \(I^{(n)}\). For example, the boundary of \(A_{1}\cup A_{1}^{\prime}\) contributes the same nonuniversal terms to \(S_{11^{\prime}}^{(n)}\) and \(S_{11^{\prime}\cup 33^{\prime}}^{(n)}\), and these terms have been canceled in \(I^{(n)}(11^{\prime},33^{\prime})\). We note that the locality assumption about nonuniversal terms does not hold in certain systems with the so-called _spurious_ long-range entanglement [30; 31; 32; 33; 34; 35; 36], which will not be considered in this work. As one test of the universality of \(I^{(n)}\), one can manually add a local bunch of coupled qubits to the state \(\left|\psi\right\rangle\) at an arbitrary location, and observe that the final result of \(I^{(n)}\) has no dependence on the state of these qubits.
## IV Test on lattices
In this section and the appendix, we will analytically test our protocol on lattices using Kitaev's quantum double models [2].
### Review of Quantum Double Models
Let us start by reviewing some definitions. There is a quantum double model for each finite group \(G\) (generally nonabelian). The model can live on an arbitrary lattice on an arbitrary orientable two-dimensional surface. The physical degrees of freedom, called spins, live on the edges, and the local Hilbert space of each spin is spanned by the orthonormal group element basis \(\{\left|g\right\rangle\left|g\in G\right\}\). We need to choose a direction for each edge. Reversing the direction of a particular edge will be equivalent to the basis change \(\left|z\right\rangle\mapsto\left|z^{-1}\right\rangle\) for the corresponding spin. Let \(v\) be a vertex and \(f\) an adjacent face; we define the local gauge transformations \(A_{v}(g)\) and magnetic charge operators \(B_{(v,f)}(h)\) as follows.
\[A_{v}(g)\left|z_{1},z_{2},z_{3},z_{4}\right\rangle=\left|gz_{1},gz_{2},gz_{3},gz_{4}\right\rangle, \tag{3}\]
\[B_{(v,f)}(h)\left|z_{1},z_{2},z_{3},z_{4}\right\rangle=\delta(h,\,z_{4}z_{3}z_{2}z_{1})\left|z_{1},z_{2},z_{3},z_{4}\right\rangle. \tag{4}\]
Here we use a tetravalent vertex and a square face as examples, and the generalizations should be straightforward. In Eq. 3, \(z_{1},\ldots,z_{4}\) live on the four edges adjacent to \(v\), all oriented toward \(v\); in Eq. 4, they live on the four edges bounding \(f\), ordered along the face boundary starting from the base vertex \(v\) and oriented consistently with the traversal. We further define two projectors:
\[A_{v}:=\left|G\right|^{-1}\sum_{g\in G}A_{v}(g),\quad B_{f}:=B_{(v,f)}(1). \tag{5}\]
Note that \(B_{f}\) does not depend on the choice of the adjacent vertex \(v\). The quantum double Hamiltonian is then given by [2]
\[H_{\text{QD}}=-\sum_{v}A_{v}-\sum_{f}B_{f}. \tag{6}\]
The projectors in \(H_{\text{QD}}\) all commute with each other.
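These structures can be checked explicitly in the simplest case \(G=\mathbb{Z}_{2}\), where the model reduces to the toric code: \(A_{v}(g)\) for the nontrivial \(g\) is a Pauli-\(X\) star, and \(B_{f}\) is built from a Pauli-\(Z\) plaquette. The following minimal sketch (our own; the \(2\times 2\)-torus edge indexing is ours) verifies the commutation of all these operators and the expected fourfold torus ground-state degeneracy:

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I2 = np.eye(2)

def pauli(op, sites, n):
    """Tensor product acting with `op` on `sites` and identity elsewhere."""
    return reduce(np.kron, [op if q in sites else I2 for q in range(n)])

# 2x2 torus, 8 edges: horizontal edges 0-3, vertical edges 4-7 (our indexing).
stars      = [(0, 1, 4, 6), (0, 1, 5, 7), (2, 3, 4, 6), (2, 3, 5, 7)]  # edges at each vertex
plaquettes = [(0, 2, 4, 5), (1, 3, 4, 5), (0, 2, 6, 7), (1, 3, 6, 7)]  # edges around each face
A = [pauli(X, s, 8) for s in stars]        # A_v(x) for the nontrivial element of Z2
B = [pauli(Z, p, 8) for p in plaquettes]   # B_f is the projector (1 + prod Z)/2

for P in A + B:
    for Q in A + B:
        assert np.allclose(P @ Q, Q @ P)   # every star commutes with every plaquette

H = -sum(A) - sum(B)                        # Eq. 6 up to constants and rescaling
evals = np.linalg.eigvalsh(H)
print(int(np.sum(np.isclose(evals, evals[0]))))   # 4: torus ground-state degeneracy
```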
On a sphere, the model has a unique gapped ground state \(\left|\Omega\right>\) satisfying \(A_{v}=B_{f}=1\) for all vertices \(v\) and faces \(f\). Let us introduce some useful properties of this state before implementing our protocol. More explicitly, we can write \(\left|\Omega\right>\propto\left(\prod_{v}A_{v}\right)(\bigotimes_{e}\left|1\right>_{e})\) where \(e\) runs over all edges. Using \([A_{v}(g),A_{v^{\prime}}(h)]=0\) for \(v\neq v^{\prime}\), and \(A_{v}(g)A_{v}(h)=A_{v}(gh)\), one can check that
* \(A_{v}(g)\left|\Omega\right\rangle=\left|\Omega\right\rangle\) for all \(v\) and \(g\).
Given an oriented path along the edges from one vertex to another, when the relevant edge orientations are all consistent with the path direction, we define the _holonomy_ of the path to be the product of all group elements on the path in the reversed order. For example, the holonomy of a path from vertex \(u\) to \(v\) whose three edges carry \(g_{1}\), \(g_{2}\), \(g_{3}\) (in the order of traversal) is given by \(g_{3}g_{2}g_{1}\).
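A two-line toy check of this convention (ours, with \(G=S_{3}\) realized as permutation tuples):

```python
# Reversed-order holonomy of a three-edge path carrying g1, g2, g3, with G = S3.
mul = lambda p, q: tuple(p[q[i]] for i in range(3))    # (p*q)(i) = p(q(i))
g1, g2, g3 = (1, 0, 2), (0, 2, 1), (2, 0, 1)
holonomy = mul(g3, mul(g2, g1))                        # g3 g2 g1, reversed order
print(holonomy)
```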
With this terminology, \(B_{f}=1\) means that the holonomy around any face is trivial. It follows that
* for the state \(\ket{\Omega}\), the holonomy for any closed loop is trivial, and the holonomy between any two vertices does not depend on the choice of path.
Analogously, the property \(A_{v}(g)\ket{\Omega}=\ket{\Omega}\) also has a generalization on loops. Given a loop \(\mathcal{C}\) with a base point \(v\), we can define operators \(A_{\mathcal{C}}(g)\) whose support has the shape of a comb shown in Fig. 3a. The action of \(A_{\mathcal{C}}(g)\) is defined by
\[A_{\mathcal{C}}(g):\text{ at the }k\text{-th vertex along }\mathcal{C}\text{, apply the local gauge transformation whose parameter is }g\text{ conjugated by the partial holonomy }x_{1k}, \tag{7}\]
where \(x_{1k}:=x_{1}x_{2}\cdots x_{k}\). Let \(\mathcal{L}_{\text{flat}}\) be the Hilbert subspace spanned by spin configurations satisfying \(B_{f}=1\)\(\forall f\). \(A_{\mathcal{C}}(g)\) preserves \(\mathcal{L}_{\text{flat}}\). Moreover, \([A_{\mathcal{C}}(g),A_{v^{\prime}}(h)]=0\) for \(v^{\prime}\neq v\), and \(A_{\mathcal{C}}(g)A_{v}(h)=A_{v}(h)A_{\mathcal{C}}(h^{-1}gh)\). Let \(A_{\mathcal{C}}:=|G|^{-1}\sum_{g\in G}A_{\mathcal{C}}(g)\). One can check \(A_{\mathcal{C}}\ket{\Omega}=\ket{\Omega}\) with these commutation relations and the observation that acting \(A_{\mathcal{C}}(g)\) on \(\bigotimes_{e}\ket{1}_{e}\) is equivalent to acting several \(A_{v}(g)\) operators. Using \(A_{\mathcal{C}}(g)A_{\mathcal{C}}(h)=A_{\mathcal{C}}(gh)\), it follows that
* \(A_{\mathcal{C}}(g)\ket{\Omega}=\ket{\Omega}\).
We will also call \(A_{\mathcal{C}}(g)\) a gauge transformation operator.
### Applying the Protocol
With all these prerequisites, we are now ready to implement our protocol. Suppose a state \(\ket{\psi}\) has no excitation with respect to \(H_{\text{QD}}\) in a large contractible region, and suppose this region has the form of a square lattice. It has been shown in Ref. [26] that the reduced density matrix of \(\ket{\psi}\) deep inside this region has no dependence on the choice of \(\ket{\psi}\). Hence, we will just take \(\ket{\psi}\) to be the sphere ground state \(\ket{\Omega}\). Consider a bipartition of the space like that in Fig. 3b. Here, in order to draw the partition interface right on the edges, we imagine duplicating each edge into two, and require them to always be in the same group element state; see Fig. 3c. This is just a simple trick inspired by Ref. [11] for getting a nice partition; each pair of spins obtained this way can still be regarded as a single spin unless the partition is being considered [37]. Still calling the state \(\ket{\Omega}\), we can write
\[\ket{\Omega}=\sum_{\begin{subarray}{c}g_{A}\text{ with trivial}\\ \text{holonomies}\end{subarray}}\ket{g_{A}}_{A}\ket{\phi(g_{A})}_{B}, \tag{8}\]
where \(g_{A}\)'s are spin configurations in the annulus region \(A\), and \(\ket{\phi(g_{A})}_{B}\) are some set of states on \(B\) that are not necessarily normalized or orthogonal. In the summation above, we require \(g_{A}\) to have trivial holonomy around any loop in \(A\), contractible or noncontractible.
We claim that for any two product states of group elements \(\ket{g_{A}}\) and \(\ket{g^{\prime}_{A}}\) with trivial holonomies in \(A\), and with the same group elements on the boundary \(\partial A\), there exists a gauge transformation acting on \(A\) which transforms \(\ket{g_{A}}\) to \(\ket{g^{\prime}_{A}}\). We can build such a transformation step by step: First choose the unique local gauge transformation \(A_{v_{0}}(g_{0})\) acting on the top left internal vertex such that \(A_{v_{0}}(g_{0})\ket{g_{A}}\) matches \(\ket{g^{\prime}_{A}}\) on the entire top left face. Then move rightward and choose the unique \(A_{v_{1}}(g_{1})\) acting on the next vertex \(v_{1}\) such that \(A_{v_{1}}(g_{1})A_{v_{0}}(g_{0})\ket{g_{A}}\) matches \(\ket{g^{\prime}_{A}}\) on the top left two faces. Continue this process until the top row of faces is all done, and start over from the leftmost vertex in the second row. When the face at the top left corner of the internal boundary of \(A\) is encountered, in order not to alter the spin configuration on \(\partial A\), we need to utilize a loop operator \(A_{\mathcal{C}}(g)\) to fix that face, where \(\mathcal{C}\) coincides with the internal boundary. Loop operators of this kind are no longer needed subsequently, and eventually \(g_{A}\) can be completely transformed into \(g^{\prime}_{A}\). Since the \(A_{v}(g)\) and \(A_{\mathcal{C}}(h)\) operators are all unitary and leave \(\ket{\Omega}\) invariant, when \(g_{A}\) and \(g^{\prime}_{A}\) are related by a gauge transformation, we have \(\ket{\phi(g_{A})}_{B}={}_{A}\bra{g_{A}}\ket{\Omega}={}_{A}\bra{g^{\prime}_{A} }\ket{\Omega}=\ket{\phi(g^{\prime}_{A})}_{B}\). This means that \(\phi(g_{A})\) actually only depends on the spin configuration on \(\partial A\). We can then write
\[\ket{\Omega}=\sum_{\begin{subarray}{c}g_{\partial A}\text{ with trivial}\\ \text{holonomies}\end{subarray}}\ket{\xi(g_{\partial A})}_{A}\ket{\phi(g_{ \partial A})}_{B}, \tag{9}\]
where \(\ket{\xi(g_{\partial A})}\) is the sum of all holonomy-free \(\ket{g_{A}}\) such that \(g_{A}|_{\partial A}=g_{\partial A}\).
The \(\ket{\phi(g_{\partial A})}_{B}\) states are orthogonal to each other, because subsystem \(B\) contains a copy of the spin configuration on \(\partial A\). We will now prove that they also have the same norm. Observe that any spin configuration \(g_{\partial A}\) with trivial holonomies can be transformed into another \(g^{\prime}_{\partial A}\) using the local gauge transformations \(A_{v}(g)\) overlapping with \(\partial A\). Let \(U\) be that total gauge transformation operator. We can write \(U=V_{A}V_{B}\) where \(V_{A}\) (\(V_{B}\)) is a unitary operator acting on \(A\) (\(B\)). From the definition of \(\ket{\xi(g_{\partial A})}\), we can see \(V_{A}\ket{\xi(g_{\partial A})}=\ket{\xi(g^{\prime}_{\partial A})}\). It then follows from \(U\ket{\Omega}=\ket{\Omega}\) that \(V_{B}\ket{\phi(g_{\partial A})}=\ket{\phi(g^{\prime}_{\partial A})}\). Hence \(\ket{\phi(g_{\partial A})}\) and \(\ket{\phi(g^{\prime}_{\partial A})}\) indeed have the same norm.

Figure 3: (a) Support of the operator \(A_{\mathcal{C}}(g)\) for a loop \(\mathcal{C}\) with base point \(v\). (b) Bipartition into \(A\) (annulus) and \(B\). (c) Duplication of a spin.
The above analysis is inspired by Ref. [26]. With these results established, we easily find
\[\ket{\rho_{A}}\propto\sum_{\begin{subarray}{c}g_{\partial A}\text{ with trivial}\\ \text{holonomies}\end{subarray}}\ket{\xi(g_{\partial A})}\ket{\xi(g_{\partial A })}. \tag{10}\]
This state lives on the doubled system \(A\cup A^{\prime}\), i.e. two copies of the annulus in Fig. 3b. Now imagine gluing the vertices in \(\partial A\) with the corresponding vertices in \(\partial A^{\prime}\), obtaining a torus. One can check that \(\ket{\rho_{A}}\) is a ground state of the quantum double model defined on this torus. In particular, it is invariant under the actions of (new) local gauge transformations crossing the gluing interface. The quantum double model has multiple ground states on the torus. \(\ket{\rho_{A}}\) is the special one characterized by trivial holonomy around the hole existing in each of the original annuli, and by \(A_{\mathcal{C}}(g)=1\) around the same hole. These actually imply trivial anyon flux through the hole, because any anyon flux would be detected by some loop operator that winds another anyon around it. We have thus successfully recovered the picture in Fig. 2c.
The next step is to obtain the desired mutual information. Although this has been done in the continuum, we did not find a lattice result meeting our need. We have thus performed an honest calculation, and it eventually works out magically. We refer interested readers to the appendix for the rather tedious details. The key technical trick is to use the holonomy basis introduced in Ref. [38]: In the subspace with \(B_{f}=1\), labeling spin configurations by group elements on all edges contains a lot of redundancy, and we can instead label spin configurations in each region using independent holonomy variables. With this way of labeling basis vectors, the remaining calculation is more or less just brute-force.
We have found that the anyon sectors are labeled by a pair of variables \((C,\mu)\). Here, \(C\) labels a conjugacy class of \(G\). Let \(h_{C}\in C\) be a representative that is arbitrary but fixed once and for all. \(\mu\) labels an irreducible representation of \(Z_{C}:=\{g\in G|gh_{C}=h_{C}g\}\), the centralizer of \(h_{C}\). The quantum dimensions are given by \(d_{(C,\mu)}=|C|d_{\mu}\) where \(d_{\mu}\) is the dimension of the representation \(\mu\). These are consistent with known results [2, 39, 40].
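As a concrete illustration (standard quantum double data, quoted here rather than derived), take \(G=S_{3}\): the conjugacy classes are \(C_{e}=\{e\}\), \(C_{t}\) (the three transpositions), and \(C_{c}\) (the two 3-cycles), with centralizers \(S_{3}\), \(\mathbb{Z}_{2}\), and \(\mathbb{Z}_{3}\), respectively. The resulting eight anyons have quantum dimensions \(\{1,1,2\}\) (from \(C_{e}\)), \(\{3,3\}\) (from \(C_{t}\)), and \(\{2,2,2\}\) (from \(C_{c}\)), so that \(\mathcal{D}^{2}=\sum_{(C,\mu)}d_{(C,\mu)}^{2}=1+1+4+9+9+4+4+4=36\), i.e. \(\mathcal{D}=|G|=6\).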
## V Conclusion
In this work, we have introduced a simple protocol for extracting all anyon quantum dimensions of a two-dimensional TO from an arbitrary ground state wave function. It is both validated in the continuum and verified on lattices. It is interesting to seek generalizations of this protocol for extracting more universal information, such as the fusion rules, \(\mathcal{S}\) matrix, and topological spins.
We should mention that this work is partially inspired by Ref. [41], which studies the entanglement negativity between two spatial regions in a tripartite topologically ordered state with trisection points (points where the three regions meet). Using a "wormhole" approach, it is found that the negativity contains an order-1 term that can distinguish abelian and nonabelian TOs. However, it is not clear, at least to us, whether this term is comparable to any universal quantity in generic models. It actually seems hard to extract a universal term from the entanglement negativity with trisection points [42; 22; 43], and more studies are needed to better understand this issue.
###### Acknowledgements.
I am grateful to Chao Yin for a previous related collaboration [22], to Yanting Cheng for feedback on the manuscript, and to Wenjie Ji, Yuan-Ming Lu, Nat Tantivasadakarn, Ashvin Vishwanath, and Liujun Zou for helpful discussions. I am supported by the Gordon and Betty Moore Foundation under Grant No. GBMF8690 and the National Science Foundation under Grant No. NSF PHY-1748958.
## Appendix A Computing EEs in Kitaev Quantum Double Models
In this section, we elaborate on the details of the EE computations in quantum double models. We will utilize the technique of the holonomy basis introduced in Ref. [38].
### Holonomy Basis
Throughout this section, we restrict our attention to the Hilbert subspace \(\mathcal{L}_{\text{flat}}\) spanned by spin configurations satisfying \(B_{f}=1\) for all \(f\). Within this subspace, it is convenient to label spin configurations by independent holonomies: We choose some base point \(v_{0}\), a path from \(v_{0}\) to every other vertex, and a closed loop based at \(v_{0}\) for each noncontractible cycle of the space. Then each spin configuration is uniquely determined by the holonomies along these paths and loops; one can solve the group element state on each edge from those holonomies.
As an example, let there be \(V\) vertices \(v_{0},w_{1},w_{2},\cdots,w_{V-1}\), and just one noncontractible circle. Denote by \(g_{i}\) the holonomy from \(v_{0}\) to \(w_{i}\) along the chosen path, and by \(k\) the holonomy around the closed loop. We can write the basis vectors as \(\{\ket{g_{i};k}\}\). An operator \(A_{w_{i}}(h)\) acts as
\[A_{w_{i}}(h)\ket{g_{j};k}=\ket{g_{1},\cdots,hg_{i},\cdots,g_{V-1};k}, \tag{11}\]
and \(A_{v_{0}}(h)\) acts as
\[A_{v_{0}}(h)\left|g_{j};k\right>=\left|g_{1}h^{-1},\cdots,g_{V-1}h^{-1};hkh^{-1} \right>. \tag{10}\]
Note that despite the terminology "holonomy basis", we are still using the natural basis of tensor products of group elements. We just adopt a more convenient labeling of the basis vectors.
### Warm-Up: EE of a Disk on a Sphere
As a warm-up, let us compute the EE for a disk region on a sphere. Denote this disk by \(A\), and the complement (also a disk) by \(B\). We define holonomy bases separately for the two regions, as shown in Fig. 4. The base point in \(A\) (\(B\)) is \(v_{A}\) (\(v_{B}\)). Let there be \(L\) vertices \(w_{1},\cdots,w_{L}\) on the partition interface; we denote the holonomy from \(v_{A}\) (\(v_{B}\)) to \(w_{i}\) by \(g_{i}\) (\(h_{i}\)). There are also holonomies from \(v_{A}\) (\(v_{B}\)) to other internal vertices of \(A\) (\(B\)), but it turns out that these internal holonomies do not contribute to the EE. Therefore, for simplicity, we just retain one such internal vertex \(u_{A}\) (\(u_{B}\)) in \(A\) (\(B\)) and denote the corresponding holonomy by \(p\) (\(q\)).
We are interested in states with \(B_{f}=1\) for all faces \(f\), which implies that the holonomy around any contractible loop is trivial. We thus have
\[g_{1}^{-1}h_{1}=g_{2}^{-1}h_{2}=\cdots=g_{L}^{-1}h_{L}=:a. \tag{11}\]
Take an arbitrary holonomy configuration \(\left|g_{i};h_{i};p;q\right>\). Imposing the above condition, we can write \(h_{i}=g_{i}a\). The ground state \(\left|\Omega\right>\) can be obtained by applying the projectors \(A_{v}\) for all \(v\) to the state \(\left|g_{i};g_{i}a;p;q\right>\). First consider the \(A_{w_{i}}\) operators. We have
\[\prod_{i=1}^{L}A_{w_{i}}\left|g_{1},g_{2},\cdots;g_{1}a,g_{2}a, \cdots;p;q\right> \tag{12}\] \[\propto \sum_{h_{i}}\left|h_{1}g_{1},h_{2}g_{2},\cdots;h_{1}g_{1}a,h_{2} g_{2}a,\cdots;p;q\right>\] (13) \[= \sum_{g_{i}}\left|g_{1},g_{2},\cdots;g_{1}a,g_{2}a,\cdots;p;q \right>. \tag{14}\]
Here, to obtain the last line, we first do a change of variable \(h_{i}\mapsto h_{i}g_{i}^{-1}\) to absorb all \(g_{i}\), and then rename the dummy variables \(h_{i}\) to \(g_{i}\). We see that the net effect of the \(A_{w_{i}}\) operators is a summation over the \(g_{i}\)'s. Similarly, applying \(A_{u_{A}}\) and \(A_{u_{B}}\) results in a summation over \(p\) and \(q\). Next, applying \(A_{v_{B}}\), we have
\[A_{v_{B}}\sum_{g_{i},p,q}\left|g_{i};g_{i}a;p;q\right> \tag{15}\] \[\propto \sum_{g_{i},p,q,h_{B}}\left|g_{i};g_{i}ah_{B}^{-1};p;qh_{B}^{-1}\right>\] (16) \[= \sum_{g_{i},p,q,a}\left|g_{i};g_{i}a;p;q\right>. \tag{17}\]
We effectively get a summation over \(a\). Finally, applying \(A_{v_{A}}\) to the above state has no effect. We have thus found
\[\left|\Omega\right>\propto\sum_{g_{i},p,q,a}\left|g_{i};g_{i}a;p;q\right>. \tag{18}\]
We now see that the internal vertices \(u_{A}\) and \(u_{B}\) just contribute a product state \((\left|G\right|^{-1}\sum_{p}\left|p\right>)_{A}(\left|G\right|^{-1}\sum_{q} \left|q\right>)_{B}\) which does not affect the EE. We will therefore ignore the \(p\) and \(q\) variables in the following, and simply write \(\left|\Omega\right>\propto\sum_{g_{i},a}\left|g_{i};g_{i}a\right>\).
Taking a partial trace over subsystem \(A\), we obtain
\[\rho_{B}\propto\sum_{g_{i},a,b}\left|g_{i}a\right>\left<g_{i}b\right|. \tag{19}\]
For later convenience, we do a change of variable: \(a\mapsto g_{1}^{-1}a\), \(b\mapsto g_{1}^{-1}b\), \(g_{i>1}\mapsto g_{i>1}g_{1}\). It follows that
\[\rho_{B}\propto\sum_{g_{i>1},a,b}\left|a,g_{2}a,g_{3}a,\cdots\right>\left<b,g _{2}b,g_{3}b,\cdots\right|. \tag{20}\]
We denote by \(\tilde{\rho}_{B}\) the unnormalized density matrix on the right-hand side.
To compute the Renyi EE, we need to evaluate \(\operatorname{Tr}(\tilde{\rho}_{B}^{n})\). We add a superscript \(\mu=1,2,\cdots,n\) to the dummy variables in the \(\mu\)-th copy of \(\tilde{\rho}_{B}\), and it is not hard to see that
\[b^{\mu}=a^{\mu+1},\quad g_{i>1}^{\mu}=g_{i>1}^{\mu+1}=:g_{i>1}. \tag{21}\]
We are left with
\[\operatorname{Tr}(\tilde{\rho}_{B}^{n})=\sum_{a^{\mu},g_{i>1}}1=\left|G\right| ^{n+L-1}. \tag{22}\]
The Renyi EE is then
\[S_{A}^{(n)}=S_{B}^{(n)}=\frac{1}{1-n}\log\left[\frac{\operatorname{Tr}(\tilde {\rho}_{B}^{n})}{\operatorname{Tr}(\tilde{\rho}_{B})^{n}}\right]=(L-1)\log|G|. \tag{23}\]
We can identify the \(L\log|G|\) term as the "area-law" term proportional to the length of the partition interface, and identify \(-\log|G|\) as the universal topological EE [10; 11]. It follows that \(\mathcal{D}=|G|\).
Our calculation above essentially follows that in Ref. [38]. The same result is also obtained in Ref. [26] with different approaches.
Figure 4: Holonomies for the inside and outside of a disk.
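The flat entanglement spectrum behind this result, and hence the \(n\)-independence of the Renyi EE, can be verified numerically; the following is our own minimal sketch of the state in Eq. 18 for \(G=\mathbb{Z}_{2}\), with the internal spins \(p,q\) dropped:

```python
import numpy as np
from itertools import product

L = 3                                          # G = Z2 = {0,1}, addition mod 2
dim = 2 ** L
M = np.zeros((dim, dim))                       # rows: g_i (side A), cols: h_i (side B)
for g in product((0, 1), repeat=L):
    for a in (0, 1):                           # h_i = g_i a, here g_i + a mod 2
        h = tuple((gi + a) % 2 for gi in g)
        M[int(''.join(map(str, g)), 2), int(''.join(map(str, h)), 2)] = 1
M /= np.linalg.norm(M)
rho_B = M.T @ M                                # reduced density matrix on B
p = np.linalg.eigvalsh(rho_B)
p = p[p > 1e-12]
print(np.allclose(p, p[0]))                    # flat spectrum: all Renyi EEs agree
print(-np.sum(p * np.log(p)), (L - 1) * np.log(2))   # both equal (L-1) log|G|
```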
### EEs on a Torus
Let us now tackle the real problem. Recall that we need to compute the mutual information between two disjoint annulus regions of a torus. To this end, we need to compute the EEs both of a single annulus, and of two annuli. Let us then imagine partitioning the torus into \(2N\) annuli, denoted by \(T_{1},T_{2},\cdots,T_{2N}\). We will be interested only in the cases \(N=1\) and \(N=2\), but it will be convenient to keep a unified notation.
We define holonomy bases separately for the \(2N\) regions. In Fig. 5, we show our holonomy variables for the case of \(N=2\), and generalizations to other values of \(N\) should be clear. We denote by \(v_{I}\) with \(I\in\{1,2,\cdots,2N\}\) the base point in the region \(T_{I}\). From each \(v_{I}\), there are holonomies \(g_{I,m}\) and \(h_{I,m}\) to the interfaces with \(T_{I-1}\) and \(T_{I+1}\), respectively. We assume the numbers of vertices on all the interface circles are the same, denoted by \(L\), just for simplicity of notation. There is also a holonomy \(k_{I}\) around a noncontractible loop in each region. As in the previous example, internal vertices will not contribute to the EE, so we have simply omitted them. We draw a loop \(\mathcal{C}\) that is based at \(v_{1}\) and passes through all the other \(v_{I}\). The comb-like support of the \(A_{\mathcal{C}}(g)\) operators associated with the loop is also plotted. As we have shown in the main text, the special ground state \(\ket{\Omega}\) we consider has trivial holonomy around \(\mathcal{C}\), and satisfies \(A_{\mathcal{C}}\ket{\Omega}=\ket{\Omega}\).
Let us start from some holonomy configuration \(\ket{g_{I,m};h_{I,m};k_{I}}\). We would like to impose \(B_{f}=1\) for all faces \(f\), or equivalently, every contractible loop has trivial holonomy. Then \(g_{I+1,m}^{-1}h_{I,m}\) should be independent of \(m\), and we will denote this quantity by \(a_{I}\). In other words, \(h_{I,m}=g_{I+1,m}a_{I}\). The \(B_{f}=1\) conditions also imply \(k_{I}a_{I}^{-1}k_{I+1}^{-1}a_{I}=1\). We can thus write
\[k_{I+1}=a_{I}a_{I-1}\cdots a_{1}k_{1}a_{1}^{-1}\cdots a_{I-1}^{-1}a_{I}^{-1}. \tag{101}\]
The condition of trivial holonomy around \(\mathcal{C}\) implies that
\[a_{2N}a_{2N-1}\cdots a_{1}=1. \tag{102}\]
To obtain the desired ground state \(\ket{\Omega}\), we shall apply the projectors \(A_{\mathcal{C}}\), \(A_{I,m}\), and \(A_{v_{I}}\) to \(\ket{g_{I,m};h_{I,m};k_{I}}\), where \(A_{I,m}\) is the local gauge invariance projector acting on the \(m\)-th vertex of the \(T_{I}T_{I+1}\) interface (affecting \(h_{I,m}\) and \(g_{I+1,m}\)).
We can choose the paths for \(g_{I,m}\) and \(h_{I,m}\) to have no overlap with the support of \(A_{\mathcal{C}}(z)\) operators. The same holds for paths from \(v_{I}\) to other internal vertices, which we have omitted. As a result, \(A_{\mathcal{C}}(z)\) will only affect \(k_{I}\), and its action is more explicitly given by
\[k_{I}\mapsto (a_{I-1}\cdots a_{1}za_{1}^{-1}\cdots a_{I-1}^{-1})k_{I} \tag{103}\] \[=a_{I-1}\cdots a_{1}(zk_{1})a_{1}^{-1}\cdots a_{I-1}^{-1}. \tag{104}\]
Hence, after the action of \(A_{\mathcal{C}}\), the state \(\ket{g_{I,m};h_{I,m};k_{I}}\) becomes
\[\sum_{k_{1}}\ket{g_{I,m};g_{I+1,m}a_{I};a_{I-1}\cdots a_{1}k_{1}a_{1}^{-1} \cdots a_{I-1}^{-1}}, \tag{105}\]
up to a normalization factor. The actions of \(A_{I,m}\) effectively induce summations over \(g_{I,m}\) for the above expression. Finally, applying \(A_{v_{I}}\), we obtain
\[\ket{\Omega} \propto\sum_{k_{1},\ g_{I,m},\ h_{I}}\ket{g_{I,m}h_{I}^{-1};g_{I+ 1,m}a_{I}h_{I}^{-1};h_{I}a_{I-1}a_{I-2}\cdots a_{1}k_{1}a_{1}^{-1}\cdots a_{I-2 }^{-1}a_{I-1}^{-1}h_{I}^{-1}} \tag{106}\] \[=\sum_{k_{1},\ g_{I,m},\ h_{I}}\ket{g_{I,m};\,g_{I+1,m}(h_{I+1}a_{I}h_{I}^ {-1});\,(h_{I}a_{I-1}h_{I-1}^{-1})(h_{I-1}a_{I-2}h_{I-2}^{-1})\cdots(h_{2}a_{1}h _{1}^{-1})k_{1}\cdots}, \tag{107}\]
where the second line is obtained by a change of variables: \(g_{I,m}\mapsto g_{I,m}h_{I}\) and \(k_{1}\mapsto h_{1}^{-1}k_{1}h_{1}\). Now observe that the following map within \(G^{\times 2N}\) is one-to-one:
\[(h_{2N},h_{2N-1},\cdots,h_{1})\mapsto(h_{2N}a_{2N-1}h_{2N-1}^{-1},h_{2N-1}a_{ 2N-2}h_{2N-2}^{-1},\cdots,h_{2}a_{1}h_{1}^{-1},h_{1}). \tag{108}\]
Figure 5: Holonomy variables for a tetrapartite torus, and the support (red comb) of the loop operators \(A_{\mathcal{C}}(g)\).
Therefore, summing over the variables on the left is equivalent to summing over those on the right. Using this fact, together with the constraint \(a_{2N}a_{2N-1}\cdots a_{1}=1\), which implies \((h_{1}a_{2N}h_{2N}^{-1})(h_{2N}a_{2N-1}h_{2N-1}^{-1})\cdots(h_{2}a_{1}h_{1}^{-1})=1\), we obtain
\[|\Omega\rangle\propto\sum_{k_{1},\ a_{I},\ g_{I,m}}\delta(a_{2N}a_{2N-1}\cdots a _{1},1)\left|g_{I,m};g_{I+1,m}a_{I};a_{I-1}\cdots a_{1}k_{1}a_{1}^{-1}\cdots a _{I-1}^{-1}\right\rangle, \tag{18}\]
where \(\delta(\cdot,\cdot)\) is the Kronecker delta function.
Partial tracing \(\left|\Omega\right\rangle\left\langle\Omega\right|\) over \(T_{1},T_{3},T_{5},\cdots,T_{2N-1}\), and after a simple change of variables, we find the following reduced density matrix.
\[\rho\propto\sum_{k,\ g_{I,m},\ a_{I},\ b_{I}}\delta(a_{2N}\cdots a _{1},1)\delta(b_{2N}\cdots b_{1},1)\delta(a_{2K-2}\cdots a_{1}ka_{1}^{-1} \cdots a_{2K-2}^{-1},b_{2K-2}\cdots b_{1}kb_{1}^{-1}\cdots b_{2K-2}^{-1})|_{2 \leq K\leq N}\] \[|g_{2K,m}a_{2K-1}^{-1};g_{2K+1,m}a_{2K};a_{2K-1}\cdots a_{1}ka_{1} ^{-1}\cdots a_{2K-1}^{-1}\rangle\left\langle g_{2K,m}b_{2K-1}^{-1};g_{2K+1,m} b_{2K};b_{2K-1}\cdots b_{1}kb_{1}^{-1}\cdots b_{2K-1}^{-1}\right|=:\tilde{\rho}. \tag{19}\]
We have introduced a new index \(K\) which takes values from \(\{1,2,\cdots,N\}\) unless otherwise indicated (like for the third \(\delta\) above). Now apply the following change of variables:
\[a_{2K-1}\mapsto a_{2K-1}g_{2K,1},\quad b_{2K-1}\mapsto b_{2K-1}g _{2K,1},\quad g_{2K,m>1}\mapsto g_{2K,m}g_{2K,1}, \tag{20}\] \[a_{2K}\mapsto g_{2K+1,1}^{-1}a_{2K},\quad b_{2K}\mapsto g_{2K+1, 1}^{-1}b_{2K},\quad g_{2K+1,m>1}\mapsto g_{2K+1,m}g_{2K+1,1}. \tag{21}\]
We obtain
\[\tilde{\rho}= \sum_{k,\ g_{I,m},\ a_{I},\ b_{I}}\delta(g_{1,1}^{-1}a_{2N} a_{2N-1}g_{2N,1}g_{2N-1,1}^{-1}a_{2N-2}\cdots a_{1}g_{2,1},1)\delta(g_{1,1}^{-1}b_{2N }b_{2N-1}g_{2N,1}g_{2N-1,1}^{-1}b_{2N-2}\cdots b_{1}g_{2,1},1)\] \[\delta(g_{2K-1,1}^{-1}a_{2K-2}a_{2K-3}g_{2K-2,1}\cdots a_{1}g_{2,1 }k\cdots,g_{2K-1,1}^{-1}b_{2K-2}b_{2K-3}g_{2K-2,1}\cdots b_{1}g_{2,1}k\cdots)|_ {K>1}\] \[|(1,g_{2K,m>1})a_{2K-1}^{-1};(1,g_{2K+1,m>1})a_{2K};a_{2K-1}g_{2K,1 }g_{2K-1,1}^{-1}a_{2K-2}a_{2K-3}\cdots a_{1}g_{2,1}k\cdots\rangle\] \[\langle(1,g_{2K,m>1})b_{2K-1}^{-1};(1,g_{2K+1,m>1})b_{2K};b_{2K-1} g_{2K,1}g_{2K-1,1}^{-1}b_{2K-2}b_{2K-3}\cdots b_{1}g_{2,1}k\cdots|\,, \tag{22}\]
an incredibly complicated expression! This step is similar to the one we took in the warm-up example before evaluating \(\mathrm{Tr}(\tilde{\rho}_{B}^{n})\). Here and below, we often write terms of the form \(xkx^{-1}\) as \(xk\cdots\) when \(x\) has a long expression. Now we are ready to compute \(\mathrm{Tr}(\tilde{\rho}^{n})\). We again add a superscript \(\mu\) to the dummy variables of the \(\mu\)-th copy of \(\tilde{\rho}\). We see that \(b_{I}^{\mu}=a_{I}^{\mu+1}\). The summations over \(b_{I}^{\mu}\) can then be dropped. In addition, \(g_{I,m>1}^{\mu}=g_{I,m>1}^{\mu+1}\). We can now do the summations over \(g_{I,m>1}^{\mu}\) and get a factor \(|G|^{2N(L-1)}\). We rename \(g_{I,1}^{\mu}\) as \(g_{I}^{\mu}\), and we are left with
\[\mathrm{Tr}(\tilde{\rho}^{n})=\sum_{k^{\mu},g_{I}^{\mu},a_{I}^{ \mu}}|G|^{2N(L-1)}\] \[\delta[(g_{1}^{\mu})^{-1}a_{2N}^{\mu}a_{2N-1}^{\mu}g_{2N}^{\mu}(g _{2N-1}^{\mu})^{-1}a_{2N-2}^{\mu}\cdots a_{1}^{\mu}g_{2}^{\mu},1]\delta[(g_{1} ^{\mu})^{-1}a_{2N}^{\mu+1}a_{2N-1}^{\mu+1}g_{2N}^{\mu}(g_{2N-1}^{\mu})^{-1}a_{ 2N-2}^{\mu+1}\cdots a_{1}^{\mu+1}g_{2}^{\mu},1]\] \[\delta[(g_{2K-1}^{\mu})^{-1}a_{2K-2}^{\mu}a_{2K-3}^{\mu}g_{2K-2}^ {\mu}\cdots a_{1}^{\mu}g_{2}^{\mu}k^{\mu}\cdots,(g_{2K-1}^{\mu})^{-1}a_{2K-2}^ {\mu+1}a_{2K-3}^{\mu+1}g_{2K-2}^{\mu}\cdots a_{1}^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots ]|_{K>1}\] \[\delta[a_{2K-1}^{\mu+1}g_{2K}^{\mu}(g_{2K-1}^{\mu})^{-1}a_{2K-2}^ {\mu+1}a_{2K-3}^{\mu+1}\cdots a_{1}^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots,a_{2K-1}^{ \mu+1}g_{2K}^{\mu+1}(g_{2K-1}^{\mu+1})^{-1}a_{2K-2}^{\mu+1}a_{2K-3}^{\mu+1} \cdots a_{1}^{\mu+1}g_{2}^{\mu+1}k^{\mu+1}\cdots] \tag{23}\]
It seems too hard to proceed with this expression for a general \(N\), so in the following, we will restrict to \(N=1\) and \(N=2\) which are the cases we need.
#### 1. \(N=1\)
When \(N=1\), \(K\) can only be 1, and we find
\[\mathrm{Tr}(\tilde{\rho}^{n})= \sum_{k^{\mu},g_{I}^{\mu},a_{I}^{\mu}}|G|^{2L-2}\] \[\delta[(g_{1}^{\mu})^{-1}a_{2}^{\mu}a_{1}^{\mu}g_{2}^{\mu},1] \delta[(g_{1}^{\mu})^{-1}a_{2}^{\mu+1}a_{1}^{\mu+1}g_{2}^{\mu},1]\] \[\delta(a_{1}^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots,a_{1}^{\mu+1}g_{2}^{ \mu+1}k^{\mu+1}\cdots). \tag{24}\]
Notice that in the last delta function, \(a_{1}^{\mu+1}\) appears in both entries and thus can be removed. We can make a change of variables: \(a_{1}^{\mu}\mapsto(a_{2}^{\mu})^{-1}a_{1}^{\mu}\), and \(k^{\mu}\mapsto(g_{2}^{\mu})^{-1}k^{\mu}g_{2}^{\mu}\). The \(a_{2}^{\mu}\) summation can now be done and gives \(|G|^{n}\). The third \(\delta\) function becomes \(\delta(k^{\mu},k^{\mu+1})\). The summation over \(k^{\mu}\) therefore gives a factor of \(|G|\).
We are left with
\[\text{Tr}(\tilde{\rho}^{n})=\sum_{g_{1}^{\mu},g_{2}^{\mu},a_{1}^{\mu}} |G|^{2L-1+n}\] \[\delta[(g_{1}^{\mu})^{-1}a_{1}^{\mu}g_{2}^{\mu},1]\delta[(g_{1}^{ \mu})^{-1}a_{1}^{\mu+1}g_{2}^{\mu},1]. \tag{101}\]
We can do the summation over \(a_{1}^{\mu}\), and get
\[\text{Tr}(\tilde{\rho}^{n})=\sum_{g_{1}^{\mu},g_{2}^{\mu}}|G|^{2L-1+n}\delta[g_ {1}^{\mu}(g_{2}^{\mu})^{-1},g_{1}^{\mu+1}(g_{2}^{\mu+1})^{-1}] \tag{102}\] \[=\sum_{g_{1}^{\mu}}|G|^{2L-1+2n}\delta[g_{1}^{\mu},g_{1}^{\mu+1}]= |G|^{2L+2n}. \tag{103}\]
The Renyi EE can now be computed:
\[S_{T_{2}}^{(n)}=S_{T_{1}}^{(n)}=\frac{1}{1-n}\log\left[\frac{\text{Tr}(\tilde{ \rho}^{n})}{(\text{Tr}\tilde{\rho})^{n}}\right]=2L\log|G|. \tag{104}\]
Recall that we have assumed the two interface circles between \(T_{1}\) and \(T_{2}\) to have the same length \(L\). In general the \(2L\) factor above should be replaced by the total interface length. We see that the Renyi EE contains only an area-law term, so the topological EE in this case actually vanishes. This is consistent with the field-theoretic result.
#### 2. \(N=2\)
Plugging \(N=2\) and \(K=1,2\) into the general expression for \(\mathrm{Tr}(\tilde{\rho}^{n})\) above, we find
\[\text{Tr}(\tilde{\rho}^{n})=\sum_{k^{\mu},g_{I}^{\mu},a_{I}^{\mu}} |G|^{4(L-1)}\] \[\delta[(g_{1}^{\mu})^{-1}a_{4}^{\mu}a_{3}^{\mu}g_{4}^{\mu}(g_{3} ^{\mu})^{-1}a_{2}^{\mu}a_{1}^{\mu}g_{2}^{\mu},1]\] \[\delta[(g_{1}^{\mu})^{-1}a_{4}^{\mu+1}a_{3}^{\mu+1}g_{4}^{\mu}(g_{3 }^{\mu})^{-1}a_{2}^{\mu+1}a_{1}^{\mu+1}g_{2}^{\mu},1]\] \[\delta[(g_{3}^{\mu})^{-1}a_{2}^{\mu}a_{1}^{\mu}g_{2}^{\mu}k^{\mu }\cdots,(g_{3}^{\mu})^{-1}a_{2}^{\mu+1}a_{1}^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots]\] \[\delta[a_{1}^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots,a_{1}^{\mu+1}g_{2}^{ \mu+1}k^{\mu+1}\cdots]\] \[\delta[a_{3}^{\mu+1}g_{4}^{\mu}(g_{3}^{\mu})^{-1}a_{2}^{\mu+1}a_{1 }^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots,\] \[a_{3}^{\mu+1}g_{4}^{\mu+1}(g_{3}^{\mu+1})^{-1}a_{2}^{\mu+1}a_{1} ^{\mu+1}g_{2}^{\mu+1}k^{\mu+1}\cdots]. \tag{105}\]
The last two delta functions here come from the last delta function of that general expression with \(K=1,2\), respectively. Utilizing the first two delta functions, the last one can be simplified to
\[\delta[(a_{4}^{\mu+1})^{-1}g_{1}^{\mu}k^{\mu}\cdots,(a_{4}^{\mu+1})^{-1}g_{1 }^{\mu+1}k^{\mu+1}\cdots]. \tag{106}\]
Further removing some obvious redundancies in the delta functions, we are left with
\[\text{Tr}(\tilde{\rho}^{n})=\sum_{k^{\mu},g_{1}^{\mu},a_{3}^{\mu}} |G|^{4(L-1)}\] \[\delta[(g_{1}^{\mu})^{-1}a_{4}^{\mu}a_{3}^{\mu}g_{4}^{\mu}(g_{3} ^{\mu})^{-1}a_{2}^{\mu}a_{1}^{\mu}g_{2}^{\mu},1]\] \[\delta[(g_{1}^{\mu})^{-1}a_{4}^{\mu}a_{3}^{\mu+1}g_{4}^{\mu}(g_{3 }^{\mu})^{-1}a_{2}^{\mu+1}a_{1}^{\mu+1}g_{2}^{\mu},1]\] \[\delta[a_{2}^{\mu}a_{1}^{\mu}g_{2}^{\mu}k^{\mu}\cdots,a_{2}^{\mu +1}a_{1}^{\mu+1}g_{2}^{\mu}k^{\mu}\cdots]\] \[\delta[g_{2}^{\mu}k^{\mu}(g_{2}^{\mu})^{-1},g_{2}^{\mu+1}k^{\mu+1 }(g_{2}^{\mu+1})^{-1}]\] \[\delta[g_{1}^{\mu}k^{\mu}(g_{1}^{\mu})^{-1},g_{1}^{\mu+1}k^{\mu+1 }(g_{1}^{\mu+1})^{-1}]. \tag{107}\]
Now we do a change of variables: \(g_{4}^{\mu}\mapsto g_{4}^{\mu}g_{3}^{\mu}\), \(a_{3}^{\mu}\mapsto(a_{4}^{\mu})^{-1}a_{3}^{\mu}\), \(a_{1}^{\mu}\mapsto(a_{2}^{\mu})^{-1}a_{1}^{\mu}\). The summations over \(g_{3}^{\mu}\), \(a_{4}^{\mu}\) and \(a_{2}^{\mu}\) can now be done and give \(|G|^{3n}\). We also do the summation over \(g_{4}^{\mu}\) which reduces the first two delta functions into a single one. We get
\[\text{Tr}(\tilde{\rho}^{n})=|G|^{4(L-1)+3n}\sum_{k^{\mu},g_{1}^{ \mu},g_{2}^{\mu},a_{1}^{\mu},a_{3}^{\mu}}\] \[\delta[a_{1}^{\mu}g_{2}^{\mu}(g_{1}^{\mu})^{-1}a_{3}^{\mu},a_{1 }^{\mu+1}g_{2}^{\mu}(g_{1}^{\mu})^{-1}a_{3}^{\mu+1}]\] \[\delta[a_{1}^{\mu}g_{2}^{\mu}k^{\mu}\cdots,a_{1}^{\mu+1}g_{2}^{ \mu}k^{\mu}\cdots]\] \[\delta[g_{2}^{\mu}k^{\mu}(g_{2}^{\mu})^{-1},g_{2}^{\mu+1}k^{\mu+1 }(g_{2}^{\mu+1})^{-1}]\] \[\delta[g_{1}^{\mu}k^{\mu}(g_{1}^{\mu})^{-1},g_{1}^{\mu+1}k^{\mu+1 }(g_{1}^{\mu+1})^{-1}]. \tag{108}\]
One more change of variables: \(k^{\mu}\mapsto(g_{2}^{\mu})^{-1}k^{\mu}g_{2}^{\mu}\), and \(g_{1}^{\mu}\mapsto g_{1}^{\mu}g_{2}^{\mu}\). The summation over \(g_{2}^{\mu}\) can be done to give \(|G|^{n}\). Renaming \(g_{1}^{\mu}\) as \(g^{\mu}\), we get
\[\text{Tr}(\tilde{\rho}^{n})=|G|^{4(L-1)+4n}\sum_{k^{\mu},g^{\mu},a_{ 1}^{\mu},a_{3}^{\mu}}\] \[\delta[a_{1}^{\mu}(g^{\mu})^{-1}a_{3}^{\mu},a_{1}^{\mu+1}(g^{\mu} )^{-1}a_{3}^{\mu+1}]\] \[\delta[a_{1}^{\mu}k^{\mu}(a_{1}^{\mu})^{-1},a_{1}^{\mu+1}k^{\mu}(a _{1}^{\mu+1})^{-1}]\delta[k^{\mu},k^{\mu+1}]\] \[\delta[g^{\mu}k^{\mu}(g^{\mu})^{-1},g^{\mu+1}k^{\mu+1}(g^{\mu+1})^ {-1}] \tag{109}\] \[=|G|^{4(L-1)+4n}\sum_{k,g^{\mu},a_{1}^{\mu},a_{3}^{\mu}}\] \[\delta[a_{1}^{\mu}(g^{\mu})^{-1}a_{3}^{\mu},a_{1}^{\mu+1}(g^{\mu} )^{-1}a_{3}^{\mu+1}]\] \[\delta[a_{1}^{\mu}k(a_{1}^{\mu})^{-1},a_{1}^{\mu+1}k(a_{1}^{\mu+1})^ {-1}]\] \[\delta[g^{\mu}k(g^{\mu})^{-1},g^{\mu+1}k(g^{\mu+1})^{-1}]. \tag{110}\]
The last delta function is equivalent to
\[\delta[g^{1}k(g^{1})^{-1},g^{\mu}k(g^{\mu})^{-1}]|_{\mu>1}\] \[= \delta[k,(g^{1})^{-1}g^{\mu}k\big{(}(g^{1})^{-1}g^{\mu}\big{)}^{-1}]|_{\mu>1}.\]
We can thus change variables \(g^{\mu}\mapsto g^{1}g^{\mu}\) for \(\mu>1\) and \(a_{3}^{\mu}\mapsto g^{1}a_{3}^{\mu}\), after which \(g^{1}\) becomes free, and the summation over it gives a factor of \(|G|\). We get
\[\mathrm{Tr}(\tilde{\rho}^{n})=|G|^{4L-3+4n}\sum_{k,g^{\mu>1},a_{1}^{ \mu},a_{3}^{\mu}}\] \[\delta(a_{1}^{1}a_{3}^{1},a_{1}^{2}a_{3}^{2})\delta[a_{1}^{\mu}(g ^{\mu})^{-1}a_{3}^{\mu},a_{1}^{\mu+1}(g^{\mu})^{-1}a_{3}^{\mu+1}]|_{\mu>1}\] \[\delta[a_{1}^{\mu}k(a_{1}^{\mu})^{-1},a_{1}^{\mu+1}k(a_{1}^{\mu+1 })^{-1}]\] \[\delta[k,g^{\mu}k(g^{\mu})^{-1}]|_{\mu>1}. \tag{104}\]
Notice that although the second last delta function applies to all \(\mu\), only \(n-1\) values of \(\mu\) give independent constraints. Hence, we can restrict that delta function to \(2\leq\mu\leq n\). We further rewrite
\[\mathrm{Tr}(\tilde{\rho}^{n})=|G|^{4L-3+4n}\sum_{k,g^{\mu>1},a_{1}^ {\mu},a_{3}^{\mu}}\] \[\delta[(a_{1}^{2})^{-1}a_{1}^{1}a_{3}^{(2}a_{3}^{-1},1]\] \[\delta[(g^{\mu})^{-1},(a_{1}^{\mu})^{-1}a_{1}^{\mu+1}(g^{\mu})^{ -1}a_{3}^{\mu+1}(a_{3}^{\mu})^{-1}]|_{\mu>1}\] \[\delta[k,(a_{1}^{\mu})^{-1}a_{1}^{\mu+1}k(a_{1}^{\mu+1})^{-1}(a_{ 1}^{\mu})]|_{\mu>1}\] \[\delta[k,g^{\mu}k(g^{\mu})^{-1}]|_{\mu>1}. \tag{105}\]
Now define some variables:
\[(x^{1},x^{2},x^{3},\cdots,x^{n})=\] \[[a_{1}^{2},(a_{1}^{2})^{-1}a_{1}^{3},(a_{1}^{3})^{-1}a_{1}^{4}, \cdots,(a_{1}^{n})^{-1}a_{1}^{1}], \tag{106}\] \[(y^{1},y^{2},y^{3},\cdots,y^{n})=\] \[[a_{3}^{2},a_{3}^{3}(a_{3}^{2})^{-1},a_{3}^{4}(a_{3}^{3})^{-1}, \cdots,a_{3}^{1}(a_{3}^{n})^{-1}]. \tag{107}\]
The maps from \(a_{1}^{\mu},a_{3}^{\mu}\) to \(x^{\mu},y^{\mu}\) are one-to-one, so we can change the summation variables to \(x^{\mu}\) and \(y^{\mu}\). The summations over \(x^{1}\) and \(y^{1}\) can then be done, giving \(|G|^{2}\). We get
\[\mathrm{Tr}(\tilde{\rho}^{n})=|G|^{4L-1+4n}\sum_{k,g^{\mu>1},x^{ \mu>1},y^{\mu>1}}\] \[\delta(x^{2}x^{3}\cdots x^{n}y^{n}\cdots y^{3}y^{2},1)\] \[\delta[(g^{\mu})^{-1},x^{\mu}(g^{\mu})^{-1}y^{\mu}]|_{\mu>1}\] \[\delta[k,x^{\mu}k(x^{\mu})^{-1}]|_{\mu>1}\delta[k,g^{\mu}k(g^{\mu })^{-1}]|_{\mu>1}. \tag{108}\]
Using the second delta function, we find \((y^{\mu})^{-1}=g^{\mu}x^{\mu}(g^{\mu})^{-1}\). We then have
\[\mathrm{Tr}(\tilde{\rho}^{n})=|G|^{4L-1+4n}\sum_{k,g^{\mu>1},x^{ \mu>1}}\] \[\delta[x^{2}x^{3}\cdots x^{n},g^{2}x^{2}(g^{2})^{-1}g^{3}x^{3}(g^{ 3})^{-1}\cdots g^{n}x^{n}(g^{n})^{-1}]\] \[\delta[k,x^{\mu}k(x^{\mu})^{-1}]|_{\mu>1}\delta[k,g^{\mu}k(g^{\mu })^{-1}]|_{\mu>1}. \tag{109}\]
The last two delta functions imply that \(x^{\mu}\) and \(g^{\mu}\) both belong to the centralizer of \(k\), namely \(Z(k):=\{g\in G|gk=kg\}\). We obtain the result
\[\mathrm{Tr}(\tilde{\rho}^{n})=|G|^{4L-1+4n}\sum_{k\in G}\ \sum_{g^{\mu>1},x^{\mu>1}\in Z(k)}\] \[\delta[x^{2}x^{3}\cdots x^{n},g^{2}x^{2}(g^{2})^{-1}g^{3}x^{3}(g^ {3})^{-1}\cdots g^{n}x^{n}(g^{n})^{-1}]. \tag{110}\]
We can no longer simplify this expression just by changing variables, but it can be computed using nonabelian Fourier transforms. We summarize the key step as the following lemma.
**Lemma 1**.: _Let \(H\) be a finite group, \(z_{i}\) and \(h_{i}\) with \(i=1,2,3,\cdots,r\) be group elements. Then,_
\[\sum_{h_{i},z_{i}\in H}\delta[z_{1}z_{2}\cdots z_{r},(h_{1}z_{1}h _{1}^{-1})(h_{2}z_{2}h_{2}^{-1})\cdots(h_{r}z_{r}h_{r}^{-1})]\] \[=|H|^{2r-1}\sum_{\mu\in\mathrm{Irrep}(H)}d_{\mu}^{2-2r}, \tag{111}\]
_where \(\mathrm{Irrep}(H)\) is the set of irreducible representations of \(H\), and \(d_{\mu}\) is the dimension of the representation \(\mu\)._
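Before the proof, the lemma can be checked by brute force on a small nonabelian group; here is our own quick test for \(H=S_{3}\) and \(r=2\), whose irrep dimensions are \(1,1,2\):

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))            # group product
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))    # group inverse

r, e = 2, (0, 1, 2)
count = 0
for h in product(S3, repeat=r):
    for z in product(S3, repeat=r):
        lhs = rhs = e
        for i in range(r):
            lhs = mul(lhs, z[i])                               # z1 z2 ... zr
            rhs = mul(rhs, mul(h[i], mul(z[i], inv(h[i]))))    # conjugated product
        count += (lhs == rhs)

predicted = 6 ** (2 * r - 1) * sum(d ** (2 - 2 * r) for d in (1, 1, 2))
print(count, predicted)   # both equal 216 * 2.25 = 486
```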
Proof.: Denote by \(L^{2}(H)\) the Hilbert space generated by the orthonormal basis \(\{|h\rangle\,|h\in H\}\). Another orthonormal basis of \(L^{2}(H)\) consists of the following states.
\[|\mu;a,b\rangle=\sqrt{\frac{d_{\mu}}{|H|}}\sum_{h\in H}\bar{D}_{ab}^{\mu}(h) \,|h\rangle\,. \tag{112}\]
Here, \(\mu\in\mathrm{Irrep}(H)\), \(a,b\in\{1,2,\cdots,d_{\mu}\}\), \(D^{\mu}(h)\) is the matrix for \(h\) in the representation \(\mu\), and \(\bar{D}^{\mu}(h)\) is the complex conjugate of \(D^{\mu}(h)\). The orthonormality of this new basis lies in Schur's orthogonality relation:
\[\sum_{h\in H}\bar{D}_{ab}^{\mu}(h)D_{cd}^{\nu}(h)=\frac{|H|}{d_{\mu}}\delta_{ \mu\nu}\delta_{ac}\delta_{bd}. \tag{113}\]
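As a quick numerical check of this relation (ours), one can build the real two-dimensional standard irrep of \(S_{3}\) from permutation matrices restricted to the sum-zero plane:

```python
import numpy as np
from itertools import permutations

S3 = list(permutations(range(3)))
def pmat(p):
    M = np.zeros((3, 3))
    for i in range(3):
        M[p[i], i] = 1.0
    return M

# Orthonormal basis of the sum-zero plane carries the 2-dim standard irrep.
P = np.vstack([np.array([1., -1., 0.]) / np.sqrt(2),
               np.array([1., 1., -2.]) / np.sqrt(6)])
D = {g: P @ pmat(g) @ P.T for g in S3}        # 2x2 representation matrices (real)

acc = sum(np.einsum('ab,cd->abcd', D[g], D[g]) for g in S3)   # real rep: conj = id
target = (6 / 2) * np.einsum('ac,bd->abcd', np.eye(2), np.eye(2))
print(np.allclose(acc, target))               # True: (|H|/d_mu) delta_ac delta_bd
```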
The inverse map is given by
\[|h\rangle=\sum_{\mu\in\mathrm{Irrep}(H)}\sum_{a,b=1}^{d_{\mu}}\sqrt{\frac{d_{\mu }}{|H|}}D_{ab}^{\mu}(h)\,|\mu;a,b\rangle\,. \tag{114}\]
This is called the nonabelian Fourier transform.
Denote by \(\mathcal{I}\) the summation in the statement of the lemma. We can rewrite the delta function there as an inner product in \(L^{2}(H)\):
\[\mathcal{I}=\sum_{h_{i},z_{i}\in H}\left\langle(h_{1}z_{1}h_{1}^{-1})\cdots(h_{r }z_{r}h_{r}^{-1})|z_{1}\cdots z_{r}\right\rangle. \tag{115}\]
Applying nonabelian Fourier transforms,
\[\mathcal{I}=\sum_{h_{i},z_{i}}\sum_{\mu}\sum_{a,b=1}^{d_{\mu}}\frac{d_{\mu}}{|H| }\bar{D}_{ab}^{\mu}(h_{1}z_{1}h_{1}^{-1}\cdots)D_{ab}^{\mu}(z_{1}z_{2}\cdots z _{r}). \tag{116}\]
It is now useful to introduce a diagrammatic language: each matrix element \(D_{ab}^{\mu}(h)\) is drawn as a box labeled \(D^{\mu}\) carrying the group element \(h\), with external legs \(a\) and \(b\), and similarly for \(\bar{D}_{ab}^{\mu}(h)\) with a \(\bar{D}^{\mu}\) box. Joining legs corresponds to matrix multiplication, and a closed loop of legs denotes a trace.
The orthogonality relation, Eq. 113, then takes a simple diagrammatic form: summing a \(\bar{D}^{\mu}(h)\) box and a \(D^{\nu}(h)\) box over \(h\) removes the two boxes, connects the corresponding external legs, and produces a factor \(\frac{|H|}{d_{\mu}}\delta_{\mu\nu}\).
Using this diagrammatic language, and the fact that the \(D^{\mu}\) matrices satisfy the group multiplication law, we can carry out the summations over \(z_{i}\): each of the \(r\) summations produces a factor \(|H|/d_{\mu}\) and connects legs according to the orthogonality relation. What remains of \(\sum_{z_{i}}\bar{D}_{ab}^{\mu}(h_{1}z_{1}h_{1}^{-1}\cdots)D_{ab}^{\mu}(z_{1}z_{2}\cdots z_{r})\) is the prefactor \((|H|/d_{\mu})^{r}\) multiplying a closed network of \(\bar{D}^{\mu}\) boxes carrying \(h_{1},h_{1}^{-1},h_{2},h_{2}^{-1},\cdots,h_{r},h_{r}^{-1}\).
Since all the representations are unitary, we have \(\bar{D}^{\mu}_{ab}(h^{-1})=D^{\mu}_{ba}(h)\). This implies, for example,
that a \(\bar{D}^{\mu}(h^{-1})\) box can be replaced by a \(D^{\mu}(h)\) box with its two legs exchanged, so that each pair \(h_{i}\), \(h_{i}^{-1}\) in the network appears in exactly the form required by the orthogonality relation.
We can now do the summations over \(h_{i}\), and find
\[\sum_{h_{i}}\sum_{z_{i}}\bar{D}^{\mu}_{ab}(h_{1}z_{1}h_{1}^{-1}\cdots)D^{\mu}_{ab}(z_{1}z_{2}\cdots z_{r})=\left(\frac{|H|}{d_{\mu}}\right)^{2r}d_{\mu}, \tag{105}\]
where the summation over the external indices \(a,b\) is implied: each of the \(r\) summations over \(h_{i}\) contributes another factor of \(|H|/d_{\mu}\), and the fully contracted network collapses to a single closed loop, whose value is the trace of the identity, \(d_{\mu}\).
The lemma can be proved by plugging this result into the expression for \(\mathcal{I}\).
We can now use this lemma to compute \(\operatorname{Tr}(\tilde{\rho}^{n})\), by identifying \(r\) with \(n-1\) and \(H\) with \(Z(k)\). Suppose \(k\) belongs to the conjugacy class \(C\). Then \(Z(k)\) is isomorphic to \(Z_{C}\) which is the centralizer of some arbitrary representative of \(C\). We then find
\[\operatorname{Tr}(\tilde{\rho}^{n})=|G|^{4L+6n-4}\sum_{(C,\mu)}\left(|C|d_{\mu }\right)^{4-2n}, \tag{106}\]
where \(\mu\in\operatorname{Irrep}(Z_{C})\), and we have used \(|G|=|C||Z_{C}|\). In particular, \(\operatorname{Tr}\tilde{\rho}=|G|^{4L+4}\), where we used \(\sum_{\mu}d_{\mu}^{2}=|Z_{C}|\). We can then obtain the Renyi entropy
\[S^{(n)}_{\mathcal{T}_{2}\cup\mathcal{T}_{4}} =S^{(n)}_{\mathcal{T}_{1}\cup\mathcal{T}_{3}}=\frac{1}{1-n}\log \left[\frac{\operatorname{Tr}(\tilde{\rho}^{n})}{(\operatorname{Tr}\tilde{ \rho})^{n}}\right] \tag{107}\] \[=4L\log|G|+\frac{1}{1-n}\log\left[\sum_{(C,\mu)}\left(\frac{d_{( C,\mu)}}{|G|}\right)^{4-2n}\right], \tag{108}\]
where \(d_{(C,\mu)}:=|C|d_{\mu}\). Recall again that we have assumed the four \(T_{i}T_{i+1}\) interface circles to have the same length \(L\). In general the \(4L\) factor here should be replaced with the total interface length.
Using the previous result for \(N=1\), we get
\[I^{(n)}(T_{1},T_{3})=\frac{1}{n-1}\log\left[\sum_{(C,\mu)}\left(\frac{d_{(C,\mu )}}{|G|}\right)^{4-2n}\right], \tag{109}\]
suggesting that \(d_{(C,\mu)}\) are nothing but the anyon quantum dimensions.
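As a consistency check (ours), plugging the \(D(S_{3})\) dimensions \(d_{(C,\mu)}\) from the example given earlier into this expression yields an explicitly \(n\)-dependent mutual information, as expected for a nonabelian TO:

```python
import numpy as np

ds = [1, 1, 2, 3, 3, 2, 2, 2]                       # d_(C,mu) for G = S3
assert sum(d * d for d in ds) == 36                 # D^2 = |G|^2, so D = 6
I = lambda n: np.log(sum((d / 6) ** (4 - 2 * n) for d in ds)) / (n - 1)
print([round(I(n), 4) for n in (2, 3, 4)])          # varies with n: nonabelian
```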
|
2303.00588 | Coexistence of single particle and collective excitation in $^{61}$Ni | The high spin states in $^{61}$Ni have been studied using the fusion evaporation
reaction $^{50}$Ti($^{14}$C,3n)$^{61}$Ni at an incident beam energy of 40 MeV. A
Compton suppressed multi-HPGe detector setup, consisting of six Clover
detectors and three single crystal HPGe detectors was used to detect the
de-exciting $\gamma$ rays from the excited states. The level scheme has been
extended up to an excitation energy of 12.8 MeV and a tentative J$^{\pi}$ =
35/2$^+$. The low-lying negative parity levels are found to be generated by
single particle excitation within the fp shell and also excitations to the
g$_{9/2}$ orbitals as explained well with shell model calculations using the
GXPF1Br+$V_{MU}$ (modified) interaction. Two rotational structures of regular E2
sequences with small to moderate axial deformation have been established at
higher excitation energy. Most interestingly, two sequences of M1 transitions
are reported for the first time and described as magnetic rotational bands. The
shears mechanism for both the bands can be described satisfactorily by the
geometrical model. The shell model calculation involving the cross shell
excitation beyond the fp shell well reproduces the M1 and E2 sequences. The
shell model predicted B(M1) values for the magnetic rotational band B1 show the
decreasing trend with spin as expected with closing of the shears. | Soumik Bhattacharya, Vandana Tripathi, E. Rubino, Samuel Ajayi, L. T. Baby, C. Benetti, R. S. Lubna, S. L. Tabor, J. Döring, Y. Utsuno, N. Shimizu, J. M. Almond, G. Mukherjee | 2023-03-01T15:29:11Z | http://arxiv.org/abs/2303.00588v1 | # Coexistence of single particle and collective excitation in \({}^{61}\)Ni
###### Abstract
The high spin states in \({}^{61}\)Ni have been studied using the fusion evaporation reaction \({}^{50}\)Ti(\({}^{14}\)C,3n)\({}^{61}\)Ni at an incident beam energy of 40 MeV. A Compton suppressed multi-HPGe detector setup, consisting of six Clover detectors and three single crystal HPGe detectors, was used to detect the de-exciting \(\gamma\) rays from the excited states. The level scheme has been extended up to an excitation energy of 12.8 MeV and a tentative \(J^{\pi}=35/2^{+}\). The low-lying negative parity levels are found to be generated by single particle excitation within the \(fp\) shell and also excitations to the \(g_{9/2}\) orbitals as explained well with shell model calculations using the GXPF1Br+\(V_{MU}\)(modified) interaction. Two rotational structures of regular E2 sequences with small to moderate axial deformation have been established at higher excitation energy. Most interestingly, two sequences of M1 transitions are reported for the first time and described as magnetic rotational bands. The shears mechanism for both the bands can be described satisfactorily by the geometrical model. The shell model calculation involving the cross shell excitation beyond the \(fp\) shell well reproduces the M1 and E2 sequences. The shell model predicted B(M1) values for the magnetic rotational band B1 show the decreasing trend with spin as expected with closing of the shears.
## I Introduction
Neutron-rich nuclei with A\(\sim\)60 have attracted the attention of experimental and theoretical studies in the last few decades because of the evolution of shell structure with the \(N/Z\) value in this region. On one hand, from the systematic studies of \(2^{+}_{1}\) energies and B(E2:\(2^{+}_{1}\)\(\rightarrow\)\(0^{+}_{1}\)) strengths, a new sub-shell closure has been established at \(N=32\) in \({}^{52}\)Ca [1], \({}^{54}\)Ti [2], and \({}^{56}\)Cr[3]. Contrary to that, the decreasing trend of the \(2^{+}_{1}\) and \(4^{+}_{1}\) energies towards \(N=40\)[4; 5], for both the Fe and Cr isotopic chains, points to the diminishing of the \(N=40\) sub-shell closure for mid-shell nuclei between the Z=20 and 28 major shell-closures. For Ni isotopes, the \(Z=28\) major shell closure stabilizes the spherical shapes near the ground state between \(N=28\) and \(N=40\), and the low and moderate energy states can be well reproduced by shell-model calculations involving the \(\nu\)p\({}_{3/2}\), \(\nu\)f\({}_{5/2}\), \(\nu\)p\({}_{1/2}\) and \(\nu\)g\({}_{9/2}\) orbitals [6; 7]. Though the N=40 subshell closure is visible in \({}^{68}\)Ni [8], the energy gap between the 1p\({}_{1/2}\) and 0g\({}_{9/2}\) orbitals is estimated to be small enough to allow the cross-shell excitations in \({}^{68,67}\)Ni, which are well reproduced by shell-model calculations [9; 6]. The involvement of the shape driving \(\nu\)0g\({}_{9/2}\) orbital induces collectivity in lighter Ni isotopes, and rotational bands have been observed in \({}^{56-59}\)Ni [10; 11; 12; 13]. Furthermore, magnetic rotational bands and super-deformed bands have also been reported in \({}^{60}\)Ni[14] and \({}^{63}\)Ni [15], respectively. Recently, excited states in \({}^{61}\)Ni have been studied experimentally [16] via a \({}^{7}\)Li induced reaction and the energy levels could be explained fairly well within the scope of shell-model calculation involving cross-shell excitations to the 0g\({}_{9/2}\) single-particle orbital.
The odd-mass Ni isotopes with neutron number slightly over \(N=28\) are ideal cases to address the role of neutron g\({}_{9/2}\) excitations at high spin. In the investigation of \({}^{59}\)Ni [13] four rotational bands up to a probable spin of (43/2) with terminating properties were found. Based on configuration-dependent cranked Nilsson-Strutinsky calculations, two bands maintained significant collectivity until the maximum spin was reached. In \({}^{63}\)Ni three collective bands up to spin and parity of (57/2\({}^{+}\)) were identified [15]. Model calculations of cranked Nilsson-Strutinsky type indicate that in \({}^{63}\)Ni collective excitations sustain at moderate and high spins. The shears mechanism in \({}^{60}\)Ni has been described microscopically with the self-consistent tilted axis cranking relativistic mean-field theory [17] and via a geometrical model in Ref. [18]. Along with \({}^{60}\)Ni [14], a magnetic rotational band has also been reported in \({}^{62}\)Co [19]. For both nuclei, the occupation of a proton hole in the high-j \(\pi\)f\({}_{7/2}\) orbital and a neutron particle in the \(\nu\)g\({}_{9/2}\) orbital is described as forming the shear arms. Recently, adiabatic and configuration-fixed constrained triaxial CDFT calculations searched for possible wobbling motions in \({}^{59-63}\)Ni [20]. They predicted wobbling motion in \({}^{59,61,62}\)Ni; especially for \({}^{62}\)Ni, they pointed out Band D1 as a possible wobbling band involving the \(\nu\)g\({}_{9/2}^{2}\) configuration. With protons and neutrons lying near the same Fermi level, the intermediate odd-mass \({}^{61}\)Ni emerges as a promising ground to search for similar excitations. For the \({}^{61}\)Ni isotope, collective bands at high spins were not known according to the most recent study using the \({}^{59}\)Co(\({}^{7}\)Li, \(\alpha\)n) reaction in combination with a modern array of Ge detectors [16].
The aim of the present work is to populate high-spin states in \({}^{61}\)Ni by using a heavy-ion induced reaction, and to search for possible collective excitations at high spins. Since it is difficult to reach high spin states in \({}^{61}\)Ni with stable target-beam combinations, we selected the combination of a \({}^{50}\)Ti target and a long-lived radioactive \({}^{14}\)C beam. \({}^{61}\)Ni was produced via the 3n-evaporation channel with a very high yield. Excited states in \({}^{61}\)Ni were studied previously using the \({}^{58}\)Fe(\(\alpha\), n) and \({}^{53}\)Cr(\({}^{11}\)B, p2n) reactions [21; 22], and the \({}^{48}\)Ca(\({}^{18}\)O, 5n) reaction [23] where levels up to 5316 keV were identified, including a 799 keV transition depopulating the (17/2\({}^{+}\)) level at 4818 keV to the 15/2\({}^{+}\) level at 4019 keV, which decays further mainly via a 593 keV transition to the 13/2\({}^{-}\) level at 3426 keV. In the most recent study [16], using the reaction \({}^{59}\)Co(\({}^{7}\)Li, \(\alpha\)n), states were identified up to a possible spin of 17/2\({}^{+}\) at 6734 keV. In the current work, based on the coincidences observed, a complex level scheme has been constructed, including many newly observed transitions.
## II Experiment
Excited states of \({}^{61}\)Ni were populated using the \({}^{50}\)Ti(\({}^{14}\)C, 3n) fusion-evaporation reaction at 40 MeV incident energy at the John D. Fox Superconducting Accelerator Facility, Florida State University (FSU). The beam energy was chosen to maximize the population of the 3n evaporation channel. An isotopically enriched (90%) \({}^{50}\)Ti foil of 500 \(\mu\)g/cm\({}^{2}\) thickness was used as the target. The decaying \(\gamma\) rays from the excited states were detected using the FSU \(\gamma\)-ray array, consisting of six Compton-suppressed Clover HPGe detectors and three single-crystal detectors. Three Clover detectors were placed at 90\({}^{\circ}\), two Clovers at 135\({}^{\circ}\) and one detector at 45\({}^{\circ}\) with respect to the beam direction. One single-crystal detector was placed at 90\({}^{\circ}\) relative to the beam axis and two single-crystal detectors were placed at 135\({}^{\circ}\) with respect to the beam direction. The pre-amplifier signals from the Clover detectors, single-crystal HPGe, and BGO detectors were processed using a PIXIE-based digital data acquisition system in two-fold coincidence mode for the \(\gamma\)-\(\gamma\) coincidence measurements. The efficiency and energy calibrations of each detector were carried out using \({}^{133}\)Ba and \({}^{152}\)Eu standard radioactive sources placed at the target position. The calibration and efficiency determination beyond the 1408 keV transition of the \({}^{152}\)Eu source were done using high-energy transitions from a \({}^{56}\)Co source made at FSU using a proton beam.
## III Data Analysis
The time-stamped data were sorted using GNUSCOPE, a spectroscopic software package developed at FSU [24; 25]. A total of \(9.4\times 10^{4}\)\(\gamma-\gamma\) coincidence events have been accumulated from the present data. The event-by-event data were converted into a \(\gamma-\gamma\) coincidence matrix with 1.0 keV/channel dispersion, which was also converted to a Radware [26] compatible matrix to analyze the coincidences between the de-exciting \(\gamma\) rays. An asymmetric matrix was created using the data from the detectors at the backward (145\({}^{\circ}\)) angles on the y-axis and the data from the 90\({}^{\circ}\) angle detectors on the x-axis to determine the Directional Correlation of Oriented states (DCO) ratio [27] for various transitions.
Figure 1: DCO ratio vs polarization asymmetry (\(\Delta_{IPDCO}\)) of some of the known (marked blue) and new transitions (marked red) in \({}^{61}\)Ni, obtained from different quadrupole gates. The dotted lines (green) parallel to the Y-axis correspond to the R\({}_{DCO}\) values for dipole and quadrupole transitions, respectively, in a pure quadrupole (E2) gate, and are shown to guide the eye. The dotted line (violet) parallel to the X-axis is to guide the eye for positive and negative values of \(\Delta_{IPDCO}\), which determine the electric and magnetic transitions, respectively.
The Clover detectors at 90\({}^{\circ}\) were additionally used for the measurement of the Integrated Polarization from Directional Correlation of Oriented states (IPDCO) [28; 29] for assigning parity to the excited states. The deduced DCO ratios and the IPDCO values were used to determine the multipolarities and the electric or magnetic nature of the transitions, leading to J\({}^{\pi}\) assignments of the states wherever possible.
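As a schematic illustration of this sorting step, the Python sketch below histograms time-correlated \((E_{\gamma 1},E_{\gamma 2})\) event pairs into a symmetrized coincidence matrix and projects a gate on a chosen transition. The event list, gate width and helper names are assumptions for illustration only, not the actual GNUSCOPE implementation.

```python
import numpy as np

# Hypothetical time-correlated (E_gamma1, E_gamma2) pairs in keV; in the real
# analysis these come from the time-stamped PIXIE event-by-event data.
event_pairs = np.array([[1177.0, 592.9], [1177.0, 1179.2], [484.1, 941.3]])

edges = np.arange(0.0, 3001.0, 1.0)  # 1.0 keV/channel dispersion, as in the text

# Fill each pair twice, as (E1, E2) and (E2, E1), to obtain a symmetric matrix
e1 = np.concatenate([event_pairs[:, 0], event_pairs[:, 1]])
e2 = np.concatenate([event_pairs[:, 1], event_pairs[:, 0]])
matrix, _, _ = np.histogram2d(e1, e2, bins=[edges, edges])

def gate(matrix, edges, e_gate, half_width=2.0):
    """Project the coincidence spectrum gated on a transition at e_gate (keV)."""
    lo = np.searchsorted(edges, e_gate - half_width)
    hi = np.searchsorted(edges, e_gate + half_width)
    return matrix[lo:hi, :].sum(axis=0)

spectrum_1177 = gate(matrix, edges, 1177.0)  # e.g. the gate shown in Fig. 3
```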
The multipolarities of the transitions belonging to \({}^{61}\)Ni were obtained from the DCO ratios (R\({}_{DCO}\)), defined by
\[R_{DCO}=\frac{I_{\gamma_{1}}\text{ at }\theta_{1}\text{, gated by }\gamma_{2}\text{ at }\theta_{2}}{I_{\gamma_{1}}\text{ at }\theta_{2}\text{, gated by }\gamma_{2}\text{ at }\theta_{1}} \tag{1}\]
The DCO ratio of a transition of unknown multipolarity (\(\gamma_{1}\)) is obtained from the ratio of its intensities at two angles, \(\theta_{1}\) (145\({}^{\circ}\)) and \(\theta_{2}\) (90\({}^{\circ}\)), gated by another transition (\(\gamma_{2}\)) of known multipolarity, as per Eq. 1. For the present experimental set-up, the typical value of R\({}_{DCO}\) for a known quadrupole or dipole transition (\(\gamma_{1}\)) comes out to be 1.0 when gated by a transition of the same multipolarity (\(\gamma_{2}\)). When gated by a known stretched and pure (mixing ratio \(\delta\sim 0\)) quadrupole (dipole) transition (\(\gamma_{2}\)), the R\({}_{DCO}\) value comes out to be close to 0.6 (1.7) for a dipole (quadrupole) transition.
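A minimal sketch of how Eq. 1 and these reference values can be applied in practice is given below; the function names and the nearest-value classification rule are illustrative assumptions rather than part of the published analysis.

```python
def dco_ratio(i_gamma1_at_145_gated_90, i_gamma1_at_90_gated_145):
    """R_DCO as defined in Eq. 1."""
    return i_gamma1_at_145_gated_90 / i_gamma1_at_90_gated_145

def classify_multipolarity(r_dco, gate_multipolarity):
    """Assign dipole/quadrupole character from the reference values quoted in
    the text: ~1.0 for the same multipolarity as the gate, ~0.6 for a dipole
    in a pure quadrupole gate, ~1.7 for a quadrupole in a pure dipole gate."""
    reference = {
        ("quadrupole", "dipole"): 0.6,
        ("quadrupole", "quadrupole"): 1.0,
        ("dipole", "quadrupole"): 1.7,
        ("dipole", "dipole"): 1.0,
    }
    candidates = {k: v for k, v in reference.items() if k[0] == gate_multipolarity}
    best = min(candidates, key=lambda k: abs(candidates[k] - r_dco))
    return best[1]

# A transition with R_DCO ~ 0.65 in an E2 (quadrupole) gate is a dipole:
print(classify_multipolarity(0.65, "quadrupole"))  # -> dipole
```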
The parities of most of the excited states were determined by the polarization asymmetry measurement, using the relative intensities of the parallel and perpendicular (with respect to the beam direction) Compton scattering of the emitted \(\gamma\) rays, detected in the corresponding crystals of the Clover HPGe detectors. The 90\({}^{\circ}\) detectors were used for this purpose to maximize the sensitivity of the polarization measurements (\(\Delta_{IPDCO}\)), following the prescription of Refs. [28; 29].
The IPDCO asymmetry parameter (\(\Delta_{IPDCO}\)) is defined as
\[\Delta_{IPDCO}=\frac{a(E_{\gamma})N_{\perp}-N_{\parallel}}{a(E_{\gamma})N_{\perp}+N_{\parallel}} \tag{2}\]
where N\({}_{\parallel}\) and N\({}_{\perp}\) are the total counts of the \(\gamma\)-ray scattered events in the planes parallel and perpendicular to the reaction plane, respectively. Here, a(E\({}_{\gamma}\)) [= \(\frac{N_{\parallel}}{N_{\perp}}\)] is the geometrical correction factor (asymmetry factor) of the detector array; it addresses the asymmetry in the response of the four crystals of a Clover Ge detector and was obtained from the known \(\gamma\) rays of an unpolarized \({}^{152}\)Eu source as a function of \(\gamma\)-ray energy. The values of a(E\({}_{\gamma}\)) for different \(\gamma\)-ray energies are estimated by fitting the data points from the unpolarized source with a linear equation [a(E\({}_{\gamma}\))=a\({}_{0}\)+a\({}_{1}\)E\({}_{\gamma}\)], following a procedure similar to that described in Ref. [30]; the fit parameters are found to be a\({}_{0}\)=0.96385 and a\({}_{1}\)=3.24\(\times\)10\({}^{-5}\). In order to calculate \(\Delta_{IPDCO}\), the data were sorted to consider only the Compton events, and two spectra were made with the total photopeak counts of the parallel/perpendicular scattered events of the three 90\({}^{\circ}\) detectors. From these two spectra, the numbers of counts in the perpendicular (\(N_{\perp}\)) and parallel (\(N_{\parallel}\)) scattering for a given \(\gamma\)-ray were obtained. The \(\Delta_{IPDCO}\) values, whose positive (negative) sign corresponds to an electric (magnetic) transition, are listed in Table 1 for various transitions in \({}^{61}\)Ni. The DCO ratios and the \(\Delta_{IPDCO}\) values of various known and new transitions are also shown in Fig. 1.
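The sketch below evaluates Eq. 2 with the linear asymmetry correction quoted above; the count values passed in the example are hypothetical placeholders.

```python
def ipdco_asymmetry(n_parallel, n_perp, e_gamma, a0=0.96385, a1=3.24e-5):
    """Delta_IPDCO per Eq. 2, with the fitted asymmetry correction
    a(E_gamma) = a0 + a1 * E_gamma from the 152Eu calibration."""
    a = a0 + a1 * e_gamma
    return (a * n_perp - n_parallel) / (a * n_perp + n_parallel)

# Positive values indicate an electric transition, negative values a magnetic
# one; the counts below are illustrative only.
delta = ipdco_asymmetry(n_parallel=1040, n_perp=1180, e_gamma=1588.0)
print("electric" if delta > 0 else "magnetic", round(delta, 3))
```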
## IV Results
The level scheme of \({}^{61}\)Ni deduced from the present coincidence measurement is shown in Fig. 2; it is significantly extended up to a spin of 35/2 and an excitation energy of 12874 keV, with the observation of 76 new transitions and 28 new levels compared to the last published work [16]. All the low-lying known yrast and non-yrast states, with the associated transitions observed in the most recent high-spin study [16], were confirmed with a few exceptions. At higher spin, a sequence of 483 (484 keV in Fig. 2), 562 and 941 keV M1 transitions was reported in Ref. [16]. The presence of those transitions could be verified, but they have been placed differently in the level scheme (Fig. 2) according to the coincidence relationships from the present work. The present work reports the collective structure of \({}^{61}\)Ni for the first time, with the observation of four decay sequences referred to as Bands B1, B2, B3 and B4 in Fig. 2 for the convenience of discussing them. Two of them, Bands B2 and B4, are built from transitions of quadrupole (E2 in this case) nature, whereas in Bands B1 and B3 the states are connected by strong M1 transitions. The level scheme is constructed based on the different coincidence relationships observed and the relative intensities among the transitions. The spins and parities of the levels are determined from the measured R\({}_{DCO}\) and \(\Delta_{IPDCO}\) values of the transitions. The details of the levels and the \(\gamma\)-ray transitions measured in this work, along with the DCO ratios and \(\Delta_{IPDCO}\) values for all the transitions in \({}^{61}\)Ni, are tabulated in Table 1. The low-lying states seen in the present work are found to be of negative parity, with a ground state of \(J^{\pi}=3/2^{-}\). The lowest opposite (positive) parity state, 9/2\({}^{+}\), is observed at 2121 keV. The 1177 keV \(\gamma\) ray was reported to decay from the 11/2\({}^{+}\) level at 3298 keV to the 9/2\({}^{+}\) state at 2121 keV. The coincidence spectrum of the 1177 keV transition shown in Fig. 3 displays almost all the \(\gamma\) rays that were already known from prior work, marked in black. Many new transitions are found to be present in this gate and are marked in red (also with '*') in the figure. Interestingly, the presence of an 1179 keV peak in the 1177 keV gate establishes that it is a doublet with the 1179 keV transition de-exciting the level at 4477 keV. The absence of a 1314 keV transition in Fig. 3 also confirms that the new 1179 keV transition decays to the 3298 keV level, bypassing the 3435 keV level. The inset, panel (c) of Fig. 3, shows the presence of the 134 and 137 keV transitions. The presence of the 1079 keV and
Figure 2: Level scheme of \({}^{61}\)Ni from the present work. The newly found transitions and repositioned transitions are shown in red. The width of each transition arrow represents the corresponding relative intensity. For clarity and easy comprehension of the level scheme, the energy values for the levels and the gamma rays are labeled to the nearest integer. The energy values, accurate to one decimal place, are listed in Table 1 with the corresponding errors.
972 keV peaks also confirms the placement of the 134 keV transition from the 2121 keV 9/2\({}^{+}\) state to the 1987 keV 9/2\({}^{-}\) state.
A representative coincidence spectrum for Band B1 is shown in Fig. 4. This band is found to be connected to the already reported 4477 keV and 4206 keV levels through the 670 and 941 keV connecting \(\gamma\) transitions, respectively. The 384 keV transition is found to be in coincidence with the already known 2635 keV transition decaying from the 4763 keV state and is thus placed on top of the 4763 keV level. The DCO ratio and \(\Delta_{IPDCO}\) value of the 941 keV transition, as tabulated in Table 1, confirm it to be of M1 nature; since this transition decays from the 5147 keV state to the known 4206 keV 13/2\({}^{+}\) state, the spin and parity of the 5147 keV state are fixed as 15/2\({}^{+}\). Band B1 consists of M1 transitions, namely 384 keV, 562 keV, 779 keV, up to the weak 1295 keV transition, which can be seen in the figure along with the other new and known transitions of \({}^{61}\)Ni. The DCO ratios and \(\Delta_{IPDCO}\) values of the transitions belonging to Band B1 are listed in Table 1, and they establish the magnetic dipole nature of the transitions. Only one crossover E2 transition, at 1341 keV, has been tentatively placed, from the 21/2\({}^{+}\) to the 17/2\({}^{+}\) level; no other crossover E2 transitions were identified for this band.
Band B2 is made up of a sequence of E2 transitions. The newly found transitions corresponding to Band B2, in coincidence with the 1588 keV (E2) transition, are shown in Fig. 5. All the transitions (1179, 1502, 1540, 1822 and 1945 keV) belonging to Band B2 can be seen in the 1588 keV gate. This band is connected to Band B3 through the 1963 keV transition, also seen in the 1588 keV coincidence gate. The regular pattern of E2 transitions starts from the 3298 keV level and extends up to the 12874 keV level with a plausible spin of 35/2\({}^{+}\). The \(R_{DCO}\) and \(\Delta_{IPDCO}\) values confirm the spin and parity of the 4477 keV level as 15/2\({}^{+}\). The positive parity of the states in Band B2 is confirmed from the nature of the 1588 and 1502 keV transitions. The DCO ratio values could not be determined for the 1822 keV and 1945 keV transitions due to limited statistics; therefore, the spin-parities of the 10929 keV and 12874 keV levels are only tentatively assigned.
Another sequence of M1 transitions is found to be built on top of the 7679 keV 21/2\({}^{+}\) level. The spin of this level is fixed from the quadrupole nature of the 2523 keV transition, which connects it to the already known 5156 keV level. The corresponding new transitions of Band B3 are shown in the coincidence gate of the 349 keV transition in Fig. 6. The presence of the 2048, 2368, 2523 and 2861 keV transitions in the 349 keV gate confirms the connection of Band B3 with Band B2 and with different single-particle structures. This band extends up to the 11138 keV level, with the sequence of 525, 784, 863 and 938 keV transitions on top of the 349 keV transition, and these transitions show clear coincidences with the 349 keV transition (Fig. 6). The statistics for the 938 keV transition are limited, and we could not determine the \(R_{DCO}\) or polarization asymmetry value for this transition. Thus the spin-parity of the 11138 keV
Figure 3: Coincidence spectra gated by the 1177 keV transition in \({}^{61}\)Ni. The upper panel (a) (lower panel (b)) represents the \(\gamma\) transitions in the 300 keV to 1490 keV (1490 keV to 2620 keV) range. The inset panel (c) shows the presence of the 134 and 137 keV transitions. Transitions marked with '*' and in red are newly placed in the level scheme. The transitions marked with '\(\#\)' are contaminants from the neighbouring channels (mostly \({}^{60}\)Ni).
Figure 4: Coincidence spectra gated by the 484 keV transition corresponding to Band B1 in \({}^{61}\)Ni. The upper panel (a) (lower panel (b)) represents the gamma transitions in the 170 keV to 1200 keV (1200 keV to 2100 keV) range. All the M1 transitions corresponding to Band B1 are shown along with the lower-lying transitions. Transitions marked with '*' and in red are newly placed in the level scheme. The transitions marked with '\(\#\)' are contaminants.
level is tentatively assigned as 31/2\({}^{+}\).
Band B4 in the level scheme of \({}^{61}\)Ni is populated rather weakly up to higher spins. Two of its levels, at 3435 and 5156 keV, were already known from previous work, connected through the two E2 transitions of 1721 and 1314 keV, respectively. The transitions belonging to Band B4 are confirmed by the coincidence with the 1721 keV transition, as shown in Fig. 7. This band is found to continue up to the 29/2\({}^{+}\) 10993 keV level through the sequence of 2111, 1684 and 2042 keV transitions. The spin-parity of the 21/2\({}^{+}\) 7267 keV level is confirmed from the E2 nature of the 2111 keV transition, whereas the spin of the 8951 keV level is fixed as 25/2 from the \(R_{DCO}\) value of the 2217 keV transition connecting it to the previously known 21/2\({}^{+}\) 6734 keV level. Among the three newly placed 2111, 1684 and 2042 keV transitions in Band B4, the nature of the 2042 keV gamma ray cannot be confirmed because it overlaps with the already known 2045 keV transition; thus the spin-parity of the 10993 keV level is tentatively assigned. The spin and parity of the band head of Band B4 is taken as the positive-parity 9/2\({}^{+}\). This E2 sequence is found to be weakly populated beyond spin 17/2\({}^{+}\).
## V Discussion
It is evident from the level scheme in Fig. 2 that, with five valence neutrons outside the \({}^{56}\)Ni doubly magic core, \({}^{61}\)Ni is likely spherical at low spin, and the lower-lying irregular excited states are best described as single-particle excitations. At higher spin, with more particle-hole excitations, collective behavior is expected, and it can indeed be seen in several regular sequences in the level scheme.
### Low energy single particle structure and shell model calculations
The present work confirmed the already established low-energy excited states from the previous studies on \({}^{61}\)Ni, as well as establishing many new negative- and positive-parity states which decay to those well-known states. Since Ni has a magic proton number (Z=28) and has only
Figure 5: Coincidence spectrum gated by the 1588 keV transition showing the \(\gamma\) transitions of Band B2 in \({}^{61}\)Ni. Transitions marked with '*' and in red are newly placed in the level scheme. The transitions marked with '#' are contaminants from the weakly populated neighbouring channels.
Figure 6: Coincidence spectra gated by the 349 keV transition showing (a) the lower-energy region from 450 keV to 1445 keV and (b) the higher-energy region from 1445 keV to 2999 keV, corresponding to the transitions of Band B3 in \({}^{61}\)Ni. Transitions marked with '*' and in red are newly placed in the level scheme. The transitions marked with '#' are contaminants and transitions from the neighbouring channels.
Figure 7: Coincidence spectra gated by the 1721 keV transition showing (a) the lower-energy (100 to 1200 keV) and (b) the higher-energy (1200 to 2550 keV) transitions corresponding to Band B4 in \({}^{61}\)Ni. Transitions marked with '*' and in red are newly placed in the level scheme.
five neutrons outside the N=28 shell, it is within the scope of shell-model calculations to interpret its low-energy structure. To reproduce the experimental low-energy states we used shell-model (SM) calculations involving the available orbitals near the Fermi surface. It has been seen that in the mass region with Z or N from 20 to 40, natural-parity states for odd (even) mass nuclei with \(\pi\)=-(+) can be described well within the fp-shell model space with an inert core of \({}^{40}\)Ca. Therefore, for the low-lying negative-parity states, the shell model involves the 1p\({}_{3/2}\), 0f\({}_{5/2}\) and 1p\({}_{1/2}\) shells above the N, Z=28 gap and the 0f\({}_{7/2}\) shell below the gap. However, to explain the unnatural-parity states with \(\pi\)=+(-) for the odd (even) mass nuclei in this region, the model space within the major fp shell is not enough. Thus, in the present calculation, to interpret the higher-lying positive-parity states in odd-mass \({}^{61}\)Ni, the inclusion of the \(\nu\)0g\({}_{9/2}\) and \(\nu\)1d\({}_{5/2}\) orbitals is necessary along with the major fp shell. The present shell-model calculation is carried out using the GXPF1Br+\(V_{MU}\)(modified) interaction, and the model space is composed of the (0f\({}_{7/2}\)+1p\({}_{3/2}\)+0f\({}_{5/2}\)+1p\({}_{1/2}\))+ \(\nu\)g\({}_{9/2}\) + \(\nu\)d\({}_{5/2}\) orbits [31]. The configuration space was truncated to allow up to six-particle excitations from the 0f\({}_{7/2}\) shell to the upper fp shell for protons. In the case of neutrons, to focus on the states dominated by the 1p-1h positive-parity excitations across the N=40 shell gap, only one neutron is allowed to occupy the 0g\({}_{9/2}\) or 1d\({}_{5/2}\) orbitals.
The low-lying experimental negative-parity and positive-parity states are compared with the SM-generated states in Fig. 8. It is observed that, for both positive and negative parity, the first experimental excited state of each spin, i.e., the yrast state, is matched quite well by the shell-model predictions. The SM-generated negative-parity states are predicted to have the protons mostly paired to spin 0, and the configuration of these states can be described as the odd neutron(s) occupying the \(\nu\)p\({}_{3/2}\), \(\nu\)f\({}_{5/2}\) or \(\nu\)p\({}_{1/2}\) orbitals. The experimental yrare states for spins 1/2\({}^{-}\) and 11/2\({}^{-}\) have not been observed in the present data.
On the other hand, for the positive-parity states, the SM calculation shows that a neutron has to be excited into the positive-parity \(\nu\)0g\({}_{9/2}\) orbital. The first three SM-predicted yrast states match reasonably well with the experimental yrast states. It may also be noted that we have not observed any experimental yrare state for spins 9/2\({}^{+}\) and 11/2\({}^{+}\). Beyond those two spins, the yrast and yrare states come closer in energy as the spin increases. This is expected, as with more energy and spin, more and more orbitals become accessible to the neutrons and protons, for which the density of states increases significantly. It is worth mentioning here that the yrast 9/2\({}^{+}\) and 13/2\({}^{+}\) states are part of Band B4, and the SM calculation reproduces those states of Band B4 well, along with the other collective bands, as will be discussed later.
### Bands B1 and B3
In the present work we have found two regular sequences of M1 transitions, named Bands B1 and B3, marked in Fig. 2. Both comprise states of positive parity. The involvement of the \(\nu\)0g\({}_{9/2}\) orbital plays a crucial role in the structure of these bands. To understand the nature of these bands, the excitation energy as a function of spin has been plotted in Fig. 9. The excitation energies of both M1 sequences have been fitted with Eq. 3 to probe their rotational nature.
Figure 8: The excitation energies of the (a) negative-parity and (b) positive-parity low-lying experimental states compared with SM calculations using the GXPF1Br+\(V_{MU}\)(modified) interaction. The first (yrast) and second (yrare) excited states of each spin are plotted for both the experimental and theoretical levels. The experimental energy levels are marked with '*', whereas the SM-calculated states are represented by open circles. The negative-parity states are well reproduced using the \(\nu\)p\({}_{3/2}\), \(\nu\)f\({}_{5/2}\) and \(\nu\)p\({}_{1/2}\) orbitals, whereas for all the positive-parity states the neutrons are allowed to occupy the \(\nu\)g\({}_{9/2}\) or \(\nu\)d\({}_{5/2}\) orbitals.
\[E-E_{0}=A\,(I-I_{0})^{2} \tag{3}\]
where \(E_{0}\) and \(I_{0}\) represent the band-head energy and spin, respectively, and \(A\) is a free parameter.
It is evident from Fig. 9 that Eq. 3 fits the experimental data quite well for both bands. Therefore, it can be said that the excitation energy of these two bands follows the trend of a rotational band. For further investigation, these two sequences of magnetic dipole transitions have been studied in the framework of the semiclassical model (SCM) description of the shears mechanism [32], prescribed by Macchiavelli and Clark [33, 34, 35].
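As a concrete illustration, the following sketch fits Eq. 3 to a band with scipy; the spin and energy arrays are placeholders reconstructed from the Band B1 band-head energy and M1 transition energies quoted later in the text, under an assumed level ordering, and are not the actual data of Fig. 9.

```python
import numpy as np
from scipy.optimize import curve_fit

def band_energy(I, E0, I0, A):
    """Eq. 3 rearranged to E(I) = E0 + A*(I - I0)^2."""
    return E0 + A * (I - I0) ** 2

# Spins in units of hbar (21/2 = 10.5, ...) and level energies in keV,
# reconstructed from the 6972 keV band head and the 384, 562, 779 and
# 1295 keV M1 transitions of Band B1 (an assumption for illustration).
I_exp = np.array([10.5, 11.5, 12.5, 13.5, 14.5])
E_exp = np.array([6972.0, 7356.0, 7918.0, 8697.0, 9992.0])

popt, _ = curve_fit(band_energy, I_exp, E_exp, p0=[6972.0, 10.5, 100.0])
E0, I0, A = popt
print(f"E0 = {E0:.0f} keV, I0 = {I0:.1f} hbar, A = {A:.0f} keV")
```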
#### iii.2.1 Magnetic rotational band and shears mechanism
The observation of strong rotational-like sequences of magnetic dipole (M1) transitions with very weak or no crossover E2 transitions in nearly spherical nuclei is often found to result from the shears mechanism; such sequences are well known as magnetic rotational (MR) bands. This MR band structure is generated by the symmetry breaking due to the magnetic moments associated with the currents of a few high-spin holes and particles outside a weakly deformed core. As these magnetic moments break the symmetry of the system and rotate around the total angular momentum of the near-spherical core, this mode of nuclear excitation is described as a magnetic rotational band [32; 33; 36; 37].
This type of sequence exhibits some special features, like a strong B(M1) strength and a small B(E2) value, resulting in a large B(M1)/B(E2) ratio. The current distributions of a few high-spin particles and holes outside a near-spherical core break the symmetry, resulting in these strong M1 sequences. The magnetic moments associated with these currents rotate around the total angular momentum vector. At the band head, the magnetic rotational band starts with a 90\({}^{\circ}\) angle between the two shears arms formed by the particle and hole angular momenta. With increasing excitation energy the two
Figure 11: SCM fit to the experimental data for the magnetic rotational Band B1.
Figure 10: The orientation of the proton (particle/hole) and neutron (hole/particle) angular momentum vectors with respect to the core angular momentum in the case of shears bands. With increasing energy, the two angular momentum vectors (the arms of the shears) close down and align themselves with the core angular momentum due to the interaction between them.
Figure 9: The excitation energy of (a) Band B1 and (b) Band B3 as a function of the spin of the level. The experimental data are fitted with Eq. 3, where \(E_{0}\) and \(I_{0}\) represent the band-head energy and spin, respectively.
arms come closer and align themselves with the angular momentum of the core, increasing the total angular momentum of the system. At the highest point of the band the particle and hole angular momenta are completely aligned and the angle between the two arms approaches zero.
The shears structure was first described by Frauendorf using the tilted-axis-cranking model [32], and a semiclassical model (SCM) was described by Macchiavelli et al. [33; 34; 35]. The SCM describes schematically how the energy states of a shears band are generated from the coupling of the long spin vectors of proton particles (or holes), \(j_{\pi}\), and neutron holes (or particles), \(j_{\nu}\). The shears angle \(\theta(I)\) (= \(\theta_{\pi}\) + \(\theta_{\nu}\)), i.e. the angle between the two long spin vectors \(j_{\pi}\) and \(j_{\nu}\) as shown in Fig. 10, is a function of the total spin (\(I\)), and for a given angular momentum state (of total spin \(I\)) it can be derived semiclassically using the expression
\[\cos[\theta(I)]=\frac{\overrightarrow{j_{\pi}}\cdot\overrightarrow{j_{\nu}}}{|\overrightarrow{j_{\pi}}||\overrightarrow{j_{\nu}}|}=\frac{I(I+1)-j_{\pi}(j_{\pi}+1)-j_{\nu}(j_{\nu}+1)}{2\sqrt{j_{\pi}(j_{\pi}+1)j_{\nu}(j_{\nu}+1)}} \tag{4}\]
The \(j_{\pi}\) and \(j_{\nu}\) are chosen to reproduce the band-head spin. For a nucleus with small deformation, it has been observed that the total angular momentum has some contribution from the collective core in addition to that from the shears mechanism. Therefore, the total angular momentum can be written as \(\overrightarrow{I}=\overrightarrow{I_{Sh}}+\overrightarrow{R_{core}}\), where \(R_{core}\) represents the core angular momentum. The small contribution of the core to the total angular momentum is represented by
\[R_{core}=\left(\frac{\Delta R}{\Delta I}\right)(I-I_{bh}) \tag{5}\]
Using the band-head spin \(I_{bh}\), \(j_{\pi}\), \(j_{\nu}\) and the maximum spin observed in a band, \(I_{max}\), the \(\Delta R\) and \(\Delta I\) can be estimated as \(\Delta R\) = \(I_{max}\) - (\(j_{\pi}\)+\(j_{\nu}\)) and \(\Delta I\)=\(I_{max}\)-\(I_{bh}\)= \(I_{max}\)-\(\sqrt{{j_{\pi}}^{2}+{j_{\nu}}^{2}}\), which leads to
\[\frac{\Delta R}{\Delta I}=\frac{(I_{max}-(j_{\pi}+j_{\nu}))}{(I_{max}-\sqrt{{j _{\pi}}^{2}+{j_{\nu}}^{2}})} \tag{6}\]
Because of the effective interaction \(V[I(\theta)]\) between the proton and neutron angular momenta, the dynamics of the system give rise to a rotational-like spectrum consisting of M1 transitions. The effective interaction \(V[I(\theta)]\) can be represented in terms of an expansion in even multipoles, as stated in Ref. [33],
\[V[I(\theta)]=V_{0}+V_{2}\frac{3\cos^{2}\theta-1}{2}+... \tag{7}\]
The excitation energies along the MR band are generated by the effective interaction through the re-coupling of the two long angular momentum vectors and can be calculated from the experimental energy levels of the band as \(V[I(\theta)]\) = E(I) - \(E_{bh}\).
With the aim of extracting the effective interaction (\(V_{2}\)) between the nucleons involved in the generation of angular momentum by the shears mechanism, we plot E(I) - \(E_{bh}\) (i.e. \(V[I(\theta)]\)) versus the shears angle \(\theta\)(I) for Bands B1 and B3 using the above formalism and fit it with Eq. 7, assuming the probable \(j_{\pi}\) and \(j_{\nu}\).
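A minimal numerical sketch of this procedure is given below. It assumes that the core contribution of Eqs. 5-6 is subtracted from the total spin before applying Eq. 4 (one possible reading of the formalism), uses the \(j_{\pi}=6\hbar\), \(j_{\nu}=7.5\hbar\) and 6972 keV, 21/2\({}^{+}\) band-head values quoted below for Band B1, and takes placeholder level energies reconstructed from the quoted M1 transition energies; it is illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def shears_angle(I, j_pi, j_nu, I_bh, I_max):
    """Shears angle theta(I) in degrees from Eq. 4, after removing the core
    contribution R_core of Eqs. 5-6 from the total spin I (one possible
    implementation of the core correction)."""
    dR_dI = (I_max - (j_pi + j_nu)) / (I_max - np.sqrt(j_pi**2 + j_nu**2))
    I_sh = I - dR_dI * (I - I_bh)  # shears part of the angular momentum
    cos_t = (I_sh * (I_sh + 1) - j_pi * (j_pi + 1) - j_nu * (j_nu + 1)) / (
        2.0 * np.sqrt(j_pi * (j_pi + 1) * j_nu * (j_nu + 1)))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def v_eff(theta_deg, V0, V2):
    """Eq. 7 truncated after the quadrupole term."""
    c = np.cos(np.radians(theta_deg))
    return V0 + V2 * (3.0 * c**2 - 1.0) / 2.0

# Band B1: j_pi = 6 hbar, j_nu = 7.5 hbar, band head 21/2+ (10.5 hbar) at
# 6972 keV; the level energies are illustrative reconstructions.
I = np.array([10.5, 11.5, 12.5, 13.5, 14.5])
E = np.array([6972.0, 7356.0, 7918.0, 8697.0, 9992.0])
theta = shears_angle(I, 6.0, 7.5, I_bh=10.5, I_max=14.5)
(V0, V2), _ = curve_fit(v_eff, theta, E - E[0])
print(f"V2 = {V2:.0f} keV")
```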
In \({}^{60}\)Ni the negative-parity magnetic rotational bands (marked as M1 and M4 in Ref. [14]) are predicted to have the configuration \(\pi(f_{7/2}^{-1}(fp)^{1})\otimes\nu(g_{9/2}^{1}(fp)^{3})\). Considering a similar configuration for the dipole Bands B1 and B3 in \({}^{61}\)Ni, the experimental data are fitted with Eq. 7. The proton (hole) and neutron (particle) angular momenta (\(j_{\pi}\) and \(j_{\nu}\)) corresponding to the blades of the shears are chosen so as to have a shears angle of nearly \(90^{\circ}\) at the band head, as well as to generate the spin throughout the band by the gradual alignment of these two angular momenta. Based on the configuration suggested in Ref. [14], if we consider \(j_{\pi}\) and \(j_{\nu}\) as \(6\hbar\) and \(7.5\hbar\), respectively, for Band B1, then for the first few spin states up to \(19/2^{+}\) the shears angle is found to be more than \(90^{\circ}\). Thus the band-head energy and spin for Band B1 are taken as \(6972\) keV and \(21/2^{+}\) to plot \(V[I(\theta)]\) as a function of the shears angle \(\theta\) in Fig. 11. The SCM fit using Eq. 7 matches the experimental data quite well with the above considerations for the upper part of Band B1. The effective interaction strength \(V_{2}\)=3167 keV from the fit in Fig. 11 matches the effective interaction of the magnetic rotational band reported in the neighbouring \({}^{60}\)Ni [14]. The shears angle is found to be about \(90^{\circ}\) for the \(21/2^{+}\) spin state and reaches about \(25^{\circ}\) at the maximum observed spin (\(29/2^{+}\)) for Band B1. For the first few states making up Band B1, there seems to be less contribution from \(j_{\pi}\) and \(j_{\nu}\) to the shears mechanism, and therefore the geometrical model does not agree well for these states using \(j_{\pi}\) = \(6\hbar\) and \(j_{\nu}\) = \(7.5\hbar\).
For Band B3, the same values of \(j_{\pi}\) and \(j_{\nu}\) are considered (\(j_{\pi}\)=6 \(\hbar\) and \(j_{\nu}\)= 7.5 \(\hbar\)) to fit the experimental data.
Figure 12: SCM fit to the experimental data for the magnetic rotational Band B3.
Eq. 7 fits the experimental data quite well for Band B3, as can be seen from Fig. 12, considering the band-head energy and spin of Band B3 as 7679 keV and 21/2\({}^{+}\), respectively. The interaction strength is found to be \(V_{2}\)=2604 keV, which is somewhat smaller than that of Band B1. It is seen from Fig. 12 that the shears angle \(\theta\) is about 90\({}^{\circ}\) at the band head and gradually approaches 15\({}^{\circ}\) at the highest spin. The Fermi surface of the proton hole is near the \(\pi\)f\({}_{7/2}\) orbital in this nucleus, and that of the neutron particles is near the \(\nu g_{9/2}\) orbital. The proton holes in the high-j f\({}_{7/2}\) orbital and the neutron particle in the high-j \(g_{9/2}\) orbital mainly build the two angular momentum arms of the shears bands discussed above. We find that the same values of \(j_{\pi}\) and \(j_{\nu}\) fit well for both MR bands in \({}^{61}\)Ni, similar to what was observed for the configuration of the two negative-parity magnetic rotational bands (M1 and M4 in Ref. [14]) in \({}^{60}\)Ni. Further, we have tried to identify the corresponding states of Bands B1 and B3 from the SM calculations to understand their intrinsic structure, as discussed in the next section.
#### iv.2.2 Magnetic dipole band and Shell Model Calculation
The positive-parity states corresponding to Band B1 can be reproduced within the scope of the SM calculation, as discussed before. The states which form the magnetic rotational bands are identified by the higher transition probabilities (B(M1) values) associated with them. The experimental levels and the shell-model-predicted states are compared in Fig. 13 for Band B1, and a good agreement is observed.
Figure 13: Simplified level structure of \({}^{61}\)Ni showing the four collective structures newly found in the present work. The shell-model-calculated levels (right panel) along with the experimental levels (left panel) are shown for Bands B1 and B4 for comparison. The shell-model calculations were done with the GXPF1Br+V\({}_{MU}\) (modified) interaction using the model space composed of the (0f\({}_{7/2}\)+1p\({}_{3/2}\)+0f\({}_{5/2}\)+1p\({}_{1/2}\))+ \(\nu\)g\({}_{9/2}\) + \(\nu\)d\({}_{5/2}\) orbitals for both protons and neutrons, with the truncation to 1p-1h states. The spin of each level is marked for each band. The dashed lines from the experimental levels to the SM-calculated levels indicate the corresponding spin states of each band. For each SM-calculated level, the B(M1) values (for Band B1) and B(E2) values (for Band B4) are shown in the gap between the corresponding states. For the experiment, the arrows indicate the \(\gamma\) transitions, and the energies (in keV) are noted alongside.
The comparatively large SM-calculated B(M1) values are also shown in the corresponding gaps between the SM-calculated states in Fig. 13, and they show a clear decreasing trend with spin. Therefore, the important criterion of the shears mechanism, namely decreasing B(M1) values across the band with the closing of the two shears arms, is well reproduced by the SM calculation. To understand the configuration of the bands, the occupation numbers of different orbitals for Band B1 (and also Band B3) are plotted as a function of spin in Fig. 14. For neutrons, the occupation numbers for Band B1 clearly indicate that one neutron occupies the \(g_{9/2}\) orbital whereas all the other four neutrons reside in the fp shell. If we follow the trend of the occupation numbers of the \(1p_{3/2}\) and \(0f_{5/2}\) orbitals with spin, we can see that there is a gradual change in occupancy, especially after spin \(19/2^{+}\), which is consistent with the geometrical-model interpretation of Band B1. It may be that only after spin \(19/2^{+}\) are the neutrons completely aligned, after which the shears arms formed by the protons and neutrons start closing to generate the higher spins. The shell-model picture thus also supports the choice of the band head at \(21/2^{+}\) in Fig. 11, and the high \(j_{\nu}\) value used in the geometrical model is also justified. Along with that, one has to keep in mind that the expected (favourable) \(90^{\circ}\) angle between the proton and neutron arms at the band head in the shears mechanism is based on the argument that protons are in pure hole states and neutrons are in pure particle states (or vice versa). For \({}^{61}\)Ni, on the other hand, the proton angular momentum is created by the \(\pi(f_{7/2}^{-1}f_{5/2}^{1})\) configuration, which is a mixture of hole and particle states. This may change the favourable angle between these two arms, and the angle can be more than \(90^{\circ}\) at the band head. Another important note about the occupation numbers is that they can have contributions from the pairing correlations. With both a proton hole and a neutron particle in high-j orbitals, the ideal situation for creating a shears band exists, and the high B(M1) values from the shell-model calculations strengthen the interpretation as a magnetic rotational band. Therefore, from the occupation of protons and neutrons in the different orbitals, the upper part of Band B1 is predicted to have the configuration \(\pi(f_{7/2}^{-1}[p_{3/2}/f_{5/2}]^{1})\otimes\nu(g_{9/2}^{1}p_{3/2}^{2}[f_{5/2}/p_{1/2}]^{2})\), in agreement with the configuration proposed for Band M1 in \({}^{60}\)Ni as well.
The SM-predicted energies for Band B3 are somewhat overestimated compared to the experimental data and are therefore not shown in Fig. 13. To understand the structure of Band B3, the neutron occupation numbers for the various orbitals are compared to those of Band B1 in Fig. 14(b) and (d). It may be worth mentioning that, unlike for Band B1, the neutron occupation numbers for Band B3 remain the same over the entire spin range and match those of the upper part of Band B1. Therefore, from the SM calculation, the configuration of Band B3 is predicted to be similar to that of Band B1, if not the same. This configuration of Band B3 also supports the choice of the same \(j_{\pi}\) and \(j_{\nu}\) values in the geometrical semiclassical model fit as for Band B1, as discussed in the previous section.
### Bands B2 and B4
We have observed two more sequences connected by E2 transitions, indicated as Band B2 and Band B4 in Fig. 2. Band B4 starts right from the first observed \(9/2^{+}\) spin and extends up to \(29/2^{+}\), whereas Band B2 is assumed to start from the \(11/2^{+}\) spin. It is quite clear that the involvement of the shape-driving g\({}_{9/2}\) orbital induces deformation, and rotational bands are likely to be seen, as also reported in neighbouring nuclei. To find out the nature of these two bands we plot the excitation energy versus spin for both bands (B2 and B4), displayed in Fig. 15(a) and (b). The trend in the E vs I plot for Band B2 suggests that the first few transitions are not part of the rotational structure, whereas the transitions beyond the \(23/2^{+}\) spin can be fitted with the rotational equation
\[E=E_{0}+A\,(I-I_{0})(I-I_{0}+1) \tag{8}\]
The experimental data in Fig. 15 for both bands are fitted with Eq. 8. For Band B2, the fit covers the upper four data points starting from \(23/2^{+}\), and clearly this rotational structure is not followed by the states below the \(23/2^{+}\) spin. For Band B4, the E vs I plot indicates the overlap of two sequences. Therefore they are fitted by the rotational equation in two parts, a lower-energy part and a higher-energy part, as shown in Fig. 15(b). The fitted curves indicate the crossing of two rotational bands around the \(23/2^{+}\) spin. The fitted parameter A in
Figure 14: The occupation numbers of the different orbitals, calculated by the shell model, with respect to the spins of the levels. The upper panels show the occupation numbers for (a) protons and (b) neutrons associated with the Band B1 configuration, whereas the lower panels show the occupation numbers associated with the Band B3 configuration for (c) protons and (d) neutrons.
Eq. 8 is inversely proportional to the moment of inertia (MoI) of the nuclear shape associated with the corresponding band. The values obtained for A for the upper part of Band B2 (Fig. 15(a)) and for the lower and upper parts of Band B4 (Fig. 15(b)) are 28, 103 and 53, respectively. The smaller A value of Band B2 compared to Band B4 therefore indicates that a higher moment of inertia is associated with the Band B2 rotational structure. Thus the configuration of Band B4 can be assumed to involve a smaller number of quasi-particles, including one in the \(\nu 0g_{9/2}\) orbital, and the upper part of Band B4 possibly involves more quasi-particles.
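To make the fit of Eq. 8 concrete, the sketch below fits the upper part of Band B2; the four level energies are reconstructed from the quoted E2 transition energies under an assumed ordering and are illustrative placeholders, not the fitted data of Fig. 15.

```python
import numpy as np
from scipy.optimize import curve_fit

def rot_energy(I, E0, I0, A):
    """Eq. 8: E = E0 + A*(I - I0)*(I - I0 + 1)."""
    return E0 + A * (I - I0) * (I - I0 + 1)

# Upper part of Band B2: spins 23/2, 27/2, 31/2, 35/2 (in hbar) with level
# energies reconstructed from the 1588, 1822 and 1945 keV E2 transitions
# below the 12874 keV level (an assumption for illustration).
I_b2 = np.array([11.5, 13.5, 15.5, 17.5])
E_b2 = np.array([7519.0, 9107.0, 10929.0, 12874.0])

popt, _ = curve_fit(rot_energy, I_b2, E_b2, p0=[7519.0, 11.5, 30.0])
print(f"A = {popt[2]:.0f} keV")  # the text quotes A ~ 28 for this part
```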
To understand the structure better, the aligned angular momenta (I\({}_{x}\)) for Bands B2 and B4 observed in \({}^{61}\)Ni in the present work are compared with those of similar rotational bands reported in neighbouring Ni and Fe nuclei, as shown in Fig. 16. As Band B4 starts from the very first observed \(9/2^{+}\) state, the configuration \(\pi(f_{7/2}^{-1}(fp)^{1})\otimes\nu(g_{9/2}^{1}(fp)^{4})\) is assumed for the lower part of Band B4. As the aligned angular momentum is low for the first part of Band B4, we can assume that the few neutrons in the fp shell are paired up and do not contribute to the total angular momentum. The aligned angular momentum (I\({}_{x}\)) for the lower part of Band B4 matches that of the positive-parity band of \({}^{59}\)Fe, which has only two protons fewer than \({}^{61}\)Ni. Band B4 is seen to exhibit a back-bending at a rotational frequency of 0.95 MeV (around spin 10.5\(\hbar\)) and seems to have a different intrinsic structure after the 21/2\({}^{+}\) spin. With only two more neutrons compared to \({}^{59}\)Ni, the rotational bands of \({}^{61}\)Ni are expected to have a similar kind of structure as the rotational bands reported in \({}^{59}\)Ni. The configuration of rotational Band 1 of \({}^{59}\)Ni is predicted to be \(\pi(f_{7/2}^{-2}(p_{3/2}/f_{5/2})^{2})\otimes\nu((p_{3/2}f_{5/2})^{2}g_{9/2}^{2})\) from configuration-dependent cranked Nilsson-Strutinsky (CNS) calculations [13]. Band B4 is not well extended after the band crossing, but the I\({}_{x}\) of Band B4 after the band crossing shows an indication of matching well with that of Band 1 of \({}^{59}\)Ni; thus the upper part of Band B4 is proposed to be generated by promoting an extra proton to the upper fp shell, creating an extra hole in the high-j f\({}_{7/2}\) orbital, which in turn increases the angular momentum of the system. The aligned angular momentum also supports the configuration \(\pi(f_{7/2}^{-2}(fp)^{2})\otimes\nu(g_{9/2}^{1}(fp)^{4})\) for Band B4 after the back-bending. The deformation for the configuration involving the \(\nu g_{9/2}^{1}\) orbital in \({}^{61}\)Ni is predicted to be \(\beta_{2}\)=0.24 from the configuration-fixed constrained CDFT calculations of Ref. [20]. Therefore we can also assume the same defor
Figure 16: The aligned angular momenta of Bands B2 and B4 of \({}^{61}\)Ni compared with the rotational bands reported in neighbouring odd-mass Ni isotopes and \({}^{59}\)Fe, as a function of the rotational frequency (\(\hbar\omega\)) of the levels. The information on the excited levels of \({}^{59}\)Ni, \({}^{63}\)Ni and \({}^{59}\)Fe is taken from Refs. [13], [15] and [38], respectively.
Figure 15: The excitation energy of (a) Band B2 and (b) Band B4 as a function of the spin of the level. The experimental data are fitted with Eq. 8, \(E=E_{0}+A\,(I-I_{0})(I-I_{0}+1)\), where A, \(E_{0}\) and \(I_{0}\) are varied as free parameters. \(E_{0}\) and \(I_{0}\) are equivalent to the band-head energy and spin, respectively, of the corresponding band (Band B2 or B4).
mation for the upper part of Band B4. The configuration-dependent cranked Nilsson-Strutinsky (CNS) approach also predicts a similar deformation (\(\epsilon_{2}\)=0.22) for the collective band WD1 in \({}^{60}\)Ni [14] with the configuration \(\pi(f_{7/2}^{-2}(fp)^{2})\otimes\nu(g_{9/2}^{1}(fp)^{3})\). Therefore the deformation of Band B4 is assumed to be of the same order as predicted in Ref. [20].
Only the \(\mathrm{I}_{x}\) of the upper part of Band B2 in \({}^{61}\)Ni is plotted in Fig. 16, as the states below \(23/2^{+}\) are not part of the rotational structure. The \(\mathrm{I}_{x}\) of Band B2 has relatively higher values than that of Band B4 and matches the lower part of the D1 band of \({}^{63}\)Ni [15] as well as band 2 of \({}^{59}\)Ni. The positive-parity Band D1 of \({}^{63}\)Ni is predicted to have a configuration involving \(\pi\mathrm{g}_{9/2}\) and \(\nu\mathrm{g}_{9/2}^{2}\) to generate such a large aligned angular momentum. Similarly, band 2 of \({}^{59}\)Ni is also predicted to have both the \(\pi\mathrm{g}_{9/2}^{2}\) and \(\nu\mathrm{g}_{9/2}^{1}\) orbitals involved in its configuration. It may be noted that these bands in \({}^{59}\)Ni and \({}^{63}\)Ni appear at a higher excitation energy. From the present experimental data we could not extend Band B2 beyond \(35/2^{+}\), and thus it is difficult to predict the intrinsic structure of this band. But it may be inferred that the configuration related to Band B2 involves more high-j orbital(s) along with one \(\nu\mathrm{g}_{9/2}\).
#### v.2.1 Shell Model Calculations for Bands B2 and B4
The experimental states corresponding to the rotational Band B4 are also compared with the SM-calculated states in Fig. 13. The SM calculation for Band B4 matches the experimental values remarkably well. This band is predicted to have high B(E2) values (of the order of 200 e\({}^{2}\)fm\({}^{4}\)) for the lower states, which increase even further above the \(21/2^{+}\) spin. The occupation numbers for both Bands B2 and B4, calculated by the SM using the same prescription described above, are presented in Fig. 17. As these bands (Bands B2 and B4) are associated with some deformation, the spherical orbital configurations are not so rigid. The occupation numbers for Band B4 predict one proton hole in \(\mathrm{f}_{7/2}\) and at least one neutron in the shape-driving \(\mathrm{g}_{9/2}\) orbital for the lower spins, with the most probable configuration being \(\pi(f_{7/2}^{-1}(p_{3/2})^{1})\otimes\nu(g_{9/2}^{1}(fp)^{4})\). As the band proceeds to higher spins it is clearly seen in Fig. 17(c) that the proton occupation of \(0f_{7/2}\) reduces after \(21/2^{+}\), whereas the occupation of \(0f_{5/2}\) increases. Therefore, an extra proton is predicted to be promoted into the fp shell from \(0f_{7/2}\) after \(21/2^{+}\), and the configuration of the higher-spin states of Band B4 emerges as \(\pi(f_{7/2}^{-2}p_{3/2}^{1}f_{5/2}^{1})\otimes\nu(g_{9/2}^{1}(fp)^{4})\). This SM-predicted configuration matches well the structure of Band B4 discussed above. For Band B2, the SM-predicted level energies for each spin are overestimated compared to the experimental states and are thus not shown along with the experimental data in Fig. 13. However, from the SM-calculated occupation numbers of protons and neutrons for Band B2 in Fig. 17(a) and (b), it may be predicted that this band also contains at least one proton hole in \(0f_{7/2}\) and one neutron in \(0g_{9/2}\). The B(E2) values for this band are found to be around 120 e\({}^{2}\)fm\({}^{4}\) on average from the SM. As discussed above, from the comparison with the neighbouring nuclei in Fig. 16, the aligned angular momentum \(\mathrm{I}_{x}\) of this band is somewhat higher and matches bands having configurations involving \(\nu g_{9/2}^{2}\). It is beyond the scope of the present SM calculation to allow more than one particle to cross the fp shell and occupy the \(g_{9/2}\) or \(d_{5/2}\) orbitals. That may be the reason why the experimental excited states of Band B2 do not match well with the SM-calculated states.
## VI Summary
In the present work we have studied the structure of \({}^{61}\)Ni using a \({}^{14}\)C beam from the John D. Fox laboratory at FSU, incident on a thin \({}^{50}\)Ti target, with the FSU gamma array used to detect the \(\gamma\) rays. The spectroscopic information on \({}^{61}\)Ni has been extended considerably, up to \(\sim\)13 MeV and \(35/2^{+}\) spin/parity, with the establishment of 28 new levels and 76 new transitions decaying from those levels. The spins and parities of most of the states have been assigned from the DCO ratios and \(\Delta_{IPDCO}\) measurements of the \(\gamma\) rays. With Z=28 for \({}^{61}\)Ni, the low-lying negative-parity states are found to be generated from single-particle excitations of the odd neutrons in the \(\nu p_{3/2}\), \(\nu f_{5/2}\) and \(\nu p_{1/2}\) orbitals, as expected. For the positive-parity states, one neutron must be pro
Figure 17: The occupation numbers of the different orbitals, calculated by the shell model, with respect to the spins of the levels. The upper panels show the occupation numbers for (a) protons and (b) neutrons associated with the Band B2 configuration, whereas the lower panels show the occupation numbers associated with the Band B4 configuration for (c) protons and (d) neutrons.
moted to the \(0g_{9/2}\) or \(1d_{5/2}\) orbitals of the next shell. The shell-model calculation using the GXPF1Br+\(V_{MU}\)(modified) interaction within a model space of the fp shell + \(\nu\)g\({}_{9/2}\) + \(\nu\)d\({}_{5/2}\) reproduced the negative- as well as the positive-parity states quite well. Apart from the low-lying irregular structure, two magnetic rotational (MR) bands, Bands B1 and B3, with regular sequences of M1 transitions, have been established. The shears mechanism associated with the MR bands is described by the semiclassical model. With a single value of \(j_{\pi}\) and \(j_{\nu}\), the SCM cannot fit Band B1 entirely. Therefore the values of \(j_{\pi}\) and \(j_{\nu}\) are chosen such that the shears mechanism starts at \(21/2^{+}\) (the upper part of Band B1) with a shears angle of \(\sim\)90\({}^{\circ}\) and continues up to the maximum observed spin of \(29/2^{+}\) through the closing of the shears angle between the proton and neutron arms. With this consideration, the SCM fit matches the experimental data quite well. The experimental levels corresponding to Band B3 are found to be associated with the same \(j_{\pi}\) and \(j_{\nu}\) values in the SCM fit to form the shears. The shell-model calculations predict the configuration for these bands to involve one high-j \(\pi f_{7/2}\) hole and one high-j \(\nu g_{9/2}\) particle. The shell model predicts high B(M1) values for these two bands, which further indicates them to be of magnetic rotational type. With the involvement of the shape-driving \(\nu g_{9/2}\) orbital, a small deformation is expected to be induced in the system. The formation of two deformed collective bands, named Bands B2 and B4, with regular E2 transitions establishes the onset of deformation in this nucleus at higher energies. The excitation energies of the collective bands are plotted as a function of spin for better understanding. Band B4 is extended beyond the band crossing, and an extra proton is predicted to be excited from the lower shell to the upper fp shell around \(21/2^{+}\). The SM-predicted energy levels of Bands B1 and B4 match the experimental data quite well within the considered model space, whereas the SM overestimates the energies of Bands B2 and B3. The occupation numbers of the different proton and neutron orbitals corresponding to the different bands are plotted as a function of spin for these collective structures, indicating the configurations associated with them. Band B2 is seen not to be well developed in the present data, and future experimental efforts to extend this band would reveal additional information. Along with the extension of the quadrupole bands, lifetime measurements of the levels of the magnetic rotational bands (B1 and B3) will be interesting for understanding those structures better.
## VII Acknowledgement
This work was supported by the U.S. National Science Foundation under Grant No. PHY-2012522 (FSU) and by the U.S. Department of Energy, Office of Science, under Award No. DE-AC05-00OR22725 (ORNL). Y. Utsuno and N. Shimizu acknowledge KAKENHI grants (20K03981, 17K05433), "Priority Issue on post-K computer" (hp190160, hp180179, hp170230) and the "Program for Promoting Researches on the Supercomputer Fugaku" (hp200130, hp210165). The authors are thankful to Mr. Shabir Dar, VECC, Kolkata, India, for providing his semiclassical geometrical model code to fit the magnetic rotational bands. We acknowledge useful discussions with Dr. S. Bhattacharyya, VECC, Kolkata, India.
|
2310.01976 | Some New Results With k-set agreement | In this article, we investigate the solvability of $k$-set agreement among
$n$ processes in distributed systems prone to different types of process
failures. Specifically, we explore two scenarios: synchronous message-passing
systems prone to up to $t$ Byzantine failures of processes. And asynchronous
shared memory systems prone to up to $t$ crash failures of processes. Our goal
is to address the gaps left by previous works\cite{SSS,AsynchKset} in these
areas. For Byzantine failures case we consider systems with authentication
where processes have unforgeable signatures.
For synchronous message-passing systems, we present an authenticated
algorithm that achieves $k$-set agreement in only two rounds, with no
constraints on the number of faults $t$, with $k$ determined as $k \geq \lfloor
\frac{n}{n-t} \rfloor + 1$. In fact the lower bound for $k$ is $k \geq \lfloor
\frac{n}{n-t} \rfloor $ that is obtained by an algorithm based on traditional
consensus with $t+1$ rounds.
In asynchronous shared memory systems, we introduce an algorithm that
accomplishes $k$-set agreement for values of $k$ greater than $ \lfloor
\frac{n-t}{n-2t} \rfloor +1$. This algorithm uses a snapshot primitive to
handle crash failures and enable effective set agreement. | Delporte-Gallet Carole, Fauconnier Hugues, Safir Mouna | 2023-10-03T11:35:35Z | http://arxiv.org/abs/2310.01976v2 | # Some New Results With \(k\)-Set Agreement
###### Abstract
In this article, we investigate the solvability of \(k\)-set agreement among \(n\) processes in distributed systems prone to different types of process failures. Specifically, we explore two scenarios: synchronous message-passing systems prone to up to \(t\) Byzantine failures of processes, and asynchronous shared memory systems prone to up to \(t\) crash failures of processes. Our goal is to address the gaps left by previous works [Delporte-Gallet et al.(2022), De Prisco et al.(1999)] in these areas. For the Byzantine failure case, we consider systems with authentication, where processes have unforgeable signatures.
For synchronous message-passing systems, we present an authenticated algorithm that achieves \(k\)-set agreement in only two rounds, with no constraints on the number of faults \(t\), with \(k\) determined as \(k\geq\lfloor\frac{n}{n-t}\rfloor+1\). In fact, the lower bound for \(k\) in the Byzantine case is \(k\geq\lfloor\frac{n}{n-t}\rfloor\), which is achieved by an algorithm based on traditional consensus running in \(t+1\) rounds.
In asynchronous shared memory systems, we introduce an algorithm that accomplishes \(k\)-set agreement for values of \(k\) greater than \(\lfloor\frac{n-t}{n-2t}\rfloor+1\). This algorithm uses a snapshot primitive to handle crash failures and enable effective set agreement.
**Keywords:** Byzantine failures, Crash failures, Distributed systems, \(k\)-set agreement.
## 1 Introduction
The consensus problem is an abstraction of many coordination problems in a distributed system that can suffer process failures. Roughly speaking, the consensus problem is to have the processes of a distributed system agree on a common decision. Because of the many practical problems that can be reduced to this simple primitive, consensus has been thoroughly studied. We refer the reader to [Turek and Shasha(1992)] for a detailed discussion of consensus. Motivated by the significance of consensus, researchers have explored variations of the problem to investigate the boundaries of what is possible and impossible. One such variation is \(k\)-set consensus [Chaudhuri(1993)], which relaxes the safety conditions of consensus to allow for a set of decision values of cardinality up to \(k\) (compared to \(k=1\) in consensus). The \(k\)-set agreement problem has been widely studied in the field of distributed computing [Raynal(2010)]. Beyond the practical interest of this problem, particularly regarding fault-tolerant distributed computing, one of the main reasons behind the focus on the \(k\)-set agreement problem is the fact that it can be used to define and compare the computational power of systems.
In \(k\)-set agreement, each process must decide on a value such that no more than \(k\) different values are decided by the processes. In addition, the processes must guarantee a validity condition, which characterizes which decision values are allowed as a function of the input values and of whether failures occur.
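As an illustration of these properties, the following sketch checks one execution against the \(k\)-agreement condition and the strong validity condition adopted later in this paper; the data structures and example values are assumptions for illustration.

```python
def check_k_set_agreement(proposals, decisions, k):
    """Check one execution of k-set agreement for its correct processes.

    proposals: dict mapping each correct process to its proposed value
    decisions: dict mapping each correct process to its decided value
    k: maximum number of distinct decided values allowed
    """
    agreement = len(set(decisions.values())) <= k
    # Strong validity: if all correct processes propose the same value v,
    # then every correct process must decide v.
    proposed = set(proposals.values())
    validity = True
    if len(proposed) == 1:
        (v,) = proposed
        validity = all(d == v for d in decisions.values())
    return agreement and validity

# Four correct processes and k = 2: two distinct decided values, valid.
print(check_k_set_agreement({1: 'a', 2: 'a', 3: 'b', 4: 'b'},
                            {1: 'a', 2: 'a', 3: 'b', 4: 'a'}, k=2))  # True
```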
Hence, with crash process failures, the validity condition generally considered ensures that the decided values are initial values proposed by processes. Regarding \(k\)-set agreement in asynchronous models, one of the most famous (and difficult) results is the extension of the consensus impossibility result to the impossibility of \(k\)-set agreement [Borowsky and Gafni(1993), Herlihy and Shavit(1993), Saks and Zaharoglou(2000)] when at least \(k\) processes may fail. In synchronous models with crash failures, \(k\)-set agreement is solvable
for all \(k\). But interestingly, an (imperfect) agreement on more than one value divides the complexity in the number of rounds needed to solve \(k\)-set agreement: as proved in [Chaudhuri et al.(2000)], \(\lfloor t/k\rfloor+1\) rounds of communication are necessary and sufficient to solve \(k\)-set agreement with no more than \(t\) faulty processes. Note that these results depend on the chosen validity condition. Note also that an important interest of \(k\)-set agreement is its universality, in the sense that \(k\)-set agreement allows state machine replication (with some liveness conditions) [Gafni and Guerraoui(2011)].
With more severe failures than crashes, the initial values of faulty processes have no real meaning (what is the initial value of a Byzantine process?). Other validity properties have therefore been defined for \(k\)-set agreement. The work presented in [De Prisco et al.(1999)] investigates the \(k\)-set consensus problem, considering various problem definitions and models with both crash and Byzantine failures, and shows that the precise definition of the validity requirement is crucial to the solvability of the problem. The authors of [De Prisco et al.(1999)] envisage up to six validity properties. Among them, strong validity, which ensures that if all correct processes have the same initial value \(v\) then all correct processes decide \(v\), is the most appropriate for Byzantine failures.
In this work, we aim to investigate the solvability of \(k\)-set agreement in distributed systems prone to different types of failures, under the validity condition ensuring that if all correct processes have the same initial value \(v\), all correct processes decide \(v\). Specifically, we focus on two scenarios: synchronous message-passing systems prone to Byzantine failures and asynchronous shared memory systems prone to crash failures. Concerning Byzantine failures, we consider models with authentication, in which messages may be signed by processes with unforgeable signatures. Most of the results concerning general Byzantine failures are already shown in [Bouzid et al.(2016), Delporte-Gallet et al.(2022)]. Our objective is to address gaps left by previous works in these areas, provide insights into the solvability of \(k\)-set agreement under specific failure models, and give some results in terms of round complexity.
For synchronous message-passing systems, \(k\)-set agreement is achievable only when \(k\geq\lfloor\frac{n}{n-t}\rfloor\). We present an authenticated algorithm that achieves \(k\)-set agreement in only two rounds, with no constraint on the number of failures \(t\), but for a value of \(k\) equal to \(\lfloor\frac{n}{n-t}\rfloor+1\), which is hence not optimal. To achieve an optimal \(k\), we propose an algorithm that spans \(t+1\) rounds and guarantees \(k\)-set agreement for \(k=\lfloor\frac{n}{n-t}\rfloor\) for any value of \(t\). This algorithm leverages \(n\) instances of the Terminating Reliable Broadcast (TRB), where the delivered values are the proposed values for the set agreement. This result is interesting: if \(k\) is the optimal value for \(k\)-set agreement, we need \(t+1\) rounds to achieve \(k\)-set agreement but only two rounds for \((k+1)\)-set agreement. That means, for example, that only two rounds are needed for 2-set agreement. Note also that these results apply to crash failure models with the considered validity property.
In asynchronous shared memory systems, we propose an algorithm that accomplishes \(k\)-set agreement for values of \(k\) strictly greater than \(\lfloor\frac{n-t}{n-2t}\rfloor\). This algorithm effectively handles crash failures using a snapshot primitive.
In summary, our work contributes to the understanding of \(k\)-set agreement in distributed systems, providing valuable insights into the solvability of this problem under specific failure models. We address important gaps in existing research and offer practical solutions to achieve \(k\)-set agreement in both synchronous and asynchronous distributed systems with different types of failures. The rest of the paper is organized as follows: Section 2 presents the model and some preliminary results, such as a bound on the value of \(k\) enabling \(k\)-set agreement; Section 3 presents a two-round algorithm for \(k\)-set agreement for \(k>\lfloor\frac{n}{n-t}\rfloor\) and a \((t+1)\)-round algorithm for \(k\geq\lfloor\frac{n}{n-t}\rfloor\) in the synchronous message passing model with Byzantine failures and authentication; Section 4 presents results in the asynchronous shared memory model. Finally, Section 5 concludes the paper and discusses future research directions.
## 2 Preliminaries
In this section, we provide a detailed explanation of the communication model and failure models used in our study, as well as an overview of two essential primitives, the Terminating Reliable Broadcast (TRB), and the Snapshot primitive.
In the following, \(n\) denotes the number of processes and \(t\) the maximum number of faulty processes.
### Communication Model
We consider systems with \(n\) processes, of which at most \(t\) may be faulty. There are no communication failures. We first describe the communication models.
#### 2.1.1 Message Passing Model
In the _message passing model_, we consider a system consisting of \(n\) processes that communicate by sending and receiving messages over a complete point to point communication network without communication failure. Any message sent by a correct process is eventually received by its receiver.
In the _synchronous_ message passing model, messages are guaranteed to arrive at their receiver within a bounded time interval \(\Delta\). In the following, in the synchronous message passing model, processes run in synchronized rounds: at each round, the processes send messages that are received in the same round. In the _asynchronous_ model, on the other hand, there are no such timing guarantees.
#### 2.1.2 Shared Memory Model
In the _shared memory model_, processes communicate by reading from and writing to shared (atomic) registers.
### Failure Models
Furthermore, the system can be susceptible to process failures. Here, we consider two types of process failures. The first one is the **crash failure**, where a process simply stops its execution. The second type is the **Byzantine failure**, where a process may arbitrarily deviate from its protocol specification. Note that a crash is a special case of Byzantine failure. Assuming that runs are infinite, a process that is faulty by crashing makes a finite number of steps. A process is _correct_ in a run if it does not fail at any point of its execution.
#### 2.2.1 Encryption Scheme
To ensure authentication, we employ a public key encryption scheme, where each process possesses a signing (private) key \(sk_{i}\) and knows the public key \(pk_{j}\) of every other process \(p_{j}\). A process can sign a message \(m\) using its private key as \(\sigma=\text{sign}(sk_{i},m)\). We assume a perfectly secure signature scheme, ensuring that no signature of a correct process can be forged. A process can also forward a received message from process \(p_{j}\) by adding its own signature to the message.
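To make the authentication assumption concrete, the following Python sketch simulates such a signature chain. It is a toy stand-in, not a real public-key scheme: keys live in a table known to the verifier, which plays the role of public-key verification, and all names here are illustrative.

```
import hashlib

# Toy stand-in for the unforgeable-signature scheme above: each process p_i
# holds a secret key; "verification" uses the same key table, standing in
# for public-key verification. Illustrative only, not real cryptography.
KEYS = {i: f"sk_{i}".encode() for i in range(4)}   # hypothetical key table

def sign(i: int, payload: bytes) -> bytes:
    """Signature of process i over payload."""
    return hashlib.sha256(KEYS[i] + payload).digest()

def chain_sign(i: int, message: bytes, chain: list) -> list:
    """Append p_i's signature to a message chain m:p_0:...:p_j."""
    transcript = message + b"".join(sig for _, sig in chain)
    return chain + [(i, sign(i, transcript))]

def chain_valid(message: bytes, chain: list) -> bool:
    """Valid iff all signers are distinct and every signature checks out."""
    signers = [i for i, _ in chain]
    if len(set(signers)) != len(signers):
        return False
    seen = []
    for i, sig in chain:
        transcript = message + b"".join(s for _, s in seen)
        if sig != sign(i, transcript):
            return False
        seen.append((i, sig))
    return True

c = chain_sign(0, b"v", [])            # m:p_0
c = chain_sign(1, b"v", c)             # m:p_0:p_1
assert chain_valid(b"v", c)
assert not chain_valid(b"w", c)        # altered content is rejected
```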
### Two Useful Primitives
In Sections 3 and 4 we present algorithms that solve \(k\)-set agreement. Our solutions rely mainly on two primitives: the **Terminating Reliable Broadcast (TRB)** for Byzantine failures with authentication, and the **Snapshot** primitive for the shared memory model.
#### 2.3.1 Terminating Reliable Broadcast
A TRB protocol typically organizes the system into a sending process and a set of receiving processes, which includes the sender itself. A process is called 'correct' if it does not fail at any point during its execution.
The goal of the protocol is to transfer a message from the sender to the set of receiving processes; at the end of TRB, a process 'delivers' a message by passing it to the application level that invoked the TRB protocol.
In order to tolerate arbitrary failures, the TRB protocol is enriched with authentication, so that the ability of a faulty process to lie is considerably limited and can be detected by correct processes, which then deliver a "sender faulty" message. The protocol then works for any number of faulty processes.
Consider a set of values \(V\) and a special value \(SF\) (for sender faulty). A TRB protocol is a broadcast protocol in which a process \(p\) is the sender and invokes _TRB-bcast(v,p)_ for some \(v\in V\). All the correct processes deliver a value \(m\) by a _TRB-deliver(p)_, where \(p\) is the sender of the signed message \(m\), in such a way as to satisfy the following properties:
* **Termination.** Every correct process delivers some value.
* **Validity.** If the sender, \(p\), is correct and broadcasts a message \(m\), then every correct process delivers \(m\).
* **Integrity.** A process delivers a message at most once, and if it delivers some message \(m\neq SF\), then \(m\) was broadcast by the sender.
* **Agreement**. If a correct process delivers a message \(m\), then all correct processes deliver \(m\).
The main idea of the algorithm for solving TRB is the following. If the sender \(p_{0}\) wants to broadcast a value \(m\), it signs this value and then sends it. When a process \(p_{1}\) receives that message, it signs the received message and forwards it to the next process \(p_{2}\), and so on and so forth, until a process \(p_{i}\) receives the message and signs it. We represent such a message as \(m:p_{0}:p_{1}:\ldots:p_{i}\), with \(m:p_{0}\) being the result of \(p_{0}\) signing \(m\).
When a correct process receives a message \(m:p_{0}:p_{1}:\ldots:p_{i}\), this message must be valid before the process extracts \(m\) from it. We say that a message is _valid_ if it has the form \(m:p_{0}:p_{1}:\ldots:p_{i}\) and all processes that have signed it are distinct. Note that valid messages are the only ones that 'count', in the sense that all other, non-valid messages are ignored by correct processes.
If \(m:p_{0}:p_{1}:\ldots:p_{i}\) is valid:
1. The process extracts the value \(m\) from the message;
2. It relays the message, with its own signature appended, if it has not done so before.
At round \(t+1\): if the process has extracted exactly one message, it delivers it; otherwise it delivers \(\mathbf{SF}\).
TRB is indeed solvable in synchronous models with Byzantine failures and authentication [Srikanth and Toueg(1987)], and we recall the corresponding algorithms below, where \(p_{0}\) is the sender.
```
1  TRB-bcast(v_0, p_0):
2    m := v_0 ;
3    extracted := {m} ;
4  TRB-deliver(p_0):
     /* ---- in round 1 ---- */
5    sign m and send m:p_0 to all ;
     /* ---- at the end of round t+1 ---- */
6    if ∃ m s.t. extracted = {m} then
7      deliver m
8    else
9      deliver SF
10   end if
```
**Algorithm 1** Algorithm of TRB, with \(p_{0}\) being the sender.
We also recall Algorithm 2, which gives the code of a process \(p\) with \(p\neq p_{0}\), where \(p_{0}\) is the sender of \(v_{0}\).
```
1  TRB-deliver(p_0):
2    extracted := ∅ ; relay := ∅ ;
3    In round i, 1 ≤ i ≤ t+1 :
4      foreach signed message s' ∈ relay do
5        sign s' and send s':p to all
6      end for
7      receive round-i messages from all processes ;
8      relay := ∅ ;
9      foreach valid message s' = m:p_0:...:p_i received in round i do
10       if m ∉ extracted then
11         extracted := extracted ∪ {m} ;
12         relay := relay ∪ {s'}
13       end if
14     end for
15   At the end of round t+1 :
16   if ∃ m s.t. extracted = {m} then
17     deliver m
18   else
19     deliver SF
20   end if
```
**Algorithm 2** Algorithm of TRB with sender \(p_{0}\): code of a process \(p\neq p_{0}\).
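As an illustration of how the \(t+1\) rounds defeat an equivocating sender, here is a minimal round-based Python simulation of the above algorithms. Signatures are modeled simply by the chain of signer identifiers, whose unforgeability is enforced by only extending chains that were actually received; Byzantine non-senders are assumed to stay silent. This is a sketch of the mechanism, not the paper's exact code.

```
SF = "SENDER_FAULTY"

def trb(n, t, sender, sent, byzantine=frozenset()):
    """Round-based simulation of the authenticated TRB of Algorithms 1-2.

    `sent[p]` is the value the (possibly equivocating) sender signs and
    sends to process p in round 1.
    """
    extracted = {p: set() for p in range(n)}
    if sender not in byzantine:
        extracted[sender] = set(sent.values())      # singleton if correct
    inbox = {p: {(v, (sender,))} for p, v in sent.items()}
    for _ in range(1, t + 2):                       # rounds 1 .. t+1
        outbox = {p: set() for p in range(n)}
        for p in range(n):
            if p in byzantine:
                continue
            for value, chain in inbox.get(p, set()):
                if len(set(chain)) == len(chain) and value not in extracted[p]:
                    extracted[p].add(value)         # extract m
                    for q in range(n):              # relay with own signature
                        outbox[q].add((value, chain + (p,)))
        inbox = outbox
    return {p: (next(iter(extracted[p])) if len(extracted[p]) == 1 else SF)
            for p in range(n) if p not in byzantine}

# A Byzantine sender 0 equivocating between 'a' and 'b': every correct
# process extracts both values within t+1 rounds and delivers SF.
print(trb(4, 1, 0, {1: "a", 2: "b", 3: "b"}, byzantine=frozenset({0})))
```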
**Proof of TRB**
**Claim 1**: _If the sender is correct and broadcasts a message \(m\), then every correct process delivers \(m\)._
If the sender is correct and wants to broadcast \(m\), by definition it extracts \(m\) (and no other value) in 'round' \(0\). It does not extract any other value in any round \(i>0\), because to do so it would have to receive a valid message \(m^{\prime}:p_{0}:\cdots:p_{i}\) where \(p_{0}\) is the sender, which contradicts the unforgeability of authenticated messages. Thus, if the sender is correct, it extracts only the message it broadcast.
**Claim 2**: _If a correct process extracts \(m\) then all correct processes will extract \(m\)._
Let \(i\) be the earliest round in which some correct process extracts \(m\) and let \(p\) be such a process.
**Base case:** If \(i=0\), then \(p\) is the sender \(p_{0}\); it will send \(m:p_{0}\) to all processes in round \(1\), and all other correct processes will extract \(m\) in that round. Thus, all correct processes will extract \(m\), as wanted. From Claim 1, no correct process will extract a message from \(p_{0}\) that \(p_{0}\) did not send.
Thus, we may assume that \(i>0\). Process \(p\) extracts \(m\) because it has received a valid message \(m:p_{0}:\cdots:p_{i}\) in round \(i\). By the definition of a valid message, \(p_{0},\ldots,p_{i}\) are all distinct.
We claim that all the processes in the sequence \(p_{0},\ldots,p_{i}\) are faulty. Suppose, for contradiction, that \(p_{j}\) is correct for some \(j\) such that \(1\leq j\leq i\). Since the signature of a correct process cannot be forged, it follows that \(p_{j}\) signed and relayed the message \(m:p_{0}:\cdots:p_{j}\) in round \(j\). Since \(p_{j}\) is correct, it extracted \(m\) in round \(j-1<i\), which contradicts the assumption that \(i\) is the earliest round in which a correct process extracts \(m\). Thus, \(p_{0},\ldots,p_{i}\) are distinct and faulty; hence \(i\leq t\).
Therefore, \(p\) will send a valid message \(m:p_{0}:\cdots:p_{i}:p\) to all processes in round \(i+1\leq t+1\). All correct processes will receive that message and will extract \(m\) in round \(i+1\) if they have not done so already. Thus, all correct processes extract \(m\), as wanted.
From the claim, it follows that all correct processes extract the same set of values. Thus they all deliver the same message, proving Agreement.
Termination is trivial: the sender \(p_{0}\) delivers in round \(0\), and every other correct process delivers a message at the end of the \(t+1\) rounds.
**Claim 3**: _A process delivers a message at most once, and if it delivers some message \(m\neq SF\), then \(m\) was broadcast by the sender._
At the end of the \(t+1\) rounds, if a correct process delivers a message \(m\), then the set extracted contains only the message \(m\). If a correct process delivers \(m\), then \(m\) was extracted from a valid message, signed and broadcast by the sender. In case \(p\) is the sender, it stores \(m\) in extracted in round \(0\) and delivers it.
#### 2.3.2 Snapshot
The snapshot object was introduced in [1] as a shared data structure allowing concurrent processes to store information in a collection of shared registers. It can be seen as an initially empty set, which can then contain up to \(n\) values (one per process). This object provides two operations, denoted \(update()\) and \(snapshot()\). The invocation \(update(m)\) by a process \(p_{i}\) writes the value \(m\) in process \(p_{i}\)'s register. The invocation \(snapshot()\) by a process \(p_{i}\) reads all the registers and returns their contents, which we denote as \(view_{i}\).
We consider an atomic snapshot object i.e the snapshot object satisfies the following properties:
* **Termination.** The invocation of \(snapshot()\) or \(update()\) by a correct process terminates.
* **Atomicity Property.** The snapshot and update operations are atomic, meaning that they appear to execute instantaneously and they satisfy the sequential specification of the snapshot.
By enforcing Atomicity on the snapshot operation, each process can obtain a consistent and unchanging view of the shared registers, preventing any concurrent modifications that could lead to data corruption or incorrect decisions.
The snapshot object satisfies the following property when the code of each process consists first of an invocation of \(update()\) with some value (its local value), followed by any number of invocations of \(snapshot()\).
* **Inclusion Property.** (1) When a process \(p_{i}\) takes a snapshot, the resulting view \(view_{i}\) includes the local value of \(p_{i}\). (2) For any process \(p_{j}\), if the snapshot of \(p_{i}\) occurs before the snapshot of \(p_{j}\), then \(view_{i}\) is a subset of \(view_{j}\).
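For illustration, a lock-based Python stand-in for the snapshot object is sketched below. Real constructions such as the one of [1] are wait-free and avoid locks; a single lock merely reproduces the same atomic semantics.

```
import threading

class Snapshot:
    """Lock-based stand-in for an atomic snapshot object (illustrative)."""
    def __init__(self, n: int):
        self._regs = [None] * n            # one register per process, None = ⊥
        self._lock = threading.Lock()

    def update(self, i: int, value) -> None:
        with self._lock:
            self._regs[i] = value

    def snapshot(self) -> list:
        with self._lock:                   # an instantaneous, consistent view
            return list(self._regs)

S = Snapshot(3)
S.update(0, "a")
view1 = S.snapshot()
S.update(2, "b")
view2 = S.snapshot()
# Inclusion property: the earlier view is contained in the later one.
assert all(v1 is None or v1 == v2 for v1, v2 in zip(view1, view2))
```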
### \(k\)-Set Agreement
In \(k\)-set agreement, each process must decide on a value such that no more than \(k\) different values are decided by correct processes. More precisely, let \(V\) be a finite set of at least \(k+1\) values. Each process has an _initial_ value \(v\) in \(V\), and we say that it _proposes_ \(v\) to \(k\)-set agreement. Each process has to irrevocably _decide_ on a value in \(V\). The decided values must satisfy the following properties.
* **Validity.** If all the correct processes propose the same initial value \(v\), no correct process decides a value different from \(v\).
* **Agreement.** At most \(k\) different values are decided by the correct processes.
* **Termination.** Eventually, all the correct processes decide.
We say that an algorithm solves \(k\)-set agreement in a system of \(n\) processes with at most \(t<n\) process failures if all the executions in this system satisfy these properties. When \(k=1\), \(k\)-set agreement is the classical _consensus_. Several non-equivalent validity properties for \(k\)-set agreement have been proposed and discussed [10]. The validity considered here is generally the one used in the Byzantine case. More recently, [12] argues about the possibilities and impossibilities of various validity properties in the context of consensus.
Remark that the validity property given here is stronger than the one generally given for crash failures, in which a decided value only has to be one of the initial values of the processes (correct or faulty). Hence, in some cases, a decided value may come from a faulty process, which can be acceptable when processes are not malicious, as with crash failures. Moreover, in the Byzantine case, it is not clear what the initial value of a Byzantine process is (any value? no value?), and such a weak validity condition would allow deciding any value in all cases.
Note that, if applied to the crash failure model, it can also be interesting from a practical point of view to force the decided value when all correct processes propose the same value.
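The following small Python helper, a run-checking aid that is not part of any protocol, makes these three properties concrete by testing them on one finished run.

```
def check_k_set_run(proposals, decisions, correct, k):
    """Check termination, agreement, and validity on one finished run.

    `proposals` and `decisions` map process ids to values; `correct` is the
    set of non-faulty processes. (Illustrative helper only.)
    """
    termination = all(p in decisions for p in correct)
    agreement = len({decisions[p] for p in correct if p in decisions}) <= k
    proposed = {proposals[p] for p in correct}
    validity = (len(proposed) != 1 or
                all(decisions.get(p) in proposed for p in correct))
    return termination and agreement and validity

# Two correct processes proposing 1 must both decide 1 (validity holds):
assert check_k_set_run({0: 1, 1: 1, 2: 9}, {0: 1, 1: 1}, {0, 1}, k=2)
```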
### A Lower Bound on \(k\) for \(k\)-Set Agreement
We demonstrate that \(k=\lfloor\frac{n}{n-t}\rfloor\) is a lower bound for \(k\)-set agreement, meaning that no smaller value of \(k\) is achievable when \(t\) processes may fail. Let us proceed with the formal proof.
**Theorem 1**: \(\lfloor\frac{n}{n-t}\rfloor\) _is a lower bound for solving \(k\)-set agreement_
We consider a system with \(n\) processes, \(t\) of which may be faulty. Let us partition the processes into \(k=\lfloor\frac{n}{n-t}\rfloor\) subsets \(g_{1},g_{2},\ldots,g_{k}\), each of size at least \(n-t\).
Let us consider a run \(\alpha\) where, in each \(g_{i}\), all processes have initial value \(v_{i}\), there is at least one non-faulty process \(p_{i}\), and all the faulty processes crash after all the correct processes have decided (we can also consider that there is no crash). Moreover, let us suppose that the values \(v_{i}\) are pairwise distinct.
Let us consider a run \(\alpha_{1}\), where all the processes in \(g_{1}\) are correct and have initial value \(v_{1}\). All the processes in \(g_{j}\), for \(j\neq 1\); have initial value \(v_{j}\) and crash after all the correct processes have decided. As the processes in \(g_{1}\) need to ensure validity they need to decide \(v_{1}\).
Notice that process \(p_{1}\in g_{1}\) cannot distinguish between \(\alpha\) and \(\alpha_{1}\). Thus, for any decision algorithm it has to decide \(v_{1}\) in both runs.
Generalizing this argument for any \(p_{i}\), at least \(k=\lfloor\frac{n}{n-t}\rfloor\) different values are decided in \(\alpha\) by any decision algorithm.
As we give in Section 3 an algorithm for \(k\)-set agreement in the Byzantine case with authentication for \(k\geq\lfloor\frac{n}{n-t}\rfloor\), we deduce that \(k=\lfloor\frac{n}{n-t}\rfloor\) is a tight lower bound for \(k\)-set agreement.
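As a quick numeric illustration of this bound (our addition, not from the paper), here are the optimal values \(k=\lfloor\frac{n}{n-t}\rfloor\) for a few sample systems:

```
# Optimal k = ⌊n/(n-t)⌋ for sample systems: with a minority of faulty
# processes consensus-like agreement (k = 1) is possible, and k degrades
# as t grows toward n.
for n, t in [(10, 3), (10, 5), (10, 7), (10, 9)]:
    print(f"n={n}, t={t}: optimal k = {n // (n - t)}")
# prints k = 1, 2, 3, 10 respectively
```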
## 3 \(k\)-Set Agreement in an authenticated Synchronous Message Passing Model
In this section, we explore \(k\)-set agreement in a message-passing model with authentication. In fact, all these results apply to crash failure models too. Our focus is on ensuring reliable communication between processes through the exchange of **authenticated** messages in the TRB algorithm. Each message is signed by its sender, guaranteeing its authenticity. Furthermore, we assume that communication is reliable, meaning that messages are neither lost, forged, nor generated by the network.
As a first step, we present a \(k\)-set agreement algorithm running in two rounds, with no constraint on the number of failures \(t\), but with a value of \(k\) equal to \(\lfloor\frac{n}{n-t}\rfloor+1\), which is not optimal. As a second step, we give an algorithm that is optimal concerning the value of \(k\), achieving \(k\geq\lfloor\frac{n}{n-t}\rfloor\) in \(t+1\) rounds.
### A Two-Round \(k\)-Set Agreement Algorithm in a Byzantine Failures Synchronous Model with Authentication
We first present a two-round algorithm that ensures \(k\)-set agreement for \(k=\lfloor\frac{n}{n-t}\rfloor+1\). This is an authenticated algorithm, where the messages sent by processes are signed. This prevents any faulty process from forging the signature of a correct process or misrepresenting the value sent by a correct process.
### The Algorithm
Algorithm 3 ensures \(k\)-set agreement for \(k=\lfloor\frac{n}{n-t}\rfloor+1\) and for all \(t\). The processes exchange their messages in two rounds. In the first round, a process \(p_{i}\) sends its initial value \(v_{i}\) and receives every other process's initial value, if any, storing them in a vector \(V_{i}\). In the second round, it sends this vector to every other process and receives from every process \(p_{j}\) a vector \(V_{j}\), if any.
```
Input: v_i, the initial value of p_i
Result: decide
/* local variables */
1  V_i : decision vector of size n, initialized to ⊥ ;
2  M_i[i][i] := V_i[i] := v_i ;
3  for all j, r with (j,r) ≠ (i,i) do M_i[j][r] := ⊥ end for ;
   /* ------------------- round 1 ------------------- */
4  send the value M_i[i][i] to all the processes ;
5  when p_i receives v_j from p_j do M_i[i][j] := v_j ;
   /* ------------------- round 2 ------------------- */
6  send the vector V_i = M_i[i][*] to all the processes ;
7  when p_i receives a vector V from p_j do M_i[j] := V ;
   /* ------------------- at the end of round 2 ------------------- */
8  for j = 1 to n, with j ≠ i do
9    w := M_i[i][j] ;
10   if w = ⊥ then
11     V_i[j] := ⊥ ;
12   else
13     V_i[j] := w ;
14     for ℓ = 1 to n, with ℓ ≠ i do
15       if M_i[ℓ][j] ≠ ⊥ then
16         if M_i[ℓ][j] ≠ w then
17           V_i[j] := ⊥
18         end if
19       end if
20     end for
21   end if
22 end for
   /* ------------------- decision at the end of round 2 ------------------- */
23 if p_i finds in V_i, n-t values equal to v_i then
24   decide v_i
25 else
26   decide ⊥
27 end if
```
**Algorithm 3** Two-round \(k\)-set agreement: code of process \(p_{i}\).
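The following Python sketch exercises Algorithm 3 under the simplifying assumption that every process follows the protocol in round 2 (authentication prevents altering relayed values) and Byzantine behavior is limited to equivocating in round 1; it is an illustration, not a faithful adversary model.

```
BOT = None

def two_round_kset(n, t, values, round1):
    """Sketch of Algorithm 3. `values[i]` is p_i's proposal; `round1[i][j]`
    is the value p_i received from p_j in round 1, so an equivocating
    Byzantine p_j is modeled by differing entries in column j."""
    decisions = []
    for i in range(n):
        V = list(round1[i])                       # p_i's own row
        for j in range(n):
            column = [round1[l][j] for l in range(n)]
            if any(v not in (BOT, V[j]) for v in column):
                V[j] = BOT                        # p_j equivocated: discard
        decisions.append(values[i] if V.count(values[i]) >= n - t else BOT)
    return decisions

# All correct processes propose 'a'; Byzantine p_3 sends 'x' to some and
# 'y' to others. The equivocation is detected and validity still holds.
r1 = [["a", "a", "a", "x"], ["a", "a", "a", "y"],
      ["a", "a", "a", "x"], ["a", "a", "a", "y"]]
print(two_round_kset(4, 1, ["a", "a", "a", "?"], r1))  # ['a', 'a', 'a', None]
```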
### Proof of the algorithm
First, note that the exchanged messages are authenticated: a Byzantine process may choose not to relay a message, but it cannot misrepresent or alter the content of a received message. Only validly signed messages are considered.
Each process \(p_{i}\) manages a local matrix \(M_{i}[1..n][1..n]\), such that \(M_{i}[i][i]\) is initialized to the value proposed by \(p_{i}\) (as a signed message) and all the other entries are initialized to \(\bot\). In the first round, each \(p_{i}\) sends \(M_{i}[i][i]\) to all the processes and assigns to \(M_{i}[i][j]\) the value it receives from \(p_{j}\).
Then, in the second round, each process \(p_{i}\) broadcasts its vector \(M_{i}[i][*]\), and the vectors received from the other processes are used to update the matrix \(M_{i}\) as follows: when \(p_{i}\) receives a vector \(V\) from \(p_{j}\), it stores it in \(M_{i}[j][*]\).
An example of \(p_{i}\)'s matrix \(M_{i}\) is represented in Fig. 1. At the conclusion of this round, process \(p_{i}\) compares the values of each column. If it detects that a process \(p_{j}\) has sent two different values to two different processes, it sets \(p_{j}\)'s entry in \(V_{i}\) to \(\bot\). To guarantee the validity property, if \(p_{i}\) finds \(n-t\) values in \(V_{i}\) that are equal to its own initial value \(v_{i}\), it decides on \(v_{i}\) as the agreed value. Otherwise, it decides on \(\bot\).
**Lemma 2**: _Let \(p_{i}\) be a correct process with matrix \(M_{i}\), and let \(p_{\ell}\) and \(p_{j}\) be two processes. If \(M_{i}[i][j]\neq M_{i}[\ell][j]\), meaning that process \(p_{j}\) sent two distinct values, both different from \(\bot\), to \(p_{i}\) and \(p_{\ell}\), then \(p_{j}\) is Byzantine._
By contradiction, let us suppose that \(p_{j}\) is correct; it then sends the same value to everyone. Thanks to authentication, \(p_{j}\)'s message can neither be forged nor lied about, so all the processes in \(\Pi\) have the same view of \(p_{j}\)'s value, and every process, correct or Byzantine, can only relay the same message sent by \(p_{j}\).
**Lemma 3** (Validity): _If all the correct processes propose the same value, they decide that value._
Consider the set \(C\) of correct processes, all of which propose the same initial value \(v\). Since we have at most \(t\) faulty processes, \(C\) consists of at least \(n-t\) correct processes. Since the messages are authenticated, no Byzantine process can lie about a correct process's value; thus every column corresponding to a correct process contains the same value (or \(\bot\), in case a Byzantine process lies by claiming it received nothing). For every process \(p_{j}\) in \(C\), \(V[j]=v\). Thus, by the end of the second round, all correct processes will have at least \(n-t\) values in their vector that are equal to \(v\). Then, in line 23 of the algorithm, when a correct process observes this condition, it decides the value \(v\).
**Lemma 4** (Agreement): _At most \(\lfloor\frac{n}{n-t}\rfloor+1\) values are decided._
We first prove that no more than \(\lfloor\frac{n}{n-t}\rfloor\) different values other than \(\bot\) are decided.
First, note that a Byzantine process may lie about another Byzantine process's value; in this case we have two different values in the same column, and a correct process analyzing its matrix will set that value to \(\bot\).
Let \(p_{\ell}\) be a Byzantine process that sends two distinct values to two correct processes in the first round. Since the rows of correct processes are equal in all the matrices, all the correct processes will detect that \(p_{\ell}\) is Byzantine and set its value to \(\bot\) in line 17.
Figure 1: Matrix \(M_{i}\)
Now we suppose that \(p_{\ell}\) does not send two distinct values to two correct processes in the first round, but sends \(v_{\ell}\) to one and nothing to the other. In this case, at the end of the second round, consider two correct processes \(p_{i}\) and \(p_{j}\) with decision vectors \(V_{i}\) and \(V_{j}\), respectively. Either \(V_{i}[\ell]=V_{j}[\ell]\), or one of them has \(\bot\) and the other \(v_{\ell}\).
From all the above, the worst-case scenario is when a Byzantine process \(p_{\ell}\) acts like a correct one. In this case, when a correct process \(p_{i}\) decides its initial value, the number of occurrences of \(v_{i}\) in \(V_{i}\) has to be at least \(n-t\). Thus, we can have up to \(\lfloor\frac{n}{n-t}\rfloor\) groups of size \(n-t\) with different values, since a Byzantine process cannot belong both to a set of processes proposing \(v\) and to another set of processes proposing \(v^{\prime}\).
In conclusion, we have at most \(\lfloor\frac{n}{n-t}\rfloor\) decided values, plus \(\bot\).
**Lemma 5** (Termination): _All the correct processes decide._
All the correct processes will execute the two rounds, and decide at the end of the second round.
By the above lemmas, we get the following theorem.
**Theorem 6**: _Algorithm 3, ensures \(k\)-set agreement in an authenticated message passing system with Byzantine failures for \(k=\lfloor\frac{n}{n-t}\rfloor+1\)_
### An Algorithm for \(k\)-Set Agreement in Byzantine Failures Synchronous Message Passing Models, Optimal Concerning the Value of \(k\)
In this section, we present an algorithm for \(k\)-set agreement when \(k\geq\lfloor\frac{n}{n-t}\rfloor\). By Theorem 1, this is the best we can do, and this algorithm is therefore optimal concerning \(k\).
Before that, we note that the Terminating Reliable Broadcast [10] is at the heart of the following \(k\)-set agreement algorithm.
For this, we use TRB to solve interactive consistency [11], namely, correct processes agree on an \(n\)-vector corresponding to the initial values of the processes. More precisely, each process \(p_{i}\) first invokes \(TRB\mbox{-}bcast(v,p_{i})\) with its initial value \(v\). Then each process \(p_{i}\) fills a local \(n\)-vector \(L_{i}\) with the values obtained from each \(TRB\mbox{-}deliver(p_{j})\). This is Phase 1 of Algorithm 4.
Specifically, we introduce an authenticated algorithm that ensures \(k\)-set agreement for \(k=\lfloor\frac{n}{n-t}\rfloor\). To achieve this level of agreement, we are constrained to use more than two rounds. The algorithm incorporates a primitive that operates over \(t+1\) rounds. Once again, there are no limitations on the number of failures \(t\). This algorithm uses \(n\) instances of the **Terminating Reliable Broadcast** (TRB), where the delivered value is the proposed value for the set agreement.
For our algorithm, we run \(n\) instances of the authenticated TRB primitive, where the \(i\)th instance of TRB corresponds to the run in which process \(p_{i}\) is the sender.
### The Algorithm
Algorithm 4 ensures \(k\)-set agreement for \(k\geq\lfloor\frac{n}{n-t}\rfloor\) and for all \(t\). When \(TRB\mbox{-}bcast(v_{i},p_{i})\) is called, process \(p_{i}\) is the sender in the corresponding TRB instance and stores its value in the vector \(L_{i}\); on the call of \(TRB\mbox{-}deliver(q)\), where \(q\) is the sender that broadcast a value \(m\) in its TRB instance, \(p_{i}\) delivers that value and stores it in \(L_{i}\).
### Proof of the algorithm
Every process \(p_{i}\) holds a vector \(L_{i}\) initialized to \(\bot\). \(n\) instances of TRB are started, one per process, with each process \(p_{i}\) being the sender in one instance \(i\).
Every process \(p_{i}\) records in vector \(L_{i}\) the message \(m\) delivered from process \(q\). We consider \(L_{i}\) the vector of the proposed values for the \(k\)-set Agreement.
At the end of Phase 1, after each process sets its vector \(L_{i}\), all the vectors of correct processes are equal, even if there are Byzantine processes in the system; and if all the correct processes propose the same value, each vector contains at least \(n-t\) values equal to that initial value.
This ensures the validity of \(k\)-set agreement: at the end of Phase 2, the correct processes decide that value in line 9.
Let us suppose we are in an execution where the correct processes do not all have the same initial value. Since we have at least \(n-t\) correct processes, for every process \(p_{i}\) we have at least \(n-t\) values different from \(SF\) in \(L_{i}\).
At the end of the \(n\) instances of TRB, all the correct processes have the same \(L\). Thus, in Phase 2, each correct process \(p_{i}\) decides from \(L_{i}\) by either finding \(n-t\) values equal to its initial value, or any value repeated \(n-t\) times; if neither exists, it decides \(\bot\). The following lemmas prove that if a correct process decides \(\bot\), then no other correct process decides a value different from \(\bot\).
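Since the listing of Algorithm 4 is not reproduced here, the following Python sketch of its Phase 2 decision rule, as just described, may help; the function name and structure are ours, not the paper's.

```
SF, BOT = "SENDER_FAULTY", None

def phase2_decide(i, L, n, t):
    """Sketch of the Phase 2 decision rule. L is the common vector built in
    Phase 1 from the n TRB instances; L[i] holds p_i's own initial value."""
    def count(v):
        return sum(1 for x in L if x == v)
    if count(L[i]) >= n - t:                   # first, p_i's own value
        return L[i]
    for v in L:                                # then any value seen n-t times
        if v != SF and count(v) >= n - t:
            return v
    return BOT

# With n = 4, t = 1 and L = ['a', 'a', 'a', SF], every process decides 'a'.
assert phase2_decide(0, ["a", "a", "a", SF], 4, 1) == "a"
```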
**Lemma 7**: _Let \(p_{i}\) and \(p_{j}\) be two correct processes that run algorithm 4. \(L_{i}=L_{j}\), at the end of Phase 1._
From the agreement property of TRB [Section 2.3.1], if a correct process delivers a value \(m\), that value is delivered by all the correct processes. The delivered messages are stored in the vectors of the correct processes, so all the correct processes have the same values in their vectors.
**Lemma 8**: _The vector \(L\) contains the initial value broadcast by correct processes._
Let \(L\) be the vector \(L_{i}\) of a correct process \(p_{i}\); from Lemma 7, all the correct processes have the same \(L\). Since in Phase 1 every correct process \(p_{i}\) stores its initial value in \(L_{i}\) in line 2, every initial value broadcast by a correct process is in \(L\).
Hence, from the above Lemmas we conclude that at the end of phase 1, all the correct processes have the same view on \(L\).
**Lemma 9**: _If a correct process \(p_{i}\) decides a value \(v\) different from \(\bot\), no other correct process will decide \(\bot\)._
Let \(p_{i}\) be a correct process that decides the value \(v\) in Phase 2. This implies that \(p_{i}\) has found at least \(n-t\) instances of the value \(v\) in \(L_{i}\). Consider another correct process \(p_{j}\). From Lemma 7, given that the vectors \(L_{i}\) and \(L_{j}\) are equal for all correct processes, \(p_{j}\) also finds at least \(n-t\) instances of the value \(v\). Thus, every correct process decides a value different from \(\bot\).
**Lemma 10**: [Validity] _If all the correct processes propose the same value, they decide this value._
Let \(C\) be the set of correct processes, all of which propose \(v\). From Lemmas 7 and 8, we have at least \(n-t\) values equal to \(v\) in \(L\); then, in line 9, all the correct processes will find this condition satisfied and decide \(v\).
**Lemma 11**: [Termination] _All the correct processes decide._
All the correct processes will execute the two phases and will decide at the end of Phase 2.
**Lemma 12**: [Agreement] _At most \(\lfloor\frac{n}{n-t}\rfloor\) values are decided by correct processes._
If a correct process decides \(\bot\), all the correct processes will decide that value. Thus, exactly one value is decided.
If a correct process decides a value different from \(\bot\), then by Lemma 9 all correct processes decide a value that has a frequency of at least \(n-t\) in \(L\).
Suppose that the correct processes decide \(\alpha\) distinct values. Given that each of the \(\alpha\) values must appear at least \(n-t\) times in \(L\) for a decision to be made, the total count of these appearances is at least \(\alpha(n-t)\). This count cannot exceed the total number of processes in the system, so \(\alpha(n-t)\leq n\), leading to \(\alpha\leq\lfloor\frac{n}{n-t}\rfloor\). From Lemma 9, no correct process decides \(\bot\) in this case.
#### 3.2.1 Case of Consensus in crash failures or Byzantine failures with authentication
When examining Consensus with the same validity condition, where the decided value must be the same if all correct processes propose the same value, the traditional bound of \(t+1\) rounds to achieve consensus [Aguilera and Toueg(1999)] applies. Building upon the result from the previous section, we have:
**Theorem 13**: _For crash failures or Byzantine failure with authentication, \(1\)-set Agreement (consensus) is solvable if and only if there is a majority of correct processes. Moreover the algorithm requires \(t+1\) rounds._
## 4 \(k\)-Set Agreement in Crash Failures Asynchronous Read-Write Shared Memory Models
In this section, we present an algorithm that operates in an asynchronous read-write (shared memory) setting, specifically designed to handle crash failures. This algorithm is applicable to a system consisting of \(n\) processes, among which up to \(t\) processes may crash. It is known that in an asynchronous system there is no algorithm solving consensus when the system is subject to crash failures [Fischer et al.(1985)]. The authors of [De Prisco et al.(1999)] investigate the \(k\)-set agreement problem in an asynchronous system, exploring several variations of the problem definition by varying the validity condition and the system model. In our case, we are interested in the following validity, _if all the correct processes propose the same value, that value is decided_, and in the shared memory model, for which they presented an impossibility result.
**Theorem 14**: _[De Prisco et al.(1999)] In the Shared Memory/Crash model, there is no protocol for solving \(k\)-set agreement when \(t\geq\frac{n}{2}\) and \(t\geq k\)._
And a possibility one:
**Theorem 15**: _[De Prisco et al.(1999)] There exists a protocol that can solve \(k\)-set agreement for \(t<\frac{k-1}{2k}n\)._
They left a small gap between their possibility and impossibility results. In this section, we give an algorithm for the case \(t<\frac{k-1}{2k-1}n\), which is equivalent to \(k>\lfloor\frac{n-t}{n-2t}\rfloor\). Our algorithm uses the snapshot primitive in a first phase that resembles a communication round; each process then exploits the view returned by the snapshot to make a decision.
### The Algorithm
Algorithm 5 solves \(k\)-set agreement for \(k>\lfloor\frac{n-t}{n-2t}\rfloor\). Since we are in an asynchronous system, processes cannot wait indefinitely, as a process cannot distinguish between a correct process that is slow and a process that has crashed. Thus, once a process sees at least \(n-t\) values, it moves to a decision-making step.
```
Input: initial value m
Result: decide
/* shared variables */
1  S : snapshot object
/* local variables */
2  X_i and L_i : two vectors [1..n] initialized to ⊥
3  x_i := 0
4  S.update(m) ;
5  while |{ j : L_i[j] ≠ ⊥ }| < n - t do
6    L_i ← S.snapshot()
7  end while
8  X_i ← L_i ;
9  x_i ← number of values ≠ ⊥ in X_i ;
10 if p_i finds in X_i, x_i - t values equal to its initial value m then
11   decide m
12 else if p_i finds in X_i, x_i - t values equal to some value v then
13   decide v
14 else
15   decide ⊥
16 end if
```
**Algorithm 5** Solving \(k\)-set agreement in crash failures shared memory models.
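The following Python sketch simulates Algorithm 5 by interleaving one atomic step at a time, which is enough to exercise the inclusion property used in the proofs below; it assumes no process actually crashes and is an illustration only.

```
import random

BOT = None

def kset_shared_memory(values, t, seed=1):
    """Sequential interleaving simulation of Algorithm 5 (no real threads):
    each step atomically runs one process's next update() or snapshot()."""
    rng = random.Random(seed)
    n = len(values)
    regs = [BOT] * n                     # the shared snapshot object S
    pending, views = list(range(n)), {}
    while pending:
        i = rng.choice(pending)
        if regs[i] is BOT:
            regs[i] = values[i]          # line 4: S.update(m)
        else:
            view = list(regs)            # line 6: L_i <- S.snapshot()
            if sum(v is not BOT for v in view) >= n - t:
                views[i] = view          # line 8: X_i <- L_i
                pending.remove(i)
    decisions = {}
    for i, X in views.items():
        x = sum(v is not BOT for v in X)                     # line 9
        cands = [v for v in X if v is not BOT and X.count(v) >= x - t]
        if X.count(values[i]) >= x - t:
            decisions[i] = values[i]                         # line 11
        elif cands:
            decisions[i] = cands[0]                          # line 13
        else:
            decisions[i] = BOT                               # line 15
    return decisions

# n = 5, t = 2: the bound allows k = 4; all processes decide here since
# none actually crashes in this interleaving.
print(kset_shared_memory(["a", "a", "b", "b", "c"], t=2))
```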
### Proof of the algorithm
We use a snapshot object denoted \(S\). Each process \(p_{i}\) starts by updating the snapshot \(S\) with its initial value \(m\) by invoking the operation \(update(m)\). It then repeatedly calls the function \(snapshot()\), storing the resulting view in a vector \(L_{i}\) of size \(n\) initially set to \(\bot\), until at least \(n-t\) processes have updated the snapshot; the final view is then stored in \(X_{i}\).
At line 9, process \(p_{i}\) computes \(x_{i}\), the number of values in \(X_{i}\) that are different from \(\bot\); note that \(x_{i}\geq n-t\) because of the condition in line 5. If \(p_{i}\) finds \(x_{i}-t\) values equal to its initial value, it decides that value. If not, it checks whether there are \(x_{i}-t\) values equal to some value \(v\) and, if so, decides \(v\). If neither of these conditions is met, it decides \(\bot\).
It is important to note that, due to the inclusion property, if a snapshot \(X_{i}\) is taken at an earlier point in time than another snapshot \(X_{j}\), then the values represented by \(X_{i}\) are included in the values represented by \(X_{j}\), and we say that \(X_{i}\) is smaller than \(X_{j}\). If process \(p_{i}\) decides \(v\), it has found at least \(x_{i}-t\) values \(v\) in \(X_{i}\). By the inclusion property, process \(p_{j}\) also finds at least \(x_{i}-t\) values \(v\) in \(X_{j}\); but to decide \(v\), it has to find \(x_{j}-t\) values \(v\).
**Lemma 16**: _Let \(p_{i}\) be a correct process that decides a value \(v\), \(v\neq\bot\), for every correct process \(p_{j}\) with \(X_{i}>X_{j}\), \(p_{j}\) will not decide \(\bot\)._
Let \(p_{i}\) be a correct process that decides \(v\), and let \(p_{j}\) be a correct process with \(X_{j}<X_{i}\); let us prove that \(p_{j}\) will not decide \(\bot\). Let \(\alpha_{v}^{\ell}\) denote the number of occurrences of \(v\) in \(X_{\ell}\). Since \(p_{i}\) decided \(v\), we have \(\alpha_{v}^{i}\geq x_{i}-t\). Since \(X_{j}<X_{i}\):
\[\begin{split}&\alpha_{v}^{i}-\alpha_{v}^{j}\leq x_{i}-x_{j}\Rightarrow \alpha_{v}^{i}-x_{i}\leq\alpha_{v}^{j}-x_{j}\\ &\text{Since }\alpha_{v}^{i}\geq x_{i}-t\\ &\text{We obtain }\alpha_{v}^{j}-x_{j}\geq-t\Rightarrow\alpha_{v}^{j} \geq x_{j}-t\end{split} \tag{1}\]
This does not necessarily imply that \(p_{j}\) will decide \(v\), since it first looks for its initial value; but as there are at least \(\alpha_{v}^{j}\geq x_{j}-t\) occurrences of \(v\) in \(X_{j}\), \(p_{j}\) will not decide \(\bot\).
Let \(X_{i}\) be the smallest snapshot among all the snapshots in line 8.
**Lemma 17**: _No correct process decides a value \(v\), if \(\alpha_{v}^{i}<x_{i}-t\)._
Let \(p_{j}\) be a correct process with \(X_{j}\geq X_{i}\). Let us suppose by contradiction that \(p_{j}\) decides \(v\). Then \(\alpha_{v}^{j}\geq x_{j}-t\).
Since \(X_{i}\leq X_{j}\):
\[\begin{split}&\alpha_{v}^{j}-\alpha_{v}^{i}\leq x_{j}-x_{i} \Rightarrow\alpha_{v}^{j}-x_{j}\leq\alpha_{v}^{i}-x_{i}\\ &\text{Since }\alpha_{v}^{i}<x_{i}-t\\ &\text{Then }\alpha_{v}^{j}-x_{j}<-t\Rightarrow\alpha_{v}^{j}<x_{j}-t \end{split} \tag{2}\]
Contradicting the fact that \(\alpha_{v}^{j}\geq x_{j}-t\).
**Lemma 18** (Validity): _If all the correct processes propose the same value, that value is decided._
If all the correct processes propose the same value \(v\), then at most \(t\) values (proposed by the faulty processes) can be different from \(v\). So, for a correct process \(p_{i}\), there are at most \(t\) values different from \(v\) in its snapshot \(X_{i}\); therefore at least \(x_{i}-t\) values of \(X_{i}\) are equal to \(v\), and \(p_{i}\) decides \(v\) in line 11.
**Lemma 19** (Agreement): _At most \(k\) values are decided, with \(k>\lfloor\frac{n-t}{n-2t}\rfloor\)._
Let \(X_{i}\) be the smallest snapshot among all the snapshots in line 8. From lemma 17, a value \(v\) can only be decided if \(\alpha_{v}^{i}\geq x_{i}-t\).
In \(X_{i}\), there can be at most \(\lfloor\frac{x_{i}}{x_{i}-t}\rfloor\) different values \(v\) such that \(\alpha_{v}^{i}\geq x_{i}-t\).
As \(n-t\leq x_{i}\leq n\) and \(\lfloor\frac{x_{i}}{x_{i}-t}\rfloor\) is a decreasing function of \(x_{i}\), the maximum number of different decided values is obtained when \(x_{i}=n-t\); thus \(\lfloor\frac{n-t}{n-2t}\rfloor\) is the maximum number of decided values.
**Lemma 20** (Termination): _All the correct processes will eventually decide._
The number of failures is at most \(t\), so at least \(n-t\) correct processes update the snapshot \(S\). Hence every correct process eventually obtains a view of \(S\) of size at least \(n-t\) and decides.
By the above Lemmas, we get the following Theorem.
**Theorem 21**: _For \(k>\lfloor\frac{n-t}{n-2t}\rfloor\), Algorithm 5 ensures \(k\)-set agreement._
**Theorem 22**: _There is no Algorithm that can solve \(k\)-set agreement for \(k\leq\lfloor\frac{n-t}{n-2t}\rfloor-1\)._
Let us partition \(n-t\) processes into \(\lfloor\frac{n-t}{n-2t}\rfloor\) subsets \(g_{1},g_{2},\ldots,g_{\lfloor\frac{n-t}{n-2t}\rfloor}\), each of size at least \(n-2t\), and set the remaining \(t\) processes apart.
We consider a run \(\alpha\) where, for each \(g_{i}\), at least one correct process \(p_{i}\) proposes its initial value \(v_{i}\). Let \(\tau\) be the time at which all the correct processes have decided in \(\alpha\). Before \(\tau\), all the processes in \(g_{i}\) write \(v_{i}\), while the remaining \(t\) processes do not take any step.
Let \(\alpha_{1}\) be a run where, after \(\tau\), the processes in each \(g_{i}\) with \(i>1\) crash, and the remaining \(t\) processes have initial value \(v_{1}\) and write it after \(\tau\). To ensure validity, all the correct processes in \(g_{1}\) have to decide \(v_{1}\).
A correct process \(p_{1}\) in \(g_{1}\) cannot distinguish between \(\alpha\) and \(\alpha_{1}\), thus decides the same value \(v_{1}\) in both runs.
Generalizing the same argument for every \(g_{i}\) in \(\alpha\), at least \(\lfloor\frac{n-t}{n-2t}\rfloor\) different values are decided.
### \(k\)-Set Agreement in Crash Failures Asynchronous Message Passing Model
It is shown in [1] that any wait-free algorithm based on atomic, single-writer (and multi-writer) multi-reader registers can be automatically emulated in message-passing systems, provided that at least a majority of the processes are not faulty. We have \(t<\frac{k-1}{2k-1}n\), which gives a majority of correct processes since \(\frac{2k-1}{k-1}>2\). In order to simulate the shared memory model in a message-passing one, we consider that there exists a channel from a process \(p\) to a process \(q\) where \(p\) is the writer and \(q\) is the reader, and a channel from \(q\) to \(p\) where \(q\) is the writer and \(p\) is the reader.
From Theorem 21, we have the following Theorem.
**Theorem 23**: _For \(k>\frac{n-t}{n-2t}\), we have an algorithm ensuring \(k\)-set agreement in an asynchronous message passing model._
## 5 Conclusion
In this study, we have contributed to the understanding of \(k\)-set agreement in distributed systems, particularly focusing on scenarios with different types of failures. We addressed and filled certain gaps left by previous works such as [Bouzid et al.(2016)] and [Delporte-Gallet et al.(2022)], providing valuable insights into the solvability of the problem.
In the context of the Synchronous Message Passing model with Byzantine failures, we presented two algorithms that achieve \(k\)-set agreement. The first algorithm ensures \(k=\lfloor\frac{n}{n-t}\rfloor+1\) in just two rounds, demonstrating its efficiency and practicality. On the other hand, the second algorithm achieves optimality, ensuring \(k\geq\lfloor\frac{n}{n-t}\rfloor\) in \(t+1\) rounds.
We also presented an algorithm for the asynchronous shared memory model with crash failures, which solves \(k\)-set agreement for \(k>\lfloor\frac{n-t}{n-2t}\rfloor\).
Furthermore, we extended these results by leveraging equivalences between models: the synchronous authenticated Byzantine model is equivalent to the synchronous crash model, and the shared memory model can be simulated with the message passing model.
Although certain gaps remain open, our aim is to provide answers and establish a comprehensive understanding of the solvability of \(k\)-set agreement in both asynchronous and synchronous systems. We emphasize the importance of the validity property, which guarantees that if all correct processes propose the same value, that value will be decided upon.
|
2306.02227 | One-step implementation of a multi-target-qubit controlled-phase gate
with photonic qubits encoded via eigenstates of the photon-number parity
operator | In recent years, quantum state engineering and quantum information processing
using microwave fields and photons have received increasing attention. In
addition, multiqubit gates play an important role in quantum information
processing. In this work, we propose to encode a photonic qubit via two
arbitrary orthogonal eigenstates (with eigenvalues 1 and -1, respectively) of
the photon-number parity operator. With such encoding, we then present a
single-step method to realize a multi-target-qubit controlled-phase gate with
one photonic qubit simultaneously controlling n-1 target photonic qubits, by
employing n microwave cavities coupled to one superconducting flux qutrit. This
proposal can be applied not only to implement nonhybrid multi-target-qubit
controlled-phase gates using photonic qubits with various encodings, but also
to realize hybrid multi-target-qubit controlled-phase gates using photonic
qubits with different encodings. The gate realization requires only a
single-step operation. The gate operation time does not increase with the
number of target qubits. Because the qutrit remains in the ground state during
the entire operation, decoherence from the qutrit is greatly suppressed. As an
application, we show how to apply this gate to generate a multicavity
Greenberger-Horne-Zeilinger (GHZ) entangled state with general expression.
Depending on the specific encodings, we further discuss the preparation of
several nonhybrid and hybrid GHZ entangled states of multiple cavities. We
numerically investigate the circuit-QED experimental feasibility of creating a
three-cavity spin-coherent hybrid GHZ state. This proposal can be extended to
accomplish the same tasks in a wide range of physical systems, such as multiple
microwave or optical cavities coupled to a three-level natural or artificial
atom. | Qi-Ping Su, Liang Bin, Yu Zhang, Chui-Ping Yang | 2023-06-04T01:29:00Z | http://arxiv.org/abs/2306.02227v1 | One-step implementation of a multi-target-qubit controlled-phase gate with photonic qubits encoded via eigenstates of the photon-number parity operator
###### Abstract
In recent years, quantum state engineering and quantum information processing using microwave fields and photons have received increasing attention. In addition, multiqubit gates play an important role in quantum information processing. In this work, we propose to encode a photonic qubit via two arbitrary orthogonal eigenstates (with eigenvalues \(\pm 1\), respectively) of the photon-number parity operator. With such encoding, we then present a single-step method to realize a multi-target-qubit controlled-phase gate with one photonic qubit simultaneously controlling \(n-1\) target photonic qubits, by employing \(n\) microwave cavities coupled to one superconducting flux qutrit. This proposal can be applied not only to implement _nonhybrid_ multi-target-qubit controlled-phase gates using photonic qubits with various encodings, but also to realize _hybrid_ multi-target-qubit controlled-phase gates using photonic qubits with different encodings. The gate realization requires only a single-step operation. The gate operation time does not increase with the number of target qubits. Because the qutrit remains in the ground state during the entire operation, decoherence from the qutrit is greatly suppressed. As an application, we show how to apply this gate to generate a multicavity Greenberger-Horne-Zeilinger (GHZ) entangled state with general expression. Depending on the specific encodings, we further discuss the preparation of several nonhybrid and hybrid GHZ entangled states of multiple cavities. We numerically investigate the circuit-QED experimental feasibility of creating a three-cavity spin-coherent hybrid GHZ state. This proposal can be extended to accomplish the same tasks in a wide range of physical systems, such as multiple microwave or optical cavities coupled to a three-level natural or artificial atom.
Footnote †: [email protected]
## I Introduction and Motivation
Quantum computing has attracted much interest as quantum computers can in principle solve hard computing problems much more efficiently than classical computers. Multiqubit gates are particularly attractive and have been considered as appealing building blocks for quantum computing. As is well known, there exist two kinds of important multi-qubit gates, namely, _multi-control-qubit_ gates with multiple control qubits acting on a single target qubit, and _multi-target-qubit_ gates with a single qubit simultaneously controlling multiple target qubits. These two kinds of multiqubit gates have many applications in quantum information processing (QIP). For instance, they are key elements in quantum algorithms [1; 2; 3], quantum Fourier transform [4], quantum cloning [5], error correction [6; 7; 8], and entanglement preparation [9]. Therefore, it is necessary and important to implement these two kinds of multiqubit gates.
A multiqubit gate can in principle be constructed by using single-qubit and two-qubit fundamental gates. However, when the conventional gate-decomposition protocols are used to construct a multiqubit gate [10; 11; 12; 13], the number of fundamental gates increases substantially and the procedure usually becomes complicated as the number of qubits increases. As a result, the operation time required to implement a multiqubit gate will be quite long and thus the fidelity will be significantly deteriorated by decoherence. Therefore, it is worth seeking efficient ways to _directly_ implement multiqubit gates. In the past years, a number of proposals have been put forward for directly realizing multi-control-qubit gates and multi-target-qubit gates using _matter_ qubits (e.g., atoms, trapped ions, superconducting qubits, quantum dots, nitrogen-vacancy centers, etc.) [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33].
Circuit quantum electrodynamics (QED), composed of microwave cavities and superconducting (SC) qubits, has been considered as one of the best platforms for quantum computing [34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. SC qubits (such as flux qubits, transmon qubits, Xmon qubits, and fluxonium qubits) and microwave resonators (a.k.a. cavities) can be fabricated using modern integrated circuit technology. Because their energy-level spacings can be rapidly adjusted _in situ_[44; 45; 46] and their coherence time has been much improved [47; 48; 49; 50; 51], SC qubits play an important role in QIP. In addition, due to the high-quality factors of microwave cavities demonstrated in experiments [52; 53; 54; 55; 56], the microwave photons contained in microwave cavities or resonators can have a long lifetime [57]. Recently, quantum state engineering and QIP using microwave fields and photons have received increasing attention. In this fast-growing field, photonic qubits, which are encoded via discrete-variable or continuous-variable states
of microwave cavity fields, have been adopted as information carriers for quantum computing and communication.
Compared to SC qubits or other matter qubits, photonic qubits can have various encodings. Each encoding pattern has its own advantage and can be applied in different tasks of QIP. For example, photonic qubits, encoded via coherent states, are tolerant to single-photon loss [58]; the lifetime of photonic qubits encoded via cat states can be greatly enhanced by quantum error correction [59]; and photonic qubits encoded via the vacuum state and the single photon state are relatively easy to manipulate [60]. Over the past years, a number of proposals have been presented for directly realizing single-qubit gates, two-qubit gates, multi-control-qubit gates, and multi-target-qubit gates, by using photonic qubits which are specifically encoded via (i) coherent states [61, 62], (ii) cat states [63, 64, 65, 66, 67, 68], (iii) the vacuum and single-photon states [69, 70, 71, 72, 73], or (iv) polarization, spatial, and temporal degrees of freedom (DOFs) of photons [74, 75, 76, 77, 78, 79, 80], etc. The previous works have made great contributions to quantum computing and QIP with photonic qubits. Generally speaking, different encodings of photonic qubits may require different methods in implementing quantum gates with photonic qubits. In other words, methods for realizing quantum gates using photonic qubits with specific encodings may not be applicable to implementing quantum gates using photonic qubits with various encodings.
Motivated by the above, in this work, our goal is to propose an idea to encode photonic qubits, i.e., encoding a photonic qubit via two _arbitrary_ orthogonal eigenstates \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) of the photon-number parity operator \(\hat{\pi}=e^{i\pi\hat{a}^{\dagger}\hat{a}}\). Here, for the state \(|\varphi_{e}\rangle\), the photon-number parity operator \(\hat{\pi}\) has eigenvalue \(1\), while for the state \(|\varphi_{o}\rangle\), it has eigenvalue \(-1\). After a thorough search of the literature, we find no indication that this idea for encoding the photonic qubit has been reported yet.
With such encoding, we then propose a one-step method to implement a multi-target-qubit controlled-phase gate with one photonic qubit simultaneously controlling \((n-1)\) target photonic qubits, based on circuit QED. This gate is realized by employing \(n\) cavities coupled to a superconducting qutrit. As discussed in Sec. III.3 below, this proposal can be applied to photonic qubits with various encodings. More remarkably, as discussed in Sec. III.4 below, the proposal can be used to realize multi-target-qubit _hybrid_ controlled-phase gates using photonic qubits with different encodings. Hybrid quantum gates are of fundamental interest in quantum physics and have significant applications in hybrid quantum communication and quantum computation.
Greenberger-Horne-Zeilinger (GHZ) entangled states are of great interest in the foundations of quantum mechanics [81], and have many important applications in QIP [82], quantum communications [83], quantum metrology [84], error-correction protocols [85], and high-precision spectroscopy [86]. On the other hand, hybrid entangled states play a key role in QIP and quantum technology. They can serve as important quantum channels and intermediate resources for quantum technologies, covering the transmission, operation, and storage of quantum information between different formats and encodings [87, 88, 89]. In this work, as an application of the proposed gate, we further discuss how to generate a multicavity GHZ entangled state with general expression. Depending on the specific encodings \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\), we also discuss the preparation of several nonhybrid and hybrid GHZ entangled states of multiple cavities. Furthermore, we numerically investigate the circuit-QED experimental feasibility of preparing a spin-coherent hybrid GHZ state of three cavities.
We believe that this work is interesting and important from these aspects: First, the proposal can be applied not only to implement nonhybrid multi-target-qubit controlled-phase gates using photonic qubits with various encodings, but also to realize hybrid multi-target-qubit controlled-phase gates using photonic qubits with different encodings. Second, the gate implementation is very simple because it requires only a single-step operation that is independent of the number of target qubits, while the gate realization becomes complicated (especially for a large number of target qubits) by the use of the conventional gate-decomposition protocols [10, 11, 12, 13]. Third, the gate operation time does not depend on the number of target qubits and thus does not increase with the number of target qubits; however, the operation time required to realize this multi-target-qubit gate will increase with the number of target qubits when the conventional gate-decomposition protocols [10, 11, 12, 13] are applied. Last, the qutrit stays in the ground state during the entire gate operation, thus decoherence from the qutrit is significantly suppressed.
This paper is organized as follows. In Sec. II, we give a brief introduction to the multi-target-qubit gate considered in this work. In Sec. III, we explicitly show how to realize this multi-target-qubit gate, introduce concrete examples for the photonic-qubit encodings, and give a brief discussion on the nonhybrid and hybrid multi-target-qubit gates. In Sec. IV, we show how to apply this gate to generate a multicavity GHZ entangled state in a general form. Depending on the specific encodings \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\), we further show the generation of several nonhybrid and hybrid GHZ entangled states of multiple cavities. In Sec. V, we numerically analyze the circuit-QED experimental feasibility of creating a three-cavity spin-coherent hybrid GHZ state. A concluding summary is given in Sec. VI.
## II. The multi-target-qubit controlled-phase gate
For \(n\) qubits, there exist \(2^{n}\) computational basis states, which form a set of complete orthogonal bases in a \(2^{n}\)-dimensional Hilbert space of the \(n\) qubits. An \(n\)-qubit computational basis state is denoted as \(|i_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle\), where subscripts \(1\), \(2\),..., \(n\) represent qubits \(1\), \(2\),..., \(n\), respectively, and \(i_{j}\in\{0,1\}\) (\(j=1\), \(2\),..., \(n\)). The multi-target-qubit gate considered in this work (Fig. 1) is described by
\[|0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle \rightarrow |0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle,\] \[|1_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle \rightarrow |1_{1}\rangle(-1)^{i_{2}}(-1)^{i_{3}}\cdots(-1)^{i_{n}}|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle, \tag{1}\]
which shows that if the control qubit (qubit \(1\)) is in state \(|0\rangle\), the state of each of the \((n-1)\) target qubits (qubits \(2\), \(3\),..., \(n\)) remains unchanged, while if the control qubit is in state \(|1\rangle\), state \(|1\rangle\) of each target qubit undergoes a phase flip from sign \(+\) to \(-\).
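To make the truth table of Eq. (1) concrete, the short sketch below (an illustration we add here, not part of the original derivation; the helper name `cphase_multi` and the choice \(n=3\) are ours) builds the diagonal matrix of the gate in the logical basis:

```python
# Minimal sketch of the multi-target-qubit controlled-phase gate of Eq. (1),
# written as a diagonal matrix in the logical basis; n = 3 is illustrative.
import numpy as np
from itertools import product

def cphase_multi(n):
    """If qubit 1 is |1>, each target in |1> acquires a -1 phase; else identity."""
    dim = 2 ** n
    U = np.eye(dim)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        if bits[0] == 1:
            U[idx, idx] = (-1) ** sum(bits[1:])
    return U

U = cphase_multi(3)
for idx, bits in enumerate(product([0, 1], repeat=3)):
    print(bits, "->", int(U[idx, idx]))  # e.g. (1, 1, 0) -> -1
```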
In this work, the \(n\) qubits involved in this multiqubit gate are photonic qubits. For photonic qubit \(k\) (\(k=1,\,2,\,...,\,n\)), the two logical states \(|0_{k}\rangle\) and \(|1_{k}\rangle\) are encoded with two arbitrary orthogonal eigenstates \(|\varphi_{e}\rangle_{c_{k}}\) (with eigenvalue \(1\)) and \(|\varphi_{o}\rangle_{c_{k}}\) (with eigenvalue \(-1\)) of the photon-number parity operator \(\hat{\pi}_{k}=e^{i\pi\hat{a}_{k}^{\dagger}\hat{a}_{k}}\) of cavity \(k\), i.e.,
\[|0_{k}\rangle = |\varphi_{e}\rangle_{c_{k}}=\sum_{p_{k}}d_{p_{k}}|p_{k}\rangle,\qquad |1_{k}\rangle = |\varphi_{o}\rangle_{c_{k}}=\sum_{q_{k}}d_{q_{k}}|q_{k}\rangle, \tag{2}\]
where the subscript \(c_{k}\) represents cavity \(k\) (\(k=1,\,2,\,...,\,n\)), \(|p_{k}\rangle\) is a Fock state of cavity \(k\) with any even number \(p_{k}\) of photons, while \(|q_{k}\rangle\) is a Fock state of cavity \(k\) with any odd number \(q_{k}\) of photons. The sum \(\sum_{p_{k}}\) runs over all Fock states with even photon numbers, while the sum \(\sum_{q_{k}}\) runs over all Fock states with odd photon numbers. The coefficients \(d_{p_{k}}\) and \(d_{q_{k}}\) satisfy the normalization conditions \(\sum_{p_{k}}|d_{p_{k}}|^{2}=\sum_{q_{k}}|d_{q_{k}}|^{2}=1\). Obviously, the two states \(|\varphi_{e}\rangle_{c_{k}}\) and \(|\varphi_{o}\rangle_{c_{k}}\) are orthogonal to each other. One can easily check \(\hat{\pi}_{k}|\varphi_{e}\rangle_{c_{k}}=|\varphi_{e}\rangle_{c_{k}}\) and \(\hat{\pi}_{k}|\varphi_{o}\rangle_{c_{k}}=-|\varphi_{o}\rangle_{c_{k}}\), namely, the two states \(|\varphi_{e}\rangle_{c_{k}}\) and \(|\varphi_{o}\rangle_{c_{k}}\) are the eigenstates of the photon-number parity operator \(\hat{\pi}_{k}\) with eigenvalues \(1\) and \(-1\), respectively. Note that, in Eq. (2), \(p_{k}\) is a nonnegative even number while \(q_{k}\) is a positive odd number, a convention that also applies in the next section.
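Spelling out the eigenvalue check quoted above, acting with \(\hat{\pi}_{k}\) on the encoded states of Eq. (2) and using \(\hat{\pi}_{k}|p_{k}\rangle=e^{i\pi p_{k}}|p_{k}\rangle\) gives

\[\hat{\pi}_{k}|\varphi_{e}\rangle_{c_{k}}=\sum_{p_{k}}d_{p_{k}}e^{i\pi p_{k}}|p_{k}\rangle=+|\varphi_{e}\rangle_{c_{k}},\qquad \hat{\pi}_{k}|\varphi_{o}\rangle_{c_{k}}=\sum_{q_{k}}d_{q_{k}}e^{i\pi q_{k}}|q_{k}\rangle=-|\varphi_{o}\rangle_{c_{k}},\]

since \(e^{i\pi p_{k}}=1\) for even \(p_{k}\) and \(e^{i\pi q_{k}}=-1\) for odd \(q_{k}\).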
## III. Implementation of the multi-target-qubit controlled-phase gate
In this section, we will show how to implement the multi-target-qubit controlled-phase gate described by Eq. (1). We then discuss some issues related to the gate implementation, provide examples for the photonic qubit encodings, and give a brief discussion on the nonhybrid and hybrid multi-target-qubit gates.
### The gate realization
Consider a physical system consisting of \(n\) microwave cavities (\(1,\,2,\,...,\,n\)) coupled to a SC flux qutrit [Fig. 2(a)]. The three levels of the qutrit are labeled as \(|g\rangle\), \(|e\rangle\), and \(|f\rangle\) [Fig. 2(b)]. The \(|g\rangle\leftrightarrow|e\rangle\) transition can be made weak by increasing the barrier between the two potential wells. Suppose that cavity \(1\) is dispersively coupled to the \(|g\rangle\leftrightarrow|f\rangle\) transition with coupling constant \(g_{1}\) and detuning \(\delta_{1}\) but highly detuned (decoupled) from the \(|e\rangle\leftrightarrow|f\rangle\) transition of the qutrit. In addition, assume that cavity \(l\) (\(l=2,\,3,\,...,\,n\)) is dispersively coupled to the \(|e\rangle\leftrightarrow|f\rangle\) transition with coupling constant \(g_{l}\) and detuning \(\delta_{l}\) but highly detuned (decoupled) from the \(|g\rangle\leftrightarrow|f\rangle\) transition of the qutrit (Fig. 3). These conditions can be met by prior adjustment of the level spacings of the qutrit or the frequencies of the cavities. Note that both the level spacings of a SC qutrit and the frequency of a microwave cavity can be rapidly tuned within a few nanoseconds [44, 45, 46, 90, 91].
Figure 2: (a) Schematic circuit of \(n\) microwave cavities coupled to a SC flux qutrit. Each square represents a one-dimensional (1D) or three-dimensional (3D) microwave cavity. The circle \(A\) represents the flux qutrit, which is inductively or capacitively coupled to each cavity. (b) Level configuration of the flux qutrit, for which the transition between the two lowest levels can be made weak by increasing the barrier between two potential wells.
Figure 1: Circuit of a multi-target-qubit controlled-phase gate with one control qubit (qubit \(1\)) simultaneously controlling \(n-1\) target qubits (\(2,\,3,\,...,\,n\)). When the control qubit (at the filled circle) is in state \(|1\rangle\), the state \(|1\rangle\) of each target qubit (at each \(Z\)) is phase flipped as \(|1\rangle\rightarrow-|1\rangle\), while nothing happens to the state \(|0\rangle\) of each target qubit. In this work, each qubit is a photonic qubit, for which the two logic states are encoded through two _arbitrary_ orthogonal eigenstates (with eigenvalues \(\pm 1\), respectively) of the photon-number parity operator.
After the above considerations, the Hamiltonian of the whole system in the interaction picture and under the rotating-wave approximation is given by (in units of \(\hbar=1\))
\[H_{\rm I}=g_{1}(e^{-i\delta_{1}t}\hat{a}_{1}^{+}\sigma_{fg}^{-}+{\rm H.c.})+\sum_{l=2}^{n}g_{l}(e^{-i\delta_{l}t}\hat{a}_{l}^{+}\sigma_{fe}^{-}+{\rm H.c.}), \tag{3}\]
where \(\hat{a}_{1}\) (\(\hat{a}_{l}\)) is the photon annihilation operator of cavity \(1\) (\(l\)), \(\sigma_{fg}^{-}=|g\rangle\langle f|\), \(\sigma_{fe}^{-}=|e\rangle\langle f|\), \(\delta_{1}=\omega_{fg}-\omega_{c_{1}}>0\), and \(\delta_{l}=\omega_{fe}-\omega_{c_{l}}>0\) (Fig. 3). Here, \(\omega_{fg}\) (\(\omega_{fe}\)) is the \(|f\rangle\leftrightarrow|g\rangle\) (\(|f\rangle\leftrightarrow|e\rangle\)) transition frequency of the qutrit, while \(\omega_{c_{1}}\) (\(\omega_{c_{l}}\)) is the frequency of cavity \(1\) (\(l\)).
Under the large detuning conditions \(\delta_{1}\gg g_{1}\) and \(\delta_{l}\gg g_{l}\) (\(l=2,\,3,\,...,\,n\)), the energy exchange between the coupler qutrit and the cavities is negligible. In addition, under the condition of
\[\frac{|\delta_{p}-\delta_{q}|}{\delta_{p}^{-1}+\delta_{q}^{-1}} \gg g_{p}g_{q} \tag{4}\]
(where \(p\), \(q\in\{2,\,3,\,...,\,n\}\) and \(p\neq q\)), the interaction between the cavities (\(2\), \(3\), \(...\), \(n\)), induced by the coupler qutrit, is negligible. Therefore, the Hamiltonian (3) reduces to an effective dispersive Hamiltonian [92, 93, 94, 95, 96], which is diagonal in the photon-number basis when the qutrit is in the ground state \(|g\rangle\) [Eq. (7)]. The corresponding time-evolution operator factorizes as \(U=U_{1}\otimes\prod_{l=2}^{n}U_{1l}\) [Eq. (9)], where \(U_{1}\) [Eq. (10)] imprints the photon-number-dependent phase \(\exp(-i\hat{a}_{1}^{\dagger}\hat{a}_{1}\eta t)\) on cavity \(1\), and \(U_{1l}\) [Eq. (14)] applies a conditional phase that, for \(\chi_{1l}t=\pi\), flips the sign of the joint state whenever cavity \(1\) and cavity \(l\) both contain odd photon numbers. Here \(\eta=-\lambda_{1}+\sum_{l=2}^{n}\chi_{1l}\), with \(\lambda_{1}\) the dispersive frequency shift induced by cavity \(1\) and \(\chi_{1l}\) the qutrit-mediated cross-Kerr coefficient between cavities \(1\) and \(l\).
Based on Eq. (14), it is straightforward to get the following state transformation:
\[\prod_{l=2}^{n}U_{1l}|0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle = |0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle,\] \[\prod_{l=2}^{n}U_{1l}|1_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle = |1_{1}\rangle(-1)^{i_{2}}(-1)^{i_{3}}\cdots(-1)^{i_{n}}|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle. \tag{15}\]
On the other hand, based on Eqs. (10) and (2), the unitary operator \(U_{1}\), acting on the two logic states \(|0_{1}\rangle\) and \(|1_{1}\rangle\) of photonic qubit 1, results in the following state transformation:
\[U_{1}|0_{1}\rangle = \sum_{p_{1}}\exp{(-ip_{1}\eta t)}d_{p_{1}}|p_{1}\rangle,\] \[U_{1}|1_{1}\rangle = \sum_{q_{1}}\exp{(-iq_{1}\eta t)}d_{q_{1}}|q_{1}\rangle. \tag{16}\]
For \(\eta t=2s\pi\) (\(s\) is an integer), we have \(\exp(-ip_{1}\eta t)=\exp(-iq_{1}\eta t)=1\). Thus, it follows from Eq. (16) that
\[U_{1}|0_{1}\rangle=|0_{1}\rangle,\quad U_{1}|1_{1}\rangle=|1_{1}\rangle. \tag{17}\]
By combining Eq. (15) with Eq. (17), one can have the following state transformation:
\[U_{1}\prod_{l=2}^{n}U_{1l}|0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle = |0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle,\] \[U_{1}\prod_{l=2}^{n}U_{1l}|1_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle = |1_{1}\rangle(-1)^{i_{2}}(-1)^{i_{3}}\cdots(-1)^{i_{n}}|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle. \tag{18}\]
Because of \(U=U_{1}\otimes\prod_{l=2}^{n}U_{1l}\) [see Eq. (9)], Eq. (18) can be simplified as
\[U|0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle = |0_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle,\] \[U|1_{1}\rangle|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle = |1_{1}\rangle(-1)^{i_{2}}(-1)^{i_{3}}\cdots(-1)^{i_{n}}|i_{2}\rangle|i_{3}\rangle\cdots|i_{n}\rangle, \tag{19}\]
which implies that if the control photonic qubit \(1\) is in state \(|0\rangle\), the state of each target photonic qubit (\(2,\,3,\,...,\,n\)) remains unchanged, while, if the control photonic qubit \(1\) is in state \(|1\rangle\), a phase flip (from sign \(+\) to \(-\)) happens to state \(|1\rangle\) of each target photonic qubit (\(2\), \(3\), \(...,\,n\)). Hence, after the above operation, a multi-target-qubit controlled-phase gate, described by Eq. (1), is realized with the \(n\) photonic qubits (\(1\), \(2\),..., \(n\)).
### Discussion
From the description given above, one can see that:
(i) The unitary operator \(U\) was obtained starting from the original Hamiltonian (3). Thus, the gate is implemented through a single-step operation, which is described by \(U\).
(ii) During the gate operation, the qutrit stays in the ground state. Thus, decoherence from the qutrit is greatly suppressed.
(iii) In the above, we have set \(\chi_{1l}t=\pi\) (independent of \(l\)), \(\eta t=2s\pi\), and \(\eta=-\lambda_{1}+\sum_{l=2}^{n}\chi_{1l}\), which together yield
\[-\lambda_{1}t+(n-1)\pi=2s\pi. \tag{20}\]
For an even number of target qubits (i.e., \(n-1\) is even), one can choose \(\lambda_{1}t=2m\pi\) (\(m\) is a positive integer) in order to meet the condition given by Eq. (20). In this case, combining \(\chi_{1l}t=\pi\) and \(\lambda_{1}t=2m\pi\) results in \(\chi_{1l}=\lambda_{1}/(2m)\), which gives
\[g_{l}=\frac{\delta_{l}}{\delta_{1}+\delta_{l}}\sqrt{2\Delta_{1l}\delta_{1}/m}. \tag{21}\]
On the other hand, for an odd number of target qubits, one can select \(\lambda_{1}t=(2m^{\prime}+1)\pi\) (\(m^{\prime}\) is a nonnegative integer) to satisfy Eq. (20). In this case, combining \(\chi_{1l}t=\pi\) and \(\lambda_{1}t=(2m^{\prime}+1)\pi\) results in \(\chi_{1l}=\lambda_{1}/(2m^{\prime}+1)\), which leads to
\[g_{l}=\frac{2\delta_{l}}{\delta_{1}+\delta_{l}}\sqrt{\Delta_{1l}\delta_{1}/(2m ^{\prime}+1)}. \tag{22}\]
The condition (21) or (22) can be readily met by adjusting \(g_{l}\) or \(\delta_{l}\) or both, given \(\delta_{1}\). It is noted that the detuning \(\delta_{l}=\omega_{fe}-\omega_{c_{l}}\) can be tuned by changing the frequency \(\omega_{c_{l}}\) of cavity \(l\) (\(l=2,\,3,\,...,\,n\)). In addition, the coupling strength \(g_{l}\) can be adjusted by a prior design of the sample with appropriate capacitance or inductance between the coupler qutrit and cavity \(l\) (\(l=2,\,3,\,...,\,n\)) [95, 96].
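For quick numerical exploration of these matching conditions, a small helper like the following can be used (a sketch we add; the function name is ours, and all detunings, including \(\Delta_{1l}\) from Eq. (21), are taken as inputs in the same angular-frequency units):

```python
# Helper evaluating the matching conditions (21) and (22) for g_l.
# All detunings (delta_1, delta_l, Delta_1l) are inputs; units must match.
import numpy as np

def g_l_required(delta_1, delta_l, Delta_1l, n_targets, m):
    """g_l from Eq. (21) for an even number of targets, Eq. (22) for odd."""
    if n_targets % 2 == 0:  # lambda_1 t = 2 m pi
        return delta_l / (delta_1 + delta_l) * np.sqrt(2 * Delta_1l * delta_1 / m)
    # lambda_1 t = (2 m + 1) pi
    return 2 * delta_l / (delta_1 + delta_l) * np.sqrt(Delta_1l * delta_1 / (2 * m + 1))
```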
### Examples of photonic-qubit encodings
As stated previously, the two logic states of a photonic qubit are encoded with two arbitrary orthogonal eigenstates \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) of the photon-number parity operator \(\hat{\pi}=e^{i\pi\hat{a}^{\dagger}\hat{a}}\) of a cavity. That is, for the encoding to be valid, the orthogonality between the two states \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) needs to hold, i.e.,
\[\langle\varphi_{e}\,|\varphi_{o}\rangle=0. \tag{23}\]
In the following, we provide a few examples of encodings, each of which satisfies the encoding condition (23):

(i) The two logic states of each photonic qubit are encoded by the vacuum state \(|0\rangle\) and the single-photon state \(|1\rangle\), i.e., \(|\varphi_{e}\rangle=|0\rangle\) and \(|\varphi_{o}\rangle=|1\rangle\).

(ii) The two logic states of each photonic qubit are encoded by a Fock state \(|2m\rangle\) with an even number of photons and a Fock state \(|2n+1\rangle\) with an odd number of photons, i.e., \(|\varphi_{e}\rangle=|2m\rangle\) and \(|\varphi_{o}\rangle=|2n+1\rangle\) (\(m\) and \(n\) are nonnegative integers).

(iii) The two logic states of each photonic qubit are encoded by a superposition of Fock states with even photon numbers and a superposition of Fock states with odd photon numbers, i.e., \(|\varphi_{e}\rangle=c_{0}|0\rangle+c_{2}|2\rangle+\cdots+c_{2m}|2m\rangle\) and \(|\varphi_{o}\rangle=c_{1}|1\rangle+c_{3}|3\rangle+\cdots+c_{2n+1}|2n+1\rangle\) (\(m\) and \(n\) are nonnegative integers).
(iv) The two logic states of each photonic qubit are encoded by two cat states, e.g., \(|\varphi_{e}\rangle=|\alpha\rangle+|-\alpha\rangle\) and \(|\varphi_{o}\rangle=|\alpha\rangle-|-\alpha\rangle\). Here, the cat state \(|\alpha\rangle+|-\alpha\rangle\) is an even-number-photon coherent state while the cat state \(|\alpha\rangle-|-\alpha\rangle\) is an odd-number-photon coherent state.
(v) The two logic states of each photonic qubit are encoded by one superposition of a Fock state and a cat state, each with even-number photons, and one superposition of a Fock state and a cat state, each with odd-number photons, e.g., \(|\varphi_{e}\rangle=c_{0}|2m\rangle+c_{1}(|\alpha\rangle+|-\alpha\rangle)\) and \(|\varphi_{o}\rangle=d_{0}|2n+1\rangle+d_{1}(|\alpha\rangle-|-\alpha\rangle)\) (\(m\) and \(n\) are nonnegative integers).
(vi) The two logic states of each photonic qubit are encoded by a squeezed vacuum state \(|\xi\rangle\) and a cat state \(|\alpha\rangle-|-\alpha\rangle\), i.e., \(|\varphi_{e}\rangle=|\xi\rangle\) and \(|\varphi_{o}\rangle=|\alpha\rangle-|-\alpha\rangle\). Note that the squeezed vacuum state is a superposition state of Fock states with even-number photons, while the cat state \(|\alpha\rangle-|-\alpha\rangle\) is an odd-number-photon coherent state.
(vii) The two logic states of each photonic qubit are encoded by two multicomponent cat states, e.g., \(|\varphi_{e}\rangle=c_{0}(|\alpha\rangle+|-\alpha\rangle)+c_{1}\) (\(|i\alpha\rangle+|-i\alpha\rangle\)) and \(|\varphi_{o}\rangle=d_{0}(|\alpha\rangle-|-\alpha\rangle)+d_{1}\) (\(|i\alpha\rangle-|-i\alpha\rangle\)).
One can verify that for the examples above, the two states \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) are the eigenstates of the photon-number parity operator \(e^{i\pi\hat{a}^{\dagger}\hat{a}}\) (with eigenvalues \(\pm 1\), respectively), and they satisfy the orthogonality condition (23). For simplicity, the states \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) given above for some examples are not normalized, which does not affect their orthogonality. The normalization of \(|\varphi_{e}\rangle\) or \(|\varphi_{o}\rangle\) can be easily made by adding a normalization coefficient. We should point out that, apart from the above examples, there are other possible encodings that satisfy the encoding condition (23). Therefore, the present proposal is quite general and can be used to implement the multi-target-qubit controlled-phase gate by using photonic qubits with various encodings, for which the two logic states used to encode photonic qubits can be any two orthogonal eigenstates of the photon-number parity operator.
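As a quick numerical sanity check of encodings (i), (iv), and (vi) above, the following QuTiP snippet (a sketch we add; the Fock cutoff \(N\), amplitude \(\alpha\), and squeezing parameter are illustrative choices) verifies the parity eigenvalues and the orthogonality condition (23):

```python
# Sanity check of parity eigenvalues and orthogonality for a few encodings;
# the Fock cutoff N, alpha, and the squeezing parameter are illustrative.
import numpy as np
import qutip as qt

N, alpha = 30, 1.1
a = qt.destroy(N)
parity = (1j * np.pi * a.dag() * a).expm()  # photon-number parity operator

cat_e = (qt.coherent(N, alpha) + qt.coherent(N, -alpha)).unit()  # even cat
cat_o = (qt.coherent(N, alpha) - qt.coherent(N, -alpha)).unit()  # odd cat
sq_vac = qt.squeeze(N, 0.5) * qt.basis(N, 0)                     # squeezed vacuum

for name, psi in [("|0>", qt.basis(N, 0)), ("|1>", qt.basis(N, 1)),
                  ("even cat", cat_e), ("odd cat", cat_o), ("sq. vacuum", sq_vac)]:
    print(name, "parity =", round(complex(qt.expect(parity, psi)).real, 6))  # +1 or -1

print("|<cat_e|cat_o>| =", abs(cat_e.overlap(cat_o)))  # ~0, i.e., orthogonal
```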
### Nonhybrid and hybrid gates
The multi-target-qubit gate (19) or (1), whose implementation was shown in Sec. III.1, can be a nonhybrid gate or a hybrid gate. In the case when the two logic states \(|0\rangle\) and \(|1\rangle\) of each photonic qubit are encoded by the same two arbitrary orthogonal eigenstates \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\), i.e., \(|\varphi_{e}\rangle_{c_{1}}=|\varphi_{e}\rangle_{c_{2}}=\cdots=|\varphi_{e}\rangle_{c_{n}}\) and \(|\varphi_{o}\rangle_{c_{1}}=|\varphi_{o}\rangle_{c_{2}}=\cdots=|\varphi_{o}\rangle_{c_{n}}\), the gate (19) or (1) is a nonhybrid gate. Since the states \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) can take various quantum states, the present proposal can be used to implement _nonhybrid_ multi-target-qubit gates using photonic qubits with various encodings.
On the other hand, if there exists at least one photonic qubit whose encoding is different from the encodings of other photonic qubits, the gate (19) or (1) is a hybrid gate. When the encodings of all photonic qubits are different, i.e., \(|\varphi_{e}\rangle_{c_{1}}\neq|\varphi_{e}\rangle_{c_{2}}\neq\cdots\neq| \varphi_{e}\rangle_{c_{n}}\) and \(|\varphi_{o}\rangle_{c_{1}}\neq|\varphi_{o}\rangle_{c_{2}}\neq\cdots\neq| \varphi_{o}\rangle_{c_{n}}\), we say that the multi-target-qubit gate (19) or (1) has a maximal hybridization. Because the states \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) can take various quantum states, the present proposal can also be applied to realize _hybrid_ multi-target-qubit gates using photonic qubits with different encodings.
## IV. Generation of multicavity GHZ entangled states
In this section, we discuss how to apply the gate (19) or (1) to generate a multicavity GHZ entangled state in a general form. Depending on the specific encodings \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\), we then discuss the preparation of a few nonhybrid and hybrid GHZ entangled states of multiple cavities.
### Preparation of a GHZ entangled state in a general form
Let us return to the physical system depicted in Fig. 2. Assume that the coupler SC qutrit is initially in the ground state \(|g\rangle\) and each cavity is initially in the state \((|\varphi_{e}\rangle+|\varphi_{o}\rangle)/\sqrt{2}\). Here, the states \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\) are normalized. The initial state of the whole system is thus given by
\[|\psi\left(0\right)\rangle=2^{-n/2}\big{(}|\varphi_{e}\rangle_{c_{1}}+|\varphi_{o}\rangle_{c_{1}}\big{)}\prod_{l=2}^{n}\big{(}|\varphi_{e}\rangle_{c_{l}}+|\varphi_{o}\rangle_{c_{l}}\big{)}\otimes|g\rangle. \tag{24}\]
Here and after, the subscript \(c_{1}\) represents cavity \(1\) while the subscript \(c_{l}\) represents cavity \(l\) (\(l=2,3,...,n\)). In view of Eq. (2), i.e., based on the encoding \(|0_{k}\rangle=|\varphi_{e}\rangle_{c_{k}}\) and \(|1_{k}\rangle=|\varphi_{o}\rangle_{c_{k}}\), the state \(|\psi(0)\rangle\) can be rewritten as
\[|\psi\left(0\right)\rangle=2^{-n/2}(|0_{1}\rangle+|1_{1}\rangle)\prod_{l=2}^ {n}\big{(}|0_{l}\rangle+|1_{l}\rangle)\otimes|g\rangle, \tag{25}\]
where states \(|0_{1}\rangle\) and \(|1_{1}\rangle\) are the two logic states of photonic qubit \(1\), while states \(|0_{l}\rangle\) and \(|1_{l}\rangle\) are the two logic states of photonic qubit \(l\) (\(l=2,3,...,n\)).
Now apply the multi-target-qubit controlled-phase gate (19), i.e., the gate described by Eq. (1). One can easily see that after this gate operation, the state (25) changes to
\[2^{-n/2}\Bigg{[}\left|0_{1}\right\rangle\prod_{l=2}^{n}\big{(}|0_{l}\rangle+|1 _{l}\rangle\big{)}+|1_{1}\rangle\prod_{l=2}^{n}\big{(}|0_{l}\rangle-|1_{l} \rangle\big{)}\Bigg{]}\otimes|g\rangle. \tag{26}\]
According to the encoding (2), i.e., \(|0_{k}\rangle=|\varphi_{e}\rangle_{c_{k}}\) and \(|1_{k}\rangle=|\varphi_{o}\rangle_{c_{k}}\), the state (26) can be written as
\[2^{-n/2}\Bigg{[}|\varphi_{e}\rangle_{c_{1}}\prod_{l=2}^{n}\big{(}|\varphi_{e}\rangle_{c_{l}}+|\varphi_{o}\rangle_{c_{l}}\big{)}+|\varphi_{o}\rangle_{c_{1}}\prod_{l=2}^{n}\big{(}|\varphi_{e}\rangle_{c_{l}}-|\varphi_{o}\rangle_{c_{l}}\big{)}\Bigg{]}\otimes|g\rangle, \tag{27}\]
which shows that the cavity system is prepared in the following GHZ entangled state of a general form:
\[|\text{GHZ}\rangle =\frac{1}{\sqrt{2}}\Bigg{[}|\varphi_{e}\rangle_{c_{1}}\prod_{l=2}^ {n}\frac{1}{\sqrt{2}}\big{(}|\varphi_{e}\rangle_{c_{l}}+|\varphi_{o}\rangle_{c_ {l}}\big{)}\] \[+|\varphi_{o}\rangle_{c_{1}}\prod_{l=2}^{n}\frac{1}{\sqrt{2}} \big{(}|\varphi_{e}\rangle_{c_{l}}-|\varphi_{o}\rangle_{c_{l}}\big{)}\Bigg{]}. \tag{28}\]
One can verify that for cavity \(l\) (\(l=2,3,...,n\)), the two rotated states \(\big{(}|\varphi_{e}\rangle_{c_{l}}+|\varphi_{o}\rangle_{c_{l}}\big{)}/\sqrt{2}\) and \(\big{(}|\varphi_{e}\rangle_{c_{l}}-|\varphi_{o}\rangle_{c_{l}}\big{)}/\sqrt{2}\) are orthogonal to each other.
From the descriptions given above, it can be seen that, independent of \(|\varphi_{e}\rangle_{c_{l}}\) and \(|\varphi_{o}\rangle_{c_{l}}\), the GHZ entangled state (28) of \(n\) cavities \((1,\,2,\,...,\,n)\), which takes a general form, can be straightforwardly generated by applying the multi-target-qubit controlled-phase gate (19) or (1). Thus, given that the initial state \((|\varphi_{e}\rangle+|\varphi_{o}\rangle)/\sqrt{2}\) of each cavity is ready, the Hamiltonians and the operation used for preparing the GHZ state (28) are the same as those for implementing the gate (19) or (1). Moreover, as shown above, the gate is implemented with a single-step operation. Thus, the GHZ state (28) can be created through a single-step operation, given that the initial state of each cavity is ready.
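At the logical level, this single-step preparation can be illustrated with a few lines of linear algebra (a sketch we add; \(n=3\) is an arbitrary choice), applying the gate of Eq. (1) to the product state of Eq. (25):

```python
# Applying the gate of Eq. (1) to the product state of Eq. (25) yields the
# GHZ structure of Eq. (28) in the logical basis; n = 3 is illustrative.
import numpy as np
from itertools import product

n = 3
dim = 2 ** n
U = np.eye(dim)
for idx, bits in enumerate(product([0, 1], repeat=n)):
    if bits[0] == 1:
        U[idx, idx] = (-1) ** sum(bits[1:])

plus_all = np.ones(dim) / np.sqrt(dim)  # (|0> + |1>)^{(x) n} / 2^{n/2}
out = U @ plus_all
for idx, bits in enumerate(product([0, 1], repeat=n)):
    print(bits, f"{out[idx]:+.4f}")
# Amplitudes are +1/2^{n/2} when qubit 1 is 0 and (-1)^{i_2+...+i_n}/2^{n/2}
# when qubit 1 is 1, i.e., |0>|+>...|+> + |1>|->...|-> up to normalization.
```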
### Preparation of nonhybrid and hybrid GHZ entangled states
In the following, we will show that, according to Eq. (28), which gives a general expression of the prepared GHZ entangled state of \(n\) cavities, several nonhybrid and hybrid GHZ entangled states of \(n\) cavities can be created by choosing appropriate encodings of \(|\varphi_{e}\rangle_{c_{k}}\) and \(|\varphi_{o}\rangle_{c_{k}}\).
To begin with, we define the following:
\[|+\rangle_{c_{k}} = \big{(}|0\rangle_{c_{k}}+|1\rangle_{c_{k}}\big{)}/\sqrt{2},\qquad |-\rangle_{c_{k}} = \big{(}|0\rangle_{c_{k}}-|1\rangle_{c_{k}}\big{)}/\sqrt{2}, \tag{29}\]
where \(|+\rangle_{c_{k}}\) and \(|-\rangle_{c_{k}}\) are the two rotated states of cavity \(k\) (\(k=1,\,2,\,...,\,n\)). Note that \(|0\rangle_{c_{k}}\) and \(|1\rangle_{c_{k}}\) are the vacuum state and the single-photon state of cavity \(k\), which are different from the two logic states \(|0_{k}\rangle\) and \(|1_{k}\rangle\) of photonic qubit \(k\) (\(k=1,\,2,\,...,\,n\)). In addition, we define
\[|cat\rangle_{c_{k}} = \mathcal{N}\big{(}|\alpha\rangle_{c_{k}}+|-\alpha\rangle_{c_{k}}\big{)},\qquad |\overline{cat}\rangle_{c_{k}} = \mathcal{N}\big{(}|\alpha\rangle_{c_{k}}-|-\alpha\rangle_{c_{k}}\big{)}, \tag{30}\]
where \(|cat\rangle_{c_{k}}\) and \(|\overline{cat}\rangle_{c_{k}}\) are the two cat states of cavity \(k\) (\(k=1,\,2,\,...,\,n\)), with a normalization coefficient \(\mathcal{N}=1/\sqrt{2(1+e^{-2|\alpha|^{2}})}\).
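As a brief check (our addition), the coefficient \(\mathcal{N}\) follows from the coherent-state overlap \(\langle\alpha|-\alpha\rangle=e^{-2|\alpha|^{2}}\):

\[{}_{c_{k}}\langle cat|cat\rangle_{c_{k}}=\mathcal{N}^{2}\big{(}2+2\,{\rm Re}\,\langle\alpha|-\alpha\rangle\big{)}=2\mathcal{N}^{2}\big{(}1+e^{-2|\alpha|^{2}}\big{)}=1.\]

Strictly speaking, the odd cat state \(|\overline{cat}\rangle_{c_{k}}\) is normalized by the analogous factor \(1/\sqrt{2(1-e^{-2|\alpha|^{2}})}\); for the amplitude \(\alpha=1.1\) used in Sec. V the two factors differ by only a few percent, which is presumably why a single \(\mathcal{N}\) is quoted above.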
#### 1. Preparation of nonhybrid GHZ entangled state
Consider \(|\varphi_{e}\rangle_{c_{k}}=|0\rangle_{c_{k}}\) and \(|\varphi_{o}\rangle_{c_{k}}=|1\rangle_{c_{k}}\) (\(k=1,\,2,\,...,\,n\)). In this case, according to Eq. (29), it follows from Eq. (24) that the initial state of the system is
\[|\psi(0)\rangle=|+\rangle_{c_{1}}|+\rangle_{c_{2}}\cdots|+\rangle_{c_{n}}\otimes|g\rangle. \tag{31}\]
According to Eq. (28), the \(n\) cavities are prepared in the following GHZ entangled state:
\[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}\Bigg{(}|0\rangle_{c_{1}}\prod_{l=2}^{n }|+\rangle_{c_{l}}+|1\rangle_{c_{1}}\prod_{l=2}^{n}|-\rangle_{c_{l}}\Bigg{)}. \tag{32}\]
By performing a local operation on cavity \(l\) and the coupler qutrit or applying a local operation on cavity \(l\) and an auxiliary two-level SC qubit (placed in cavity \(l\)), one can achieve the state transformation \(|+\rangle_{c_{l}}\rightarrow|0\rangle_{c_{l}}\) and \(|-\rangle_{c_{l}}\rightarrow|1\rangle_{c_{l}}\)[16]. Thus, the GHZ state (32) turns into
\[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}\Big{(}|0\rangle_{c_{1}}|0\rangle_{c_{2}}\cdots|0\rangle_{c_{n}}+|1\rangle_{c_{1}}|1\rangle_{c_{2}}\cdots|1\rangle_{c_{n}}\Big{)}, \tag{33}\]
which is a nonhybrid GHZ entangled state of \(n\) cavities (\(1,\,2,\,...,\,n\)).
#### 2. Preparation of cat-coherent hybrid GHZ entangled state
Consider \(|\varphi_{e}\rangle_{c_{k}}=|cat\rangle_{c_{k}}\) and \(|\varphi_{o}\rangle_{c_{k}}=|\overline{cat}\rangle_{c_{k}}\) (\(k=1,\,2,\,...,\,n\)). In this case, according to Eq. (30), it follows from Eq. (24) that the initial state of the system is
\[|\psi\left(0\right)\rangle=|\alpha\rangle_{c_{1}}|\alpha\rangle_{c_{2}}\cdots|\alpha\rangle_{c_{n}}\otimes|g\rangle, \tag{34}\]
i.e., each cavity is initially in a coherent state \(|\alpha\rangle\). According to Eq. (28), the \(n\) cavities are prepared in the following cat-coherent hybrid GHZ entangled state:
\[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}\Bigg{(}|cat\rangle_{c_{1}}\prod_{l=2}^{n}|\alpha\rangle_{c_{l}}+|\overline{cat}\rangle_{c_{1}}\prod_{l=2}^{n}|-\alpha\rangle_{c_{l}}\Bigg{)}. \tag{35}\]
#### 3. Preparation of cat-spin hybrid GHZ entangled state
For cavity \(1\), consider \(|\varphi_{e}\rangle_{c_{1}}=|cat\rangle_{c_{1}}\) and \(|\varphi_{o}\rangle_{c_{1}}=|\overline{cat}\rangle_{c_{1}}\). For cavity \(k\) (\(k=2,\,3,\,...,\,n\)), consider \(|\varphi_{e}\rangle_{c_{k}}=|0\rangle_{c_{k}}\) and \(|\varphi_{o}\rangle_{c_{k}}=|1\rangle_{c_{k}}\). In this case, according to Eqs. (29) and (30), it follows from Eq. (24) that the initial state of the system is
\[|\psi\left(0\right)\rangle=|\alpha\rangle_{c_{1}}\prod_{k=2}^{n}|+\rangle_{c_{k}}\otimes|g\rangle. \tag{36}\]
According to Eq. (28), the \(n\) cavities are prepared in the following cat-spin hybrid GHZ entangled state:
\[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}\Bigg{(}|cat\rangle_{c_{1}}\prod_{k=2}^{n}|+\rangle_{c_{k}}+|\overline{cat}\rangle_{c_{1}}\prod_{k=2}^{n}|-\rangle_{c_{k}}\Bigg{)}, \tag{37}\]
where the two rotated states \(|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}\) and \(|-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}\) correspond to the spin left-hand circular motion and the spin right-hand circular motion, respectively.
#### 4. Preparation of spin-coherent hybrid GHZ entangled state
Consider \(|\varphi_{e}\rangle_{c_{1}}=|0\rangle_{c_{1}}\), \(|\varphi_{o}\rangle_{c_{1}}=|1\rangle_{c_{1}}\), \(|\varphi_{e}\rangle_{c_{l}}=|cat\rangle_{c_{l}}\), and \(|\varphi_{o}\rangle_{c_{l}}=|\overline{cat}\rangle_{c_{l}}\) (\(l=2,\,3,\,...,\,n\)). In this case, according to Eqs. (29) and (30), it follows from Eq. (24) that the initial state of the system is
\[|\psi\left(0\right)\rangle=|+\rangle_{c_{1}}|\alpha\rangle_{c_{2}}|\alpha\rangle_{c_{3}}\cdots|\alpha\rangle_{c_{n}}\otimes|g\rangle. \tag{38}\]
According to Eq. (28), the \(n\) cavities are prepared in the following spin-coherent hybrid GHZ entangled state:
\[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}\Bigg{(}|0\rangle_{c_{1}}\prod_{l=2}^{n}|\alpha\rangle_{c_{l}}+|1\rangle_{c_{1}}\prod_{l=2}^{n}|-\alpha\rangle_{c_{l}}\Bigg{)}. \tag{39}\]
Note that the vacuum state and the single-photon state of cavity \(1\) here correspond to the spin up and down, respectively.
To ensure that the prepared states (33), (35), (37), and (39) above are GHZ states, the two states \(|cat\rangle\) and \(|\overline{cat}\rangle\), \(|+\rangle\) and \(|-\rangle\), \(|0\rangle\) and \(|1\rangle\), or \(|\alpha\rangle\) and \(|-\alpha\rangle\) of each cavity are required to be orthogonal or quasi-orthogonal to each other. Note that the two cat states \(|cat\rangle\) and \(|\overline{cat}\rangle\) are orthogonal, the two rotated states \(|+\rangle\) and \(|-\rangle\) are orthogonal, and the two states \(|0\rangle\) and \(|1\rangle\) are orthogonal. In addition, the two coherent states \(|\alpha\rangle\) and \(|-\alpha\rangle\) can be made quasi-orthogonal for a large enough \(\alpha\) (e.g., \(|\langle\alpha|-\alpha\rangle|^{2}=e^{-4|\alpha|^{2}}\approx 7.9\times 10^{-3}\lesssim 10^{-2}\) for \(\alpha=1.1\)).
Before ending this section, we should mention that, in addition to the above GHZ states (33), (35), (37), and (39), other types of nonhybrid or hybrid GHZ entangled states of multiple cavities can be created according to Eq. (28), because there exist other possible encodings of \(|\varphi_{e}\rangle\) and \(|\varphi_{o}\rangle\). The multi-target-qubit controlled-phase gate considered in this work can be used to prepare various nonhybrid and hybrid GHZ entangled states of multiple cavities.
## V. Possible experimental implementation
In this section, as an example, we investigate the experimental feasibility of creating a hybrid GHZ state of three cavities (1, 2, 3), using three 1D microwave cavities coupled to a SC flux qutrit (Fig. 4). The hybrid GHZ state to be prepared is
\[|\text{GHZ}\rangle=\frac{1}{\sqrt{2}}\big{(}|0\rangle_{c_{1}}|\alpha\rangle_{ c_{2}}|\alpha\rangle_{c_{3}}+|1\rangle_{c_{1}}|-\alpha\rangle_{c_{2}}|- \alpha\rangle_{c_{3}}\big{)}, \tag{40}\]
i.e., a spin-coherent hybrid GHZ state, given by Eq. (39) for \(n=3\). From Eq. (38), one can see that the initial state of the whole system is
\[|\psi\left(0\right)\rangle=|+\rangle_{c_{1}}|\alpha\rangle_{c_{2}}|\alpha \rangle_{c_{3}}\otimes|g\rangle. \tag{41}\]
### Full Hamiltonian
As discussed in Sec. IV.1, one can see that the GHZ state (40) here is prepared by applying the gate (19) or (1) with \(n=3\) (i.e., a three-qubit gate with one qubit simultaneously controlling two target qubits). Thus, the Hamiltonian used for the generation of the GHZ state (40) is the effective Hamiltonian (7) with \(n=3\). This Hamiltonian was derived from the original Hamiltonian (3) with \(n=3\), which only includes the coupling between cavity 1 and the \(|g\rangle\leftrightarrow|f\rangle\) transition as well as the coupling between other cavities and the \(|e\rangle\leftrightarrow|f\rangle\) transition of the SC qutrit. In a realistic situation, there is the unwanted coupling between cavity 1 and the \(|e\rangle\leftrightarrow|f\rangle\) transition, the unwanted coupling between cavity 2 and the \(|g\rangle\leftrightarrow|f\rangle\) transition, as well as the unwanted coupling between cavity 3 and the \(|g\rangle\leftrightarrow|f\rangle\) transition of the SC qutrit. Besides, there is the unwanted intercavity crosstalk between the three cavities.
When the unwanted couplings and the unwanted intercavity crosstalk are taken into account, the Hamiltonian (3), with \(n=3\) for the present case, is modified as
\[H_{\text{I}}^{\prime} = g_{1}(e^{-i\delta_{1}t}\hat{a}_{1}^{+}\sigma_{fg}^{-}+\text{H.c.})+\sum_{l=2}^{3}g_{l}(e^{-i\delta_{l}t}\hat{a}_{l}^{+}\sigma_{fe}^{-}+\text{H.c.})\] \[+g_{1}^{\prime}(e^{-i\delta_{1}^{\prime}t}\hat{a}_{1}^{+}\sigma_{fe}^{-}+\text{H.c.})+\sum_{l=2}^{3}g_{l}^{\prime}(e^{-i\delta_{l}^{\prime}t}\hat{a}_{l}^{+}\sigma_{fg}^{-}+\text{H.c.})\] \[+\big{(}\widehat{g}_{12}e^{i\widehat{\Delta}_{12}t}\hat{a}_{1}^{+}\hat{a}_{2}+\widehat{g}_{13}e^{i\widehat{\Delta}_{13}t}\hat{a}_{1}^{+}\hat{a}_{3}+\widehat{g}_{23}e^{i\widehat{\Delta}_{23}t}\hat{a}_{2}^{+}\hat{a}_{3}+\text{H.c.}\big{)}, \tag{42}\]
where the terms in line one represent the required coupling between cavity 1 and the \(|g\rangle\leftrightarrow|f\rangle\) transition as well as the required coupling between cavity 2 (cavity 3) and the \(|e\rangle\leftrightarrow|f\rangle\) transition of the SC qutrit (Fig. 5), the first term in line two represents the unwanted coupling between cavity 1 and the \(|e\rangle\leftrightarrow|f\rangle\) transition of the SC qutrit with coupling constant \(g_{1}^{\prime}\) and detuning \(\delta_{1}^{\prime}=\omega_{fe}-\omega_{c_{1}}\) (Fig. 5), the remaining terms in line two represent the unwanted coupling between cavity 2 (cavity 3) and the \(|g\rangle\leftrightarrow|f\rangle\) transition of the SC qutrit with coupling constant \(g_{2}^{\prime}\) (\(g_{3}^{\prime}\)) and detuning \(\delta_{2}^{\prime}=\omega_{fg}-\omega_{c_{2}}\) (\(\delta_{3}^{\prime}=\omega_{fg}-\omega_{c_{3}}\)) (Fig. 5), while the terms in the last line represent the unwanted intercavity crosstalk among the three cavities, with \(\widehat{g}_{kl}\) (\(\widehat{\Delta}_{kl}=\omega_{c_{k}}-\omega_{c_{l}}\)) the crosstalk strength (the
Figure 4: Diagram of three 1D microwave cavities capacitively coupled to a SC flux qutrit. Each cavity here is a one-dimensional transmission line resonator. The flux qutrit consists of three Josephson junctions and a superconducting loop.
Figure 5: Illustration of the required coupling between cavity 1 and the \(|g\rangle\leftrightarrow|f\rangle\) transition of the qutrit (with coupling constant \(g_{1}\) and detuning \(\delta_{1}\)), the required coupling between cavity 2 and the \(|e\rangle\leftrightarrow|f\rangle\) transition of the qutrit (with coupling constant \(g_{2}\) and detuning \(\delta_{2}\)), and the required coupling between cavity 3 and the \(|e\rangle\leftrightarrow|f\rangle\) transition of the qutrit (with coupling constant \(g_{3}\) and detuning \(\delta_{3}\)). In addition, illustration of the unwanted coupling between cavity 1 and the \(|e\rangle\leftrightarrow|f\rangle\) transition of the qutrit (with coupling constant \(g_{1}^{\prime}\) and detuning \(\delta_{1}^{\prime}\)), the unwanted coupling between cavity 2 and the \(|g\rangle\leftrightarrow|f\rangle\) transition of the qutrit (with coupling constant \(g_{2}^{\prime}\) and detuning \(\delta_{2}^{\prime}\)), as well as the unwanted coupling between cavity 3 and the \(|g\rangle\leftrightarrow|f\rangle\) transition of the qutrit (with coupling constant \(g_{3}^{\prime}\) and detuning \(\delta_{3}^{\prime}\)). Note that the coupling of each cavity with the \(|g\rangle\leftrightarrow|e\rangle\) transition of the qutrit is negligible because of the weak \(|g\rangle\leftrightarrow|e\rangle\) transition. Red, blue, and purple lines correspond to cavities 1, 2, and 3, respectively.
frequency detuning) between the two cavities \(k\) and \(l\) (\(k,l\in\{1,\,2,\,3\};k\neq l\)). Note that the coupling of each cavity with the \(|g\rangle\leftrightarrow|e\rangle\) transition of the SC qutrit is negligible because the \(|g\rangle\leftrightarrow|e\rangle\) transition can be made weak by increasing the barrier between the two potential wells of the SC qutrit. Hence, this coupling is not considered in the above Hamiltonian (42) to simplify our numerical simulations.
### Numerical results
With the qutrit relaxation, the qutrit dephasing, and the finite photon lifetime included, the dynamics of the lossy system is governed by the following master equation:
\[\frac{d\rho}{dt}= - i[H_{\text{I}}^{\prime},\,\rho]+\sum_{j=1}^{3}\kappa_{j}\mathcal{L}[\hat{a}_{j}]\] \[+ \gamma_{eg}\mathcal{L}[\sigma_{eg}^{-}]+\gamma_{fe}\mathcal{L}[\sigma_{fe}^{-}]+\gamma_{fg}\mathcal{L}[\sigma_{fg}^{-}]\] \[+ \gamma_{e,\varphi}(\sigma_{ee}\rho\sigma_{ee}-\sigma_{ee}\rho/2-\rho\sigma_{ee}/2)\] \[+ \gamma_{f,\varphi}(\sigma_{ff}\rho\sigma_{ff}-\sigma_{ff}\rho/2-\rho\sigma_{ff}/2), \tag{43}\]
where \(\sigma_{ee}=|e\rangle\langle e|\), \(\sigma_{ff}=|f\rangle\langle f|\), \(\mathcal{L}[\Lambda]=\Lambda\rho\Lambda^{+}-\Lambda^{+}\Lambda\rho/2-\rho\Lambda^{+}\Lambda/2\) (with \(\Lambda=\hat{a}_{j},\,\sigma_{eg}^{-},\sigma_{fe}^{-},\sigma_{fg}^{-}\)), \(\gamma_{eg}\) is the energy relaxation rate of the level \(|e\rangle\) for the decay path \(|e\rangle\rightarrow|g\rangle\), \(\gamma_{fe}\) (\(\gamma_{fg}\)) is the relaxation rate of the level \(|f\rangle\) for the decay path \(|f\rangle\rightarrow|e\rangle\) (\(|f\rangle\rightarrow|g\rangle\)), \(\gamma_{e,\varphi}\) (\(\gamma_{f,\varphi}\)) is the dephasing rate of the level \(|e\rangle\) (\(|f\rangle\)) of the qutrit, while \(\kappa_{j}\) is the decay rate of cavity \(j\) (\(j=1,\,2,\,3\)). For the numerical calculations, we use the QuTiP software [97, 98], an open-source package for simulating the dynamics of open quantum systems.
The fidelity for the prepared three-cavity GHZ state is evaluated by
\[\mathcal{F}=\sqrt{\langle\psi_{\rm id}|\rho|\psi_{\rm id}\rangle}, \tag{44}\]
where \(|\psi_{\rm id}\rangle=|{\rm GHZ}\rangle\otimes|g\rangle\) is the ideal output state, with the state \(|{\rm GHZ}\rangle\) given in Eq. (40). The ideal output state \(|\psi_{\rm id}\rangle\) here is obtained without taking into account the system dissipation, the intercavity crosstalk, and the unwanted couplings, while \(\rho\) is the density operator describing the real output state of the system, which is obtained by numerically solving the master equation for the operation performed in a realistic situation.
For a flux qutrit, the typical transition frequency between adjacent energy levels lies in the range of 1\(-\)20 GHz [99, 100, 101]. As an example, consider the parameters listed in Table 1, which are used in the numerical simulations. The coupling constants \(g_{2}\) and \(g_{3}\) in Table 1 are calculated for \(m=10\) according to Eq. (21). By a proper design of the flux qutrit [102], one can have \(\phi_{fg}\sim\phi_{fe}\sim 10\phi_{eg}\), where \(\phi_{ij}\) is the dipole coupling matrix element between the two levels \(|i\rangle\) and \(|j\rangle\) with \(ij\in\{eg,fe,fg\}\). Thus, one has \(g_{1}^{\prime}\sim g_{1}\), \(g_{2}^{\prime}\sim g_{2}\), and \(g_{3}^{\prime}\sim g_{3}\), while the \(|g\rangle\leftrightarrow|e\rangle\) transition is much weaker than the \(|g\rangle\leftrightarrow|f\rangle\) and \(|e\rangle\leftrightarrow|f\rangle\) transitions of the qutrit. The maximum among the coupling constants listed in Table 1 is \(g_{\rm max}=2\pi\times 0.303\) GHz, which is readily available since a coupling constant \(\sim 2\pi\times 0.636\) GHz has been reported for a flux device coupled to a microwave cavity [103].
Other parameters used in the numerical simulations are as follows: (i) \(\gamma_{eg}^{-1}=10T\), \(\gamma_{fe}^{-1}=T\), \(\gamma_{fg}^{-1}=T\), \(\gamma_{e,\varphi}^{-1}=\gamma_{f,\varphi}^{-1}=T/2\); (ii) \(\kappa_{1}=\kappa_{2}=\kappa_{3}=\kappa\); (iii) \(\widehat{g}_{12}=\widehat{g}_{23}=\widehat{g}_{13}=g_{cr}\); and (iv) \(\alpha=1.1\). We should mention that \(\gamma_{eg}^{-1}\) is much larger than \(\gamma_{fe}^{-1}\) and \(\gamma_{fg}^{-1}\) because of \(\phi_{fg}\sim\phi_{fe}\sim 10\phi_{eg}\). The maximal value of \(T\) adopted in our numerical simulations is 20 \(\mu\)s. As a result, the decoherence times of the qutrit employed in the numerical simulations are 10\(-\)200 \(\mu\)s, which is rather conservative since a decoherence time of 70 \(\mu\)s to 1 ms for a superconducting flux device has been experimentally demonstrated [48, 104].
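For concreteness, a compressed QuTiP sketch of this type of simulation is given below. It keeps only the required couplings of Eq. (42) and evaluates the fidelity of Eq. (44); the Fock cutoff, coupling constants, detunings, and evolution time are illustrative placeholders rather than the Table 1 values, so the printed number is not expected to reproduce Fig. 6; the point is the structure of the calculation:

```python
# Minimal QuTiP sketch of the simulation of Eqs. (42)-(44); all parameter
# values below (cutoff, couplings, detunings, gate time) are illustrative.
import numpy as np
import qutip as qt

Nc = 6                                     # Fock cutoff per cavity (assumption)
gk, ek, fk = (qt.basis(3, k) for k in range(3))

def embed(op, pos):                        # place op on slot pos of qutrit + 3 cavities
    ops = [qt.qeye(3)] + [qt.qeye(Nc)] * 3
    ops[pos] = op
    return qt.tensor(ops)

a1, a2, a3 = (embed(qt.destroy(Nc), j) for j in (1, 2, 3))
s_fg, s_fe = embed(gk * fk.dag(), 0), embed(ek * fk.dag(), 0)

g1, g2, g3 = (2 * np.pi * x for x in (0.16, 0.30, 0.30))  # GHz (illustrative)
args = dict(d1=2 * np.pi * 1.6, d2=2 * np.pi * 2.0, d3=2 * np.pi * 2.4)

H = [[g1 * a1.dag() * s_fg, 'exp(-1j*d1*t)'], [g1 * s_fg.dag() * a1, 'exp(1j*d1*t)'],
     [g2 * a2.dag() * s_fe, 'exp(-1j*d2*t)'], [g2 * s_fe.dag() * a2, 'exp(1j*d2*t)'],
     [g3 * a3.dag() * s_fe, 'exp(-1j*d3*t)'], [g3 * s_fe.dag() * a3, 'exp(1j*d3*t)']]

kappa, T = 1 / 20e3, 10e3                  # in ns^-1 and ns: kappa^-1 = 20 us, T = 10 us
c_ops = [np.sqrt(kappa) * a for a in (a1, a2, a3)]
c_ops += [np.sqrt(1 / (10 * T)) * embed(gk * ek.dag(), 0),  # gamma_eg
          np.sqrt(1 / T) * embed(ek * fk.dag(), 0),         # gamma_fe
          np.sqrt(1 / T) * embed(gk * fk.dag(), 0),         # gamma_fg
          np.sqrt(2 / T) * embed(ek * ek.dag(), 0),         # dephasing of |e>
          np.sqrt(2 / T) * embed(fk * fk.dag(), 0)]         # dephasing of |f>

alpha = 1.1
plus = (qt.basis(Nc, 0) + qt.basis(Nc, 1)).unit()
psi0 = qt.tensor(gk, plus, qt.coherent(Nc, alpha), qt.coherent(Nc, alpha))  # Eq. (41)

rho = qt.mesolve(H, psi0, np.linspace(0, 630.0, 201), c_ops=c_ops, args=args).states[-1]

psi_id = (qt.tensor(gk, qt.basis(Nc, 0), qt.coherent(Nc, alpha), qt.coherent(Nc, alpha))
          + qt.tensor(gk, qt.basis(Nc, 1), qt.coherent(Nc, -alpha), qt.coherent(Nc, -alpha))).unit()
print('F =', qt.fidelity(rho, psi_id))     # qt.fidelity gives sqrt(<psi|rho|psi>), cf. Eq. (44)
```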
By numerically solving the master equation (43), we plot Fig. 6, which gives the fidelity versus \(\kappa^{-1}\) for \(T=10\), 15, and 20 \(\mu\)s and \(g_{cr}=0.01g_{\rm max}\). The setting \(g_{cr}=0.01g_{\rm max}\) here is obtainable in experiments by a prior design of the sample with appropriate capacitances \(C_{1}\), \(C_{2}\), and \(C_{3}\) depicted in Fig. 4 [95]. One can see from Fig. 6 that the fidelity exceeds 93.07% for \(\kappa^{-1}\geq 20\) \(\mu\)s and \(T\geq 10\) \(\mu\)s. In addition, Fig. 6 shows that the fidelity is insensitive to the qutrit decoherence. This is because the qutrit mostly stays in the ground state during the GHZ state preparation, so the effect of its decoherence is negligible. In contrast, the fidelity is sensitive to the cavity decay, which is understood because the cavities are populated with photons during the GHZ state preparation.
In reality, cavity 1 may not be prepared in a perfect initial state. Thus, we consider a nonideal initial state of the system
\[|\psi(0)\rangle_{\text{non-ideal}}=\mathcal{N}_{x}\big{[}(1+x)|0\rangle_{c_{1}}+(1-x)|1\rangle_{c_{1}}\big{]}\otimes|\alpha\rangle_{c_{2}}|\alpha\rangle_{c_{3}}\otimes|g\rangle, \tag{45}\]
where \(\mathcal{N}_{x}=1/\sqrt{2(1+x^{2})}\) is a normalization factor. In this situation, we numerically plot Fig. 7 for \(T=10\,\mu\)s, which shows that the fidelity decreases with increasing \(x\). Nevertheless, for \(x\in[-0.1,0.1]\), i.e., a 10% error in the weights of the \(|0\rangle\) and \(|1\rangle\) states, a fidelity greater than 92.48%, 92.92%, and 93.07% can be obtained for \(\kappa^{-1}=20\), 50, and 100 \(\mu\)s, respectively.
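The imperfect state of Eq. (45) is easy to reproduce numerically; a short sketch (ours; only the cavity-1 factor is shown, since the remaining factors of Eq. (45) are unchanged):

```python
# Imperfect cavity-1 initial state of Eq. (45); x parametrizes the weight error.
import numpy as np
import qutip as qt

Nc = 6
plus = (qt.basis(Nc, 0) + qt.basis(Nc, 1)).unit()  # ideal |+> state (x = 0)
for x in (-0.1, 0.0, 0.1):
    Nx = 1 / np.sqrt(2 * (1 + x ** 2))
    psi_c1 = Nx * ((1 + x) * qt.basis(Nc, 0) + (1 - x) * qt.basis(Nc, 1))
    print(f"x = {x:+.1f}  norm = {psi_c1.norm():.6f}  "
          f"|<+|psi>| = {abs(plus.overlap(psi_c1)):.4f}")
```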
With the parameters listed in Table 1, the operational time for preparing the GHZ state is estimated to be \(\sim\)0.63 \(\mu\)s, much shorter than the cavity decay time and the qutrit decoherence time used in the numerical simulations. For the cavity frequencies given in Table 1 and \(\kappa^{-1}=20\,\mu\)s, the quality factors of the three cavities are \(Q_{1}\sim 2.34\times 10^{6}\), \(Q_{2}\sim 1.26\times 10^{6}\), and \(Q_{3}\sim 1.21\times 10^{6}\), which are achievable because a 1D microwave cavity or resonator with a high quality factor \(Q\gtrsim 2.7\times 10^{6}\) has been reported in experiments [52, 53].
### Discussion
The above analysis shows that the fidelity is sensitive to the error in the initial state and to the cavity decay, but insensitive to the decoherence of the qutrit. The fidelity is further degraded by the unwanted qutrit-cavity couplings and by the fact that the large-detuning condition is not perfectly satisfied. We note that the fidelity can be further improved by optimizing the system parameters to reduce these errors. As demonstrated in Fig. 8, a fidelity greater than 99% can be achieved for \(\delta_{1}/g_{1}=m\gtrsim 50\) and \(g_{cr}=0.01g_{\text{max}}\) when the unwanted qutrit-cavity couplings, the dissipation of the system, and the error in the initial state are negligible. Finally, it should be noted that further study is needed for each particular experimental setup; however, this requires a rather lengthy and complex analysis, which is beyond the scope of this theoretical work.
## VI. Conclusions
We have proposed a single-step method to implement a multi-target-qubit controlled-phase gate with photonic qubits each encoded via two arbitrary orthogonal eigenstates of the photon-number parity operator. The gate is realized by employing multiple microwave cavities coupled to a SC flux qutrit. As shown above, this proposal has the following features and advantages: (i) It can be applied not only to implement nonhybrid multi-target-qubit controlled-phase gates using photonic qubits with various encodings, but also to realize hybrid multi-target-qubit controlled-phase gates using photonic qubits with different encodings; (ii) the gate realization is quite simple because only a single-step operation is needed; (iii) neither classical pulse nor measurement is required, thus the gate is implemented in a deterministic way; (iv) since only one coupler SC qutrit is needed to couple all of the cavities, the hardware circuit resources are significantly reduced and minimized; (v) during the gate realization, the SC qutrit remains in the ground state, thus decoherence from the qutrit is greatly suppressed; and (vi) the operation time for the gate realization is independent of the number of target qubits,
Figure 8: Fidelity versus \(\delta_{1}/g_{1}\) and \(m\). Here, \(m\) is the parameter involved in Eq. (21). The figure is plotted by setting \(\delta_{1}/g_{1}=m\), \(g_{1}/2\pi=0.16\) GHz, \(\delta_{2}=\delta_{1}+2\pi\times 0.4\) GHz, \(\delta_{3}=\delta_{1}+2\pi\times 0.8\) GHz, and assuming that the dissipation of the system, the unwanted couplings of each cavity with the transitions between the irrelevant levels of the qutrit (see Fig. 5), as well as the error in the initial state preparation are negligible. The coupling constants \(g_{2}\) and \(g_{3}\) used in the numerical simulation are calculated based on Eq. (21).
and therefore does not increase as the number of target qubits grows.
As an application, we have further discussed how to apply this multiqubit gate to generate a multicavity GHZ entangled state in a general form. Depending on the specific encodings, we have also discussed the generation of several nonhybrid and hybrid GHZ entangled states of multiple cavities. Furthermore, we have numerically analyzed the experimental feasibility of generating a spin-coherent hybrid GHZ state of three cavities within current circuit QED technology. To the best of our knowledge, this work is the first to demonstrate the realization of the proposed multiqubit gate with photonic qubits, encoded via two arbitrary orthogonal eigenstates of the photon-number parity operator, based on cavity or circuit QED. Finally, note that the same Hamiltonians introduced in Sec. III.1 can be obtained for many physical systems, such as multiple microwave or optical cavities coupled to a three-level natural or artificial atom. Hence, this proposal is quite general and can thus be applied to implement the proposed multi-target-qubit gate using photonic qubits with various encodings, as well as to prepare different kinds of nonhybrid and hybrid GHZ entangled states of multiple cavities, in a wide range of physical systems.
###### Acknowledgements.
This work was partly supported by the National Natural Science Foundation of China (NSFC) (Grants No. 11074062, No. 11374083, No. 11774076, and No. U21A20436), the Key-Area Research and Development Program of GuangDong province (Grant No. 2018B030326001), the Jiangsu Funding Program for Excellent Postdoctoral Talent, and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301705).
|
2305.14115 | RLBoost: Boosting Supervised Models using Deep Reinforcement Learning | Data quality or data evaluation is sometimes a task as important as collecting a large volume of data when it comes to generating accurate artificial intelligence models. In fact, being able to evaluate the data can lead to a larger database that is better suited to a particular problem because we have the ability to filter out automatically obtained data of dubious quality. In this paper we present RLBoost, an algorithm that uses deep reinforcement learning strategies to evaluate a particular dataset and obtain a model capable of estimating the quality of any new data in order to improve the final predictive quality of a supervised learning model. This solution has the advantage of being agnostic regarding the supervised model used and, through multi-attention strategies, takes into account the data in its context and not only individually. The results of the article show that this model obtains better and more stable results than other state-of-the-art algorithms such as LOO, DataShapley or DVRL. | Eloy Anguiano Batanero, Ángela Fernández Pascual, Álvaro Barbero Jiménez | 2023-05-23T14:38:33Z | http://arxiv.org/abs/2305.14115v1 |
such as LOO, DataShapley or DVRL. | Eloy Anguiano Batanero, Ángela Fernández Pascual, Álvaro Barbero Jiménez | 2023-05-23T14:38:33Z | http://arxiv.org/abs/2305.14115v1 | # RLBoost: Boosting Supervised Models using Deep Reinforcement Learning
###### Abstract
Data quality or data evaluation is sometimes a task as important as collecting a large volume of data when it comes to generating accurate artificial intelligence models. In fact, being able to evaluate the data can lead to a larger database that is better suited to a particular problem because we have the ability to filter out data obtained automatically of dubious quality. In this paper we present RLBoost, an algorithm that uses deep reinforcement learning strategies to evaluate a particular dataset and obtain a model capable of estimating the quality of any new data in order to improve the final predictive quality of a supervised learning model. This solution has the advantage that of being agnostic regarding the supervised model used and, through multi-attention strategies, takes into account the data in its context and not only individually. The results of the article show that this model obtains better and more stable results than other state-of-the-art algorithms such as LOO, DataShapley or DVRL.
## 1 Introduction
Collecting large amounts of quality data is always desirable, if not essential, for most successful use cases in the field of machine learning. However, sometimes this is not possible. Usually, we either have a large amount of data whose quality is doubtful, or we have a small amount of data that we know to be of high quality but that is difficult to extend for the intended use case, either because of the novelty of the use case or because of the difficulty of acquiring more data. Additionally, there may be cases where the cost of re-training a model needs to be optimized, as some models are very expensive to train; a method for evaluating new data then makes it possible to postpone re-training until enough high-quality data has been collected.
The evaluation of data quality turns out to be a problem that concerns the entire field of supervised learning, and it needs to be addressed because of the potential benefits it can bring. However, it is necessary to clarify that a data point is sometimes not good or bad individually; its value depends on the origin and quality of the rest of the available data, so a more contextual approach is desirable. Indeed, we can formulate our problem as an optimization problem where, given \(N\) samples, the objective is to select the best samples for the given task.
Fortunately, there is a field of machine learning that tries to optimize processes in order to maximize a metric (or reward) function. Reinforcement learning algorithms are essentially search algorithms that find a solution by trial and error. Therefore, this kind of algorithm could be the key to solving the data valuation problem, provided that it is scalable to large amounts of data and agnostic to the supervised learning model used.
This is why this article proposes a method of data valuation based on reinforcement learning that takes into account the information that a data sample can provide, both individually and contextually.
This paper is organized as follows. In Section 2 we briefly describe the state of the art in data quality assessment for supervised models. We present the proposed method in Section 3, and the experimental results over several datasets are shown in Section 4. The paper ends with some conclusions in Section 5.
## 2 State of the art
There are previous approaches in the field of data quality assessment for supervised learning models. First, and a fairly common method, is the Leave-One-Out (LOO) approach. This technique attempts to isolate the effect of each data point on the training sample by simply removing it from the set and looking at the results obtained. The worse the result achieved by removing one of the records, the greater the value of that data point is considered to be. While simple, the computational cost of this approach is of the order of \(O(N^{2})\) for a dataset with \(N\) training samples. Nevertheless, this method has been reformulated in a more computationally efficient way (Koh and Liang, 2017).
On the other hand, and based on game theory, Shapley values can be defined (Lundberg and Lee, 2017). Shapley values are a widely used method for variable selection and for the explainability of supervised learning models. However, it has been shown in the literature that they can also be used for the particular use case of data quality assessment (Ghorbani and Zou, 2019). The original computational cost of Shapley values is \(O(N!)\), where \(N\) is the number of training samples, so it is necessary to perform Monte Carlo sampling for cases where the number of items to be discriminated is high. In particular, in Ghorbani and Zou (2019) they reduce this cost to \(O(2^{n})\) by reducing the problem to a cooperative game and using a truncated Monte Carlo (TMC) sampling strategy.
Finally, there is also prior work on methods that use reinforcement learning techniques for the problem of data quality assessment, such as Data Valuation using Reinforcement Learning (DVRL) (Yoon et al., 2020). This method proved to be experimentally quite powerful, while featuring a computational complexity well below those discussed above (see D). It approaches the objective as a multi-step trajectory problem, so it is forced to generate a reward that takes into account a moving window of scores within each episode versus the score of the actual step, an episode being a set of batches of the whole training dataset. This technique uses the REINFORCE algorithm (Williams, 1992) and treats each record individually to produce a probability value to be selected or not by a multinomial distribution.
Therefore, the current paper proposes a novel methodology based on a reinforcement learning approach improving over the one previously discussed, aimed at enhancing certain aspects of the aforementioned technique.
## 3 RLBoost method
The problem of selecting and sorting a collection of training data for its application in a supervised learning model can be addressed as an optimization problem, making the field of reinforcement learning particularly relevant. Specifically, the task involves selecting a subset of records (actions) from a given set (state) to optimize the model score (reward). To accomplish this, the paper delves into the domain of reinforcement learning, with a focus on agents based on policy gradients and Actor-Critic strategies.
Reinforcement learning (Sutton and Barto, 2018) is the field that deals with the optimization of decision sequences along a trajectory. We define the following concepts:
1. A trajectory \(T\) is composed of single steps \(t\in T\).
2. During the trajectory \(T\), decisions \(a_{t}\) are made based on certain situations or states \(s_{t}\) for which an immediate reward \(r_{t}\) is obtained.
3. The agent must adjust the parameters \(\theta\) of a policy \(\pi\) by which to decide what action to take in a given state, which will be denoted as \(\pi_{\theta}(a_{t}|s_{t})\).
4. In the case of agents based on Actor-Critic strategies it will also be necessary to adjust the parameters \(\phi\) of a complementary estimator of the value function \(V\) depending only on the current state. This estimator will be in charge of predicting the cumulative reward from the current state to the end of the trajectory, and such estimation will be noted in the form \(V_{\phi}(s_{t})\).
### Proximal Policy Optimization
The reinforcement learning algorithm used in this study is Proximal Policy Optimization (PPO). Yoon et al. (2020) use a regular Policy Gradient method, but PPO (Schulman et al., 2017) usually outperforms such techniques for several reasons. First, we have the use of advantages \(A_{\phi}(s_{t},a_{t},\pi)=\delta_{t}+\gamma\delta_{t+1}+\gamma^{2}\delta_{t+2}+...+\gamma^{T-t-1}\delta_{T-1}\), where \(\delta_{t}=r_{t}+\gamma V_{\phi}(s_{t+1})-V_{\phi}(s_{t})\) and \(\gamma\) is a discount parameter over the steps in a trajectory. These advantages are used to check whether some actions obtain higher/lower rewards than expected, so that they are reinforced/disincentivized accordingly. This leads us to a generalization of the advantage calculation called Generalized Advantage Estimation (GAE), defined as \(A_{t}^{GAE(\gamma,\lambda)}=\sum\limits_{l=0}^{\infty}(\lambda\gamma)^{l}\delta_{t+l}\). Here, the use of an exponential weight discount \(\lambda\) is needed to control the bias-variance trade-off.
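As an illustration, a minimal NumPy sketch of this advantage computation is given below; variable names are our own and the hyperparameter values are common defaults rather than those of any particular implementation.

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over a single, non-terminated trajectory.

    rewards:    r_t for t = 0..T-1
    values:     V(s_t) for t = 0..T-1
    last_value: bootstrap estimate V(s_T) for the state after the final step
    """
    T = len(rewards)
    advantages = np.zeros(T)
    next_value, running_gae = last_value, 0.0
    # Backward recursion: A_t = delta_t + gamma * lam * A_{t+1}
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * next_value - values[t]
        running_gae = delta + gamma * lam * running_gae
        advantages[t] = running_gae
        next_value = values[t]
    return advantages
```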
On the other hand, PPO incorporates an intrinsic reward mechanism via an entropy term to promote exploration. Finally, the clipping component of this algorithm's policy loss acts as a regularizer of the model, ensuring that the policy does not suffer from excessive changes between each iteration and thereby enabling smoother improvements to be made.
The original PPO algorithm has this loss function definition
\[L(\theta)=L^{clip}(\theta)-c_{1}L^{VF}(\phi)+c_{2}S(\theta), \tag{1}\]
with the following entropy bonus loss to encourage exploration
\[S(\theta)=\mathbb{E}_{t}\left[-\pi_{\theta}(a_{t},s_{t})\log(\pi_{\theta}(a_{ t},s_{t}))\right],\]
the value function (or critic) loss defined as
\[L^{VF}(\phi)=\mathbb{E}_{t}\left[\left\|r_{t}+\gamma V_{\widehat{\phi}}(s_{t+1},\pi)-V_{\phi}(s_{t},\pi)\right\|_{2}^{2}\right], \tag{2}\]
with an old copy of the value function estimator with parameters \(\widehat{\phi}\), and the policy (or actor) loss as
\[\begin{gathered} t_{1}=A_{\phi}(s_{t},a_{t},\pi)R_{t}(\theta),\\ t_{2}=A_{\phi}(s_{t},a_{t},\pi)clip(R_{t}(\theta),1-\epsilon,1+ \epsilon),\\ L^{clip}(\theta)=\mathbb{E}_{t}[\min(t_{1},t_{2})],\end{gathered} \tag{3}\]
where \(R_{t}=\pi_{\theta}(a_{t},s_{t})/\pi_{\widehat{\theta}}(a_{t},s_{t})\). Here, we can see that the clipping method is used as a kind of regularizer to avoid aggressive changes on the policy function as previously mentioned.
### PPO as a bandit agent
After reviewing the existing literature on data evaluation using reinforcement learning, it became apparent that a simplified approach could potentially establish a starting point for improvement. As outlined in earlier sections, the
problem was formulated as a one-step optimization problem in which a reinforcement learning algorithm selects a subset of data from a training dataset to improve a chosen metric. The size of the data batch for the agent's states (\(N\)) must be large enough to be representative of the original data, but not so large as to excessively increase the complexity of the problem.
As part of this simplification process, a reward mechanism was defined based on calculating the difference between the validation score obtained by a supervised estimator trained with the subset of data selected by the reinforcement learning agent and the validation score of the same estimator trained with the full data batch (see Figure 1).
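A minimal sketch of this reward, assuming a scikit-learn estimator and a hypothetical binary selection mask produced by the agent (names and the penalty for degenerate selections are our own choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def batch_reward(X_batch, y_batch, mask, X_val, y_val):
    """Validation score with the agent's filtering minus the score without it."""
    baseline = LogisticRegression(max_iter=1000).fit(X_batch, y_batch)
    baseline_score = baseline.score(X_val, y_val)
    selected = np.asarray(mask, dtype=bool)
    if selected.sum() == 0 or np.unique(y_batch[selected]).size < 2:
        return -1.0  # penalize selections on which no estimator can be trained
    filtered = LogisticRegression(max_iter=1000).fit(X_batch[selected],
                                                     y_batch[selected])
    return filtered.score(X_val, y_val) - baseline_score
```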
Having specified the use case at hand, one could argue that a bandit method would be a better fit here, since the only actions available in the environment are the selections of data points in the batch being evaluated, for which an immediate reward is obtained. In fact, to adapt PPO to a problem formulated in this way, we only need to check that it is mathematically sound, so that we can still take advantage of the improvements of the algorithm mentioned above. This adaptation involves making two assumptions: \(\gamma=0\) and \(t=0\), as there is no trajectory.
These assumptions lead us to rewrite the previous formulation. First, the advantages calculated in the GAE are simplified in the form:
\[A_{\phi}(s,a,\pi)=\delta=r-V_{\phi}(s,\pi).\]
Therefore, the new expression of the policy loss is as follows
\[\begin{gathered} t_{1}=\delta R(\theta),\\ t_{2}=\delta clip(R(\theta),1-\epsilon,1+\epsilon),\\ L^{clip}(\theta)=\min(t_{1},t_{2}),\end{gathered} \tag{4}\]
and the loss function of the value function (the critic part) is given by:
\[L^{VF}(\phi)=\left\|r-V_{\phi}(s,\pi)\right\|_{2}^{2}. \tag{5}\]
It is worth noting that, as we see in the differences between equation (2) and equation (5), what was previously a temporal-difference error between the current value estimate and the estimate for the next state has become simply the value estimate of the current state. This tells us that the value function of this agent predicts, exclusively from the current data, whether a higher or lower difference in model score is to be expected.
Finally, it should also be noted that the entropy bonus, which is the term that allows the agent to explore, becomes
\[S(\theta)=-\pi_{\theta}(a,s)\log(\pi_{\theta}(a,s)).\]
Having verified the mathematical feasibility of the proposed change, it turns out that the change is not only feasible but also provides some useful advantages for this specific use case, such as the estimation of the profit of filtering low-quality data at each step through the agent's critic, or the clipping of the actor (equation (4)) acting as a regularization method for the agent.
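To make the adaptation concrete, a minimal PyTorch sketch of the resulting one-step losses follows; tensor names and coefficient values are illustrative assumptions, not the exact implementation.

```python
import torch

def bandit_ppo_loss(new_logp, old_logp, value_pred, reward,
                    clip_eps=0.2, c1=1.0, c2=0.01):
    """One-step PPO loss: the advantage collapses to delta = r - V(s)."""
    delta = (reward - value_pred).detach()              # advantage
    ratio = torch.exp(new_logp - old_logp)              # R(theta)
    t1 = delta * ratio
    t2 = delta * torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(t1, t2).mean()             # maximizes L^clip, eq. (4)
    value_loss = (reward - value_pred).pow(2).mean()    # critic loss, eq. (5)
    entropy = -(new_logp.exp() * new_logp).mean()       # exploration bonus S
    return policy_loss + c1 * value_loss - c2 * entropy
```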
### Model architecture
For a complete understanding of the architecture of the model used by this particular algorithm and presented in Figure 1, it is necessary to clarify certain aspects beforehand.
First, depending on the use case for which the strategy presented in this paper is to be used, some kind of vectorization of each of the data samples to be evaluated is needed. In the case of tabular data this step is straightforward since each of the records is a vector in itself and the vectorizing part can be simply a series of fully connected layers or just one that functions as a projection to the latent state of the data. However, in the case of images, some kind of architecture would be needed to vectorize each of these samples, for example, a
Figure 1: Schematic diagram of the operation of the proposed RLBoost model. It shows the application cases for both tabular data and image data, where a non-trainable data vectorizer will be needed for the latter. In this diagram it can be also seen that the reward calculation is the difference between the score of the episode using the proposed filtering versus the score of the model without filtering, both against the validation set.
Contrastive Language-Image Pretrained model (CLIP) (Radford et al., 2021).
Once all the data has been vectorized, each of these vectors is used as input to a sequence of \(L\) Transformer Encoders (Vaswani et al., 2017). An extra parameterizable vector, known as _CLS Token_, is also added to this process. The strategy of including a learnable vector called _CLS_ is frequently employed in document classification within the field of natural language processing (_NLP_) (Devlin et al., 2018). The output of the \(L\) Transformer Encoders corresponding to the _CLS Token_ is expected to capture the contextual information of the entire batch of vectors, allowing it to estimate the value function (5). In this case, the value function pertains to the complete batch of proposed data.
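The following PyTorch sketch summarizes this architecture; layer sizes and head names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RLBoostNetwork(nn.Module):
    """Projects each sample, prepends a learnable CLS token and runs L
    Transformer Encoders; per-sample outputs give selection logits (actor),
    while the CLS output feeds the value head (critic)."""

    def __init__(self, n_features, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.project = nn.Linear(n_features, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.actor_head = nn.Linear(d_model, 1)    # keep/drop logit per sample
        self.critic_head = nn.Linear(d_model, 1)   # value of the whole batch

    def forward(self, x):                           # x: (B, N, n_features)
        tokens = self.project(x)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, tokens], dim=1))
        value = self.critic_head(h[:, 0]).squeeze(-1)   # from the CLS output
        logits = self.actor_head(h[:, 1:]).squeeze(-1)  # one logit per sample
        return logits, value
```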
## 4 Experiments
As depicted in Figure 1, the proposed method is versatile and applicable to various use cases, as long as the input data can be vectorized to serve as input to the Transformer Encoder. Hence, this approach will be evaluated on both tabular and image data, where the image data will be vectorized beforehand.
In addition, in order to test the training-data filtering behavior of the proposed method, a noise rate has been introduced in each of the training datasets to be tested. For tabular data, this noise consists of shifting the target class of a random percentage of the training samples to the opposite class; for image data (the MNIST dataset), a complete circular class shift is applied. No noise is introduced in the validation or test datasets.
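A sketch of this noise-injection procedure, assuming integer class labels:

```python
import numpy as np

def corrupt_labels(y, noise_rate, n_classes, seed=0):
    """Corrupt a random fraction of the training labels.

    Binary problems flip to the opposite class; multi-class problems get a
    circular shift (for MNIST: 0->1, 1->2, ..., 9->0). Returns the corrupted
    labels and the indices of the noisy records (used later for the ROC
    evaluation of noise detection).
    """
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    idx = rng.choice(len(y), size=int(noise_rate * len(y)), replace=False)
    if n_classes == 2:
        y_noisy[idx] = 1 - y_noisy[idx]
    else:
        y_noisy[idx] = (y_noisy[idx] + 1) % n_classes
    return y_noisy, idx
```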
Since we are introducing noise into datasets that are expected to be sufficiently clean, we evaluate each of the proposed methods in two different ways. In all cases, we compare the accuracy of the supervised learning model trained using the filtering proposed by RLBoost against the same model trained with all available records, which we refer to as the "baseline". Additionally, as in some cases we intentionally introduce noise (15% and 30%), we also evaluate the ability of the selection model to detect noisy records. This evaluation can be seen as a classification problem between noiseless and noisy records, where the positive class is a noiseless record and the negative class a noisy one. In this way, Receiver Operating Characteristic (ROC) curves can be computed to measure the effectiveness of each method.
The methods compared in this study will be those presented in Section 2. The LOO, Data Shapley (SHAP), DVRL and RLBoost methods will be run for each of the use cases presented. It should be noted that the version of Data Shapley used needs to be the one that improves the computational cost of the algorithm from \(O(N!)\) to \(O(2^{N})\) in order to obtain results for large dataset sizes.
It is necessary to emphasize that, since the LOO and SHAP methods are not agents intended for the task of data selection, a subsequent process has been performed after obtaining the data values from these methods. This process
involves a sweep with different thresholds for the values obtained so that, once the best cut-off threshold is selected against the validation set, this threshold is used to train the final model.
This sweep has been performed in order to compare the rest of the methods against the best possible execution of these methods. However, it generates a couple of side-effects that must be taken into account:
* This sweep is very costly to perform in the face of new data arrivals as it forces several re-trainings, so it is not desirable to perform.
* The score results of the models using the sweep have a built-in lower bound equal to the baseline score: a threshold that filters out no data always reproduces the baseline model, so the worst case for these methods is the baseline accuracy itself. In other words, the baseline score would be achieved even if no method had been applied. A sketch of the sweep is given below.
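For clarity, the sweep can be sketched as follows; the estimator and the threshold grid are illustrative choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def best_threshold_filtering(values, X_tr, y_tr, X_val, y_val):
    """Select the data-value threshold that maximizes validation accuracy.

    `values` are the per-sample scores returned by LOO or Data Shapley.
    The lowest threshold keeps all data, so the baseline score is always
    attainable (the lower-bound effect discussed above).
    """
    best_thr, best_score = None, -np.inf
    for thr in np.quantile(values, np.linspace(0.0, 0.9, 10)):
        keep = values >= thr
        if keep.sum() == 0:
            continue
        model = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])
        score = model.score(X_val, y_val)
        if score > best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score
```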
### Tabular data
Since each record in tabular datasets corresponds to a vector, the projection operation for this kind of problem is fairly straightforward. Specifically, it has been decided to use the Adult dataset and the A5A, W5A and CIFAR10 datasets (from the LibSVM repository; Chang and Lin, 2011) to test the performance with this kind of data.
It should be noted that in the case of the CIFAR10 dataset, which is not a binary classification problem, the problem was previously binarized by simply segregating class #1 from the rest. In all cases, the noise rates proposed for each of the tests are 0%, 15% and 30%.
As can be seen in Table 1, in all cases priority has been given to having as large a dataset as possible under test to ensure that the evaluation of the algorithm's performance is as realistic as possible. On the other hand, it should be noted that in the case of the Adult dataset, the same training, validation and test cuts and the same preprocessing as in the original DVRL paper (Yoon et al., 2020) have been used.
The parameters used in each of the runs are common to all experiments. In all cases, the internal generic estimator was a logistic regression (to speed up the
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \hline
**Dataset** & **Train** & **Validation** & **Test** & **\# Features** & **\# Classes** \\ \hline Adult & 1k & 400 & 47,442 & 123 & 2 \\ A5A & 5k & 1k & 10k & 123 & 2 \\ W5A & 5k & 1k & 10k & 300 & 2 \\ CIFAR10 & 4k & 1k & 10k & 3,072 & 10 \(\rightarrow\)2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of the records, features and classes of the tabular datasets to be used in the experimentation.
execution of the tests), the batches of records proposed to the agent contained 200 samples, the agent used 4 stacked Transformer Encoders and was trained for 1e5 steps, and the agent training batch size was 64 for all proposed datasets.
\begin{table}
\begin{tabular}{l||l l l|} & **0\% noise** & **15\% noise** & **30\% noise** \\ \hline
**Baseline** & **0.982** & **0.977** & 0.946 \\
**LOO** & 0.982(\(\pm\)0.0) & 0.977(\(\pm\)0.0) & 0.946(\(\pm\)0.0) \\
**SHAP** & — & — & — \\
**DVRL** & 0.979(\(\pm\)0.004) & 0.970(\(\pm\)0.004) & 0.899(\(\pm\)0.096) \\ \hline
**RLB (1e-1)** & 0.979(\(\pm\)0.001) & 0.970(\(\pm\)0.007) & 0.959(\(\pm\)0.005) \\
**RLB (1e-2)** & 0.974(\(\pm\)0.006) & 0.972(\(\pm\)0.005) & **0.966(\(\pm\)0.005)** \\
**RLB (1e-3)** & 0.980(\(\pm\)0.001) & 0.969(\(\pm\)0.004) & 0.961(\(\pm\)0.002) \\
**RLB (1e-4)** & 0.981(\(\pm\)0.001) & 0.970(\(\pm\)0.005) & 0.957(\(\pm\)0.005) \\ \hline \end{tabular}
\end{table}
Table 4: Table of scores against test data using the final filtering proposed by each of the models at W5A dataset at 0%, 15% and 30% noise ratio.
\begin{table}
\begin{tabular}{l||l l l|} & **0\% noise** & **15\% noise** & **30\% noise** \\ \hline
**Baseline** & **0.832** & **0.814** & 0.776 \\
**LOO** & 0.832(\(\pm\)0.0) & 0.814(\(\pm\)0.0) & 0.776(\(\pm\)0.0) \\
**SHAP** & 0.832(\(\pm\)0.00) & 0.793(\(\pm\)0.029) & 0.768(\(\pm\)0.007) \\ \hline
**RLB (1e-1)** & 0.807(\(\pm\)0.006) & 0.799(\(\pm\)0.008) & 0.813(\(\pm\)0.004) \\
**RLB (1e-2)** & 0.814(\(\pm\)0.005) & 0.811(\(\pm\)0.006) & 0.812(\(\pm\)0.007) \\
**RLB (1e-3)** & 0.813(\(\pm\)0.001) & 0.816(\(\pm\)0.004) & **0.814(\(\pm\)0.001)** \\
**RLB (1e-4)** & 0.814(\(\pm\)0.002) & 0.812(\(\pm\)0.005) & 0.814(\(\pm\)0.007) \\ \hline \end{tabular}
\end{table}
Table 2: Table of scores against test data using the final filtering proposed by each of the models at Adult dataset at 0%, 15% and 30% noise ratio.
\begin{table}
\begin{tabular}{l||l l l|} & **0\% noise** & **15\% noise** & **30\% noise** \\ \hline
**Baseline** & **0.844** & **0.838** & 0.807 \\
**LOO** & 0.844(\(\pm\)0.0) & 0.838(\(\pm\)0.0) & 0.807(\(\pm\)0.0) \\
**SHAP** & — & — & — \\
**DVRL** & 0.828(\(\pm\)0.002) & 0.829(\(\pm\)0.004) & 0.809(\(\pm\)0.016) \\ \hline
**RLB (1e-1)** & 0.834(\(\pm\)0.004) & 0.831(\(\pm\)0.004) & 0.826(\(\pm\)0.006) \\
**RLB (1e-2)** & 0.833(\(\pm\)0.002) & 0.830(\(\pm\)0.004) & **0.831(\(\pm\)0.003)** \\
**RLB (1e-3)** & 0.827(\(\pm\)0.003) & 0.826(\(\pm\)0.002) & 0.823(\(\pm\)0.006) \\
**RLB (1e-4)** & 0.826(\(\pm\)0.002) & 0.825(\(\pm\)0.004) & 0.826(\(\pm\)0.001) \\ \hline \end{tabular}
\end{table}
Table 3: Table of scores against test data using the final filtering proposed by each of the models at A5A dataset at 0%, 15% and 30% noise ratio.
Tables 2, 3, 4 and 5 show the results of each of the proposed methods on the indicated datasets. On the one hand, we have the results of previous filtering methods (LOO, SHAP and DVRL) together with the model without any filtering (Baseline); on the other hand, we have the results of our RLBoost method (RLB) with different entropy bonus values (the hyperparameter that quantifies exploration in the PPO algorithm).
To assess the stability of the aforementioned methods, 5 executions were performed for each method in order to calculate the standard deviation of the evaluated metric. This allowed us to examine the variability and consistency of the results obtained. It is important to note that DVRL required several re-executions: unlike the methods evaluated through the threshold sweep, DVRL must make a final binary decision on whether or not to select each datum, and in some cases this selection was incompatible with training a final estimator, requiring a rerun. To obtain the 5 valid runs for all the proposed cases, 38 backup runs were necessary in addition to the 75 required runs (also considering the results of Table 11).
Several things must be emphasized in these tables. On the one hand, the SHAP method could not be executed on datasets whose training size is 5k examples, since the computational cost of the algorithm makes it infeasible (see Appendix D). On the other hand, and as expected, the baseline models worsen their performance as noise is introduced into the training dataset, which corroborates that the noise generation works correctly.
Additionally, it can also be observed that the RLBoost-based methods outperform the alternatives in cases where the noise is high and still work well when there is no noise, so the method appears to be quite robust as far as improving the final metric of the model (accuracy in our case) is concerned.
\begin{table}
\begin{tabular}{l||l l l} & **0\% noise** & **15\% noise** & **30\% noise** \\ \hline
**Baseline** & 0.880 & 0.781 & 0.683 \\
**LOO** & 0.890(\(\pm\)0.0) & 0.759(\(\pm\)0.0) & 0.648(\(\pm\)0.0) \\
**SHAP** & — & — & — \\
**DVRL** & 0.866(\(\pm\)0.031) & 0.748(\(\pm\)0.017) & 0.648(\(\pm\)0.010) \\ \hline
**RLB (1e-1)** & 0.896(\(\pm\)0.003) & 0.855(\(\pm\)0.007) & 0.797(\(\pm\)0.007) \\
**RLB (1e-2)** & 0.902(\(\pm\)0.006) & 0.869(\(\pm\)0.016) & 0.860(\(\pm\)0.007) \\
**RLB (1e-3)** & **0.902(\(\pm\)0.002)** & **0.882(\(\pm\)0.014)** & **0.879(\(\pm\)0.011)** \\
**RLB (1e-4)** & 0.898(\(\pm\)0.004) & 0.873(\(\pm\)0.014) & 0.866(\(\pm\)0.007) \\ \hline \end{tabular}
\end{table}
Table 5: Table of scores against test data using the final filtering proposed by each of the models at binarized CIFAR10 dataset at 0%, 15% and 30% noise ratio.
Figure 2 shows the performance against the test data for each of the models (0%, 15% and 30% noise) at the end of the final selection for every tabular dataset tested.
However, it can also be seen that the higher the error introduced, the more the generic estimator can take advantage of the data thanks to the selection
Figure 2: Final Scores for the agents trained over tabular datasets
made by our agent, which was the objective we were looking for.
On the other hand, in Figure 3 and Tables 6, 7, 8 and 9, we can see the ROC curves measuring whether the model has identified a particular record as a sample without noise (positive value) or with noise (negative value), based on the outputs of the agent before they are binarized into selected/not selected.
These curves show us that the capacity of the proposed agents is similar to the best methods available in the state of the art, but without the computational limitation of running Data Shapley (Ghorbani and Zou, 2019), and with improved stability compared to DVRL.
It should be noted that in the case of CIFAR10 the results corroborate the
Figure 3: ROC curves to measure the ability to detect noisy data in the Adult dataset, where the positive class is the data without noise and the negative class is the data with noise.
quality of the agent and are quite promising, as can be seen in Figures 2 and 4.
\begin{table}
\begin{tabular}{l||c c|} & **15\% noise** & **30\% noise** \\ \hline
**LOO** & 0.339(\(\pm\)0.0) & 0.509(\(\pm\)0.0) \\
**SHAP** & 0.729(\(\pm\)0.018) & 0.724(\(\pm\)0.019) \\
**DVRL** & 0.657(\(\pm\)0.193) & 0.734(\(\pm\)0.195) \\ \hline
**RLB (1e-1)** & 0.816(\(\pm\)0.006) & **0.856(\(\pm\)0.009)** \\
**RLB (1e-2)** & **0.831(\(\pm\)0.01)** & 0.825(\(\pm\)0.015) \\
**RLB (1e-3)** & 0.815(\(\pm\)0.009) & 0.839(\(\pm\)0.008) \\
**RLB (1e-4)** & 0.828(\(\pm\)0.008) & 0.819(\(\pm\)0.013) \\ \hline \end{tabular}
\end{table}
Table 6: Tables of AUCs detecting noisy data by each of the models in the Adult dataset
\begin{table}
\begin{tabular}{l||c c|} & **15\% noise** & **30\% noise** \\ \hline
**LOO** & 0.628(\(\pm\)0.0) & 0.455(\(\pm\)0.0) \\
**SHAP** & — & — \\
**DVRL** & 0.946(\(\pm\)0.008) & 0.859(\(\pm\)0.185) \\ \hline
**RLB (1e-1)** & **0.957(\(\pm\)0.013)** & **0.967(\(\pm\)0.003)** \\
**RLB (1e-2)** & 0.947(\(\pm\)0.008) & 0.967(\(\pm\)0.006) \\
**RLB (1e-3)** & 0.916(\(\pm\)0.008) & 0.933(\(\pm\)0.006) \\
**RLB (1e-4)** & 0.887(\(\pm\)0.007) & 0.892(\(\pm\)0.025) \\ \hline \end{tabular}
\end{table}
Table 8: Tables of AUCs detecting noisy data by each of the models in the W5A dataset
In Appendices A and B we report the result graphs of the rest of the ROC analysis on the proposed tabular datasets.
### Image data
As previously mentioned in Subsection 3.3, this algorithm is not restricted to tabular data. Any problem in which a data sample can be vectorized in advance can be used to evaluate the quality of the given samples.
In this case it has been decided to test the robustness of the model by evaluating the quality of vectorized image data using the MNIST dataset [4]. These images have been vectorized with a CLIP model (Radford et al., 2021) with a pre-trained ResNet50. Table 10 shows the different sets and the specification of each one after vectorizing and splitting the data.
Since this dataset contains handwritten digits from 0 to 9, it was also decided to make the injected error somewhat harder. Each erroneous sample is relabeled as the next digit in a circular fashion, so that 0 becomes 1, 1 becomes 2, etc., and 9 becomes 0.
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline
**Dataset** & **Train** & **Validation** & **Test** & **\# Features** & **\# Classes** \\ \hline MNIST & 5k & 1k & 5k & 1024 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Details of the records, features and classes of the image dataset to be used in the experimentation.
\begin{table}
\begin{tabular}{l||c c|} & **15\% noise** & **30\% noise** \\ \hline
**LOO** & 0.619(\(\pm\)0.0) & 0.592(\(\pm\)0.0) \\
**SHAP** & — & — \\
**DVRL** & 0.548(\(\pm\)0.061) & 0.498(\(\pm\)0.003) \\ \hline
**RLB (1e-1)** & 0.757(\(\pm\)0.014) & 0.767(\(\pm\)0.005) \\
**RLB (1e-2)** & **0.779(\(\pm\)0.044)** & **0.855(\(\pm\)0.005)** \\
**RLB (1e-3)** & 0.764(\(\pm\)0.019) & 0.82(\(\pm\)0.009) \\
**RLB (1e-4)** & 0.657(\(\pm\)0.021) & 0.678(\(\pm\)0.021) \\ \hline \hline \end{tabular}
\end{table}
Table 9: Tables of AUCs detecting noisy data by each of the models in the binarized CIFAR10 dataset
\begin{table}
\begin{tabular}{l||l l l|} & **0\% noise** & **15\% noise** & **30\% noise** \\ \hline
**Baseline** & **0.968** & 0.779 & 0.621 \\
**LOO** & 0.968(\(\pm\)0.000) & 0.780(\(\pm\)0.000) & 0.620(\(\pm\)0.000) \\
**SHAP** & - & - & - \\
**DVRL** & 0.908(\(\pm\)0.119) & 0.923(\(\pm\)0.081) & 0.780(\(\pm\)0.379) \\ \hline
**RLB (1e-1)** & 0.967(\(\pm\)0.001) & 0.890(\(\pm\)0.013) & 0.782(\(\pm\)0.017) \\
**RLB (1e-2)** & 0.968(\(\pm\)0.002) & 0.910(\(\pm\)0.007) & 0.842(\(\pm\)0.016) \\
**RLB (1e-3)** & 0.969(\(\pm\)0.002) & **0.914(\(\pm\)0.009)** & **0.867(\(\pm\)0.018)** \\
**RLB (1e-4)** & 0.968(\(\pm\)0.001) & 0.876(\(\pm\)0.011) & 0.802(\(\pm\)0.030) \\ \hline \end{tabular}
\end{table}
Table 11: Table of scores against test data using the final filtering proposed by each of the models in the MNIST data set
Figure 5: Final scores of the agents trained over the MNIST data
\begin{table}
\begin{tabular}{l||l l|} & **15\% noise** & **30\% noise** \\ \hline
**LOO** & 0.440(\(\pm\)0.0) & 0.575(\(\pm\)0.0) \\
**SHAP** & - & - \\
**DVRL** & 0.884(\(\pm\)0.192) & **0.977(\(\pm\)0.006)** \\ \hline
**RLB (1e-1)** & 0.816(\(\pm\)0.006) & 0.737(\(\pm\)0.03) \\
**RLB (1e-2)** & **0.853(\(\pm\)0.014)** & 0.861(\(\pm\)0.012) \\
**RLB (1e-3)** & 0.839(\(\pm\)0.014) & 0.843(\(\pm\)0.012) \\
**RLB (1e-4)** & 0.679(\(\pm\)0.013) & 0.688(\(\pm\)0.016) \\ \hline \end{tabular}
\end{table}
Table 12: Table of AUCs detecting noisy data by each of the models in image dataset
Figure 6: ROC curves to measure the ability to detect noisy data in the MNIST dataset, where the positive class is the data without noise and the negative class is the data with noise.
In view of the above results, we can conclude that the RLBoost method is effective for its purpose. In each of the tests performed on the proposed datasets, it obtained, if not the best accuracy after agent filtering, a fairly competitive one. One striking case is the exceptionally good performance of the DVRL method at detecting noisy data when the noise is 30% (Table 12). However, in view of the results of the other runs, we can see that DVRL suffers from instability in yielding good results, while the proposed method does not.
## 5 Conclusions and future work
The problem of data valuation can be framed as a search problem, and in particular it is possible to formulate it as a trajectory-free reinforcement learning problem. This approach makes it possible to understand that each data point is part of a context and that it will be more or less valuable depending on the supervised learning model used to model it.
This paper opens a new framework for evaluating the data of a supervised learning model using reinforcement learning without any trajectory approach. In this sense, it opens up different branches of research. First, one of the next steps is to apply this strategy to text classification problems in order to extend a training dataset, since in this setting the manual evaluation of records by experts is often very expensive and specialized. With a properly labeled validation set, training data could be collected automatically and then evaluated with the proposed method. On the other hand, it also opens the possibility of improving the reinforcement learning algorithm (perhaps using SAC (Haarnoja et al., 2018) or V-MPO (Song et al., 2020)), taking into account that it must be of the Actor-Critic type. This improvement should put much emphasis on the sample-efficiency of the method, since this would allow faster convergence for more complex estimators. Finally, there could be some improvements to the model architecture, like using the Longformer of Beltagy et al. (2020) to try larger batches without increasing the computational cost of the algorithm, or using an Encoder/Decoder architecture for the policy network.
## Appendix A A5A dataset
## Appendix B W5A dataset
Figure 8: ROC curves to measure the ability to detect noisy data in the W5A dataset, where the positive class is the data without noise and the negative class is the data with noise.
Figure 7: ROC curves to measure the ability to detect noisy data in the A5A dataset, where the positive class is the data without noise and the negative class is the data with noise.
Ablation study
To understand which elements or parts make RLBoost work the way it does, we ablate different parts of the model and observe how it behaves. In particular, we study the influence that the CLS token and the Transformer Encoder in general have on the final performance of the agents.
Therefore, we executed each of the combinations shown in the experimental part 5 times, using only the baseline score as input to the value function (SB), only the CLS token derived from the Transformer Encoder over the whole batch of data (CLS), or both values as input to the value function (CLS+SB).
### Adult ablation scores
Figure 9: Final Scores for the agents trained over the Adult dataset making the ablations
### A5A ablation scores
### W5A ablation scores
### CIFAR10 ablation scores
### MNIST ablation scores
## Appendix D Time cost comparison
As this is a reinforcement learning problem, the computational cost grows with the number of interactions between the environment and the agent. Figure 14 illustrates several points: first, the high computational cost of Data Shapley and the low computational cost of LOO; second, the stability of the computational cost of our agents regardless of the number of training samples used; and third, the cost of increasing the batch size of examples that the agent sees at each interaction with the environment.
Figure 13: Final Scores for the agents trained over the MNIST dataset making the ablations |
2304.12895 | Discovering Graph Generation Algorithms | We provide a novel approach to construct generative models for graphs.
Instead of using the traditional probabilistic models or deep generative
models, we propose to instead find an algorithm that generates the data. We
achieve this using evolutionary search and a powerful fitness function,
implemented by a randomly initialized graph neural network. This brings certain
advantages over current deep generative models, for instance, a higher
potential for out-of-training-distribution generalization and direct
interpretability, as the final graph generative process is expressed as a
Python function. We show that this approach can be competitive with deep
generative models and under some circumstances can even find the true graph
generative process, and as such perfectly generalize. | Mihai Babiac, Karolis Martinkus, Roger Wattenhofer | 2023-04-25T15:06:22Z | http://arxiv.org/abs/2304.12895v1 | # Discovering Graph Generation Algorithms
###### Abstract
We provide a novel approach to construct generative models for graphs. Instead of using the traditional probabilistic models or deep generative models, we propose to instead find an algorithm that generates the data. We achieve this using evolutionary search and a powerful fitness function, implemented by a randomly initialized graph neural network. This brings certain advantages over current deep generative models, for instance, a higher potential for out-of-training-distribution generalization and direct interpretability, as the final graph generative process is expressed as a Python function. We show that this approach can be competitive with deep generative models and under some circumstances can even find the true graph generative process, and as such perfectly generalize.
## 1 Introduction
Generating new samples of graphs similar to a given set of graphs is a long-standing problem, which initially was tackled with various statistical models, such as the Erdos-Renyi model (Erdos and Renyi, 1959; Holland et al., 1983; Eldridge et al., 2017). While such models lend themselves well to formal analysis, they do not closely fit real-world graph distributions. More recently, deep generative models have proven to fit graph distributions well (You et al., 2018; Liao et al., 2020; Simonovsky and Komodakis, 2018; Martinkus et al., 2022; Haefeli et al., 2022; Vignac et al., 2022). However, similar to other deep models, they are not interpretable and struggle to generalize to graph sizes outside of the training distribution.
In this work, we propose an alternative approach. Given a set of graphs, we aim to learn a single algorithm, represented as a Python function, that can generate that given set of graphs as well as new examples. If we find an algorithm that fits the data well we can directly inspect it and potentially learn about the generative process that originally created the data. This can be quite useful for example in social sciences. Additionally, the produced algorithm will have a predictable behavior if the input parameters go beyond those observed during training.
We achieve this by factorizing the graph construction algorithm into two loops (Figure 1) over nodes and their potential neighbors. Evolutionary search is used to assemble the logic inside the loops to define when graph edges are added or removed. This setup is closely related to genetic programming (Sobania et al., 2022) where programs are evolved to match input-output examples, especially the grammar-guided (Whigham et al., 1995; Forstenlechner et al., 2016; Fenton et al., 2017) and linear (Lalejini and Ofria, 2019; Hernandez et al., 2019; Dolson et al., 2019; Ferguson et al., 2020) genetic programming approaches. In contrast to stack-based approaches (Perkis, 1994; Spector et al., 2005) they aim to generate code in a standard programming language such as Python. In general, evolutionary search has proven to be a powerful method for code search. It can even discover neural network architectures and their training functions (Real et al.,
Figure 1: An example program from the search space that generates a grid graph. i, j, N and W are aliases for int00-03, where N is the number of nodes in the grid and W is the width of the grid. Lines 1, 2, 7, and 8 are hard-coded, while the contents of the two for loops are what we are searching over.
2020). Our work is also related to methods that aim to learn a program that draws a figure (Ellis et al., 2018; 2020). However, we want a program that captures a family of graphs, not just a single figure.
Note that there are some genetic programming packages that allow generating Python code (Fenton et al., 2017). However, these proved to be too slow and not well suited for our problem.
We implement our method as a highly-optimized C++ framework and experimentally validate it on synthetic and real-world datasets. When applied to grid graphs with known grid sizes, our search finds algorithms that generalize perfectly to arbitrary grid sizes.
## 2 Method
In this section, we explain our representation, and how the evolutionary search is performed.
### Algorithm Representation
Ideally, the representation should be fast to execute, easy to mutate, and human-readable. Unfortunately, some of these goals can be hard to align. Instead, we choose to build a representation that is easy to mutate and can be easily converted to either a human-readable or an efficiently executable format. This representation consists of a tree of nodes (Figure 2, middle), in which each node corresponds (with few exceptions) to one line of code in a Python program (Figure 2, left), where the lines contain either simple operations or if-statements. The structure of the tree matches the logical structure of the code that is implied through indentation: the tree branches out at if-statements and is chain-like otherwise. This allows us to execute the lines of code in the correct sequence by simply traversing the tree in the right order (Appendix A.1). As shown in Figure 1, our algorithms consist of two for-loops, which can be nested. The code tree can represent the contents of such a for-loop, but the loop itself is hard-coded in the function that executes the individual.
To respect the code's branching behavior, we define three types of nodes: statement nodes, if-nodes, and empty/root nodes. Each node has a fixed number of named slots for its children, where each slot contains either a pointer to the child node or a marker, the NULL pointer if the child is missing.
A _statement-node_ contains an instruction to be executed. They have a single child slot, named nextInBranch, which points to the next line of code in the current branch (i.e. at the current level of indent for Python code). _Root-nodes_ are similar to statement-nodes but do not contain any instructions. They are at the root of a code tree, and their purpose is to simplify inserting new code at the beginning of the program. _If-nodes_ contain the address of a boolean condition variable, which decides which branch to take, and they have three child slots. The slots correspond to the "then" branch, to the "else" branch, and to the next line of code at the same level of indent and are named thenBranch, elseBranch and nextInBranch respectively.
Given this structure, we convert the code to a Python-like representation as follows: We traverse the tree in the order currentNode\(\rightarrow\)thenBranch\(\rightarrow\)elseBranch\(\rightarrow\)nextInBranch and, keeping track of the current nesting level, we simply output the associated lines of code.
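Although our framework is implemented in C++, this traversal can be sketched in Python as follows; the node fields mirror the slot names introduced above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodeNode:
    kind: str                                  # "root", "statement" or "if"
    instruction: str = ""                      # e.g. "int04 = int00 + int01"
    condition: str = ""                        # boolean variable for if-nodes
    thenBranch: Optional["CodeNode"] = None
    elseBranch: Optional["CodeNode"] = None
    nextInBranch: Optional["CodeNode"] = None

def emit_python(node, indent=0, lines=None):
    """Traverse node -> thenBranch -> elseBranch -> nextInBranch, emitting
    one line of code per node at the current indentation level."""
    if lines is None:
        lines = []
    if node is None:
        return lines
    pad = "    " * indent
    if node.kind == "if":
        lines.append(f"{pad}if {node.condition}:")
        emit_python(node.thenBranch, indent + 1, lines)
        if node.elseBranch is not None:
            lines.append(f"{pad}else:")
            emit_python(node.elseBranch, indent + 1, lines)
    elif node.kind == "statement":
        lines.append(f"{pad}{node.instruction}")
    # root nodes emit nothing themselves
    return emit_python(node.nextInBranch, indent, lines)
```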
Figure 2: The internal representations of a program. Left: Python code. Middle: internal representation as code tree. Right: compiled instructions used for running the code tree.
Similarly to the conversion into Python, it is possible to traverse the tree and directly execute it. However, this has some drawbacks and can be slow. Instead, we propose to use a simple compilation step (Figure 2, right) to improve evaluation speed inside the for-loops and thus the search speed. We discuss this and other implementation details of the representation in Appendix A.
The instructions allowed in our representation are listed in Appendix A.2. They cover the usual arithmetic, relational and Boolean operations, if-else statements, and function calls for generating random numbers. We restrict the available variables to a limited set that is strongly typed. Note that in our setup all variables are global. For constructing the graph, we use an adjacency matrix which the algorithm can manipulate and interrogate through dedicated functions. Notably, we disallow while-loops, additional for-loops (except the two hard-coded ones), and jumps to ensure all algorithms have finite running time.
### Evolutionary Search
The evolutionary search is started by initializing each individual as an empty algorithm. Every round we use tournament selection based on the graph fitness. The tournaments are stochastic, where softmax is applied on the fitness values and the winners are then randomly sampled. Taking inspiration from simulated annealing (Kirkpatrick et al., 1983) we gradually decay the softmax temperature during the search to ensure better exploration during the initial phases. The children are generated only through mutation of the parents, without crossovers, as done by Real et al. (2020). The mutations are effectively point-wise. We either delete, insert or change a command by changing one of its variables. In each round, the whole generation is replaced with the children, with some number of the best (elite) individuals being preserved. More information is provided in Appendix B.
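One generation of this search can be sketched as follows; population handling, the tournament size and the elite count are illustrative, and `mutate` stands for the point-wise mutation operator described above:

```python
import numpy as np

def next_generation(population, fitnesses, mutate, temperature,
                    n_elite=2, tournament_size=8, seed=None):
    """Softmax-tournament selection with elitism; children via mutation only."""
    rng = np.random.default_rng(seed)
    order = np.argsort(fitnesses)[::-1]
    children = [population[i] for i in order[:n_elite]]     # preserve elites
    while len(children) < len(population):
        contenders = rng.choice(len(population), tournament_size, replace=False)
        # Softmax over fitness; a decaying temperature sharpens the selection
        logits = np.asarray([fitnesses[i] for i in contenders]) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        winner = contenders[rng.choice(tournament_size, p=probs)]
        children.append(mutate(population[winner]))
    return children
```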
To determine the fitness, we use the individual to generate a set of graphs and compare those to the training set graphs. Determining if two graph distributions are the same is a hard task (O'Bray et al., 2022). For our fitness function, we chose to use a randomly initialized GIN graph neural network (Xu et al., 2019) as a powerful feature extractor (Thompson et al., 2022) to embed the training graphs and graphs produced by the generated code and then compare the two resulting distributions (Appendix B.3). The fitness value is computed by using a Radial basis function (RBF) kernel and computing Maximum Mean Discrepancy (MMD) between the two embedding distributions. The use of a randomly initialized model helps us avoid cumbersome training of the graph neural network in an adversarial setting, but still provides us with a well-encompassing feature extractor that can implicitly capture various graph structures and even node and edge features. This contrasts with the alternative of manually computing various pre-defined graph metrics, such as clustering coefficient and node degree distributions. Note that Dziugaite et al. (2015) have shown that it is possible to train generative adversarial networks for image generation by relying only on MMD instead of a trained discriminator, so MMD can be a well-encompassing metric for generative model training.
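The fitness itself can be sketched as follows, assuming the GIN embeddings of both graph sets are precomputed; the kernel bandwidth is an illustrative choice.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate between two embedding sets (RBF kernel)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def fitness(generated_embeddings, train_embeddings):
    """Higher is better: the negated MMD between generated and training graphs."""
    return -rbf_mmd2(generated_embeddings, train_embeddings)
```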
## 3 Experiments
Our work has similarities to autoregressive deep graph generative models (Liao et al., 2020; You et al., 2018), as our search is biased towards iterative algorithms. Thus, we use the experimental setup of GRAN (Liao et al., 2020) and compare it to GRAN and GraphRNN (You et al., 2018) as baselines for our experiments. The datasets and their 80/20 split between training and test data match the ones used by GRAN Liao et al. (2020). We only adjust the Grid dataset to filter out isomorphic copies of grids from the dataset to avoid data leakage between the training and test sets. To make training and search faster we consider grids with 9 to 81 nodes. We re-train the baselines on this modified dataset using the original hyperparameters. In our case, we do not make use of a validation set, as in our setup it is redundant, since we do not expect overfitting. During the search procedure, similarly to neural network training, we follow a mini-batch approach, where in every round of search a random subset of 16 test graphs is used to compute the fitness scores. This helps to avoid performing expensive MMD computation over the whole training set at once. See Appendix B.4 for other search hyperparameters. The code is publicly available1.
Footnote 1: [https://github.com/MihaiBabiac/graph_gen_algo](https://github.com/MihaiBabiac/graph_gen_algo)
Table 1 shows that our discovered graph algorithms perform quite well on the simpler grid and lobster graph datasets. While the deep learning models achieve slightly better statistical similarity,
the discovered algorithms are competitive and respect constraints, such as graphs having no triangles (clustering coefficient of \(0\)). Importantly, they do all this, while providing an interpretable graph generator. However, the approach struggled with more complex protein graphs. This might indicate insufficient exploration under some circumstances.
Even though we are able to find an algorithm that produces graphs statistically similar to grids when supplied just with the node count, the generated graphs are not true grids (Figure 3). In our setup, the algorithm can also be conditioned on additional input values. If we instead perform our search over algorithms that take both the node count and the grid width as inputs, our method manages to recover the true generative algorithm (Figure 4). While it was discovered only using a dataset with up to 81 nodes, it can generalize to produce perfect grids of any size, showcasing the potential benefit of using program synthesis for graph generation. This is impossible with current deep generative models, which are also incapable of generating perfect, rectangular grids even without considering extrapolation (Liao et al., 2020). This suggests that providing extra inputs to the algorithm can give a user additional control over the discovered function's outputs, and also makes the search simpler. The discovered algorithm in Figure 4 is also quite interpretable. This is the case for most of the algorithms we have discovered (Appendix C).
## 4 Conclusion
In this work, we show that it is possible to use program synthesis to discover interpretable graph generation algorithms. In certain cases even the true generative process can be discovered, resulting in ideal extrapolation. The performance on more complex graph families can likely be improved by more carefully tuning the fitness function to make it smoother. As shown by O'Bray et al. (2022), MMD metrics can be quite sensitive to their hyperparameters. Another strong extension could be the compilation of a library of primitive functions that are useful for graph construction. For example, if we provided a function to factorize a number, discovering an algorithm for grid generation when no additional inputs are given would likely be as easy as when the grid width is provided. In principle, the approach can also be applied to attributed graph generation, which is left for future work.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Grid} & \multicolumn{4}{c}{Lobster} & \multicolumn{4}{c}{Protein} \\ \cline{2-13} & Deg. & Clus. & Orbit & Spec. & Deg. & Clus. & Orbit & Spec. & Deg. & Clus. & Orbit & Spec. \\ \hline GraphRNN & \(\mathbf{5.34e^{-4}}\) & \(\mathbf{0.0}\) & \(\mathbf{8.41e^{-5}}\) & \(\mathbf{3.93e^{-2}}\) & \(\mathbf{9.26e^{-5}}\) & \(\mathbf{0.0}\) & \(\mathbf{2.19e^{-5}}\) & \(\mathbf{1.14e^{-2}}\) & \(1.06e^{-2}\) & \(0.14\) & 0.88 & \(1.88e^{-2}\) \\ GRAN & \(2.42e^{-2}\) & \(\mathbf{0.0}\) & \(4.51e^{-2}\) & \(6.87e^{-2}\) & \(3.73e^{-2}\) & \(\mathbf{0.0}\) & \(7.67e^{-4}\) & \(2.71e^{-2}\) & \(\mathbf{1.98e^{-3}}\) & \(\mathbf{4.86e^{-2}}\) & \(\mathbf{0.13}\) & \(\mathbf{5.13e^{-3}}\) \\ Ours & \(5.10e^{-3}\) & \(\mathbf{0.0}\) & \(3.38e^{-3}\) & \(0.1\) & \(2.77e^{-3}\) & \(\mathbf{0.0}\) & \(2.31e^{-2}\) & \(3.41e^{-2}\) & \(0.42\) & 1.07 & 1.14 & 0.31 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with deep graph generative models. The results for the baseline models are taken from Liao et al. (2020).
Figure 4: The discovered algorithm for grids, when the grid width \(W\) is supplied alongside the number of nodes \(N\).
Figure 3: Comparison of reference graphs from the test set to graphs generated by algorithms found with our method. Graphs in the same column have the same number of nodes. |
2301.01789 | Vortex weighing and dating of planets in protoplanetary discs | High-resolution sub-mm observations of some protoplanetary discs reveal
non-axisymmetric features, which can often be interpreted as dust
concentrations in vortices that form at the edges of gaps carved out by the
embedded planets. We use recent results on the timescale for the planet-driven
vortex development in low-viscosity discs to set constraints on the mass and
age of a planet producing the vortex. Knowledge of the age of the central star
in a vortex-bearing protoplanetary disc system allows one to set a lower limit
on the planetary mass at the level of several tens of $M_\oplus$. Also, an
independent upper limit on the planetary mass would constrain the planetary
age, although given the current direct imaging detection limits this constraint
is not yet very stringent (it is also sensitively dependent on the disc scale
height). These results can be extended to account for the history of planetary
mass accretion if it is known. We apply our calculations to several
protoplanetary discs harbouring vortex-like features as revealed by ALMA and
set limits of $(30-50)M_\oplus$ (for disc aspect ratio of $0.1$) on the minimum
masses of putative planets that could be responsible for these vortices. Our
vortex-based method provides an independent way of constraining the properties
of embedded planets, complementary to other approaches. | Roman R. Rafikov, Nicolas P. Cimerman | 2023-01-04T19:01:01Z | http://arxiv.org/abs/2301.01789v1 | # Vortex weighing and dating of planets in protoplanetary discs
###### Abstract
High-resolution sub-mm observations of some protoplanetary discs reveal non-axisymmetric features, which can often be interpreted as dust concentrations in vortices that form at the edges of gaps carved out by the embedded planets. We use recent results on the timescale for the planet-driven vortex development in low-viscosity discs to set constraints on the mass and age of a planet producing the vortex. Knowledge of the age of the central star in a vortex-bearing protoplanetary disc system allows one to set a lower limit on the planetary mass at the level of several tens of \(M_{\oplus}\). Also, an independent upper limit on the planetary mass would constrain the planetary age, although given the current direct imaging detection limits this constraint is not yet very stringent (it is also sensitively dependent on the disc scale height). These results can be extended to account for the history of planetary mass accretion if it is known. We apply our calculations to several protoplanetary discs harbouring vortex-like features as revealed by ALMA and set limits of \((30-50)M_{\oplus}\) (for disc aspect ratio of 0.1) on the minimum masses of putative planets that could be responsible for these vortices. Our vortex-based method provides an independent way of constraining the properties of embedded planets, complementary to other approaches.
keywords: hydrodynamics - instabilities - shock waves - accretion discs - planets and satellites: formation - methods: numerical
## 1 Introduction
Observations of protoplanetary discs (PPDs) in dust continuum emission with ALMA revealed a variety of substructures (Andrews, 2020), including axisymmetric gaps and rings as well as non-axisymmetric clumps and arcs. An intriguing possibility is that these features could be produced by young embedded planets. In particular, gravitational coupling between a massive planet and the disc is known to result in formation of observable gaps around the planetary orbit (Papaloizou and Lin, 1984; Rafikov, 2002b). The evolution of vortensity (potential vorticity) at the edges of these gaps (Lin and Papaloizou, 2010; Dong et al., 2011; Cimerman and Rafikov, 2021) due to shock dissipation of the planet-driven density waves (Goodman and Rafikov, 2001; Rafikov, 2002a) can trigger the Rossby Wave Instability (RWI, Lovelace et al., 1999) resulting in the formation of fluid vortices at these locations. Dust accumulation inside the vortices (Barge and Sommeria, 1995) naturally leads to observable non-axisymmetric arcs and lobes.
Formation of these structures does not necessarily require massive (Jovian) planets. For example, it was shown (Dong et al., 2017; Bae et al., 2017; Miranda and Rafikov, 2019, 2020a, 2020b) that multiple visible gaps and rings in the dust distribution can result from nonlinear damping of multiple spirals triggered by a single sub-Jovian mass planet in a low viscosity disc. The mass of the planet \(M_{\rm p}\) can in fact be below the so-called _thermal mass_ defined as \(M_{\rm th}=\left(H_{\rm p}/R_{\rm p}\right)^{3}M_{\star}=h_{\rm p}^{3}M_{\star}\), where \(H_{\rm p}\) is the disc scale height at the planetary distance \(R_{\rm p}\), \(h_{\rm p}=H_{\rm p}/R_{\rm p}\) is the disc aspect ratio there, and \(M_{\star}\) is the stellar mass. Emergence of vortices at the edges of planetary gaps also does not require massive planets if the disc is almost inviscid (Hammer et al., 2021; Hallam and Paardekooper, 2020), and sub-\(M_{\rm th}\) mass planets can easily trigger them (Cimerman and Rafikov, 2023, hereafter CR23).
In this study we focus on non-axisymmetric disc features which can be interpreted as planet-induced vortices (other ways to produce vortices are mentioned in Section 2). Their development is not an instantaneous process as the evolution of vortensity near the planetary orbit towards the RWI takes a certain amount of time. As we show in this work, this fact can be exploited to set useful constraints on the mass and/or age of a putative planet responsible for production of the observed vortices in a PPD. This method relies on the recent calculation of CR23 who studied the development of vortices in inviscid PPDs. In that work, we showed that the time it takes for vortices to emerge at the edge of the gap carved out by a sub-\(M_{\rm th}\) mass planet can be approximated as
\[\tau_{\rm VTI}\approx A\,P_{\rm p}\,\left(\frac{M_{\rm p}}{M_{\rm th }}\right)^{\alpha}\,h_{\rm p}^{\beta},\quad\text{where} \tag{1}\] \[A\approx 1.6,\quad\alpha\approx-2.7,\quad\beta\approx-0.86, \tag{2}\]
and \(P_{\rm p}\) is the orbital period at \(R_{\rm p}\). This result assumes \(M_{\rm p}\) to be fixed in time. Interestingly, CR23 found \(\tau_{\rm vrt}\) to only weakly depend on the radial profile of the surface density in the disc.
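For convenience, equation (1) can be evaluated in physical units with the short routine below (a sketch; the only assumptions are a Keplerian orbit and \(M_{\odot}/M_{\oplus}\approx 3.33\times 10^{5}\)):

```python
def tau_vrt_yr(Mp_Mearth, Rp_AU, h_p, Mstar_Msun=1.0,
               A=1.6, alpha=-2.7, beta=-0.86):
    """Vortex formation timescale of equation (1) in years.

    Mp_Mearth : planet mass in Earth masses (should be sub-thermal)
    Rp_AU     : orbital radius in AU
    h_p       : disc aspect ratio H_p / R_p at the planet's location
    """
    P_p = (Rp_AU**3 / Mstar_Msun) ** 0.5        # Keplerian period in years
    M_th = h_p**3 * Mstar_Msun * 332946.0       # thermal mass in Earth masses
    return A * P_p * (Mp_Mearth / M_th)**alpha * h_p**beta

# tau_vrt_yr(100.0, 50.0, 0.1) ~ 1e5 yr, matching equation (9)
```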
We describe the general idea of our method in Section 2 and show how it can constrain the mass and age of the planet in Section 3 and Section 4, respectively. In Section 5 we extend our constraints to the case of a planet accreting its mass over an extended period of time. We apply our results to observed PPDs in Section 6 and discuss them in Section 7.
## 2 General idea of the method
Let us suppose that observations of dust continuum emission reveal a gap in a PPD, together with a non-axisymmetric lobe or arc at the gap edge, indicative of a vortex which traps dust grains at this location (e.g. van der Marel et al., 2016; Kraus et al., 2017; Dong et al., 2018; Perez et al., 2018). We will interpret this observation by assuming that a planet (not necessarily directly visible) is located within the gap and is responsible for creating both the gap and the vortex (via the RWI). The spatial association of a vortex with an adjacent gap provides strong support to this interpretation and makes some other possibilities for triggering vortices, e.g. global baroclinic instability (Klahr and Bodenheimer, 2003), convective overstability (Teed and Latter, 2021), vertical shear instability (Richard et al., 2016) less attractive.
We assume the disc viscosity to be low (essentially inviscid), consistent with many observations of PPDs (Pinte et al., 2016; Rafikov, 2017; Flaherty et al., 2020). For now we will also assume that \(M_{\rm p}\) has been constant ever since the planet appeared in the disc, a constraint that we will relax in Section 5. With these conditions fulfilled, the result (1)-(2) applies. We can then use the observation of the vortex to set a constraint on a particular combination of the planetary mass \(M_{\rm p}\) and the planetary age \(\tau_{\rm p}\) -- the time that has passed since the planet has reached its final mass \(M_{\rm p}\).
Indeed, the observation of a vortex at the gap edge implies that the RWI had enough time to fully develop into the non-linear stage in that region, i.e. that
\[\tau_{\rm p}>\tau_{\rm vrt}. \tag{3}\]
Together with equation (1) this leads to the following combined constraint on \(\tau_{\rm p}\) and \(M_{\rm p}\):
\[\tau_{\rm p}M_{\rm p}^{-\alpha}>AP_{\rm p}M_{\rm th}^{-\alpha}h_{\rm p}^{\beta}. \tag{4}\]
With fit parameters (2) we can write this in physical units as
\[\frac{\tau_{\rm p}}{\rm Myr}\left(\frac{M_{\rm p}}{10^{2}M_{\oplus}}\right)^{2.7}>0.11\left(\frac{R_{\rm p}}{50\rm AU}\right)^{1.5}\left(\frac{M_{\star}}{M_{\odot}}\right)^{2.2}\left(\frac{h_{\rm p}}{0.1}\right)^{7.2}. \tag{5}\]
This condition must be fulfilled whenever a vortex is observed at the gap edge. It is illustrated in Fig. 1 for several values of \(h_{\rm p}\) and \(R_{\rm p}\).
In a similar vein, the _absence_ of vortex-like structures at the edges of a visible gap in a disc might be interpreted as meaning that \(\tau_{\rm p}<\tau_{\rm vrt}\), i.e. that planet-driven accumulation of vortensity has not yet led to RWI. If that were the case, the inequality in the constraint (4)-(5) would change its sign. However, this possibility has an important caveat as the absence of a vortex may also be interpreted differently: it could have formed at the gap edge earlier but then got destroyed through one of the processes that tend to destabilize vortices once they evolve into the nonlinear regime: the elliptical instability (Lesur and Papaloizou, 2009), baroclinic effects (Rometsch et al., 2021; Fung and Ono, 2021), dust feedback (Fu et al., 2014), etc. Also, vortex formation may have been delayed or suppressed altogether if disc viscosity is sufficiently high (Hammer et al., 2017; Hallam and Paardekooper, 2020). Thus, the lack of a vortex near a planetary gap cannot be unambiguously interpreted as meaning that the embedded planet did not get a chance to create it, i.e. that \(\tau_{\rm p}<\tau_{\rm vrt}\). For that reason (and unlike Hallam and Paardekooper, 2020) in the following we will not draw any conclusions from the _absence_ of vortices at the edges of putative planetary gaps found in sub-mm observations.
We will now show how equations (4) & (5) can be used to separately constrain \(M_{\rm p}\) or \(\tau_{\rm p}\).
## 3 Vortex Weighing of Planets
Let us suppose that the age (time since formation) of the protostar-disc system \(\tau_{\rm sys}\) is known, e.g. from isochrone fitting of the characteristics of the central star. This is usually the case at some level of accuracy. Since, obviously, the planet is younger than its parent star, one must have \(\tau_{\rm sys}>\tau_{\rm p}\). However, the presence of a vortex at the gap edge means that the inequality (3) is also fulfilled, which necessarily implies that \(\tau_{\rm sys}>\tau_{\rm vrt}\). Using equation (4), this condition can be converted into a _lower limit_ on \(M_{\rm p}\):
\[M_{\rm p}>M_{\rm vrt}=M_{\rm th}\left(A\,h_{\rm p}^{\beta}\frac{P_{\rm p}}{\tau_{\rm sys}}\right)^{1/\alpha}. \tag{6}\]
In physical units,
\[M_{\rm vrt}\approx 40M_{\oplus}\left(\frac{\tau_{\rm sys}}{\rm Myr}\right)^{-0.37}\left(\frac{R_{\rm p}}{50{\rm AU}}\right)^{0.56}\left(\frac{h_{\rm p}}{0.1}\right)^{2.7}\left(\frac{M_{\star}}{M_{\odot}}\right)^{0.81}. \tag{7}\]
Note a strong dependence of \(M_{\rm vrt}\) on \(h_{\rm p}\), but a rather weak scaling with \(\tau_{\rm sys}\). This constraint is illustrated in Fig. 2.
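For concreteness, the rearrangement behind the limit (6) is a one-liner: combining \(\tau_{\rm sys}>\tau_{\rm p}\) with the vortex condition (4) gives

\[\tau_{\rm sys}M_{\rm p}^{\alpha}>\tau_{\rm p}M_{\rm p}^{\alpha}>AP_{\rm p}M_{\rm th}^{\alpha}h_{\rm p}^{\beta}\quad\Longrightarrow\quad M_{\rm p}>M_{\rm th}\left(A\,h_{\rm p}^{\beta}\,\frac{P_{\rm p}}{\tau_{\rm sys}}\right)^{1/\alpha}\equiv M_{\rm vrt}.\]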
Figure 1: Combined constraint (5) on the planetary mass \(M_{\rm p}\) and age \(\tau_{\rm p}\), shown for different parameters of a system with \(M_{\star}=M_{\odot}\). Solid lines are for \(R_{\rm p}=50\) AU and \(h_{\rm p}=0.07\) (fuchsia), \(h_{\rm p}=0.1\) (blue), \(h_{\rm p}=0.15\) (green). Blue dashed line is for \(R_{\rm p}=100\) AU, \(h_{\rm p}=0.1\). Grey shaded region is excluded as no vortices should appear in this part of the parameter space (bounded by the \(h_{\rm p}=0.07\) curve for illustration). Arrows indicate \(M_{\rm th}\) calculated using the \(h_{\rm p}\) corresponding to the arrow colour (same as in the legend). Constraint (5) (solid curves) is strictly valid only for \(M_{\rm p}\leq M_{\rm th}\).

Figure 2: Mass constraint (6), (7) as a function of the system (stellar) age \(\tau_{\rm sys}\). Meaning of curves, arrows and shading is the same as in Fig. 1.

Note that for the mass constraint (6)-(7) to be valid, the timescale fit (1) should be justified in the first place. For this to be the case, the planetary mass must be in the sub-thermal regime, \(M_{\rm p}\lesssim M_{\rm th}\). Using equations (6)-(7), one can easily show that
\[\frac{M_{\rm vrt}}{M_{\rm th}}\approx 0.13\left(\frac{\tau_{\rm sys}}{\rm Myr} \right)^{-0.37}\left(\frac{R_{\rm p}}{50\rm AU}\right)^{0.56}\left(\frac{h_{\rm p }}{0.1}\right)^{-0.32}\left(\frac{M_{\star}}{M_{\odot}}\right)^{-0.19}, \tag{8}\]
i.e. the condition \(M_{\rm p}\lesssim M_{\rm th}\) should not be difficult to satisfy in general (in Figs. 1, 2 we illustrate the values of \(M_{\rm th}\) with arrows). Thus, we expect \(M_{\rm vrt}\) to provide a lower limit on \(M_{\rm p}\) quite generally.
The constraint (6)-(7) can be improved (i.e. \(M_{\rm vrt}\) increased) if we had some independent way to set an upper limit on \(\tau_{\rm p}\), which is lower than the system age \(\tau_{\rm sys}\). In practice, however, such refined information on \(\tau_{\rm p}\) may be difficult to obtain.
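In code, the mass limit (7) is a one-liner; the sketch below (the function name is ours) reproduces the fiducial normalization:

```python
def m_vrt_earth(tau_sys_myr, r_p_au, h_p, m_star_msun):
    """Lower limit on the planet mass from equation (7), in Earth masses."""
    return (40.0 * tau_sys_myr ** -0.37 * (r_p_au / 50.0) ** 0.56
            * (h_p / 0.1) ** 2.7 * m_star_msun ** 0.81)

# Fiducial case: tau_sys = 1 Myr, R_p = 50 AU, h_p = 0.1, M_star = 1 M_sun
print(round(m_vrt_earth(1.0, 50.0, 0.1, 1.0)))  # 40, i.e. M_p > 40 M_earth
```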
## 4 Vortex Dating of Planets
One can also turn the argument around and assume that, in addition to observing a vortex adjacent to a gap, we also know the mass \(M_{\rm p}\) of the gap-opening planet -- either via atmospheric modelling if the planet is visible, or through indirect dynamical measurements if it has not been imaged. We can then use the presence of the vortex to set a _lower_ limit on the planetary age \(\tau_{\rm p}\) via equation (3), in which
\[\tau_{\rm vrt}\approx 10^{5}\rm yr\left(\frac{M_{\rm p}}{10^{2}M_{\oplus}} \right)^{-2.7}\left(\frac{R_{\rm p}}{50\rm AU}\right)^{1.5}\left(\frac{h_{\rm p }}{0.1}\right)^{7.2}\left(\frac{M_{\star}}{M_{\odot}}\right)^{2.2}. \tag{9}\]
If only an upper limit \(M_{\rm 1}\) on planetary mass is available to us, \(M_{\rm p}<M_{\rm 1}\) (e.g. from non-detection of the planet through near-IR imaging), then one should use \(M_{\rm 1}\) instead of \(M_{\rm p}\) in (9). Solid and dashed lines in Fig. 1 give \(\tau_{\rm vrt}\) (as a function of \(M_{\rm p}\) or \(M_{\rm 1}\)) for different values of \(h_{\rm p}\) and \(R_{\rm p}\).
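A matching sketch for the age limit (9), again with our own function name; when only the non-detection limit \(M_{\rm 1}\) is available, it is passed in place of \(M_{\rm p}\):

```python
def tau_vrt_yr(m_p_earth, r_p_au, h_p, m_star_msun):
    """Vortex formation time of equation (9), in years."""
    return (1e5 * (m_p_earth / 100.0) ** -2.7 * (r_p_au / 50.0) ** 1.5
            * (h_p / 0.1) ** 7.2 * m_star_msun ** 2.2)

# Fiducial case: the planet must be older than ~1e5 yr
print(f"{tau_vrt_yr(100.0, 50.0, 0.1, 1.0):.0f} yr")  # 100000 yr
```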
A constraint on \(\tau_{\rm p}\) would be extremely useful for understanding the timing of planet formation. It can also serve as a consistency check for calculations of planetary evolution post-formation, since the present day temperature and luminosity of the planet are themselves functions of its age \(\tau_{\rm p}\) (e.g. Linder et al., 2019), see Section 7. Unfortunately, the accuracy of the lower limit (3) & (9) may be somewhat compromised by the uncertainties in the determination of various parameters that enter it, e.g. \(M_{\rm p}\) and, especially, \(h_{\rm p}\), given how steeply \(\tau_{\rm vrt}\) scales with them.
## 5 Accounting for Accretion History of a Planet
Our results (1) & (2) for \(\tau_{\rm vrt}\) have been obtained in CR23 for a constant \(M_{\rm p}\) (not varying in time). This implicitly assumes that the planet has grown to its final \(M_{\rm p}\) very rapidly, having accreted its mass almost instantaneously; this accretion history is illustrated in panel (a) of Fig. 3. In panel (b) we also illustrate the corresponding growth of the characteristic amplitude\({}^{1}\) \(A_{\zeta}\) of the planet-induced vortensity perturbation \(\zeta\), which is the variable that eventually determines vortex generation (CR23): very crudely, one may expect the RWI to set in when \(A_{\zeta}\) reaches some threshold value (illustrated with the red dotted line). The growth rate of \(A_{\zeta}\) in panel (b) is constant since it depends only on \(M_{\rm p}\), which is fixed in this case.
Footnote 1: E.g. a maximum or minimum value of \(\zeta\) as a function of radius, see CR23.
One may consider other representative histories of planetary mass evolution. For example, in Fig. 3c \(M_{\rm p}\) undergoes an initial period of accretion and then stays at its final value until the RWI sets in. As another example, in panel (e) the planetary mass increases steadily and the RWI gets triggered while \(M_{\rm p}\) is still growing. For these growth histories, the increase of \(A_{\zeta}\) is no longer purely linear, see panels (d) and (f), and using the final planetary mass\({}^{2}\) in formula (1) we would _underestimate_ the true age of the planet \(\tau_{\rm p}\) (illustrated in the top panels), i.e. the time since its growth has started and until the present day when the vortex has emerged. Instead, application of equation (1) would give us some other time \(\tau_{0}\), which is illustrated by the orange lines (based on the growth rate of \(A_{\zeta}\) at the time when the RWI sets in) in panels (d) & (f). Since the growth of \(\zeta\) accelerates (quite steeply) for higher \(M_{\rm p}\), the growth rate of \(A_{\zeta}\) can only increase in time, so that \(\tau_{\rm p}\geq\tau_{0}\) always (with equality only for \(M_{\rm p}(t)={\rm const}\), see Fig. 3a,b).
Footnote 2: For simplicity we neglect the possible growth of \(M_{\rm p}\) after the vortex has appeared and until the present time.
Very importantly, this complication does not affect the validity of our time constraint, since \(\tau_{\rm 0}\) is given by our equation (1) and we just saw that \(\tau_{\rm p}\geq\tau_{\rm 0}\). However, in some scenarios, e.g. in the continuous accretion case shown in panels (e),(f), \(\tau_{\rm 0}\) can be much shorter than \(\tau_{\rm p}\), making our time constraint (3) too conservative. Thus, it is desirable to find ways to somehow account for the history of accretion (provided that it is known) to improve limits on \(\tau_{\rm p}\).
One way to do this has already been discussed in CR23 and amounts to replacing \(\tau_{\rm p}M_{\rm p}^{\alpha}\) with \(\int_{0}^{\tau_{\rm p}}\left[M_{\rm p}(t)\right]^{\alpha}{\rm d}t\) in equation (4); this modification allows us to account for the evolution of the vortensity (or \(A_{\zeta}\)) growth rate, which is proportional to \(M_{\rm p}^{\alpha}\), as \(M_{\rm p}(t)\) increases. Thus, we generalize the combined constraint on \(\tau_{\rm p}\) and \(M_{\rm p}\) in the case of an accreting planet to
\[\int_{0}^{\tau_{\rm p}}\left[M_{\rm p}(t)\right]^{\alpha}{\rm d}t>AP_{\rm p}M_{\rm th}^{\alpha}h_{\rm p}^{\beta}. \tag{10}\]
Since this constraint must reduce to the inequality (4), we will assume all its parameters -- \(\alpha\), \(\beta\), \(A\) -- to be still given by equation (2).\({}^{3}\) Then in physical units equation (10) becomes
Footnote 3: This assumption is only approximate since the non-trivial history of accretion may modify the radial profile of \(\zeta\), which determines the RWI stability (Cimerman and Rafikov, 2021). Also, the RWI threshold itself is not entirely universal (CR23). But this approximation should not be too bad as the RWI onset is mainly determined by the late-time behavior of \(M_{\rm p}(t)\). Finally, note that theoretical arguments suggest \(\alpha=2.6\), but the numerical results are closer to \(\alpha\approx 2.7\) (CR23).
\[({\rm Myr})^{-1}\int_{0}^{\tau_{\rm p}}\left[\frac{M_{\rm p}(t)}{10^{2}M_{\oplus}}\right]^{2.7}{\rm d}t>0.11\left(\frac{R_{\rm p}}{50{\rm AU}}\right)^{1.5}\left(\frac{M_{\star}}{M_{\odot}}\right)^{2.2}\left(\frac{h_{\rm p}}{0.1}\right)^{7.2}. \tag{11}\]
For \(M_{\rm p}(t)=\) const this inequality reduces to (5).
We can apply this generalized criterion to the simulations of Hallam and Paardekooper (2020) who considered planetary accretion history in the form \(M_{\rm p}(t)=M_{\rm f}\sin^{2}\left[(\pi/2)(t/t_{\rm G})\right]\) (where \(t_{\rm G}\) is the growth time) and determined the values of the final planet mass \(M_{\rm f}\) such that the RWI would marginally set in at \(t=t_{\rm G}\). In our notation this means setting \(t_{\rm G}=\tau_{\rm vrt}\). We can use our results and determine the relation between such \(t_{\rm G}\) and \(M_{\rm f}\) by changing inequality to equality in equation (10) and setting \(\tau_{\rm p}=t_{\rm G}\). We find, using the definition of \(M_{\rm th}\) and introducing \(q_{\rm f}=M_{\rm f}/M_{\star}\),
\[t_{\rm G}=A\,\kappa^{-1}P_{\rm p}\,q_{\rm f}^{-\alpha}\,h_{\rm p}^{\beta+3\alpha}, \tag{12}\]
where, for the particular accretion history of Hallam and Paardekooper (2020), \(\kappa=\int_{0}^{1}[\sin(\pi x/2)]^{2\alpha}\,dx\approx 0.33\).
As these authors also included the effects of viscosity, which is known to delay the onset of the RWI (Hammer et al., 2017), we cannot directly compare our results for \(t_{\rm G}\) with theirs. However, if we focus on their smallest \(q_{\rm f}=1.5\times 10^{-4}\) (since in their setup this corresponds to the lowest value of viscosity, closer to our inviscid setup) and adopt their \(h_{\rm p}=0.05\) and the fit parameters (2), we find the age of the planet to satisfy \(\tau_{\rm p}\gtrsim t_{\rm G}\approx 39P_{\rm p}\). This is comfortably below the \(t_{\rm G}\approx 200P_{\rm p}\) that Hallam and Paardekooper (2020) find for the same \(q_{\rm f}\), consistent with the viscosity-driven delay. Equally importantly, had we used equation (4), which assumes \(M_{\rm p}={\rm const}\), instead of (10), we would have found \(\tau_{\rm p}\gtrsim 13P_{\rm p}\) (a factor of \(\kappa\) lower), far less constraining than the result that we obtained accounting for the (known) accretion history.
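This comparison is easy to reproduce from equation (11) alone; the sketch below is our own script, taking \(M_{\star}=M_{\odot}\) so that \(P_{\rm p}=(R_{\rm p}/{\rm AU})^{3/2}\) yr and \(R_{\rm p}\) cancels out:

```python
import math

alpha = 2.7
# kappa = int_0^1 sin(pi x / 2)^(2 alpha) dx, via a simple midpoint rule
n = 20000
kappa = sum(math.sin(math.pi * (i + 0.5) / (2 * n)) ** (2 * alpha)
            for i in range(n)) / n
print(f"kappa = {kappa:.2f}")  # 0.33

# Marginal RWI onset (t_G = tau_vrt) for the sin^2 accretion history:
# equality in eq. (11) gives t_G * kappa * (M_f / 100 M_earth)^2.7 = RHS.
h_p, q_f = 0.05, 1.5e-4
m_f_earth = q_f * 333000.0          # 1 M_sun ~ 333000 M_earth
t_g_over_p = (0.11e6 / 50.0 ** 1.5) * (h_p / 0.1) ** 7.2 / (
    kappa * (m_f_earth / 100.0) ** alpha)
print(f"t_G ~ {t_g_over_p:.0f} P_p")  # ~42 P_p, i.e. the quoted ~39 P_p
                                      # up to the rounded 0.11 coefficient
```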
Given the steep dependence of the integrand in (11) on \(M_{\rm p}\) (reflecting \(M_{\rm p}\)-dependence of the \(\zeta\) growth rate), we expect \(A_{\zeta}\) to increase the most when \(M_{\rm p}(t)\) is close to its final value. This is indeed what we see in Fig. 3d, in which the initial accretion episode contributes only weakly to the total increase of \(A_{\zeta}\), despite its duration being comparable to the time interval when \(M_{\rm p}\) stayed at its final value (our calculation in this plot assumed \({\rm d}A_{\zeta}/{\rm d}t\propto M_{\rm p}^{2.7}\) for compatibility with equation (11), see CR23). Thus, it is the history of \(M_{\rm p}\) accretion at late times that is most important for determining the age of a putative planet in an observed vortex-hosting system.
## 6 Application to Observed Discs
We apply the constraints derived above to several protostellar systems observed by _ALMA_, for which vortices have been invoked as a possible explanation of the observed non-axisymmetric features -- arcs, clumps, etc. It is important to remember that the features detected in continuum emission by _ALMA_ are due to thermal emission of dust grains, while our results on the emergence of vortices apply to the gaseous component of the disc. However, it has been shown by a number of authors (Barge and Sommeria, 1995; Godon and Livio, 1999; Fu et al., 2014) that vortices are very efficient at trapping dust, providing support to our association of the dust asymmetries with the gas vortices in PPDs. Since our limits on \(\tau_{\rm p}\) and \(M_{\rm p}\) are highly sensitive to the disc aspect ratio \(h_{\rm p}\), which is poorly known in most cases, we will retain the scaling with \(h_{0.1}=h_{\rm p}/0.1\) in our estimates.
**HD 135344B (SAO 206462)**
This \(M_{\star}=1.5M_{\odot}\), \(\tau_{\rm sys}\approx 9\) Myr old (Asensio-Torres et al., 2021) Herbig F star harbours a transitional disc. _ALMA_ dust continuum observations reveal an axisymmetric inner ring separated by a gap-like structure (centred around 70 AU) from an (outer) arc that can be interpreted as a vortex at the outer gap edge (van der Marel et al., 2016; Cazzoletti et al., 2018). The possibility of a planetary origin of these structures is supported by the near-IR scattered light observations of a two-armed spiral (Muto et al., 2012), although a unified model explaining all these features at once is lacking. We will nevertheless assume that the gap and the outer vortex are due to the (unseen) gap-opening planet at \(R_{\rm p}\approx 70\) AU, and the inner ring reflects dust trapping at the pressure maximum at the inner gap edge. These data and equations (6)-(7) allow us to constrain the planetary mass as \(M_{\rm p}\gtrsim 32h_{0.1}^{2.7}M_{\oplus}\).
Direct imaging of HD 135344B with VLT/SPHERE sets an upper limit of \(M_{\rm 1}\approx 4M_{\rm J}\) on the mass of a planetary object at \(\sim 10^{2}\)AU scales (Asensio-Torres et al., 2021). Unfortunately, this \(M_{\rm 1}\) is higher than the thermal mass \(M_{\rm th}=1.5h_{0.1}^{3}M_{\rm J}\), which makes the use of the timescale constraint (3),(9) unjustified (its blind application would give \(\tau_{\rm vrt}\approx 500h_{0.1}^{7.2}\) yr, comparable to the orbital period at the gap location and not constraining \(\tau_{\rm p}\) effectively).
**HD 36112 (MWC 758)**
This \(M_{\star}\approx 1.8M_{\odot}\), \(\tau_{\rm sys}\approx 9\) Myr old (Asensio-Torres et al., 2021) star harbours two clumps on top of the two rings separated by a gap in the outer disc (Dong et al., 2018). Neglecting the slight eccentricity of the disc and assuming the rings with clumps to correspond to the inner and outer edges of the gap carved by a planet, we will adopt \(R_{\rm p}\approx 70\) AU for the planetary orbit. Then equations (6)-(7) allow us to set a mass constraint \(M_{\rm p}\gtrsim 37h_{0.1}^{2.7}M_{\oplus}\).
Analysis of the direct imaging observations of this system by Asensio-Torres et al. (2021) suggests that the upper limit on the possible point source inside the assumed gap is \(\sim 8M_{\rm J}\), significantly higher than \(M_{\rm th}\), precluding us from meaningfully constraining the age of the planet.

Figure 3: Illustration of the different representative planetary accretion histories: (left) very rapid (instant) initial accretion to the final mass, (centre) extended initial interval of accretion, (right) continuous accretion. Top panels illustrate \(M_{\rm p}(t)\) (blue) while the bottom panels show the corresponding growth of the characteristic amplitude \(A_{\zeta}\) (green) of the planet-driven vortensity perturbation (this calculation assumes \({\rm d}A_{\zeta}/{\rm d}t\propto[M_{\rm p}(t)]^{2.7}\), see text). Arrows in the top panels indicate the planetary age \(\tau_{\rm p}\) (time since the start of its accretion), while in the bottom panels they show the “time to vortex formation” \(\tau_{0}\) calculated using equation (1) and assuming \(M_{\rm p}\) given by its final value. The red dotted line indicates the critical value of \(A_{\zeta}\) when the vortices are expected to appear. The key point illustrated here is that \(\tau_{0}\leq\tau_{\rm p}\) always.
**HD 143006**
This G-type T Tauri star with \(M_{\star}=1.8M_{\odot}\) and an estimated age of \(\tau_{\rm sys}\approx 8\) Myr harbours a disc rich in substructures (Pérez et al., 2018). In addition to a misaligned inner disc, it features two outer rings separated by a gap centred around 52 AU, with an arc just outside the outermost ring. Interpreting these features as produced by an unseen planet inside the gap at \(R_{\rm p}=52\) AU, we get the mass constraint \(M_{\rm p}\gtrsim 33h_{0.1}^{2.7}M_{\oplus}\) from equations (6)-(7).
NaCo/VLT direct imaging does not provide a useful constraint on the mass of a putative planet, with \(M_{\rm 1}\) at the level of several tens of \(M_{\rm J}\) at the outer gap location (Jorquera et al., 2021). Thus, we cannot set a useful lower limit on the planetary age.
**V1247 Ori**
V1247 Ori is a \(\tau_{\rm sys}=7.5\) Myr old, \(M_{\star}\approx 1.9M_{\odot}\) star (Willson et al., 2019) harbouring a pre-transitional disc. _ALMA_ dust continuum observations (Kraus et al., 2017) reveal an inner disc (or ring) separated by a gap from the outer arc, which may be interpreted as a vortex at the outer gap edge. Assuming a planet to be in the gap, at \(R_{\rm p}\approx 90\) AU, one finds that the planetary mass must satisfy \(M_{\rm p}\gtrsim 48h_{0.1}^{2.7}M_{\oplus}\).
While we could not find explicit limits on the mass of the putative planet in the V1247 Ori system from direct imaging observations, Kraus et al. (2017) found that an \(M_{\rm p}=3M_{\rm J}\) planet can roughly match the shape of the spiral observed in scattered light using HiCIAO. Unfortunately, this \(M_{\rm p}\) is again above \(M_{\rm th}\), not allowing the age of the planet to be meaningfully constrained.
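The four mass limits above follow directly from equation (7); the short script below (system parameters copied from the text) recovers them to within roughly 10%, the residual coming from the rounded coefficients in (7):

```python
systems = {                # (tau_sys / Myr, R_p / AU, M_star / M_sun)
    "HD 135344B": (9.0, 70.0, 1.5),
    "HD 36112":   (9.0, 70.0, 1.8),
    "HD 143006":  (8.0, 52.0, 1.8),
    "V1247 Ori":  (7.5, 90.0, 1.9),
}
for name, (tau, r_p, m_star) in systems.items():
    m_vrt = 40.0 * tau ** -0.37 * (r_p / 50.0) ** 0.56 * m_star ** 0.81
    print(f"{name}: M_p > {m_vrt:.0f} h_0.1^2.7 M_earth")
# HD 135344B: ~30, HD 36112: ~34, HD 143006: ~31, V1247 Ori: ~44
```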
## 7 Discussion
### Combination of multiple constraints
Our limits on \(M_{\rm p}\) and \(\tau_{\rm p}\) based on the presence of vortices next to gaps in PPDs become even more powerful when combined with additional constraints on these key parameters. In particular, young planets passively lose thermal energy that they have been endowed with at formation, resulting in their luminosity decreasing with time. As more massive planets retain more heat at formation, it takes them longer to cool. Thus, if one can observationally constrain the luminosity of a planet \(L_{\rm p}\) to lie below a certain limit (or determine it in the case of direct detection), this would provide an additional constraint on \(M_{\rm p}\) and \(\tau_{\rm p}\).
We illustrate this approach in Fig. 4, where we show the vortex-based constraints from Fig. 1 together with the constraint \(L_{\rm p}<10^{-6}L_{\odot}\) (outside the pink shaded region to the right of the black dotted curve) based on the work\({}^{4}\) of Linder et al. (2019). We also show the \(L_{\rm p}=10^{-7}L_{\odot}\) curve (red dotted) which may be relevant for future direct imaging experiments. In addition, we impose a constraint \(\tau_{\rm p}<15\) Myr (region below the orange dot-dashed line) since protoplanetary discs usually do not survive for that long (similar to the logic used in Section 3). There are other, complementary ways of constraining planetary properties, for example gap width/depth fitting (Dong and Fung, 2017; Asensio-Torres et al., 2021) which can provide model-dependent information on \(M_{\rm p}\) for individual systems; we will not consider them here.
Footnote 4: We use the tracks for the bolometric luminosity \(L_{\rm p}\) of a fixed mass planet from Fig. 6 of Linder et al. (2019), which assume evolution with a cloud-free atmosphere of solar metallicity and use the petitCODE grid.
A combination of the three constraints -- based on planetary luminosity, age and presence of vortices -- limits planetary \(M_{\rm p}\) and \(\tau_{\rm p}\) to lie within the unshaded region. This region shrinks for hotter discs with larger \(h_{\rm p}\) (compare fuchsia and green solid curves), as well as for larger \(R_{\rm p}\) (compare blue solid and dashed lines). Thus, the vortex-based limits are more stringent in hotter discs and for more distant planets. Also, the allowed region would shrink even further as the upper limit on \(L_{\rm p}\) gets lowered in the future. It should also be remembered that the luminosity-based (dotted) curves assume that the (possibly ongoing) gas accretion provides insignificant contribution to \(L_{\rm p}\). If the planetary accretion luminosity is non-negligible, this would additionally shift the dotted curves to the left, constraining \(M_{\rm p}\) and \(\tau_{\rm p}\) even further.
### Utility of the vortex-based constraint
The sub-Jovian value of \(M_{\rm vrt}\) implied by equation (7) and our estimates in Section 6 is very relevant in light of the recent results (Dong et al., 2017; Bae et al., 2017; Miranda and Rafikov, 2019, 2020a,b) showing that in a low viscosity disc a single sub-\(M_{\rm th}\) planet can give rise to a series of several prominent gaps and rings in the radial dust distribution. For example, for the AS 209 system (\(M_{\star}=0.83M_{\odot}\), \(\tau_{\rm sys}\approx 1\) Myr, Andrews et al., 2018) imaged with _ALMA_, Zhang et al. (2018) have shown that a single planet with \(M_{\rm p}\) as low as \(25M_{\oplus}\) orbiting within the outer (primary) gap at \(R_{\rm p}\approx 100\)AU can be responsible for creating all five gaps observed in this disc. This possibility makes the typical values of \(M_{\rm p}\) implied by the constraint (6)-(7) very interesting for understanding the architecture of the underlying planetary system.
Given the upper limits on \(M_{\rm p}\) based on direct imaging in several systems covered in Section 6, we found our age constraint (3) & (9) to be not very useful at present. However, things will improve as \(M_{\rm 1}\) decreases in the future. Once \(M_{\rm 1}\) is below \(M_{\rm th}\), our constraint (3) & (9) becomes valid and may provide useful information on the planetary age. The decrease of \(M_{\rm 1}\) may not necessarily come from improved direct imaging capabilities. In particular, one may use the technique of multiple gap fitting employed by Zhang et al. (2018) for AS 209 to get a much better measurement of \(M_{\rm p}\) or \(M_{\rm 1}\).

Figure 4: Combination of the different constraints on the mass \(M_{\rm p}\) and age \(\tau_{\rm p}\) of a planet in a vortex-hosting PPD. Grey shaded region is excluded (for \(h_{\rm p}=0.07\)) as vortices have no time to develop in this part of the parameter space, analogous to Fig. 1 (solid and dashed lines are the same as in that figure). Pink shaded region is excluded as it corresponds to planetary (bolometric) cooling luminosity \(L_{\rm p}\) exceeding \(L_{\rm p}=10^{-6}L_{\odot}\) (black dotted curve). We also show the curve \(L_{\rm p}=10^{-7}L_{\odot}\) (red dotted curve); the \(L_{\rm p}\) curves are based on Linder et al. (2019). The orange shaded region above the orange dot-dashed curve excludes planetary ages above 15 Myr. Planets satisfying all three constraints reside in the unshaded part of the parameter space.
Just for illustration, let us imagine that AS 209 did possess a vortex at the edge of its outermost gap (just outside \(R_{\rm p}=100\)AU). Then using \(M_{\rm p}\approx 25M_{\oplus}\) (based on Zhang et al. 2018) equation (9) would predict \(\tau_{\rm p}\gtrsim 8h_{0.1}^{7.2}\) Myr. This \(\tau_{\rm p}\) is much longer (for \(h_{\rm p}=0.1\)) than the age of the system \(\tau_{\rm sys}\approx 1\) Myr, and could have implied that either the planetary mass is underestimated (by a factor of \(\sim 2\)), or that the stellar age is underestimated (by almost an order of magnitude), or that the disc is somewhat colder -- using \(h_{\rm p}=0.075\) (consistent with Paneque-Carreño et al. 2022) in (9) would reconcile \(\tau_{\rm p}\) with the estimated \(\tau_{\rm sys}\).
The latter possibility represents a simple way to resolve the age discrepancy for this imaginary AS 209-like system. It also highlights the importance of good knowledge of the thermal state of the disc near the planet, which sets \(h_{\rm p}\). Indeed, \(\tau_{\rm vrt}\) depends very sensitively on \(h_{\rm p}\) and a mis-estimate of \(h_{\rm p}\) by a factor of 2 would result in a factor of \(\approx 150\) error in the determination of \(\tau_{\rm vrt}\) and the planetary age. The situation is somewhat improved for the mass constraint (6)-(7), in which variation of \(h_{\rm p}\) by a factor of 2 results in \(M_{\rm vrt}\) changing by a factor of \(\approx 6.5\). In any case, good understanding of disc thermodynamics is clearly needed when applying the age constraint (3) & (9). Recent _ALMA_ measurements of emission heights of different molecular lines in PPDs (Law et al. 2021, 2022; Paneque-Carreño et al. 2022) provide a (model-dependent) way to determine disc aspect ratio at different radii, generally finding values in the range \(h_{\rm p}\sim(0.07-0.1)\) for \(R_{\rm p}\sim(50-100)\) AU.
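The error amplification factors quoted above are just the relevant power laws evaluated at a factor-of-2 \(h_{\rm p}\) error:

```python
print(round(2 ** 7.2))     # 147: error factor in tau_vrt (age constraint)
print(round(2 ** 2.7, 1))  # 6.5: error factor in M_vrt (mass constraint)
```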
On the other hand, our constraints (5),(7) & (9) should be rather insensitive to the radial profile of the disc surface density near the planet. Indeed, CR23 showed that the parameters of the fit (1),(2) show little variation when changing the slope of the surface density profile near the planet. Also, the dependence of the vortex-based constraints on \(R_{\rm p}\) and \(M_{\star}\) is not as steep as for \(h_{\rm p}\), and the characteristic accuracy with which these parameters can be measured is \((10-20)\%\) or better.
### Additional processes and further extensions
Since the constraints (3)-(6) are _lower_ limits on \(\tau_{\rm p}\) and \(M_{\rm p}\), respectively, they do not change if the vortices we observe in discs now are not the first generation vortices. It is possible that the vortices that formed early on have then dissolved and what we are seeing now are the second (or later) generation vortices (Hammer et al. 2021). Nevertheless, even in this case the condition \(\tau_{\rm p}>\tau_{\rm vrt}\) would still need to be fulfilled, definitely for the first generation of vortices, as well as for the following generations, confirming the validity of the constraints (3)-(6).
Similarly, dust trapped in vortices can maintain an observable non-axisymmetric distribution even after the vortices in the gaseous component dissolve (Fu et al. 2014). Thus, when we see an asymmetry in dust continuum observations, the original vortex that has led to it may already be gone. However, this would again not invalidate the constraints obtained in Sections 3 & 4.
The fit (1),(2) for \(\tau_{\rm vrt}\) was derived by CR23 for discs which are inviscid or have low viscosity, an assumption which is consistent with observations of many systems (see Section 2). We can roughly estimate the upper limit on the viscosity \(\nu\) below which the inviscid assumption should be valid by demanding the timescale on which the vortensity structures produced by the planet get viscously diffused away to be longer than the age of the system \(\tau_{\rm sys}\). For the characteristic radial scale of the vortensity structures \(L\sim H_{\rm p}(M_{\rm p}/M_{\rm th})^{-0.4}\) (Dong et al. 2011; CR23) this timescale is \(\sim L^{2}/\nu\sim P_{\rm p}\alpha^{-1}(M_{\rm p}/M_{\rm th})^{-0.8}\), where we adopted the \(\alpha\)-ansatz for the viscosity \(\nu=\alpha\Omega_{\rm p}H_{\rm p}^{2}\) (and \(\Omega_{\rm p}=2\pi P_{\rm p}^{-1}\)). For this to exceed \(\tau_{\rm sys}\) for a sub-thermal mass planet we require that roughly \(\alpha\lesssim P_{\rm p}/\tau_{\rm sys}\sim 10^{-4}\), given the long orbital periods at \(R_{\rm p}=50-100\) AU.
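As a sanity check of the \(10^{-4}\) figure (our own helper; Keplerian period in years):

```python
import math

def alpha_crit(r_p_au, tau_sys_yr, m_star_msun=1.0):
    """Rough inviscid-disc criterion from the text: alpha < P_p / tau_sys."""
    p_p_yr = r_p_au ** 1.5 / math.sqrt(m_star_msun)
    return p_p_yr / tau_sys_yr

# R_p = 75 AU around a solar-mass star, tau_sys = 3 Myr:
print(f"{alpha_crit(75.0, 3e6):.1e}")  # 2.2e-04, i.e. alpha ~ 1e-4
```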
However, even if the disc were sufficiently viscous (i.e. for \(\alpha\gtrsim 10^{-4}\)), the RWI development would only get delayed (Hallam & Paardekooper 2020), or the instability may be suppressed altogether, see Hammer et al. (2017), CR23. Because of that, our inviscid estimate for \(\tau_{\rm vrt}\) continues to provide a lower limit on \(\tau_{\rm p}\) in the presence of a vortex, i.e. equation (3) and all other constraints remain valid (see Section 5 for an application of this logic).
On the other hand, some other effects may _accelerate_ vortex production compared to the results of CR23. For example, this could happen as a result of the baroclinicity of the disc near the planet, since the RWI is sensitive to entropy gradients (Lovelace et al. 1999). Under certain circumstances dust feedback can also promote vortex production (Lin & Youdin 2017). These processes, if they are important, may somewhat weaken our constraints on \(M_{\rm p}\) and \(\tau_{\rm p}\).
We demonstrated in Section 5 how our constraints can be modified to account for the evolution of planetary mass \(M_{\rm p}\). Other relevant parameters might change as well, for example \(R_{\rm p}\) can vary as a result of planet migration, or \(h_{\rm p}\) can change as the disc evolves in time. CR23 outlined ways in which one can account for these processes to derive a new estimate for \(\tau_{\rm vrt}\) instead of (1),(2), thus providing a pathway to modifying our constraints on \(M_{\rm p}\) and \(\tau_{\rm p}\).
Of the four systems considered in Section 6, three show vortex-like non-axisymmetries only at the outer edge of the putative planetary gap, and only one, MWC 758, has them on both sides of the gap. This is somewhat surprising, since the simulations of CR23 not only show the emergence of vortices on both sides of the gap, but also demonstrate that the time interval separating their production by RWI is typically smaller than \(\tau_{\rm vrt}\) (see Table 1 in that work). Thus, one would expect to see vortices on both sides of the gap more often. It is not clear why this expectation fails. It could be that the dust concentration is more efficient in the outer vortices\({}^{5}\) or that it tends to survive there considerably longer than in the inner ones. Or that some physical processes neglected in our study suppress the formation of the inner vortices. Expanding the sample of observed discs with vortex-like asymmetries would help in resolving this issue in the future.
Footnote 5: Outer vortices should form first in inviscid discs with radially decreasing surface density (CR23).
## 8 Summary
In this work we used the results of CR23 on the time it takes visible gas vortices to appear next to a gap carved by a low-mass planet in a low-viscosity PPD to set constraints on the masses \(M_{\rm p}\) and ages \(\tau_{\rm p}\) of planets in PPDs with observed vortex-like structures. We found that the presence of a vortex sets a lower limit on a particular combination of \(M_{\rm p}\) and \(\tau_{\rm p}\), with separate constraints on these variables possible if some additional information (such as the system age \(\tau_{\rm sys}\) or the upper limit on the planetary mass \(M_{\rm 1}\)) is available. These considerations allowed us to constrain the masses of putative planets in several vortex-bearing PPDs to be above several tens of \(M_{\oplus}\). The limits on the planetary age are not very constraining at the moment, but they will improve as future observations lower \(M_{\rm 1}\). Our constraints can be extended to account for the non-trivial history of planetary mass accretion, and we provide a recipe for doing that in Section 5. Finally, we showed the robustness of our constraints in light of additional complications (e.g. non-zero disc viscosity, multiple generations of vortices, etc.) and demonstrated their useful synergy with other types of constraints on \(M_{\rm p}\) and \(\tau_{\rm p}\), e.g. based on the upper limits on the planetary cooling luminosity coming from direct imaging observations.
## Acknowledgements
_Software_: Matplotlib (Hunter, 2007). The authors are grateful to Ewine van Dishoeck for illuminating discussions and to an anonymous referee for useful suggestions. R.R.R. acknowledges financial support through the Science and Technology Facilities Council (STFC) grant ST/T00049X/1 and the Ambrose Monell Foundation. N.P.C. is funded by an STFC and Isaac Newton studentship.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2307.05661 | Subtyping Context-Free Session Types | Context-free session types describe structured patterns of communication on
heterogeneously-typed channels, allowing the specification of protocols
unconstrained by tail recursion. The enhanced expressive power provided by
non-regular recursion comes, however, at the cost of the decidability of
subtyping, even if equivalence is still decidable. We present an approach to
subtyping context-free session types based on a novel kind of observational
preorder we call $\mathcal{XYZW}$-simulation, which generalizes
$\mathcal{XY}$-simulation (also known as covariant-contravariant simulation)
and therefore also bisimulation and plain simulation. We further propose a
subtyping algorithm that we prove to be sound, and present an empirical
evaluation in the context of a compiler for a programming language. Due to the
general nature of the simulation relation upon which it is built, this
algorithm may also find applications in other domains. | Gil Silva, Andreia Mordido, Vasco T. Vasconcelos | 2023-07-11T16:51:26Z | http://arxiv.org/abs/2307.05661v2 | # Subtyping Context-Free Session Types
###### Abstract
Context-free session types describe structured patterns of communication on heterogeneously typed channels, allowing the specification of protocols unconstrained by tail recursion. The enhanced expressive power provided by non-regular recursion comes, however, at the cost of the decidability of subtyping, even if equivalence is still decidable. We present an approach to subtyping context-free session types based on a novel kind of observational preorder we call \(\mathcal{XYZW}\)-simulation, which generalizes \(\mathcal{XY}\)-simulation (also known as covariant-contravariant simulation) and therefore also bisimulation and plain simulation. We further propose a subtyping algorithm that we prove to be sound, and present an empirical evaluation in the context of a compiler for a programming language. Due to the general nature of the simulation relation upon which it is built, this algorithm may also find applications in other domains.
Session types, Subtyping, Simulation, Simple grammars, Non-regular recursion
In Proceedings of the 34th International Conference on Concurrency Theory (CONCUR 2023): https://doi.org/10.4230/LIPIcs.CONCUR.2023.11 [49]
Support for this research was provided by the Fundação para a Ciência e a Tecnologia through project SafeSessions, ref. PTDC/CCI-COM/6453/2020, and the LASIGE Research Unit, ref. UIDB/00408/2020 and ref. UIDP/00408/2020.
We thank Luca Padovani, Diana Costa, Diogo Poças and the anonymous reviewers for their insightful comments.
## 1 Introduction
Session types, introduced by Honda et al. [31, 32, 50], enhance traditional type systems with the ability to specify and enforce structured communication protocols on bidirectional, heterogeneously typed channels. Typically, these specifications include the type, direction (input or output) and order of the messages, as well as branching points where one participant can choose how to proceed and the other must follow.
Traditional session types are bound by tail recursion and therefore restricted to the specification of protocols described by regular languages. This excludes many protocols of practical interest, with the quintessential example being the serialization of tree-structured data on a single channel. Context-free session types, proposed by Thiemann and Vasconcelos [51], liberate types from tail recursion by introducing a sequential composition operator (_;_) with a monoidal structure and a left and right identity in type Skip, representing no action. As their name hints, context-free session types can specify protocols corresponding to (simple deterministic) context-free languages and are thus considerably more expressive than their regular counterparts.
What does it mean for a context-free session type to be a subtype of another? Our answer follows Gay and Hole's seminal work on subtyping for regular session types [25], and Liskov's _principle of safe substitution_[39]: \(S\) is a subtype of \(R\) if channels governed by type \(S\) can take the place of channels governed by type \(R\) in whatever context, without violating the guarantees offered by a type system (e.g. progress, deadlock freedom, session fidelity, etc.).
More concretely, subtyping allows increased flexibility in the interactions between participants, namely on the type of the messages (a feature inherited from the subtyped \(\pi\)-calculus [46]) and on the choices available at branching points [25], allowing a channel to be governed by a simpler session type if its context so requires. A practical benefit of this flexibility is that it promotes _modular development_: the behaviour of one participant may be refined, while the behaviour of the other is kept intact.
Consider the following context-free session types for serializing binary trees.
\[\begin{aligned}\mathsf{STree}&=\mu s.\oplus\{\mathsf{Nil}\colon\mathsf{Skip},\mathsf{Node}\colon s;\mathsf{!Int};s\}&\qquad\mathsf{SEmpty}&=\oplus\{\mathsf{Nil}\colon\mathsf{Skip}\}\\ \mathsf{DTree}&=\mu s.\&\{\mathsf{Nil}\colon\mathsf{Skip},\mathsf{Node}\colon s;\mathsf{?Int};s\}&\qquad\mathsf{SFullTree0}&=\oplus\{\mathsf{Node}\colon\mathsf{SEmpty};\mathsf{!Int};\mathsf{SEmpty}\}\\ &&\qquad\mathsf{SFullTree1}&=\oplus\{\mathsf{Node}\colon\mathsf{SFullTree0};\mathsf{!Int};\mathsf{SFullTree0}\}\end{aligned}\]
The recursive \(\mathsf{STree}\) and \(\mathsf{DTree}\) types specify, respectively, the serialization and deserialization of a possibly infinite arbitrary tree, while the remaining non-recursive types specify the serialization of finite trees of particular configurations. The benefit of subtyping is that it makes the particular types \(\mathsf{SEmpty}\), \(\mathsf{SFullTree0}\) and \(\mathsf{SFullTree1}\) compatible with the general \(\mathsf{DTree}\) type. Observe that its dual, \(\mathsf{STree}\), may safely take the place of any type in the right column. Consider now a function \(\mathsf{f}\) that generates full trees of height \(1\) and serializes them on a given channel end. Assigning it type \(\mathsf{STree}\rightarrow\mathsf{Unit}\) would not statically ensure that the fullness and height of the tree are as specified. Type \(\mathsf{SFullTree1}\rightarrow\mathsf{Unit}\) would do so, and subtyping would still allow the function to use an \(\mathsf{STree}\) channel (i.e., communicate with someone expecting an arbitrary \(\mathsf{DTree}\) tree).
Expressive power usually comes at the cost of decidability. While subtyping for regular session types has been formalized, shown decidable and given an algorithm by Gay and Hole [25], subtyping in the context-free setting has been proven undecidable by Padovani [43]. The proof is given by a reduction from the inclusion problem for simple languages, shown undecidable by Friedman [21]. Remarkably, the equivalence problem for simple languages is known to be decidable, as is the type equivalence of context-free session types [36, 51].
Subtyping context-free session types has until now been considered only in a limited form, where message types must be syntactically equal [43]. Consequently, the interesting co/contravariant properties of input/output types have been left unexplored. In this paper, we propose a more expressive subtyping relation, where the types of messages may vary co/contravariantly, according to the classical subtyping notion of Gay and Hole. To handle the contravariance of output types, we introduce a novel notion of observational preorder, which we call _\(\mathcal{XYZW}\)-simulation_ (by analogy with _\(\mathcal{XY}\)-simulation_[1]).
While initially formulated in the context of the \(\pi\)-calculus, considerable work has been done to integrate session types in more standard settings, such as functional languages based on the polymorphic \(\lambda\)-calculus with linear types [2, 16, 47]. In this scenario, functional types and session types are not orthogonal: sessions may carry functions, and functions may act on sessions. With this in mind, we promote our theory to a linear functional setting, thereby showing how subtyping for records, variants and (linear and unrestricted [22]) functions, usually introduced by inference rules, can be seamlessly integrated with simulation-based subtyping for context-free session types.
Functional and higher-order context-free session types
\[T,U,V,W\coloneqq\mathsf{Unit}\;\mid\;T\stackrel{{m}}{{\rightarrow}}U\;\mid\;\{\ell\colon T\}_{\ell\in L}\;\mid\;\langle\ell\colon T\rangle_{\ell\in L}\;\mid\;S\;\mid\;t\;\mid\;\mu t.T\] \[S,R\coloneqq\sharp T\;\mid\;\odot\{\ell\colon S\}_{\ell\in L}\;\mid\;\mathsf{Skip}\;\mid\;\mathsf{End}\;\mid\;S;R\;\mid\;s\;\mid\;\mu s.S\]
Multiplicities, polarities and views
\[m,n\coloneqq\mathbf{1}\;\mid\;*\qquad\qquad\sharp\coloneqq\,!\;\mid\;?\qquad\qquad\odot\coloneqq\oplus\;\mid\;\&\]
Gay's proposal [26], in System F\({}^{\diamond}\)[40] and in the FreeST language [2]). Their inclusion in our system is justified by the interesting subtyping properties they exhibit [22].
Session types \(!T\) and \(?T\) represent the sending and receiving, respectively, of a value of type \(T\) (an arbitrary type, making the system higher-order). Internal choice types \(\oplus\{\ell\colon S_{\ell}\}_{\ell\in L}\) allow the selection of a label \(k\in L\) and its continuation \(S_{k}\), while external choice types \(\&\{\ell\colon S_{\ell}\}_{\ell\in L}\) represent the branching on any label \(k\in L\) and its continuation \(S_{k}\). We stipulate that the set of labels for these types must be non-empty. Type Skip represents no action, while type End indicates the closing of a channel, after which no more communication can take place. Type \(R\);\(S\) denotes the sequential composition of \(R\) and \(S\), which is associative, right-distributes over choice types, has (left and right) identity Skip and left-absorber End.
The final two productions in both functional and session grammars introduce self-references and the recursion operator. Their inclusion in the two grammars ensures we can have both recursive functional types and recursive session types while avoiding nonsensical types such as \(\mu t.\mathsf{Unit}\rightarrow\,!\mathsf{Unit};t\) at the syntactical level (avoiding the need for a kinding system).
Still, we do not consider all types generated by these grammars to be _well-formed_. Consider session type \(\mu r.r;!\mathsf{Unit}\). No matter how many times we unfold it, we cannot resolve its first communication action. The same could be said of \(\mu r.\mathsf{Skip};r;!\mathsf{Unit}\). We must therefore ensure that any self-reference in a sequential composition is preceded by a type constructor representing some meaningful action, i.e., not equivalent to Skip. This is achieved by adapting the conventional notion of contractivity (no subterms of the form \(\mu x.\mu x_{1}.\ldots\mu x_{n}.x\)) [25] to account for Skip as the identity of sequential composition. This corresponds to the notion of _guardedness_ in the theory of process algebra (e.g. [28, 42]).
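The check itself is a small syntactic traversal; a minimal sketch (our own encoding of a first-order session fragment, with message payloads kept opaque) could look as follows:

```python
def is_skip(t):
    """Syntactically Skip-equivalent: no communication action at all."""
    k = t[0]
    if k == "skip":
        return True
    if k == "seq":
        return is_skip(t[1]) and is_skip(t[2])
    if k == "mu":
        return is_skip(t[2])
    return False

def guarded(t, v):
    """Every occurrence of variable v in t is preceded by a real action."""
    k = t[0]
    if k == "var":
        return t[1] != v
    if k == "mu":
        return True if t[1] == v else guarded(t[2], v)
    if k == "seq":
        # v must be guarded in the head; it is guarded in the tail for
        # free unless the head is equivalent to Skip
        return guarded(t[1], v) and (not is_skip(t[1]) or guarded(t[2], v))
    return True  # skip, end, msg, choice: any occurrence below is guarded

def contractive(t):
    """Check the guardedness condition at every mu-binder."""
    k = t[0]
    if k == "mu":
        return guarded(t[2], t[1]) and contractive(t[2])
    if k == "seq":
        return contractive(t[1]) and contractive(t[2])
    if k == "choice":
        return all(contractive(b) for b in t[2].values())
    return True

bang_unit = ("msg", "!", "Unit")
bad1 = ("mu", "r", ("seq", ("var", "r"), bang_unit))           # mu r.r;!Unit
bad2 = ("mu", "r", ("seq", ("skip",), ("seq", ("var", "r"), bang_unit)))
good = ("mu", "r", ("seq", bang_unit, ("var", "r")))           # mu r.!Unit;r
print(contractive(bad1), contractive(bad2), contractive(good))  # False False True
```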
In addition to contractivity, we must ensure that well-formed types contain no free references. The type formation judgement \(\Delta\vdash T\), where \(\Delta\) is a set of references, combines these requirements. The rules for the judgement can be found in Appendix A.1. We are now set to define our syntactic subtyping relation. We begin by surveying the features it should support:
* **Input and output subtyping** Input covariance and output contravariance are the central features of subtyping for types that govern entities that can be written to or read from, such as channels and references [45]. They are therefore natural features of the subtyping relation for conventional session types as well [25]. Observe that \(?\{\mathsf{A}\colon\mathsf{Int},\mathsf{B}\colon\mathsf{Bool}\}\leq?\{\mathsf{A}\colon\mathsf{Int}\}\) should be true, for the type of the received value, \(\{\mathsf{A}\colon\mathsf{Int},\mathsf{B}\colon\mathsf{Bool}\}\), safely substitutes the expected type, \(\{\mathsf{A}\colon\mathsf{Int}\}\). Observe also that \(!\{\mathsf{A}\colon\mathsf{Int}\}\leq!\{\mathsf{A}\colon\mathsf{Int},\mathsf{B}\colon\mathsf{Bool}\}\) should be true, because the type of the value to be sent, \(\{\mathsf{A}\colon\mathsf{Int},\mathsf{B}\colon\mathsf{Bool}\}\), is a subtype of \(\{\mathsf{A}\colon\mathsf{Int}\}\), the type of the messages the substitute channel is allowed to send.
* **Choice subtyping** If we understand external and internal choice types as, respectively, the input and output of a label, then their subtyping properties are easy to derive: external choices are covariant on their label set, internal choices are contravariant on their label set, and both are covariant on the continuation of the labels (this is known as _width subtyping_). Observe that \(\&\{\mathsf{A}\colon?\mathsf{Int}\}\leq\&\{\mathsf{A}\colon?\mathsf{Int},\mathsf{B}\colon!\mathsf{Bool}\}\) should be true, for every branch in the first type can be safely handled by matching on the second type. Likewise, \(\oplus\{\mathsf{A}\colon?\mathsf{Int},\mathsf{B}\colon!\mathsf{Bool}\}\leq\oplus\{\mathsf{A}\colon?\mathsf{Int}\}\) should be true, for every choice in the second type can be safely selected in the first.
* **Sequential composition** In the classical subtyping relation for regular session types, input and output types (\(\sharp T.S\)) can be characterized as covariant in their continuation. Although the same general intuition applies in the context-free setting, we cannot as easily characterize the variance of the sequential composition constructor (\(S\);\(R\)) due to its monoidal, distributive and absorbing properties. For instance, consider types
\(S_{1}\);\(S_{2}\) and \(R_{1}\);\(R_{2}\), with \(S_{1}=\)!Int;!Bool, \(S_{2}=\)?Int, \(R_{1}=\)!Int and \(R_{2}=\)!Bool;?Int. Although it should be true that \(S_{1}\);\(S_{2}\leq R_{1}\);\(R_{2}\), we can have neither \(S_{1}\leq R_{1}\) nor \(S_{2}\leq R_{2}\).
* **Functional subtyping** The subtyping properties of function, record and variant types are well known, and we refer the readers to Pierce's book for the reasoning behind them [45]. Succinctly, the function type constructor is contravariant on the domain and covariant on the range, and the variant and record constructors are both covariant on the type of the fields, but respectively covariant and contravariant on their label sets.
* **Multiplicity subtyping** Using an unrestricted (\(*\)) resource where a linear (1) one is expected does not compromise safety, provided that, multiplicities aside, the type of the former may safely substitute the type of the latter. We can express this relationship between multiplicities through a preorder captured by inequality \(*\sqsubseteq 1\). In our system, function types may be either linear or unrestricted. Thus, type \(T_{1}\stackrel{{m}}{{\rightarrow}}T_{2}\) can be considered a subtype of \(U_{1}\stackrel{{n}}{{\rightarrow}}U_{2}\) if \(U_{1}\) and \(T_{2}\) are subtypes, respectively, of \(T_{1}\) and \(U_{2}\) and if \(m\sqsubseteq n\) (thus we can characterize the function type constructor as covariant on its multiplicity).

Figure 2: Syntactic subtyping.
The rules for our syntactic subtyping relation, interpreted coinductively, are shown in Figure 2. Rules S-Unit, S-Arrow, S-Rcd, S-Vrt, S-RecL and S-RecR establish the classical subtyping properties associated with both functional and equi-recursive types, with S-Arrow additionally encoding subtyping between linear and unrestricted functions, relying on a preorder on multiplicities also defined in Figure 2. Rules S-End, S-In, S-Out, S-ExtChoice and S-IntChoice bring to the context-free setting the classical subtyping properties expected from session types, as put forth by Gay and Hole [25].
The remaining rules account for sequential composition, which distributes over choice and exhibits a monoidal structure with its neutral element in Skip and left-absorbing element in End. We include, for each session type constructor \(S\), a left rule (denoted by suffix L) of the form \(S\);\(R\leq S^{\prime}\) and a right rule (denoted by suffix R) of the form \(S^{\prime}\leq S\);\(R\). An additional rule is necessary for each constructor over which sequential composition does not distribute, associate or neutralize (S-InSeq2, S-OutSeq2 and S-EndSeq2). Since we are using a coinductive proof scheme, we include rules to 'move' sequential composition down the syntax. Thus, given a type \(S\);\(R\), we inspect \(S\) to decide which rule to apply next.
The syntactic subtyping relation \(\leq\) is a preorder on types.
Let us briefly return to Example 1. It is now easy to see that \(\mathsf{STree}\leq\mathsf{SFullTree1}\): we unfold the left-hand side and apply rule S-IntChoice. Then we apply the distributivity rules as necessary until reaching an internal choice with no continuation, at which point we can apply S-IntChoice again, or until reaching a type with \(!\mathsf{Int}\) at the head, at which point we apply S-OutSeq2. We repeat this process until reaching \(\mathsf{STree}\leq\mathsf{SFullTree0}\), and proceed similarly until reaching \(\mathsf{STree}\leq\mathsf{SEmpty}\), which follows from S-IntChoice and S-Skip.
Despite clearly conveying the intended meaning of the subtyping relation, the rules suggest no obvious algorithmic interpretation: on the one hand, the presence of bare metavariables makes the system not syntax-directed; on the other hand, rules S-RecL, S-RecSeqL and their right counterparts lead to infinite derivations which are not solvable by a conventional fixed-point construction [25, 45]. In the next section we develop an alternative, semantic approach to subtyping, which we use as a stepping stone to develop our subtyping algorithm.
## 3 Semantic subtyping
Semantic equivalence for context-free session types is usually based on _observational equivalence_ or _bisimilarity_, meaning that two session types are considered equivalent if they exhibit exactly the same communication behaviour [51]. An analogous notion of _semantic subtyping_ should therefore rely on an _observational preorder_. In this section we develop such a preorder.
We define the behaviour of types via a labelled transition system (LTS) by establishing relation \(T\stackrel{{a}}{{\longrightarrow}}U\) ("type \(T\) transitions by action \(a\) to type \(U\)"). We follow Costa et al. [16] in attributing behaviour to functional types, allowing them to be encompassed in our observational preorder. The rules defining the transition relation, as well as the grammar that generates all possible transition actions, are shown in Figure 3.
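To make the session-type part of the LTS concrete, here is a small executable sketch (our own encoding: the tuple representation, label strings and the payload wrapper are ours, and message payloads are kept opaque) implementing the session rules described in the next paragraph:

```python
SKIP = ("skip",)

def subst(t, v, s):
    """Substitute s for the free variable v in t (mu-bound names are
    assumed distinct, so capture is not an issue in this sketch)."""
    k = t[0]
    if k == "var":
        return s if t[1] == v else t
    if k == "mu":
        return t if t[1] == v else ("mu", t[1], subst(t[2], v, s))
    if k == "seq":
        return ("seq", subst(t[1], v, s), subst(t[2], v, s))
    if k == "choice":
        return ("choice", t[1], {l: subst(b, v, s) for l, b in t[2].items()})
    return t  # skip, end; message payloads are kept opaque

def transitions(t):
    """One-step transitions {label: successor}; Skip has none."""
    k = t[0]
    if k == "msg":                  # L-Msg1/2: payload, then Skip
        return {t[1] + "p": ("payload", t[2]), t[1] + "c": SKIP}
    if k == "end":                  # L-End
        return {"End": SKIP}
    if k == "choice":               # L-ChoiceField: one transition per label
        return {t[1] + l: b for l, b in t[2].items()}
    if k == "mu":                   # L-Rec: behave like the unfolding
        return transitions(subst(t[2], t[1], t))
    if k == "seq":
        s, r = t[1], t[2]
        if s[0] == "skip":          # L-SkipSeq: Skip is the identity
            return transitions(r)
        if s[0] == "seq":           # L-SeqSeq: associativity
            return transitions(("seq", s[1], ("seq", s[2], r)))
        if s[0] == "mu":            # L-RecSeq: unfold under composition
            return transitions(("seq", subst(s[2], s[1], s), r))
        if s[0] == "end":           # L-EndSeq: End absorbs its continuation
            return {"End": SKIP}
        if s[0] == "msg":           # L-MsgSeq1/2: continue with r
            return {s[1] + "p": ("payload", s[2]), s[1] + "c": r}
        if s[0] == "choice":        # L-ChoiceFieldSeq: distribute over r
            return {s[1] + l: ("seq", b, r) for l, b in s[2].items()}
    return {}

# STree from Example 1: mu s. +{Nil: Skip, Node: s; !Int; s}
stree = ("mu", "s", ("choice", "+", {
    "Nil": SKIP,
    "Node": ("seq", ("var", "s"), ("seq", ("msg", "!", "Int"), ("var", "s"))),
}))
print(sorted(transitions(stree)))                         # ['+Nil', '+Node']
print(sorted(transitions(transitions(stree)["+Node"])))   # ['+Nil', '+Node']
```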
In general, each functional type constructor generates a transition for each of its fields (Unit and End, which have none, transition to Skip). Linear functions exhibit an additional transition to represent their restricted use (L-LinArrow), and records/variants include a default transition that is independent of their fields (L-RcdVrt). The behaviour of session types is more complex, since it must account for their algebraic properties. Message types exhibit a transition for their payload (L-Msg1, L-MsgSeq1) and another for their continuation, which is Skip by omission (L-Msg2, L-MsgSeq2). Choices behave much like records/variants when alone, but are subject to distributivity when composed (L-ChoiceFieldSeq). Type End, which absorbs its continuation, transitions to Skip (L-End, L-EndSeq). Rules L-SeqSeq, L-SkipSeq account for associativity and identity, and rules L-Rec and L-RecSeq dictate that recursive types behave just like their unfoldings. Notice that Skip has no transitions. With the behaviour of types established, we now look for an appropriate notion of observational preorder. Several such notions have been studied in the literature. _Similarity_, defined as follows, is arguably the simplest of them [41, 44].
A type relation \(\mathcal{R}\) is said to be a simulation if, whenever \(T\,\mathcal{R}\,U\), for all \(a\) and \(T^{\prime}\) with \(T\stackrel{{a}}{{\longrightarrow}}T^{\prime}\) there is \(U^{\prime}\) such that \(U\stackrel{{a}}{{\longrightarrow}}U^{\prime}\) and \(T^{\prime}\,\mathcal{R}\,U^{\prime}\).
Similarity, written \(\preceq\), is the union of all simulation relations. We say that a type \(U\) simulates type \(T\) if \(T\preceq U\).
Unfortunately, plain similarity is of no use to us. A small example shows why: type \(\oplus\{\mathsf{A}\colon\mathsf{End},\mathsf{B}\colon\mathsf{End}\}\) both simulates and is a subtype of \(\oplus\{\mathsf{A}\colon\mathsf{End}\}\), while type \(\&\{\mathsf{A}\colon\mathsf{End}\}\) does not simulate yet is a subtype of \(\&\{\mathsf{A}\colon\mathsf{End},\mathsf{B}\colon\mathsf{End}\}\).

Figure 3: Labelled transition system. Letters \(\mathsf{d}\), \(\mathsf{r}\), \(\mathsf{p}\), \(\mathsf{c}\) in labels stand for “domain”, “range”, “payload” and “continuation”.
It is apparent that a more refined notion of simulation is necessary, where the direction of the implication depends on the transition labels. Aarts and Vaandrager provide just such a notion in the form of _\(\mathcal{XY}\)-simulation_[1], a simulation relation parameterized by two subsets of actions, \(\mathcal{X}\) and \(\mathcal{Y}\), such that actions in \(\mathcal{X}\) are simulated from left to right and those in \(\mathcal{Y}\) are simulated from right to left, selectively combining the requirements of simulation and reverse simulation.
Let \(\mathcal{X},\mathcal{Y}\subseteq\mathcal{A}\). A type relation \(\mathcal{R}\) is said to be an \(\mathcal{XY}\)-simulation if, whenever \(T\,\mathcal{R}\,U\), we have:
1. for each \(a\in\mathcal{X}\) and each \(T^{\prime}\) with \(T\stackrel{{ a}}{{\longrightarrow}}T^{\prime}\), there is \(U^{\prime}\) such that \(U\stackrel{{ a}}{{\longrightarrow}}U^{\prime}\) with \(T^{\prime}\mathcal{RU}^{\prime}\);
2. for each \(a\in\mathcal{Y}\) and each \(U^{\prime}\) with \(U\stackrel{{ a}}{{\longrightarrow}}U^{\prime}\), there is \(T^{\prime}\) such that \(T\stackrel{{ a}}{{\longrightarrow}}T^{\prime}\) with \(T^{\prime}\mathcal{RU}^{\prime}\).
\(\mathcal{XY}\)-similarity, written \(\preceq^{\mathcal{XY}}\), is the union of all \(\mathcal{XY}\)-simulation relations. We say that a type \(T\) is \(\mathcal{XY}\)-similar to type \(U\) if \(T\preceq^{\mathcal{XY}}U\).
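On a finite LTS, the largest XY-simulation can be computed by refining the full relation until the two clauses of the definition hold everywhere; the minimal sketch below (our own encoding, with a deterministic LTS given as nested dictionaries) is applied to the choice example above:

```python
def xy_similar(lts, x_labels, y_labels):
    """Largest XY-simulation on a finite, deterministic LTS given as
    {state: {label: successor}}, computed by refining the full relation."""
    states = list(lts)
    rel = {(t, u) for t in states for u in states}
    changed = True
    while changed:
        changed = False
        for (t, u) in list(rel):
            ok_x = all(a not in x_labels
                       or (a in lts[u] and (lts[t][a], lts[u][a]) in rel)
                       for a in lts[t])
            ok_y = all(a not in y_labels
                       or (a in lts[t] and (lts[t][a], lts[u][a]) in rel)
                       for a in lts[u])
            if not (ok_x and ok_y):
                rel.discard((t, u))
                changed = True
    return rel

# One state per type; external choice labels go in X (covariant),
# internal choice labels in Y (contravariant).
lts = {
    "+{A}":  {"+A": "End"}, "+{A,B}": {"+A": "End", "+B": "End"},
    "&{A}":  {"&A": "End"}, "&{A,B}": {"&A": "End", "&B": "End"},
    "End":   {"end": "Skip"}, "Skip": {},
}
X, Y = {"&A", "&B", "end"}, {"+A", "+B", "end"}
sim = xy_similar(lts, X, Y)
print(("+{A,B}", "+{A}") in sim)   # True:  the subtype direction holds
print(("&{A}", "&{A,B}") in sim)   # True
print(("+{A}", "+{A,B}") in sim)   # False: label B cannot be selected
```

Plain simulation is the special case \(\mathcal{X}=\mathcal{A}\), \(\mathcal{Y}=\emptyset\), and bisimulation the case \(\mathcal{X}=\mathcal{Y}=\mathcal{A}\).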
Similar or equivalent notions have appeared throughout the literature: _modal refinement_[38], _alternating simulation_[7] and, perhaps more appropriately named (for our purposes), _covariant-contravariant simulation_[20]. Padovani's original subtyping relation for first-order context-free session types [43] can also be understood as a refined form of \(\mathcal{XY}\)-simulation.
We can tentatively define a semantic subtyping relation \(\lesssim^{\prime}\) as \(\mathcal{XY}\)-similarity, where \(\mathcal{X}\) and \(\mathcal{Y}\) are the label sets generated by the following grammars for \(a_{\mathcal{X}}\) and \(a_{\mathcal{Y}}\), respectively.
\[a_{\mathcal{X}}\coloneqq a_{\mathcal{XY}}\;\mid\;\langle\rangle_{\ell}\;\mid\;\&_{\ell}\qquad a_{\mathcal{Y}}\coloneqq a_{\mathcal{XY}}\;\mid\;\{\}_{\ell}\;\mid\;\oplus_{\ell}\qquad a_{\mathcal{XY}}\coloneqq\mathsf{Unit}\;\mid\;{*}\mathbf{1}\;\mid\;\rightarrow_{\mathsf{d}}\;\mid\;\rightarrow_{\mathsf{r}}\;\mid\;\sharp_{\mathsf{p}}\;\mid\;\sharp_{\mathsf{c}}\;\mid\;\langle\rangle\;\mid\;\{\}\;\mid\;\mathsf{End}\]
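This tentative relation forces the labels shared between \(\mathcal{X}\) and \(\mathcal{Y}\) to be matched in both directions with the continuations related in the same order, which is too strong for contravariant positions such as function domains and output payloads. Spelled out, and assuming the two extra sets relate the continuations in the flipped order (the natural way to capture such positions), the four-set generalization reads: let \(\mathcal{X},\mathcal{Y},\mathcal{Z},\mathcal{W}\subseteq\mathcal{A}\); a type relation \(\mathcal{R}\) is an \(\mathcal{XYZW}\)-simulation if, whenever \(T\,\mathcal{R}\,U\):

1. for each \(a\in\mathcal{X}\) and each \(T^{\prime}\) with \(T\stackrel{{a}}{{\longrightarrow}}T^{\prime}\), there is \(U^{\prime}\) such that \(U\stackrel{{a}}{{\longrightarrow}}U^{\prime}\) with \(T^{\prime}\,\mathcal{R}\,U^{\prime}\);
2. for each \(a\in\mathcal{Y}\) and each \(U^{\prime}\) with \(U\stackrel{{a}}{{\longrightarrow}}U^{\prime}\), there is \(T^{\prime}\) such that \(T\stackrel{{a}}{{\longrightarrow}}T^{\prime}\) with \(T^{\prime}\,\mathcal{R}\,U^{\prime}\);
3. for each \(a\in\mathcal{Z}\) and each \(T^{\prime}\) with \(T\stackrel{{a}}{{\longrightarrow}}T^{\prime}\), there is \(U^{\prime}\) such that \(U\stackrel{{a}}{{\longrightarrow}}U^{\prime}\) with \(U^{\prime}\,\mathcal{R}\,T^{\prime}\);
4. for each \(a\in\mathcal{W}\) and each \(U^{\prime}\) with \(U\stackrel{{a}}{{\longrightarrow}}U^{\prime}\), there is \(T^{\prime}\) such that \(T\stackrel{{a}}{{\longrightarrow}}T^{\prime}\) with \(U^{\prime}\,\mathcal{R}\,T^{\prime}\).

\(\mathcal{XYZW}\)-similarity, written \(\preceq^{\mathcal{XYZW}}\), is then the union of all \(\mathcal{XYZW}\)-simulations.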
For any \(\mathcal{X},\mathcal{Y},\mathcal{Z},\mathcal{W}\), \(\preceq^{\mathcal{X}\mathcal{Y}\mathcal{Z}\mathcal{W}}\) is a preorder relation on types.
Equipped with the notion of \(\mathcal{X}\mathcal{Y}\mathcal{Z}\mathcal{W}\)-similarity, we are ready to define the semantic subtyping relation for functional and higher-order context-free session types as follows.
The semantic subtyping relation for functional and higher-order context-free session types \(\lesssim\) is defined by \(T\lesssim U\) when \(T\preceq^{\mathcal{X}\mathcal{Y}\mathcal{Z}\mathcal{W}}U\) such that \(\mathcal{X}\), \(\mathcal{Y}\), \(\mathcal{Z}\) and \(\mathcal{W}\) are defined as the label sets generated by the following grammars for \(a_{\mathcal{X}}\), \(a_{\mathcal{Y}}\), \(a_{\mathcal{Z}}\) and \(a_{\mathcal{W}}\), respectively.
\[a_{\mathcal{X}}\coloneqq a_{\mathcal{XY}}\ \mid\ {\rightarrow_{\mathsf{1}}}\ \mid\ \langle\rangle_{\ell}\ \mid\ \&_{\ell}\qquad a_{\mathcal{Y}}\coloneqq a_{\mathcal{XY}}\ \mid\ \oplus_{\ell}\qquad a_{\mathcal{Z}},a_{\mathcal{W}}\coloneqq{\rightarrow_{\mathsf{d}}}\ \mid\ \mathord{!}_{\mathsf{p}}\qquad a_{\mathcal{XY}}\coloneqq\mathsf{Unit}\ \mid\ \ldots\ \text{(the remaining labels in }\mathcal{A})\]

With these label sets, \(\lesssim\) combines width subtyping for choices and variants with the expected variance for functions and messages. Our algorithm for checking \(T\lesssim U\) proceeds in three phases:
1. translate the given types to a simple grammar [36] and two starting words;
2. prune unreachable symbols from productions;
3. explore an expansion tree rooted at a node containing the initial words, alternating between expansion and simplification operations until either an empty node is found (decide **True**) or all nodes fail to expand (decide **False**).
**Phase 1.** The first phase consists of translating the two types to a grammar in _Greibach normal form_ (GNF) [27], i.e., a grammar where all productions have the form \(Y\to a\vec{Z}\), and two starting words \((\vec{X},\vec{Y})\). A word is defined as a sequence of non-terminal symbols. We can check the \(\mathcal{XYZW}\)-similarity of words in GNF grammars because they naturally induce a labelled transition system, where states are words \(\vec{X}\), actions are terminal symbols \(a\) and the transition relation is defined as \(X\vec{Y}\stackrel{a}{\longrightarrow}_{\mathcal{P}}\vec{Z}\vec{Y}\) when \(X\to a\vec{Z}\in\mathcal{P}\). We denote the bisimilarity and \(\mathcal{XYZW}\)-similarity of grammars by, respectively, \(\sim_{\mathcal{P}}\) and \(\preceq_{\mathcal{P}}^{\mathcal{XYZW}}\), where \(\mathcal{P}\) is the set of productions. We also let \(\lesssim_{\mathcal{P}}\) denote grammar \(\mathcal{XYZW}\)-similarity with label sets as in the definition of \(\lesssim\). The deterministic nature of context-free session types allows their corresponding grammars to be simple [36]: for each non-terminal \(Y\) and terminal symbol \(a\), we have at most one production of the form \(Y\to a\vec{Z}\).
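As a sketch of the data this phase produces (the names and encoding are our assumptions, not the paper's artefact), a simple grammar and the transition relation it induces on words can be represented as follows.

```haskell
import qualified Data.Map as M

type NonTerm = String
type Word' = [NonTerm]  -- a word is a sequence of non-terminals
-- Simplicity: at most one production Y -> a Zvec per (Y, a),
-- so a Map keyed on (non-terminal, terminal) suffices.
type Grammar = M.Map (NonTerm, Char) Word'

-- The induced transition relation: X Yvec --a--> Zvec Yvec
-- when X -> a Zvec is a production.
stepWord :: Grammar -> Word' -> Char -> Maybe Word'
stepWord _ [] _ = Nothing  -- the empty word has no transitions
stepWord g (x : ys) a = (++ ys) <$> M.lookup (x, a) g
```

Non-determinism never arises here: simplicity guarantees the lookup is unique, which is what makes the expansion step of Phase 3 deterministic.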
The grammar translation procedure _grm_ remains unchanged from the original equivalence algorithm [4], and for this reason we omit its details (which include generating productions for all \(\mu\)-subterms in types). However, this procedure relies on two auxiliary definitions which must be adapted: the _unr_ function, which normalizes the head of session types and unravels recursive types until reaching a type constructor, and the _word_ procedure, which builds a word from a session type while updating a set \(\mathcal{P}\) of productions.
The _unraveling_ of a type \(T\) is defined by induction on the structure of \(T\):
\[\begin{aligned}
\mathit{unr}(\mu x.T) &= \mathit{unr}([\mu x.T/x]T) & \mathit{unr}(\mathsf{Skip};S) &= \mathit{unr}(S)\\
\mathit{unr}(\mathsf{End};S) &= \mathsf{End} & \mathit{unr}((\mu s.S);R) &= \mathit{unr}(([\mu s.S/s]S);R)\\
\mathit{unr}(\odot\{\ell\colon S_{\ell}\}_{\ell\in L};R) &= \odot\{\ell\colon S_{\ell};R\}_{\ell\in L} & \mathit{unr}((S_{1};S_{2});S_{3}) &= \mathit{unr}(S_{1};(S_{2};S_{3}))
\end{aligned}\]
and in all other cases by \(\textit{unr}(T)=T\).
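The following sketch transcribes the unraveling equations on a cut-down session fragment; the `SType` datatype and the `subst` helper are illustrative assumptions, not the paper's code.

```haskell
-- A session-type fragment: Skip, End, recursion, choices and composition.
data SType = Skip | End | Var String | Mu String SType
           | Choice Dir [(String, SType)] | Seq SType SType
data Dir = In | Out  -- external (&) and internal (+) choice

-- [s/x]t, adequate for the closed types produced by unfolding.
subst :: String -> SType -> SType -> SType
subst x s t = case t of
  Var y | y == x  -> s
  Mu y b | y /= x -> Mu y (subst x s b)
  Choice d bs     -> Choice d [(l, subst x s b) | (l, b) <- bs]
  Seq a b         -> Seq (subst x s a) (subst x s b)
  _               -> t

-- Direct transcription of the unraveling equations.
unr :: SType -> SType
unr (Mu x t)              = unr (subst x (Mu x t) t)
unr (Seq Skip s)          = unr s
unr (Seq End _)           = End
unr (Seq (Mu x s) r)      = unr (Seq (subst x (Mu x s) s) r)
unr (Seq (Choice d bs) r) = Choice d [(l, Seq b r) | (l, b) <- bs]
unr (Seq (Seq s1 s2) s3)  = unr (Seq s1 (Seq s2 s3))
unr t                     = t  -- all other cases
```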
The word corresponding to a well-formed type \(T\), \(\textit{word}(T)\), is built by descending on the structure of \(T\) while updating a set \(\mathcal{P}\) of productions:
\[\begin{aligned}
\mathit{word}(\mathsf{Unit}) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to\mathsf{Unit}\}\\
\mathit{word}(U\stackrel{1}{\rightarrow}V) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to{\rightarrow_{\mathsf{d}}}\,\mathit{word}(U),\;Y\to{\rightarrow_{\mathsf{r}}}\,\mathit{word}(V),\;Y\to{\rightarrow_{\mathsf{1}}}\}\\
\mathit{word}(U\stackrel{*}{\rightarrow}V) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to{\rightarrow_{\mathsf{d}}}\,\mathit{word}(U),\;Y\to{\rightarrow_{\mathsf{r}}}\,\mathit{word}(V)\}\\
\mathit{word}(\langle\ell\colon T_{\ell}\rangle_{\ell\in L}) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to\langle\rangle\bot\}\cup\{Y\to\langle\rangle_{k}\,\mathit{word}(T_{k})\mid k\in L\}\\
\mathit{word}(\mathsf{Skip}) &= \varepsilon\\
\mathit{word}(\mathsf{End}) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to\mathsf{End}\,\bot\}\\
\mathit{word}(\sharp U) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to\sharp_{\mathsf{p}}\,\mathit{word}(U)\bot,\;Y\to\sharp_{\mathsf{c}}\}\\
\mathit{word}(\odot\{\ell\colon S_{\ell}\}_{\ell\in L}) &= Y\text{, setting }\mathcal{P}\coloneqq\mathcal{P}\cup\{Y\to\odot\bot\}\cup\{Y\to\odot_{k}\,\mathit{word}(S_{k})\mid k\in L\}\\
\mathit{word}(S_{1};S_{2}) &= \mathit{word}(S_{1})\,\mathit{word}(S_{2})\\
\mathit{word}(\mu x.U) &= X
\end{aligned}\]
_where, in each equation, \(Y\) is understood as a fresh non-terminal symbol, \(X\) as the non-terminal symbol corresponding to type reference \(x\), and \(\bot\) as a non-terminal symbol without productions._
Consider again the types for tree serialization in Section 1. Suppose we want to know whether \(\mathsf{SFullTree0}\stackrel{{*}}{{\rightarrow}}\mathsf{Unit} \lesssim\mathsf{STree}\stackrel{{ 1}}{{\rightarrow}}\mathsf{Unit}\). We know that the grammar generated for these types is as follows, with \(X_{0}\) and \(Y_{0}\) as their starting words.
\[\begin{aligned}
&X_{0}\to{\rightarrow_{\mathsf{d}}}X_{1} &&X_{2}\to\oplus_{\mathsf{Empty}}\varepsilon &&X_{4}\to\mathsf{Int} &&Y_{0}\to{\rightarrow_{\mathsf{d}}}Y_{1} &&Y_{1}\to\oplus\bot\\
&X_{0}\to{\rightarrow_{\mathsf{r}}}X_{5} &&X_{2}\to\oplus\bot &&X_{5}\to\mathsf{Unit} &&Y_{0}\to{\rightarrow_{\mathsf{r}}}X_{5} &&Y_{1}\to\oplus_{\mathsf{Empty}}\varepsilon\\
&X_{1}\to\oplus_{\mathsf{Node}}X_{2}X_{3}X_{2} &&X_{3}\to\mathord{!}_{\mathsf{p}}X_{4}\bot && &&Y_{0}\to{\rightarrow_{\mathsf{1}}} &&Y_{1}\to\oplus_{\mathsf{Node}}Y_{1}X_{3}Y_{1}\\
&X_{1}\to\oplus\bot &&X_{3}\to\mathord{!}_{\mathsf{c}}
\end{aligned}\]
For the rest of this section let \(\vdash T\), \(\vdash U\), \((\vec{X}_{T},\mathcal{P}^{\prime})=\mathit{grm}(T,\emptyset)\) and \((\vec{X}_{U},\mathcal{P})=\mathit{grm}(U,\mathcal{P}^{\prime})\).
**Lemma** (Soundness for grammars). _If \(\vec{X}_{T}\lesssim_{\mathcal{P}}\vec{X}_{U}\), then \(T\lesssim U\)._
**Phase 2.** The grammars generated by procedure \(\mathit{grm}\) may contain unreachable words, which can be ignored by the algorithm. Intuitively, these words correspond to communication actions that cannot be fulfilled, such as the part \(\mathord{?}\mathsf{Bool}\) in type \((\mu s.\mathord{!}\mathsf{Int};s);\mathord{?}\mathsf{Bool}\). Formally, these words appear in productions following what are known as _unnormed words_.
Let \(\vec{a}\) be a non-empty sequence of terminal symbols \(a_{1},\ldots,a_{n}\). Write \(\vec{Y}\stackrel{\vec{a}}{\longrightarrow}_{\mathcal{P}}\vec{Z}\) when \(\vec{Y}\stackrel{a_{1}}{\longrightarrow}_{\mathcal{P}}\ldots\stackrel{a_{n}}{\longrightarrow}_{\mathcal{P}}\vec{Z}\). We say that a word \(\vec{Y}\) is normed if \(\vec{Y}\stackrel{\vec{a}}{\longrightarrow}_{\mathcal{P}}\varepsilon\) for some \(\vec{a}\), and unnormed otherwise. If \(\vec{Y}\) is normed and \(\vec{a}\) is the shortest path such that \(\vec{Y}\stackrel{\vec{a}}{\longrightarrow}_{\mathcal{P}}\varepsilon\), then \(\vec{a}\) is called the minimal path of \(\vec{Y}\), and its length is the norm of \(\vec{Y}\), denoted \(|\vec{Y}|\).
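A sketch of how norms can be computed by fixpoint iteration (the encoding is our assumption); symbols that never receive an entry are unnormed, and a word's norm is the sum of the norms of its symbols.

```haskell
import qualified Data.Map as M

-- Productions Y -> a Zvec of a grammar in GNF.
type Prods = [(String, Char, [String])]

-- Minimal norms of the normed non-terminals, by fixpoint iteration.
-- A candidate norm for Y exists only once all symbols in some
-- right-hand side Zvec already have norms; min resolves alternatives.
norms :: Prods -> M.Map String Int
norms ps = go M.empty
  where
    go m =
      let m' = M.fromListWith min
                 [ (y, 1 + sum ns)
                 | (y, _, zs) <- ps
                 , Just ns <- [mapM (`M.lookup` m) zs] ]
      in if m' == m then m else go m'
```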
It is known that any unnormed word \(\vec{Y}\) is bisimilar to its concatenation with any other word, i.e., if \(\vec{Y}\) is unnormed, then \(\vec{Y}\sim_{\mathcal{P}}\vec{Y}\vec{X}\). It is also easy to show that \(\sim_{\mathcal{P}}\subseteq\lesssim_{\mathcal{P}}\), and hence that \(\vec{Y}\lesssim_{\mathcal{P}}\vec{Y}\vec{X}\). In this case, \(\vec{X}\) is said to be unreachable and can be safely removed from the grammar. We call the procedure of removing all unreachable symbols from a grammar _pruning_, and denote the pruned version of a grammar \(\mathcal{P}\) by \(\mathit{prune}(\mathcal{P})\).
**Lemma** (Pruning preserves \(\mathcal{XYZW}\)-similarity). \(\vec{X}\preceq_{\mathcal{P}}^{\mathcal{XYZW}}\vec{Y}\) iff \(\vec{X}\preceq_{\mathit{prune}(\mathcal{P})}^{\mathcal{XYZW}}\vec{Y}\).
**Phase 3.** In its third and final phase, the algorithm explores an _expansion tree_, alternating between expansion and simplification steps. An expansion tree is a tree whose nodes are sets of pairs of words, whose root is the singleton set containing the pair of starting words under test, and where every child is an _expansion_ of its parent. A branch is deemed _successful_ if it is infinite or has an empty leaf, and deemed _unsuccessful_ otherwise. The original definition of expansion ensures that the union of all nodes along a successful branch (without simplifications) constitutes a bisimulation [35]. We adapt this definition to ensure that such a union yields an \(\mathcal{XYZW}\)-simulation instead.
The \(\mathcal{XYZW}\)-expansion of a node \(N\) is defined as the minimal set \(N^{\prime}\) such that, for every pair \((\vec{X},\vec{Y})\) in \(N\), it holds that:
1. if \(\vec{X}\to a\vec{X^{\prime}}\) and \(a\in\mathcal{X}\) then \(\vec{Y}\to a\vec{Y^{\prime}}\) with \((\vec{X^{\prime}},\vec{Y^{\prime}})\in N^{\prime}\)
2. if \(\vec{Y}\to a\vec{Y^{\prime}}\) and \(a\in\mathcal{Y}\) then \(\vec{X}\to a\vec{X^{\prime}}\) with \((\vec{X^{\prime}},\vec{Y^{\prime}})\in N^{\prime}\)
3. if \(\vec{X}\to a\vec{X^{\prime}}\) and \(a\in\mathcal{Z}\) then \(\vec{Y}\to a\vec{Y^{\prime}}\) with \((\vec{Y^{\prime}},\vec{X^{\prime}})\in N^{\prime}\)
4. if \(\vec{Y}\to a\vec{Y^{\prime}}\) and \(a\in\mathcal{W}\) then \(\vec{X}\to a\vec{X^{\prime}}\) with \((\vec{Y^{\prime}},\vec{X^{\prime}})\in N^{\prime}\)
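The four clauses translate directly into code. Below is a sketch of the expansion of a single pair and of a node, where `step` is assumed to be the deterministic word-transition function from Phase 1 and `Nothing` signals that a node fails to expand; the names are our own.

```haskell
type Word' = [String]

-- One XYZW-expansion of a single pair; Nothing means a required
-- matching transition is missing, so expansion fails.
expandPair :: (Word' -> Char -> Maybe Word')
           -> [Char] -> [Char] -> [Char] -> [Char]
           -> (Word', Word') -> Maybe [(Word', Word')]
expandPair step xs ys zs ws (x, y) = sequence $
     [ (,) x' <$> step y a              | a <- xs, Just x' <- [step x a] ]  -- clause 1
  ++ [ (\x' -> (x', y')) <$> step x a   | a <- ys, Just y' <- [step y a] ]  -- clause 2
  ++ [ (\y' -> (y', x')) <$> step y a   | a <- zs, Just x' <- [step x a] ]  -- clause 3 (swapped)
  ++ [ (,) y' <$> step x a              | a <- ws, Just y' <- [step y a] ]  -- clause 4 (swapped)

-- The expansion of a node is the union of its pairs' expansions.
expandNode :: (Word' -> Char -> Maybe Word')
           -> [Char] -> [Char] -> [Char] -> [Char]
           -> [(Word', Word')] -> Maybe [(Word', Word')]
expandNode step xs ys zs ws = fmap concat . mapM (expandPair step xs ys zs ws)
```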
**Lemma 18** (Safeness property for \(\mathcal{XYZW}\)-simulation). _Given a set of productions \(\mathcal{P}\), \(\vec{X}\preceq_{\mathcal{P}}^{\mathcal{XYZW}}\vec{Y}\) iff the expansion tree rooted at \(\{(\vec{X},\vec{Y})\}\) has a successful branch._
The simplification stage consists of applying rules that safely modify the expansion tree during its construction, in an attempt to keep some branches finite. The rules are iteratively applied to each node until a fixed point is reached, at which point we can proceed with expansion. To each node \(N\) we apply three simplification rules, adapted from the equivalence algorithm [4]:
1. Reflexivity: omit pairs of the form \((\vec{X},\vec{X})\);
2. Preorder: omit pairs belonging to the least preorder containing the ancestors of \(N\);
3. Split: if \((X_{0}\vec{X},Y_{0}\vec{Y})\in N\) and \(X_{0}\) and \(Y_{0}\) are normed, then:
   * Case \(|X_{0}|\leq|Y_{0}|\): let \(\vec{a}\) be a minimal path for \(X_{0}\) and \(\vec{Z}\) the word such that \(Y_{0}\stackrel{\vec{a}}{\longrightarrow}_{\mathcal{P}}\vec{Z}\). Add a sibling node for \(N\) including pairs \((X_{0}\vec{Z},Y_{0})\) and \((\vec{X},\vec{Z}\vec{Y})\) in place of \((X_{0}\vec{X},Y_{0}\vec{Y})\);
   * Otherwise: let \(\vec{a}\) be a minimal path for \(Y_{0}\) and \(\vec{Z}\) the word such that \(X_{0}\stackrel{\vec{a}}{\longrightarrow}_{\mathcal{P}}\vec{Z}\). Add a sibling node for \(N\) including pairs \((X_{0},Y_{0}\vec{Z})\) and \((\vec{Z}\vec{X},\vec{Y})\) in place of \((X_{0}\vec{X},Y_{0}\vec{Y})\).

When a node is simplified, we keep track of the original node in a sibling, thus ensuring that along the tree we keep an "expansion-only" branch. The algorithm explores the tree by breadth-first search using a queue of node-ancestors pairs, thus avoiding getting stuck in infinite branches, and alternates between expansion and simplification steps until it terminates with **False** if all nodes fail to expand or with **True** if an empty node is reached. The following pseudo-code illustrates the procedure.

\(\mathit{subG}(\vec{X},\vec{Y},\mathcal{P})=\mathit{explore}(\mathit{singletonQueue}((\{(\vec{X},\vec{Y})\},\emptyset)),\mathcal{P})\)
**where** \(\mathit{explore}(q,\mathcal{P})=\)
  **if** \(\mathit{empty}(q)\) **then False** % all nodes failed to expand
  **else let** \((n,a)=\mathit{front}(q)\) **in**
    **if** \(\mathit{empty}(n)\) **then True** % empty node reached
    **else if** \(\mathit{hasExpansion}(n,\mathcal{P})\) % then expand, simplify and recur
    **then** \(\mathit{explore}(\mathit{simplify}(\mathit{expand}(n,\mathcal{P}),a\cup n,\mathit{dequeue}(q)),\mathcal{P})\)
    **else** \(\mathit{explore}(\mathit{dequeue}(q),\mathcal{P})\) % otherwise, discard node
**Example 19.** The \(\mathcal{XYZW}\)-expansion tree for Example 13 is illustrated in Figure 4.
Finally, function \(\mathit{subT}\) puts all the pieces of the algorithm together:
\[\mathit{subT}(T,U)=\textbf{let}\,\,(\vec{X},\mathcal{P}^{\prime})=\mathit{ grm}(T,\emptyset),(\vec{Y},\mathcal{P})=\mathit{grm}(U,\mathcal{P}^{\prime}) \textbf{ in }\mathit{subG}(\vec{X},\vec{Y},\mathit{prune}(\mathcal{P}))\]
It receives two well-formed types \(T\) and \(U\), computes their grammar and respective starting words \(\vec{X}\) and \(\vec{Y}\), prunes the productions of the grammar and, lastly, uses function \(\mathit{subG}\) to determine whether \(\vec{X}\lesssim_{\mathit{prune}(\mathcal{P})}\vec{Y}\). The following result shows that algorithm \(\mathit{subT}\) is sound with respect to the semantic subtyping relation on functional and higher-order context-free session types.
**Theorem 20** (Soundness). _If \(\mathit{subT}(T,U)\) returns **True**, then \(T\lesssim U\)._
## 5 Evaluation
We have implemented our subtyping algorithm in Haskell and integrated it in the freely available compiler for FreeST, a statically typed functional programming language featuring message-passing channels governed by context-free session types [2, 3, 6]. The FreeST compiler features a running implementation of the type equivalence algorithm of Almeida et al. [4]. With our contributions, FreeST effectively gains support for subtyping at little to no cost in performance. In this section we present an empirical study to support this claim.
We employed three test suites to evaluate the performance of our algorithm: a suite of handwritten pairs of types, a suite of randomly generated pairs of types, and a suite of handwritten FreeST programs. We focus on the last two, since they allow a more robust and realistic analysis. All data was collected on a machine featuring an Intel Core i5-6300U at 2.4GHz with 16GB of RAM.
To build our randomly generated suite we employed a type generation module, implemented using the QuickCheck library [15] and following an algorithm induced from the properties of subtyping, much like the one induced by Almeida et al. [4] from the properties of bisimilarity. It includes generators for valid and invalid subtyping pairs. We conducted our evaluation by taking the running time of the algorithm on 2000 valid pairs and 2000 invalid pairs, ranging from 2 to 730 total AST nodes, with a timeout of 30s (ensuring it terminates with either **True**, **False** or **Unknown**). The results are plotted in Figure 5(a). Despite the incompleteness of the algorithm, we encountered no false negatives, but obtained 188 timeouts. We found, as expected, that the running time increases considerably with the number of nodes. When a result was produced, valid pairs took generally longer.
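As an illustration of a generator "induced from the properties of subtyping", here is a toy sketch on a miniature type language, not FreeST's actual generation module: valid pairs are produced by construction, applying the contravariant/covariant rules at arrow types.

```haskell
import Test.QuickCheck

-- A toy type language: just Unit and functions.
data Ty = TUnit | TArrow Ty Ty deriving (Eq, Show)

-- Generate pairs (T, U) with T a subtype of U *by construction*;
-- the Int argument bounds the depth of the generated types.
genValid :: Int -> Gen (Ty, Ty)
genValid 0 = pure (TUnit, TUnit)
genValid n = oneof
  [ pure (TUnit, TUnit)
  , do (d2, d1) <- genValid (n `div` 2)  -- contravariant domains
       (r1, r2) <- genValid (n `div` 2)  -- covariant ranges
       pure (TArrow d1 r1, TArrow d2 r2) ]
```

A full generator would additionally cover choices, recursion and session constructors, and would be paired with a generator of invalid pairs, e.g. by perturbing one side of a valid pair.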
Randomly generated types allow for a robust analysis, but they typically do not reflect the types encountered by a subtyping algorithm in its most obvious practical application, a compiler. For this reason, we turn our attention to our suite of FreeST programs, comprised of 286 valid and invalid programs collected throughout the development of the FreeST language. Programs range from small examples demonstrating particular features of the language to concurrent applications simulating, for example, an FTP server.
We began by integrating the algorithm in the FreeST compiler, placing next to every call to the original algorithm [4] (henceforth \(\mathtt{equivT}\)) a call to \(\mathtt{subT}\) on the same pairs of types. We then ran each program in our suite 10 times, collecting and averaging the accumulated running time of both algorithms. We then took the difference between the average accumulated running times of \(\mathtt{subT}\) and \(\mathtt{equivT}\), obtaining an average difference of -3.85ms, with a standard deviation of 7.08ms, a minimum difference of -71.29ms and a maximum difference of 8.03ms (\(\mathtt{subT}\) performed faster, on average). Figure 5(b) illustrates this comparison by plotting against each other the accumulated running times (for clarity, those in the 20-100ms range) of both algorithms during the typechecking phase of each program.

Figure 4: An \(\mathcal{XYZW}\)-expansion tree for Example 13, exhibiting a finite successful branch.
The data collected in this evaluation suggests that replacing the original equivalence algorithm [4] with the subtyping algorithm in the FreeST typechecker generally does not incur an overhead, while providing additional expressive power for programmers.
## 6 Related work
Session types emerged as a formalism to express communication protocols and statically verify their implementations [31, 32]. Initial formulations allowed only pairwise, tail-recursive protocols, earning such types the 'binary' and 'regular' epithets. Since then, considerable efforts have been made to extend the theory of session types beyond the binary and regular realms: multiparty session types allow sessions with multiple participants [33], while context-free session types [51] and nested session types [18] allow non-regular communication patterns. Our work is centered on context-free session types, which have seen considerable development since their introduction, most notably their integration in System F [2, 47], a higher-order formulation [16], as well as proposals for kind and type inference [5, 43].
Subtyping is a standard feature of many type systems, and the literature on the topic is vast [8, 10, 13, 14, 17, 19, 37]. Its conventional interpretation, based on the notion of substitutability, originates from the work of Liskov [39]. Multiple approaches to subtyping for regular session types have been proposed, and they can be classified according to the objects they consider substitutable: channels _versus_ processes (the difference being most notable in the variance of type constructors). The earliest approach, subscribing to the substitutability of channels, is that of Gay and Hole [25]. It is also the one we follow. A later formulation, proposed by Carbone et al. [12], subscribes to the substitutability of processes. A survey of both interpretations is given by Gay [24]. The interaction between subtyping and polymorphism for regular session types, in the form of bounded quantification, has been investigated by Gay [23]. Horne and Padovani study subtyping under the linear logic interpretation of regular session types [34], showing that it preserves termination of processes.
Figure 5: Performance evaluation and comparison.

Subtyping for session types has spread beyond the regular realm. Das et al. [18] introduce subtyping for nested session types, show the problem to be undecidable and present a
sound but incomplete algorithm. In the context-free setting, the first and, to the best of our knowledge, only formulation before our work is that of Padovani [43]. It proposes a simulation-based subtyping relation, proves the undecidability of the subtyping problem and provides a sound but incomplete algorithm. This undecidability proof also applies to our system, as it possesses all the required elements: width-subtyping on choices, sequential composition and recursion. The subtyping relation proposed by Padovani contemplates neither input/output subtyping nor functional subtyping. Furthermore, its implementation relies on the subtyping features of OCaml, the implementation language. In contrast, we propose a more expressive relation, featuring input/output subtyping, as well as functional subtyping. Furthermore, we provide an also sound algorithm that is independent of the implementation language.
Our subtyping relation is based on a novel form of observational preorder, \(\mathcal{XYZW}\)-simulation. There is, as far as we know, no analogue in the literature. It is a generalization of \(\mathcal{XY}\)-simulation, introduced by Aarts and Vaandrager in the context of learning automata [1] but already known, under slightly different forms, as modal refinement [38], alternating simulation [7] and covariant-contravariant simulation [20]. The contravariance on the derivatives introduced by \(\mathcal{XYZW}\)-simulation is also prefigured in contrasimulation [48, 52], but the former uses strong transitions whereas the latter uses weak ones. There is a vast literature on other observational relations, of which Sangiorgi's book provides an overview [48].
Our algorithm decides the \(\mathcal{XYZW}\)-similarity of simple grammars [36]. It is an adaptation of the bisimilarity algorithm for simple grammars of Almeida et al. [4]. To our knowledge, these are the only running algorithms of their sort. Henry and Senizergues [29] proposed an algorithm to decide the language equivalence problem on deterministic pushdown automata. On the related topic of basic process algebra (BPA), BPA processes have been shown to be equivalent to grammars in GNF [9], of which simple grammars are a particular case. This makes results and algorithms for BPA processes applicable to grammars in GNF, and _vice-versa_. A bisimilarity algorithm for general BPA processes, of doubly-exponential complexity, has been proposed by Burkart et al. [11], while an analogous polynomial-time algorithm for the special case of normed BPA processes has been proposed by Hirschfield et al. [30].
## 7 Conclusion and future work
We have proposed an intuitive notion of subtyping for context-free session types, based on a novel form of observational preorder, \(\mathcal{XYZW}\)-simulation. This preorder inverts the direction of the simulation in the derivatives covered by its \(\mathcal{W}\) and \(\mathcal{Z}\) parameters, allowing it to handle co/contravariant features of input/output types. We take advantage of the fact that \(\mathcal{XYZW}\)-simulation generalizes bisimulation to derive a sound subtyping algorithm from an existing type equivalence algorithm.
Despite its unavoidable incompleteness, stemming from the undecidability of our notion of subtyping, our algorithm has not yielded any false negatives. Thus, we conjecture that it is partially correct: it may not halt, but, when it does, the answer is correct. We cannot, however, back this claim without a careful analysis of completeness and termination, which we leave for future work. We believe such an analysis will advance the understanding of the subtyping problem by clarifying the practical reasons for its undecidability.
As shown by Thiemann and Vasconcelos [51], support for polymorphism and polymorphic recursion is paramount in practical applications of context-free session types. Exploring the interaction between polymorphism and subtyping in the context-free setting, possibly in the form of _bounded quantification_, is therefore another avenue for future work. |
2304.05331 | State-specific ion mobilities of Lr^+ (Z = 103) in helium | Ion mobilities of Lr^+ (Z = 103) and of its lighter chemical homolog Lu^+ (Z = 71) in helium were calculated for the ground state ^1S_0 and the lowest metastable state ^3D_1. To this end we applied the multi-reference configuration interaction (MRCI) method to calculate the ion-atom interaction potentials in the different states. The Gram-Charlier approach to solving the Boltzmann equation was used to deduce the mobilities of the different electronic states, based on the calculated interaction potentials. We found that the zero-field ion mobilities are similar for the Lr^+ and Lu^+ ions. In addition, the ion mobilities of the different states are substantially different for temperatures above 100 K. The relative differences between the mobilities of the ground and excited states at room temperature are about 15% and 13% for Lu^+ and Lr^+ ions, respectively, which should be sufficiently large to enable laser resonance chromatography (LRC) of these ions. | Harry Ramanantoanina, Anastasia Borschevsky, Michael Block, Larry Viehland, Mustapha Laatiaoui | 2023-04-11T16:46:58Z | http://arxiv.org/abs/2304.05331v2 |

# State-specific ion mobilities of \(\text{Lr}^{+}\) (Z = 103) in helium
###### Abstract
Ion mobilities of \(\text{Lr}^{+}\) (\(Z=103\)) and of its lighter chemical homolog \(\text{Lu}^{+}\) (\(Z=71\)) in helium were calculated for the ground state \({}^{1}\text{S}_{0}\) and the lowest metastable state \({}^{3}\text{D}_{1}\). To this end we applied the multi-reference configuration interaction (MRCI) method to calculate the ion-atom interaction potentials in the different states. The Gram-Charlier approach to solving the Boltzmann equation was used to deduce the mobilities of the different electronic states, based on the calculated interaction potentials. We found that the zero-field ion mobilities are similar for the \(\text{Lr}^{+}\) and \(\text{Lu}^{+}\) ions. In addition, the ion mobilities of the different states are substantially different for temperatures above \(100\,\text{K}\). The relative differences between the mobilities of the ground and excited states at room temperature are about 15% and 13% for \(\text{Lu}^{+}\) and \(\text{Lr}^{+}\) ions, respectively, which should be sufficiently large to enable laser resonance chromatography (LRC) of these ions.
Keywords: Heavy and Superheavy Elements, Electronic structures, Relativistic Calculation, ion mobility. PACS: 75.40.
In this work we use the MRCI method to treat interatomic interactions; based on the obtained interaction potentials, we calculate the state-specific mobility of Lu\({}^{+}\) and Lr\({}^{+}\) ions drifting in helium gas in their ground (\({}^{1}\)S\({}_{0}\)) and lowest excited (metastable \({}^{3}\)D\({}_{1}\)) electronic states.
## II Methodology and computational details
The _ab initio_ MRCI calculations of the interaction potentials (\(V(d)\)) were performed using the DIRAC19 code [22]. The calculations were carried out in the framework of the four-component Dirac-Coulomb Hamiltonian, and the nuclei were treated within a finite-nucleus model _via_ the Gaussian charge distribution [23]. The uncontracted Gaussian-type Dyall basis sets [24; 25] of singly-augmented triple-zeta (s-aug-cv3z) quality were used for all the elements. The metal ion and the neutral helium atom were placed along the \(z\)-axis in a system of Cartesian coordinates, separated by an inter-atomic distance \(d\) that was varied from 2.0 Å to 40.0 Å for the calculation of the interaction potentials. We use the Boys-Bernardi counterpoise correction to tackle the basis set superposition error [26]: \(V(d)=E_{M^{+}-He}(d)-E_{M^{+}}(d)-E_{He}(d)\), with M = Lu and Lr. \(E_{M^{+}-He}(d)\) is the MRCI energy of the M\({}^{+}\)-He system at an inter-atomic distance \(d\). \(E_{M^{+}}(d)\) and \(E_{He}(d)\) are the energies of the systems M\({}^{+}\)-Gh and Gh-He, respectively, where the He and M atoms are replaced by a ghost atom (Gh) without charge but carrying the full basis sets of the He and M elements, respectively.
The electronic structure was obtained in two steps. In the first step, Dirac-Hartree-Fock calculations were performed using the average of configuration (AOC) type calculation. The AOC allowed us to represent the open-shell electronic structure system with 2 valence electrons that were evenly distributed over 12 valence spinors (6 Kramers pairs) of \(s\) and \(d\) atomic characters. The resulting wavefunction was used as reference for the CI calculations. In the second step, the energy levels and the spectroscopic properties were calculated using the MRCI approach, within the Kramers-restricted configuration interaction (KRCI) module in DIRAC19 [27; 28; 29; 22]. In this implementation, the KRCI calculations use the concept of generalized active space (GAS) [30], which enables MRCI calculations with single and double electron excitations for different GAS set-ups [27]. The MRCI model _a priori_ takes into consideration the dynamical correlation of the active electrons [31].
We report in Table 1 the GAS set-up together with the technical specifications that were important in the MRCI calculation. In total, we considered 4 GAS that were selectively chosen to activate 26 electrons within 21 semi-core and valence orbitals, as well as virtual orbitals with energies below 30 atomic units, i.e. 194 and 199 for the Lu\({}^{+}\)-He and Lr\({}^{+}\)-He systems, respectively. Because the total number of configuration state functions was too large, we defined the parameters \(m\) and \(q\) to control the electron excitation process that occurred at the semi-core level. These parameters were set to _m_=2 and _q_=1, which signified that double- and single-electron excitations were allowed from the selective GAS. It is noteworthy that the truncated configuration interaction method is not size-consistent [32]. We did not explicitly use the Davidson (+Q) corrections [22] to solve this problem, but we surmise that including higher-order excitations in the GAS scheme (see Table 1) helps to mitigate the size-consistency issue in the present MRCI method. Furthermore, we also employed the size-extensive Fock-space coupled cluster (FSCC) method [22] to validate the MRCI approach (_vide infra_).
To validate the MRCI results, we have also conducted calculations at the relativistic multireference FSCC level of theory. The relativistic FSCC approach is considered to be a very powerful method for the treatment of heavy atomic and small molecular systems and it is also available in DIRAC19 [22]; this method is particularly well-suited for treating systems with two valence electrons, via the sector(0,2) algorithm [33], such as the Lu\({}^{+}\)-He and Lr\({}^{+}\)-He systems investigated here.
The FSCC calculation started with the closed-shell reference electronic state, that was, in our case, the Lr\({}^{3+}\)-He and Lu\({}^{3+}\)-He systems. For the sake of comparison, the FSCC computational details (relativistic method, basis sets, treatment of the nuclei) were the same as those used in the MRCI calculations (see above). In total, 60 and 74 core and semi-core electrons of the Lu\({}^{+}\) and Lr\({}^{+}\) ions, respectively, plus 2 He 1\(s\) electrons, were correlated. Virtual orbitals up to energies of 30 atomic units were also included in the correlation space. Then, two electrons were added to the selected virtual orbitals (the model space) to obtain the singly ionized Lr\({}^{+}\)-He and Lu\({}^{+}\)-He systems, for which the appropriate coupled cluster equations were solved in an iterative way. Convergence difficulties were lifted by complementing the FSCC method with the intermediate Hamiltonian approach [33].
| GAS space | Accumulated electrons, min\(^{b}\) | Accumulated electrons, max | Number of Kramers pairs | Characters\(^{a}\) |
| --- | --- | --- | --- | --- |
| 1 | \(8-m\) | 8 | 4 | \((n-1)s\), \((n-1)p\) |
| 2 | \(24-q\) | 24 | 8 | \((n-2)f\), He \(1s\) |
| 3 | 24 | 26 | 9 | \(ns\), \((n-1)d\), \(np\) |
| 4 | 26 | 26 | \(\leq\) 30 a.u. | Virtual |

\(^{a}\)For Lu\({}^{+}\)-He and Lr\({}^{+}\)-He, n = 6 and 7, respectively.
\(^{b}\)\(m\) and \(q\) are variables that control the electron excitation process attributed to the selective GAS.

Table 1: Specification of the generalized active space (GAS) scheme used in the calculations of the Lu\({}^{+}\)-He and Lr\({}^{+}\)-He systems. See text for details.
We described the electronic states that correspond to the interaction potential between the heavy metal ions and the neutral He atom by means of the quantum numbers \(J\) and \(\Omega\), representing the total angular momentum of the metal ions and its projection onto the inter-atomic axis, respectively. In particular, we labelled the electronic states using the Hund's case (c) notation, i.e. \(\Omega_{J}^{\sigma}\), for consistency with conventional practice for the calculations of ion mobility and transport properties (\(\sigma=+\) or \(-\) is an additional notation proper to the linear \(C_{\infty v}\) point group that represents the invariability of the wavefunction with respect to the \(\sigma_{v}\) symmetry operator) [18; 19]. Thus, the ground state \({}^{1}\)S\({}_{0}\) of the free Lr\({}^{+}\) and Lu\({}^{+}\) ions gives rise to the \(\Omega=0\) state in the Lr\({}^{+}\)-He and Lu\({}^{+}\)-He systems, i.e. \(X0^{+}\). The metastable \({}^{3}\)D\({}_{1}\) state, on the other hand, transforms to the non-degenerate \(0_{1}^{-}\) and doubly-degenerate \(1_{1}\) states, since \(\Omega=0\) and \(\pm 1\), respectively. Using the same notation, the next excited states \({}^{3}\)D\({}_{2}\) and \({}^{3}\)D\({}_{3}\) transform to the non-degenerate \(0_{2}^{+}\) and \(0_{3}^{-}\) as well as the doubly-degenerate \(1_{2}\), \(1_{3}\), \(2_{2}\), \(2_{3}\), and \(3_{3}\) states.
The ion mobilities were calculated from the ion-atom interaction potentials by solving the Boltzmann equation with the Gram-Charlier approach [34]. To this end we used the program PC [35], which delivers the momentum transfer and other transport cross sections as a function of the collision energy. From this we then calculated the reduced ion mobility \(K_{0}\) either as a function of temperature at a given electric-field-to-gas-number density (\(E/n_{0}\)) or as a function of \(E/n_{0}\) at different temperatures, utilizing the program VARY [36]. Here \(K_{0}\) is the ion mobility \(K\) normalized to the standard pressure \(P_{0}\) and the standard temperature \(T_{0}\) according to \(K_{0}=K\frac{P}{P_{0}}\frac{T_{0}}{T}\). Beyond \(d=40\,\)Å, the interaction potentials were adjusted to asymptotically mimic the long-range induced ion-dipole attraction given by \(V_{pol}(d)=-e^{2}\alpha_{p}/(2(4\pi\epsilon_{0})^{2}d^{4})\), with the static average dipole polarizability of helium of \(\alpha_{p}=0.205\,\)Å\({}^{3}\)[37].
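As a numeric aside, and a minimal sketch rather than the PC/VARY codes used here, the normalization and the asymptotic potential can be written as follows; we restate \(V_{pol}\) in terms of the polarizability volume with \(e^{2}/(4\pi\epsilon_{0})=14.3996\) eV Å so that it evaluates directly in eV and Å, a bookkeeping assumption on our part.

```haskell
-- K0 = K * (P/P0) * (T0/T), with P0 = 101325 Pa and T0 = 273.15 K.
reducedMobility :: Double -> Double -> Double -> Double
reducedMobility k p t = k * (p / 101325) * (273.15 / t)

-- Long-range polarization potential, rewritten with the polarizability
-- volume: V_pol(d) = -(e^2/(4 pi eps0)) * alpha_p / (2 d^4).
-- Input d in Angstrom, output in eV; alpha_p = 0.205 A^3 for helium.
vPol :: Double -> Double
vPol d = -14.3996 * 0.205 / (2 * d ^ (4 :: Int))

main :: IO ()
main = print (vPol 4.0)
-- ~ -5.8e-3 eV ~ -46 cm^-1: at d ~ 4 A this is of the same order as the
-- well depths in Tables 3 and 4, a useful consistency check.
```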
For the \({}^{1}\)S\({}_{0}\) ground state, there is only one potential per ionic species and thus a single mobility curve was obtained for each ion. For the \({}^{3}\)D\({}_{1}\) excited state an isotropic averaged potential was first used to calculate the isotropic ion mobility \(K_{0}^{iso}\), which we then compared with the averaged mobility \(K_{0}^{av}\)[18]. This latter was obtained by averaging the mobilities from interaction potentials for each \(\Omega\) component with their statistical weights according to
\[K_{0}^{av}(T)=[K_{0}^{0_{1}^{-}}(T)+2K_{0}^{1_{1}}(T)]/3. \tag{1}\]
In the Supplementary Material, Figure S1 shows the calculated isotropic ion mobilities together with the averaged ion mobilities corresponding to the \({}^{3}\)D\({}_{1}\) state for both Lu\({}^{+}\)-He and Lr\({}^{+}\)-He systems.
## III Results and Discussion
Table 2 lists the calculated energies of the ground (\({}^{1}\)S) and the metastable (\({}^{3}\)D) electronic states of the Lu\({}^{+}\) and Lr\({}^{+}\) obtained from the MRCI and the FSCC calculations. To calculate these energies following the models described in the Methodology section, we set the distance between the metal ion and the neutral atom to 40.0 A, which is large enough to make sure that the energy difference between the multiple components of the free ion multiplets are negligible, so that they are comparable to atomic energies. For Lu\({}^{+}\), experimental data are also listed for comparison. The two theoretical models at hand yield similar results, with good agreement with the experiments and previous theoretical data [21; 16]. The good agreement with both the experimental values and the FSCC results confirms the suitability of MRCI for electronic structure calculations of heavy metal ions.
The calculated interaction potentials of the ground and the metastable electronic states of Lu\({}^{+}\)-He and Lr\({}^{+}\)-He are shown in Figure 1. Since the MRCI and FSCC calculations are both based on the relativistic Dirac-Coulomb Hamiltonian and employ the same basis sets, the discrepancies between the two sets of calculations can be interpreted in terms of the treatment of electron correlation. There is a good agreement between the FSCC and MRCI data of the ground-state X0\({}^{+}\) (\({}^{1}\)S\({}_{0}\)) for both Lr\({}^{+}\)-He and Lu\({}^{+}\)-He systems. For the latter, the ground-state interaction potentials are also comparable with the scalar-relativistic potentials reported in Refs.[18; 19]. However, we obtain larger discrepancies between the two methods for the metastable states, namely in the \(1_{1}\) (\({}^{3}\)D\({}_{1}\)).
The _ab initio_ interaction potentials are fitted by the Morse potential energy function [39],
\[V(d)=D_{e}(e^{-2\alpha(d-d_{min})}-2e^{-\alpha(d-d_{min})}) \tag{2}\]
to derive the equilibrium inter-atomic distance (\(d_{min}\)), the dissociation energy (\(D_{e}\)), and the range parameter \(\alpha\). These are listed in Table 3 and Table 4.
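For reference, a minimal sketch of Equation 2, evaluated with the Lu\({}^{+}\)-He ground-state MRCI parameters of Table 3 as example inputs; this is our illustration only.

```haskell
-- V(d) = De * (exp(-2 a (d - dmin)) - 2 exp(-a (d - dmin))), Equation 2.
morse :: Double -> Double -> Double -> Double -> Double
morse de a dmin d = de * (exp (-2 * a * x) - 2 * exp (-a * x))
  where x = d - dmin

main :: IO ()
main = print (morse 45.23 1.213 4.224 4.224)
-- evaluates to -45.23: at d = dmin the potential reaches -De (here in cm^-1)
```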
| \(^{2S+1}L_{J}\) | \(\Omega_{J}^{\sigma}\) | Lu\(^{+}\) FSCC | Lu\(^{+}\) MRCI | Lu\(^{+}\) Exp. | Lr\(^{+}\) FSCC | Lr\(^{+}\) MRCI |
| --- | --- | --- | --- | --- | --- | --- |
| \(^{1}\)S\(_{0}\) | \(X0^{+}\) | 0 | 0 | 0 | 0 | 0 |
| \(^{3}\)D\(_{1}\) | \(0_{1}^{-}+1_{1}\) | 12354 | 12184 | 11796 | 20265 | 21751 |
| \(^{3}\)D\(_{2}\) | \(0_{2}^{+}+1_{2}+2_{2}\) | 12985 | 12642 | 12435 | 21623 | 22442 |
| \(^{3}\)D\(_{3}\) | \(0_{3}^{-}+1_{3}+2_{3}+3_{3}\) | 14702 | 13881 | 14199 | 26210 | 24708 |

Table 2: Calculated low-lying energy levels of Lu\({}^{+}\)-He and Lr\({}^{+}\)-He ions (in cm\({}^{-1}\)) obtained from the FSCC and MRCI calculations (the separation distance between the metal Lu\({}^{+}\)/Lr\({}^{+}\) ions and He is set to 40.0 Å), showing the energy of the ground state \({}^{1}\)S and metastable states \({}^{3}\)D of the metal ions. For Lu\({}^{+}\)-He, the experimental energy values reported for the Lu\({}^{+}\) ion are also listed for comparison.
The energy splitting between the \(1_{1}\) and \(0_{1}^{-}\) (\({}^{3}\)D\({}_{1}\)) metastable states in the short-range interaction is larger in the FSCC data. Thus we see a slight shift of the equilibrium distances (\(d_{min}\)) to smaller values from the MRCI to the FSCC \(1_{1}\) (\({}^{3}\)D\({}_{1}\)) interaction potentials: 0.06 Å and 0.12 Å for the Lu\({}^{+}\)-He and Lr\({}^{+}\)-He systems, respectively. We also see a slightly higher FSCC dissociation energy of this state in Lr\({}^{+}\)-He.
The \(d_{min}\) values calculated by the two models follow the same trend in the Lu\({}^{+}\)-He (Table 3) and Lr\({}^{+}\)-He (Table 4) systems: \(d_{min}\) associated with the X0\({}^{+}\) (\({}^{1}\)S\({}_{0}\)) ground state is larger compared with the two \(1_{1}\) and \(0_{1}^{-}\) (\({}^{3}\)D\({}_{1}\)) metastable states. We can compare the spectroscopic parameters for Lu\({}^{+}\)-He in both the ground and metastable states (cf. Table 3), as well as Lr\({}^{+}\)-He in the ground state (cf. Table 4), to earlier coupled cluster studies [18; 19]. We find that the present \(d_{min}\) values (both the MRCI and the FSCC values) are slightly larger (+2.5%) than the results reported in the aforementioned Refs. [18; 19] for the ground and the metastable states.
The dissociation energy (\(D_{e}\)) calculated within the MRCI and the FSCC models also follows a similar trend in the two systems. The metastable 0\({}_{1}^{-}\) (\({}^{3}\)D\({}_{1}\)) shows the weakest interaction, whereas its counterpart 1\({}_{1}\) (\({}^{3}\)D\({}_{1}\)) state has the strongest. This is in contrast to the earlier predictions (Table 3), where the calculated \(D_{e}\) for the 0\({}_{1}^{-}\) and 1\({}_{1}\) (\({}^{3}\)D\({}_{1}\)) metastable states in Lu\({}^{+}\)-He are very close (with an energy difference of 2.3 cm\({}^{-1}\) only). This energy difference is 6.94 cm\({}^{-1}\) and 13.1 cm\({}^{-1}\), respectively, for the present MRCI and FSCC results (Table 3). This disagreement with previous calculations could be due to the difference in the treatment of relativistic effects. We used the four-component DCHF-based model as a basis of our calculations, whereas in Ref. [19], the spin-orbit coupling is treated within perturbation theory based on a scalar relativistic electronic structure. However, the many different computational parameters (choice of basis set and the correlation space and the treatment of correlation) make direct comparison between these values difficult. Overall, the bonding interaction between the metal ion and the helium atom is shown to be very weak, regardless of the theoretical approach.
The ion mobilities were calculated based on the interaction potentials obtained from the MRCI calculations. Figure 2 shows an overview of the obtained zero-field mobilities of Lu\({}^{+}\) and Lr\({}^{+}\) in helium as function of the gas temperature. We have also calculated the ion mobilities based on the FSCC interaction potentials. In the Supplementary Material (Figure S2), a comparison between mobilities obtained with the MRCI and FSCC interaction potentials is also presented. In general, the mobilities of the ground states of the two ions obtained based on the different _ab initio_ approaches agree well with each other. In addition, a good agreement of the Lu\({}^{+}\)-He and Lr\({}^{+}\)-He ground-state mobilities is achieved with those reported in Refs. [18; 19].
The principal difference between the two methods concerns the \(\Omega=1\) components; the MRCI calculations result in smaller dissociation energies at larger equilibrium distances (see Table 3 and Table 4) and thus in smaller state-specific reduced mobilities compared with the FSCC-based predictions, see also the Supplementary Material Figure S2. For brevity we discuss here only the mobility results obtained from predictions based on the MRCI interaction potentials; the minor differences between the MRCI and FSCC results mean that the conclusions of this work are not dependent on the computational approach.
The mobilities of the two systems are distinct for the different ionic states in a wide temperature range and converge towards the polarization limit \(K_{pol}=(13.876/\alpha_{p}^{1/2})[(M_{He}+M_{ion})/M_{He}M_{ion}]^{1/2}\) at about 15.5 cm\({}^{2}\)/Vs in the mK temperature regime [41]. Here, the number 13.876 is obtained when \(\alpha_{p}\) is given in units of Å\({}^{3}\) and the masses \(M\) are in atomic mass units. The mobility increases with increasing temperature to reach a local maximum at 100 K for the ions in the \({}^{1}\)S\({}_{0}\) ground state (X0\({}^{+}\)) before it decreases below 13 cm\({}^{2}\)/Vs at temperatures beyond 1000 K. The predicted mobility for Lu\({}^{+}\) in the ground state is found to be in excellent agreement with the only available experimental data, reported for a temperature of 295 K [40].
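The polarization limit can be checked directly; below is a short sketch of ours, in which the Lr\({}^{+}\) mass is assumed to correspond to mass number 255.

```haskell
-- K_pol = (13.876 / sqrt(alpha_p)) * sqrt((M_He + M_ion) / (M_He * M_ion)),
-- alpha_p in A^3, masses in atomic mass units, result in cm^2/Vs.
kPol :: Double -> Double
kPol mIon = (13.876 / sqrt 0.205) * sqrt ((mHe + mIon) / (mHe * mIon))
  where mHe = 4.0026

main :: IO ()
main = mapM_ (print . kPol) [174.97, 255]
-- 174.97: Lu atomic weight; 255: assumed Lr mass number.
-- Both evaluate to ~15.4-15.5 cm^2/Vs, the common low-temperature limit.
```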
| Lr\(^{+}\)-He | \(\Omega_{J}^{\sigma}\) | MRCI \(D_{e}\) | MRCI \(d_{min}\) | MRCI \(\alpha\) | FSCC \(D_{e}\) | FSCC \(d_{min}\) | FSCC \(\alpha\) | Ref.\(^{a}\) \(D_{e}\) | Ref.\(^{a}\) \(d_{min}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(^{1}\)S\(_{0}\) | \(X0^{+}\) | 46.78 | 4.134 | 1.192 | 48.59 | 4.179 | 1.205 | 52 | 4.08 |
| \(^{3}\)D\(_{1}\) | \(0_{1}^{-}\) | 39.49 | 4.177 | 1.118 | 43.92 | 4.108 | 1.174 | | |
| \(^{3}\)D\(_{1}\) | \(1_{1}\) | 51.84 | 3.880 | 1.094 | 72.37 | 3.756 | 1.111 | | |

\(^{a}\)taken from ref. [18]

Table 4: Calculated spectroscopic dissociation energy \(D_{e}\) (in cm\({}^{-1}\)), equilibrium distance \(d_{min}\) (in Å), and range parameter \(\alpha\) (in 1/Å) of the interaction potential of Lr\({}^{+}\)-He corresponding to the Lr\({}^{+}\) \({}^{1}\)S\({}_{0}\) and \({}^{3}\)D\({}_{1}\) electronic states, obtained by a mathematical fit to Equation 2, and compared with previous calculations (Ref.).
| Lu\(^{+}\)-He | \(\Omega_{J}^{\sigma}\) | MRCI \(D_{e}\) | MRCI \(d_{min}\) | MRCI \(\alpha\) | FSCC \(D_{e}\) | FSCC \(d_{min}\) | FSCC \(\alpha\) | Ref.\(^{a}\) \(D_{e}\) | Ref.\(^{a}\) \(d_{min}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(^{1}\)S\(_{0}\) | \(X0^{+}\) | 45.23 | 4.224 | 1.213 | 42.71 | 4.264 | 1.197 | 47.3 | 4.17 |
| \(^{3}\)D\(_{1}\) | \(0_{1}^{-}\) | 42.13 | 4.171 | 1.133 | 39.16 | 4.187 | 1.123 | 49.9 | 4.11 |
| \(^{3}\)D\(_{1}\) | \(1_{1}\) | 49.07 | 4.006 | 1.114 | 52.26 | 3.948 | 1.075 | 52.2 | 3.91 |

\(^{a}\)taken from ref. [19]

Table 3: Calculated spectroscopic dissociation energy \(D_{e}\) (in cm\({}^{-1}\)), equilibrium distance \(d_{min}\) (in Å), and range parameter \(\alpha\) (in 1/Å) of the interaction potential of Lu\({}^{+}\)-He corresponding to the Lu\({}^{+}\) \({}^{1}\)S\({}_{0}\) and \({}^{3}\)D\({}_{1}\) electronic states, obtained by a mathematical fit to Equation 2, and compared with previous calculations (Ref.).
For the ions in the \({}^{3}\)D\({}_{1}\) states we calculated the mobilities for each of the \(\Omega\) components, \(0_{1}^{-}\) and \(1_{1}\), to deduce the average mobility \(K_{0}^{av}\) according to Equation 1. The resulting curves are indicated with dashed lines in Figure 2. Although these average mobilities are deemed to be more accurate than the isotropic ones [18], we found that both mobilities are in excellent agreement with each other for the two ion species, Lu\({}^{+}\) and Lr\({}^{+}\), in the \({}^{3}\)D\({}_{1}\) states; see the Supplementary Material Figure S1. Similarly to the ground-state mobilities, the average excited-state mobility increases with increasing temperature to reach a maximum at about 105 K for both ions before it decreases towards higher temperatures. Noteworthy, however, is the difference in zero-field mobility between the ground state and the average mobility of the excited state: e.g., 14.8% and 13% at room temperature for the Lu\({}^{+}\)-He and Lr\({}^{+}\)-He systems, respectively, with a tendency of increasing relative differences towards higher temperatures.
In Figure 3 we show the reduced mobilities of the different states of Lu\({}^{+}\) and Lr\({}^{+}\) obtained based on the MRCI interaction potentials for different gas temperatures as a function of the electric-field-to-gas-number density \(E/n_{0}\) (the so-called reduced electric field, which is given in units of Townsend, 1 Td\(=10^{-17}\) Vcm\({}^{2}\)). In the Supplementary Material, Figure S3 shows a comparison between the ion mobilities calculated with the MRCI and FSCC interaction potentials. In general, the mobility is roughly constant at reduced fields below 10 Td, such that it depends mainly on the gas temperature, with a tendency of being larger at lower temperatures (down to 100 K). As the reduced field increases, the ion mobility decreases almost exponentially to values below 12 cm\({}^{2}\)/Vs at \(E/n_{0}\geq 100\) Td for the ions in the ground states. It nearly decouples from the temperature dependency at extremely large \(E/n_{0}\) values as the energy gained from the electric field dominates the effective ion temperature. The average mobility for the excited states exhibits a similar behavior as a function of the reduced field. Although the 0-orbital-projection components have rather small mobilities, close to those of the ground states, the average mobilities of the excited states are dominated by the \(\Omega=1\) component, due to its higher multiplicity (Equation 1), see also the Supplementary Material Figure S4. Similar reduced mobilities were obtained for both investigated ionic species, Lu\({}^{+}\) and Lr\({}^{+}\), in the \({}^{3}\)D\({}_{1}\) state, as can be seen in Figure 3.

Figure 1: Graphical representations of the MRCI interaction potentials of the Lu\({}^{+}\)-He (A) and Lr\({}^{+}\)-He (B) systems, compared with the FSCC interaction potentials of Lu\({}^{+}\)-He (C) and Lr\({}^{+}\)-He (D). In each panel, the ground X0\({}^{+}\) (\({}^{1}\)S\({}_{0}\)) (solid black curve) and the two low-lying metastable \(1_{1}\) (\({}^{3}\)D\({}_{1}\)) (red) and \(0_{1}^{-}\) (\({}^{3}\)D\({}_{1}\)) (green) states are represented, together with the calculated average interaction potential for the metastable states (dashed grey curve). Note that for clarity the potentials are normalized to the same dissociation limit.
In order to reach significant time resolution in future LRC applications the relative drift time differences, due to the relative mobility differences, have to be maximized [19]. Our results suggest a similar trend for both investigated ionic species, see Figure 4. At gas temperatures above 100 K the relative mobility differences for the ground and excited states are between 7% and 17% at reduced fields below 200 Td, with a tendency of becoming larger for higher temperatures. These differences stay above 12% at room temperature and become rather insensitive to the reduced field at 400 K.
Figure 3: Reduced mobilities of the Lu\({}^{+}\)-He (A) and Lr\({}^{+}\)-He (B) systems as a function of \(E/n_{0}\) and at selected temperatures, derived from the MRCI interaction potential, corresponding to the ground X0\({}^{+}\) (\({}^{1}S_{0}\)) (solid lines) and the metastable \({}^{3}\)D\({}_{1}\) (dashed lines) states. For the metastable states, the depicted mobility curves correspond to the average calculated mobility of the \(1_{1}\) and the \(0_{1}^{-}\) electronic states obtained using Equation 1 (see also the Supplementary Material, Figure S4).
Figure 2: Reduced zero-field mobilities of the Lu\({}^{+}\)-He (A) and Lr\({}^{+}\)-He (B) systems in the ground X0\({}^{+}\) (\({}^{1}S_{0}\)) (in black), as well as in the metastable \(1_{1}\) (\({}^{3}\)D\({}_{1}\)) (in red) and the \(0_{1}^{-}\) (\({}^{3}\)D\({}_{1}\)) (in green) states as a function of the temperature, derived from the MRCI interaction potential. The calculated average mobilities for the metastable states (\({}^{3}\)D\({}_{1}\)) are also depicted (in grey); the data point that represents the experimental result for the ground state of the Lu\({}^{+}\) ion is also shown (orange dot and error bar) [40].
## IV Summary and conclusion
We have carried out MRCI and FSCC calculations of the interaction potentials of the \(\mathrm{L}\mathrm{r}^{+}\)-He and \(\mathrm{L}\mathrm{u}^{+}\)-He systems. Based on these potentials we predicted the ion mobility of \(\mathrm{L}\mathrm{u}^{+}\) and \(\mathrm{L}\mathrm{r}^{+}\) ions in helium gas, and found the two approaches to be in excellent agreement, justifying the use of either method in future investigations. In particular, the MRCI-based method for calculating interaction potentials and ion mobilities will be relevant for the study of the \(\mathrm{R}\mathrm{f}^{+}\)-He systems, which will be the next step of our theoretical work.
The predicted ion mobility value for \(\mathrm{Lu}^{+}\) in the ground state at room temperature is in striking agreement with the experimentally reported one [42]. Similar accuracy can be expected for the room-temperature mobility of \(\mathrm{Lr}^{+}\), at least in its ground state, due to the similarities of their electronic structures. As long as the reduced fields are below 200 Td, we expect the relative drift time differences to be above 7%; the higher the gas temperature, the larger these differences become. Laser resonance chromatography on both ionic species should thus be feasible in terms of time resolution already at room temperature, i.e., without involving sophisticated cryogenic ion mobility spectrometers. Since quenching of states may become strong at elevated effective ion temperatures [19], one may prefer to apply moderate reduced fields below 40 Td and room-temperature gas environments in order to maintain state populations in the envisaged LRC experiments [43]. In such a case we expect relative drift time differences between ground and metastable states of about 15% and 13% for \(\mathrm{Lu}^{+}\) and \(\mathrm{Lr}^{+}\), respectively, which should enable disentangling the different drift behaviours upon resonant excitations.
## V Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 819957). We also gratefully acknowledge high performance computing (HPC) support, time and infrastructure provided by: SURFsara HPC at Snellius via the HPC-Europa3 programme, the Center for Information Technology of the University of Groningen (Peregrine), the Johannes Gutenberg University of Mainz (Mogon), and the HPC group of GSI. We thank Alexei Buchachenko for useful discussions.
|
2307.13003 | Quantum GravitoElectromagnetic Dynamics | We propose a renormalisable quantum theory of gravity (QGED) based on the
standard BRST quantisation used to quantise the Yang-Mills theory. The
BRST-invariant Lagrangian of the gravitationally interacting $U(1)$-gauge
theory, including gauge fixing and ghost parts, is provided. From this
Lagrangian, we extract a set of Feynman rules in the local inertial frame where
gravity vanishes locally. Utilising Feynman rules of the QGED prepared here, we
construct all renormalisation constants and show that the theory is
perturbatively renormalisable in one-loop order. We replace infinite-valued
bare objects in the bare Lagrangian with experimentally measured ones. In
addition to standard QED parameters, we show that the gravitational coupling
constant is measurable experimentally.
We also discuss a running effect of the gravitational coupling constant and
the perturbative estimation of the Hawking radiation as examples of the
perturbative QGED. | Yoshimasa Kurihara | 2023-07-24T06:23:46Z | http://arxiv.org/abs/2307.13003v2 | # Quantum GravitoElectromagnetic Dynamics
###### Abstract
We propose a renormalisable quantum theory of gravity (QGED) based on the standard BRST quantisation used to quantise the Yang-Mills theory. The BRST-invariant Lagrangian of the gravitationally interacting \(U(1)\)-gauge theory, including gauge fixing and ghost parts, is provided. From this Lagrangian, we extract a set of Feynman rules in the local inertial frame where gravity vanishes locally. Utilising Feynman rules of the QGED prepared here, we construct all renormalisation constants and show that the theory is perturbatively renormalisable in one-loop order. We replace infinite-valued bare objects in the bare Lagrangian with experimentally measured ones. In addition to standard QED parameters, we show that the gravitational coupling constant is measurable experimentally.
We also discuss a running effect of the gravitational coupling constant and the perturbative estimation of the Hawking radiation as examples of the perturbative QGED.
###### Contents
* 1 Introduction
* 2 Mathematical preliminaries
* 2.1 Space-time manifold
* 2.2 Principal bundles
* 2.3 Connections and curvatures
* 2.3.1 Standard definition
* 2.3.2 Coupling constant
* 2.4 Spinor-gauge bundle
* 3 QGED Lagrangian
* 3.1 Bare Lagrangian
* 3.2 BRST transformation
* 3.2.1 Yang-Mills theory
* 3.2.2 General relativity
* 3.3 Ghost and gauge fixing Lagrangian
* 3.3.1 Yang-Mills theory
* 3.3.2 General relativity
* 4 QGED Feynman rules
* 4.1 Green's function of vierbein and spin-connection fields
* 4.2 External field
* 4.2.1 Electromagnetic part
* 4.2.2 Gravitational part:
* 4.3 Vertices
* 4.4 Propagators
* 4.5 Integration measure and S-matrix
* 5 Renormalisation of the QGED
* 5.1 Power counting
* 5.2 One-loop renormalisation
* 5.2.1 Renormalisation constants
* 5.2.2 Photon and spin-connection renormalisation
* 5.2.3 Electron field and mass renormalisation
* 5.2.4 Coupling constant renormalisation
* 5.2.5 Determination of coupling constants and electron mass
* 5.3 Gravitational running coupling and the Landau pole
* 6 Particle creation in the strong field
* 6.1 Schwinger effect in the QED
* 6.2 Hawking radiation in the QGED
* 7 Summary
* A Proof of nilpotent
* B Measurement of the gravitational coupling constant
## 1 Introduction
For the microscopic aspects of Nature, the standard theory of particle physics based on quantum field theory has extensive experimental support. On the other hand, general relativity is among the fundamental theories concerning the large-scale space-time structure of the Universe, and many experiments have established its correctness. Thus, our understanding of Nature covers a wide range of length- and timescales, from the Universe's large-scale structure to the microscopic behaviour of subatomic particles. However, the two fundamental theories are mutually inconsistent: the former is a quantum theory, and the latter is classical. Owing to the uncertainty principle, the fundamental physical theory must be a quantum theory; thus, constructing a quantum theory of gravity is one of the fundamental goals of modern physics.
We understand the _quantum theory of gravity_ as a theory that describes the behaviour of four-dimensional space-time (the metric tensor) in regions where the uncertainty principle is essential, and as a theory consistent with well-established general relativity at large space-time scales. Moreover, a theory that can provide experimentally measurable predictions is desirable. Immediately after the establishment of general relativity and quantum mechanics during the 1920s, the development of quantum gravity began in the 1930s. For a detailed history, see Refs.[1, 2] and references therein. It is commonly recognised that the perturbative expansion of general relativity in four-dimensional space-time is not renormalisable. Although attempts to construct a perturbatively renormalisable theory of general relativity have failed, its non-existence has yet to be proven. The current author discussed the renormalisability of four-dimensional general relativity in [3, 4, 5]. On the other hand, the quantisation of the Yang-Mills theory in flat space-time is well established, and the theory is proven to be renormalisable in all orders of the perturbative expansion. However, this is not proven in curved space-time and is not trivial. A primary objective of this report is to construct renormalisable quantum gravity and the Yang-Mills theory simultaneously.
General relativity is a kind of gauge theory, first developed in a pioneering work by Utiyama[6] (see also Ref.[7]). Common aspects between general relativity and the Yang-Mills theory may hint at a renormalisable perturbative treatment of general relativity. We summarise the correspondence of mathematical properties between them in Table-1. Comparing the two gauge theories, it is clear that the spin-connection (gauge potential) and the vierbein field (section) are appropriate targets of quantisation[8]. Moreover, the gravitational coupling constant must be included in the Lagrangian, and it must be renormalised, even if it is unity valued after renormalisation, which was not considered in previous studies. The renormalisability of the Yang-Mills theory must be ensured, including the gravitational fields \(\mathcal{E}\) and \(\omega\). In the curved space-time, the Yang-Mills Lagrangian also has an interaction term of the Dirac spinor \(\psi\), the spin-connection \(\omega\) and the vierbein \(\mathcal{E}\)[9]. In this report, we develop quantum general relativity quantised together with the U(1)-gauge theory, namely the "Quantum GravitoElectromagnetic Dynamics (QGED)".
The QGED does not quantise space-time itself; thus, the space-time coordinates \(x^{\mu}\) are not quantum operators but classical continuous objects[10, 11]. The subject of quantisation is the vierbein field (equivalently, the metric tensor \(g^{(c)}_{\,\mu\nu}(x)\)), which is obtained as a solution of the Einstein equation. In classical general relativity, the geometrical metric tensor \(g^{(g)}_{\,\mu\nu}(x)\) is given by the solution of the classical Einstein equation, such that \(g^{(c)}_{\,\mu\nu}(x)=g^{(g)}_{\,\mu\nu}(x)\); this is Einstein's equivalence principle. This relation is not simply true at the quantum level, and the geometrical metric tensor is instead given as the expectation value of the quantum metric tensor, \(g^{(g)}_{\,\mu\nu}(x)=\langle g^{(q)}_{\,\mu\nu}(x)\rangle\).
This report is organised as follows: After this introduction section, **section 2** provides mathematical preliminaries describing the geometrical backgrounds of general relativity and the Yang-Mills theory based on Ref.[12]. We introduce a gravitational coupling constant in the covariant differential concerning the local \(SO(1,3)\) group. Although this method is not standard in general relativity, it is commonly utilised in the Yang-Mills theory. **Section 3** introduces the classical (un-renormalised) QGED Lagrangian consisting of the pure and interaction Lagrangians for general relativity and the Yang-Mills theory. The interaction Lagrangian includes a gravitational interaction between a fermion and a gravitational gauge boson. In addition, we have to include a gauge fixing and ghost Lagrangian to quantise this system. In this section, we give all necessary Lagrangians and show their BRST invariance, ensuring the quantisation of the gauge theory. From the QGED Lagrangian given in **section 3**, **section 4** extracts Feynman rules for the perturbative calculation of amplitudes. Subsequently, **section 5** develops the renormalised QGED using the Feynman rules given in **section 4**. We prepare all necessary renormalisation constants and show that all ultraviolet divergences are absorbed in a finite number of renormalisation constants at the one-loop level. Consequently, the renormalisation-group equation provides the energy-scale dependence of the effective gravitational coupling (a running gravitational coupling). **Section 6** provides an application of the QGED, i.e., the Hawking radiation from the Schwarzschild black hole in contrast with the Schwinger effect of the QED. Finally, the summary section summarises the main results of this report and discusses essential aspects of the QGED. In addition, we provide two appendices: a proof of nilpotency for all fields appearing in the Lagrangian in **Appendix A**, and a possible method to measure the gravitational coupling constant in particle physics experiments in **Appendix B**.
We use the following physical units in this study: We set the speed of light and the vacuum permittivity to unity as \(c=1\) and \(\epsilon_{0}=1\). Einstein's constant of gravitation, \(\kappa_{\rm E}=G_{\rm N}/8\pi\), and the reduced Planck constant, \(\hbar=h/2\pi\), are written explicitly, where \(G_{\rm N}\) is the Newtonian constant of gravitation. We define the elementary charge, \(e\), appearing in the Lagrangian as a dimensionless object, and the fine-structure constant is \(\alpha=e^{2}/4\pi\). In these units, the physical dimensions of the fundamental constants are \([\hbar\,G_{\rm N}]_{\rm p.d.}=L^{2}=T^{2}\) and \([\hbar/G_{\rm N}]_{\rm p.d.}=E^{2}=M^{2}\), where \(L\), \(T\), \(E\) and \(M\) are, respectively, the length, time, energy and mass dimensions. Here, \([\bullet]_{\rm p.d.}\) gives the physical dimension of the quantity \(\bullet\). The Planck mass \(m_{\rm p}\) and Planck length \(l_{\rm p}\) are respectively defined as \(m_{\rm p}:=\sqrt{\hbar/G_{\rm N}}\) and \(l_{\rm p}:=\sqrt{\hbar G_{\rm N}}\). We also introduce \(L_{\rm p}=\sqrt{2\hbar\kappa_{\rm E}}\).
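For orientation, the Planck-scale quantities defined above can be evaluated numerically. The short sketch below is not part of the formalism; it simply restores the factors of \(c\) (set to unity in the text) to obtain SI values from the CODATA constants.

```python
# A quick numerical check of the natural units defined above, using the SI
# values hbar = 1.054571817e-34 J s and G_N = 6.67430e-11 m^3 kg^-1 s^-2.
# Factors of c (set to 1 in the text) are restored to land in SI units.
import math

hbar = 1.054571817e-34   # J s
G_N = 6.67430e-11        # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

m_p = math.sqrt(hbar * c / G_N)      # Planck mass   ~ 2.18e-8 kg
l_p = math.sqrt(hbar * G_N / c**3)   # Planck length ~ 1.62e-35 m

print(f"m_p = {m_p:.3e} kg, l_p = {l_p:.3e} m")
```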
## 2 Mathematical preliminaries
### Space-time manifold
We introduce a four-dimensional Riemannian manifold \((\mathscr{M},\mathbf{g})\), where \(\mathscr{M}\) is a smooth and oriented four-dimensional manifold, and \(\mathbf{g}\) is a metric tensor with a negative signature in \(\mathscr{M}\). We refer to \(\mathscr{M}\) as the global manifold. In an open neighbourhood \(U_{p\in\mathscr{M}}\subset\mathscr{M}\), we introduce the standard coordinate \(x^{\mu}\). Orthonormal bases in \(T\mathscr{M}_{p}\) and \(T^{*}\mathscr{M}_{p}\) are denoted as \(\partial/\partial x^{\mu}\) and \(dx^{\mu}\), respectively. The abbreviation \(\partial_{\mu}:=\partial/\partial x^{\mu}\) is used throughout this report. The two trivial vector bundles \(T\mathscr{M}:=\bigcup_{p}T\mathscr{M}_{p}\) and \(T^{*}\mathscr{M}:=\bigcup_{p}T^{*}\mathscr{M}_{p}\) are referred to as the tangent and cotangent bundles in \(\mathscr{M}\), respectively.

| Property | General Relativity | Yang–Mills theory | dimension |
| --- | --- | --- | --- |
| structural group | \(SO(1,3)\) | \(Spin(1,3)\oplus SU(N)\) | |
| connection | spin-connection: \(\omega_{\mu}^{\,ab}\) | gauge potential: \(\mathscr{A}^{\mu}\) | \(L^{-1}\) |
| curvature | curvature: \(R^{ab}_{\,\mu\nu}(x)\) | field strength: \(\mathscr{F}^{\mu\nu}\) | \(L^{-2}\) |
| section | vierbein: \(\mathcal{E}^{a}_{\mu}/\sqrt{\kappa_{\rm E}\hbar}\) | Dirac spinor: \(\hbar^{3/2}\psi\) | \(L^{-1}\) |
| coupling constant | \(c_{\mathcal{G}}\) | \(e\) | \(1\) |

Table 1: Comparison between general relativity and the Yang–Mills theory at a classical level.
An inertial system, in which the Levi-Civita connection vanishes, exists locally at any point in \(\mathscr{M}\). An inertial frame at point \(p\in\mathscr{M}\) is denoted as \(\mathcal{M}_{p}\); namely, a local inertial manifold at point \(p\). \(\mathcal{M}_{p}\) has a \(SO(1,3)\) symmetry. A trivial bundle \(\mathcal{M}:=\bigcup_{p\in\mathcal{M}}\mathcal{M}_{p}\) is referred to as the inertial bundle. Trivial bundles \(T\mathcal{M}:=\bigcup_{p}T\mathcal{M}_{p}\) and \(T^{*}\mathcal{M}:=\bigcup_{p}T^{*}\mathcal{M}_{p}\) also exist over \(\mathcal{M}\). In an open neighbourhood \(U_{p\in\mathcal{M}}\subset\mathcal{M}\), we introduce the standard coordinate \(\xi^{a}\). Roman letter suffixes are used for components of the standard basis in \(T\mathcal{M}_{p}\) throughout this study; on the other hand, Greek letters are used for them in \(\mathscr{M}_{p}\). This convention allows us to distinguish two abbreviated vectors, such as \(\partial_{\mu}\in V(T\mathscr{M})\) and \(\partial_{a}=\partial\)/\(\partial\)\(\xi^{a}\in V(T\mathcal{M})\). The metric tensor in \(\mathcal{M}_{p}\) is \(\boldsymbol{\eta}=\operatorname{diag}(1,-1,-1,-1)\). The Levi-Civita tensor (complete anti-symmetric tensor) \(\boldsymbol{\epsilon}\), whose component is \([\boldsymbol{\epsilon}]_{0123}=\epsilon_{0123}=+1\), and \(\boldsymbol{\eta}\) are constant tensors in \(\mathcal{M}\).
We define the vierbein \(\mathcal{E}^{a}_{\mu}(x)\in C^{\infty}(\mathscr{M})\) as a map transferring a vector in \(T^{*}\mathscr{M}\) to one in \(T^{*}\mathcal{M}\) such that:

\[\mathcal{E}^{a}_{\mu}(x)\,dx^{\mu}\big{|}_{p\in\mathscr{M}}=d\xi^{a}\big{|}_{p\in\mathcal{M}_{p}}\in\Omega^{1}(T^{*}\mathcal{M})\otimes\mathfrak{so}(1,3),\]

where \(\Omega^{p}(\bullet)\) is a space of \(p\)-form objects defined in bundle \(\bullet\). The vierbein is a smooth and invertible function globally defined in \(\mathscr{M}\). The standard basis in \(T^{*}\mathcal{M}\) is referred to as the vierbein one-form, denoted as \(\mathfrak{e}^{a}:=\mathcal{E}^{a}_{\mu}dx^{\mu}\). The vierbein inverse \([\mathcal{E}^{-1}]^{\mu}_{a}=\mathcal{E}^{\mu}_{a}\in C^{\infty}(\mathcal{M})\), which is also called the vierbein, is an inverse transformation such that
\[\mathcal{E}^{a}_{\mu}(x)\mathcal{E}^{\nu}_{a}\left(\xi(x)\right)=\delta^{\nu }_{\mu}\in T^{*}\mathscr{M},\text{ and }\mathcal{E}^{\mu}_{a}\left(\xi\right)\mathcal{E}^{b}_{\mu}(x\left(\xi \right))=\delta^{b}_{a}\in T^{*}\mathcal{M}.\]
Metric tensors in \(\mathscr{M}\) and \(\mathcal{M}\) are related to each other such that: \(\eta_{ab}=\mathcal{E}^{\mu}_{a}\mathcal{E}^{\nu}_{b}g_{\mu\nu}\). The \(GL(4)\) invariant two-, three- and four-dimensional volume forms are, respectively, defined using vierbein forms as
\[\mathfrak{S}_{ab}:=\frac{1}{2}\epsilon_{ab\circ\circ}\,\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ},\qquad\mathfrak{V}_{a}:=\frac{1}{3!}\epsilon_{a\circ\circ\circ}\,\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ},\qquad\mathfrak{v}:=\frac{1}{4!}\epsilon_{\circ\circ\circ\circ}\,\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}.\]
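The relation \(\eta_{ab}=\mathcal{E}^{\mu}_{a}\mathcal{E}^{\nu}_{b}g_{\mu\nu}\) and its inverse can be checked symbolically. The following sketch assumes a diagonal FRW-like toy metric with scale factor \(a(t)\), chosen purely for illustration; it is not taken from the text.

```python
# A minimal sympy sketch of the vierbein-metric relation
# eta_ab = E^mu_a E^nu_b g_{mu nu} for a diagonal toy metric (an FRW-like
# example with scale factor a(t), assumed here purely for illustration).
import sympy as sp

t = sp.symbols("t", positive=True)
a = sp.Function("a", positive=True)(t)

g = sp.diag(1, -a**2, -a**2, -a**2)   # g_{mu nu}, signature (+,-,-,-)
eta = sp.diag(1, -1, -1, -1)          # eta_{ab}

E = sp.diag(1, a, a, a)               # vierbein E^a_mu (rows: a, columns: mu)
E_inv = E.inv()                       # inverse vierbein E^mu_a

# eta_ab = E^mu_a E^nu_b g_{mu nu}  <=>  E_inv^T * g * E_inv == eta
assert sp.simplify(E_inv.T * g * E_inv - eta) == sp.zeros(4)
# and conversely g_{mu nu} = E^a_mu E^b_nu eta_ab
assert sp.simplify(E.T * eta * E - g) == sp.zeros(4)
print("vierbein/metric relations verified")
```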
### 2.2 Principal bundles

A space of complex-valued scalar fields, namely the Higgs field \(\boldsymbol{\phi}=(\phi^{1},\phi^{2},\cdots,\phi^{N})^{T}\), is introduced as a section \(\boldsymbol{\phi}\in\boldsymbol{\Phi}=\Gamma\left(\mathcal{M},\Omega^{0}(T^{*}\!\mathcal{M}^{\otimes N})\otimes SU(N)\right)\) belonging to the fundamental representation of the \(SU(N)\) symmetry. The \(SU(N)\) group operator \(G_{\!\mathcal{S}\!U}\) acts on the section as
\[\left[G_{\!\mathcal{S}\!U}\left(\boldsymbol{\phi}\right)\right]^{I}=\left[ \boldsymbol{g}_{\!\mathcal{S}\!U}\right]_{J}^{I}\phi^{J},\]
where \(\boldsymbol{g}_{\!\mathcal{S}\!U}\) is a unitary matrix with \(\det[\boldsymbol{g}_{\!\mathcal{S}\!U}]=1\). The Lie algebra of the \(SU(N)\) group is
\[\left[\tau_{I},\tau_{J}\right]:=i\sum_{K}f_{IJK}\,\tau_{K},\]
where \(f_{\star\star\star}\) is a structure constant of \(SU(N)\), and \(\tau=(\tau_{1},\tau_{2},\cdots,\tau_{N^{2}-1})\in\mathfrak{su}(N)\).
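As a concrete instance of the commutation relation above, the sketch below verifies \([\tau_{I},\tau_{J}]=i\,\epsilon_{IJK}\tau_{K}\) for the \(N=2\) case with \(\tau_{I}=\sigma_{I}/2\); the text keeps \(N\) general, and this choice of generators is made only for illustration.

```python
# A small numerical check of the su(N) commutation relation
# [tau_I, tau_J] = i f_IJK tau_K for N = 2, where tau_I = sigma_I / 2 and the
# structure constants are f_IJK = epsilon_IJK (the standard SU(2) example).
import numpy as np

sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
tau = [s / 2 for s in sigma]

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0   # cyclic = +1, anticyclic = -1

for I in range(3):
    for J in range(3):
        lhs = tau[I] @ tau[J] - tau[J] @ tau[I]
        rhs = 1j * sum(eps[I, J, K] * tau[K] for K in range(3))
        assert np.allclose(lhs, rhs)
print("[tau_I, tau_J] = i eps_IJK tau_K verified for su(2)")
```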
A spinor-gauge bundle is a Whitney sum of spinor and gauge bundles which is a tuple such as
\[\left(\mathcal{S}_{\!\mathcal{P}}^{\otimes N},\pi_{\!\mathcal{P}}\oplus\pi_ {\!\mathcal{S}\!U},\mathcal{M},Spin(1,3)\otimes SU(N)\right)\text{ where }\mathcal{S}_{\! \mathcal{P}}^{\otimes N}:=\overbrace{\mathcal{S}_{\!\mathcal{P}}\otimes \cdots\otimes\mathcal{S}_{\!\mathcal{P}}}^{N}.\]
The total space of the gauge bundle is lifted to the spin manifold.
### 2.3 Connections and curvatures
This section introduces connections and curvatures in the principal bundles given in the previous section. In physics, a connection provides a gauge-boson field mediating a force between matter fields, and a curvature, named a field strength, provides, e.g., electromagnetic fields in the \(U(1)\) gauge theory. The Lagrangian consists of the covariant differentials of matter and gauge fields, ensuring the gauge invariance of the theory. The covariant differential includes a _coupling constant_ in physics, which provides the relative strength among the fundamental forces in Nature.
On the other hand, the standard treatment of the connection and the curvature in mathematics does not include a coupling constant in their definitions. We introduce the gravitational coupling constant to simultaneously discuss a gravitational force and other forces. We first provide the standard definitions of the connection and the curvature in general relativity; then, we introduce the gravitational coupling constant in our theory.
#### 2.3.1 Standard definition
We introduce the connection one-form \(\hat{\mathfrak{w}}\) in the spinor bundle, namely the spin-connection form. An object with a hat, \(\hat{\bullet}\), represents one given by the standard definition in this section. The \(SO(1,3)\)-covariant differential of the \(p\)-form object \(\mathfrak{a}^{a_{1}a_{2}\cdots a_{q}}\in\Omega^{p}(T^{*}\!\mathcal{M})\otimes V^{q}(T\!\mathcal{M})\) is defined using the spin-connection form as

\[d_{\hat{\mathfrak{w}}}\mathfrak{a}^{a_{1}\cdots a_{q}}:=d\mathfrak{a}^{a_{1}\cdots a_{q}}+\hat{\mathfrak{w}}^{a_{1}}{}_{\circ}\wedge\mathfrak{a}^{\circ a_{2}\cdots a_{q}}+\cdots+\hat{\mathfrak{w}}^{a_{q}}{}_{\circ}\wedge\mathfrak{a}^{a_{1}\cdots a_{q-1}\circ}.\]
Local \(SO(1,3)\) group action \(G_{\rm SO}:T^{*}\!\!{\cal M}\to T^{*}\!\!{\cal M}\) is known as the Lorentz transformation. Those for the vierbein form and the spin-connection form are

\[G_{\rm SO}:\ \mathfrak{e}^{a}\ \mapsto\ \Lambda^{a}_{\ \circ}\,\mathfrak{e}^{\circ},\qquad G_{\rm SO}:\ \hat{\mathfrak{w}}^{a}_{\ b}\ \mapsto\ \Lambda^{a}_{\ \circ}\,\hat{\mathfrak{w}}^{\circ}_{\ \star}\left[\Lambda^{-1}\right]^{\star}_{\ b}+\Lambda^{a}_{\ \circ}\,d\!\left[\Lambda^{-1}\right]^{\circ}_{\ b},\]

where \(\Lambda^{a}_{\ b}\in SO(1,3)\).
### 2.4 Spinor-gauge bundle
A connection in the gauge bundle \(\mathfrak{A}_{SU}:=\mathfrak{A}^{I}_{SU}\,\tau_{I}\in\Omega^{1}(T^{*}\mathcal{M})\otimes\mathfrak{su}(N)\) is an \(SU(N)\) Lie-algebra valued one-form object. We define the covariant differential \(d_{\mathcal{SU}}\) acting on the section \(\boldsymbol{\phi}\), including the dimensionless coupling constant \(c_{\mathcal{SU}}\), as

\[d_{\mathcal{SU}}\boldsymbol{\phi}:=\left(d-i\,c_{\mathcal{SU}}\,\mathfrak{A}_{SU}\right)\boldsymbol{\phi}.\]
## 3 QGED Lagrangian
### Bare Lagrangian
We introduce the Lagrangian four-form in the inertial manifold as \(\mathfrak{L}^{\rm QGED}:=\mathcal{L}^{\rm QGED}\,\mathfrak{v}\), where the Lagrangian densities are defined in \(T^{*}\!\mathcal{M}\). We decompose the bare QGED Lagrangian into free and interaction parts according to the standard perturbative quantization method, such as

\[\mathcal{L}^{\rm QGED}=\mathcal{L}_{GR}+\mathcal{L}_{\mathcal{SU}}+\mathcal{L}_{\rm MT}=\mathcal{L}_{\rm free}+\mathcal{L}_{\rm int}.\]

### 3.2 BRST transformation

#### 3.2.1 Yang-Mills theory
We note that \(d_{\mathfrak{w}}s(\xi)=ds(\xi)\) for a local scalar function \(s(\xi)\). The BRST transformation on the Grassmann numbers \(X\) and \(Y\) fulfils the Leibniz rule such that

\[\delta^{\mathcal{SU}}_{\mathrm{B}}[XY]=\delta^{\mathcal{SU}}_{\mathrm{B}}[X]Y+\epsilon_{X}X\,\delta^{\mathcal{SU}}_{\mathrm{B}}[Y],\]

where the signature \(\epsilon_{X}=-1\) for \(X\in\{\chi^{I},\bar{\chi}^{I}\}\), and \(\epsilon_{X}=+1\) otherwise. We define the BRST transformations for the auxiliary and ghost/anti-ghost fields as

\[\delta^{\mathcal{SU}}_{\mathrm{B}}[B^{I}]:=0,\qquad\delta^{\mathcal{SU}}_{\mathrm{B}}[\chi^{I}]:=-\frac{1}{2}c_{\mathcal{SU}}f^{I}_{\,JK}\,\chi^{J}\chi^{K},\qquad\delta^{\mathcal{SU}}_{\mathrm{B}}[\bar{\chi}^{I}]:=B^{I}.\]

The BRST transformation \(\delta^{\mathcal{SU}}_{\mathrm{B}}\) acting on the vierbein and spin-connection fields gives zero. Simple calculations show that the BRST transformation is nilpotent for all fields.
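The nilpotency on the ghost field can also be checked numerically: \(\delta_{\mathrm{B}}^{2}[\chi^{I}]\) is proportional to the totally antisymmetric part (in the ghost indices \(J,L,M\)) of \(f^{I}_{\,JK}f^{K}_{\,LM}\), which vanishes by the Jacobi identity. The sketch below assumes the \(su(2)\) case with \(f_{IJK}=\epsilon_{IJK}\), chosen only as a minimal illustration.

```python
# A numerical check that the BRST variation above is nilpotent on the ghost:
# delta^2[chi^I] is proportional to the totally antisymmetric part (in J, L, M)
# of f^I_JK f^K_LM acting on chi^J chi^L chi^M, which vanishes by the Jacobi
# identity. Shown here for su(2), where f_IJK = eps_IJK.
import itertools
import numpy as np

f = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    # sign of the permutation (i, j, k) of (0, 1, 2)
    f[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

# T[I, J, L, M] = sum_K f_IJK f_KLM
T = np.einsum("ijk,klm->ijlm", f, f)

# Antisymmetrize over the ghost indices (J, L, M): this is the coefficient of
# chi^J chi^L chi^M in delta^2[chi^I], up to an overall constant.
A = np.zeros_like(T)
for perm in itertools.permutations(range(3)):
    sign = np.sign(np.linalg.det(np.eye(3)[list(perm)]))
    A += sign * np.transpose(T, (0,) + tuple(p + 1 for p in perm))

assert np.allclose(A, 0.0)
print("delta_B^2[chi^I] = 0: Jacobi identity confirmed for su(2)")
```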
#### 3.2.2 General relativity
The current author has developed the canonical quantization of general relativity in the Heisenberg picture utilising the Nakanishi-Kugo-Ojima quantization method[22]. This section discusses the BRST transformation of gravitational fields in the interaction picture following the method in the preceding section.
We introduce the auxiliary field and Faddeev-Popov ghost/anti-ghost fields as follows:
* auxiliary field: \[\beta_{\mu}^{\,\,a}(x),\]
* ghost fields: \[\chi^{a}_{\,\,b}(\xi)\text{ and }\chi^{\mu}(x),\]
* anti-ghost field: \[\bar{\chi}^{\,\,a}_{\mu}(x).\]
Here, the ghost fields are Grassmannian. The auxiliary field \(\beta_{\mu}^{\,\,a}(x)\) plays a role in fixing the \(Spin(1,3)\) gauge symmetry in the quantization of the spin-connection and is decoupled from the physical system after the gauge fixing. The ghost fields \(\chi^{a}_{\,\,b}(\xi)\) and \(\chi^{\mu}(x)\) are introduced to preserve the unitarity of the scattering amplitude, corresponding to the local Lorentz and global coordinate transformations, respectively. We assign a ghost number \(+1\) to \(\chi^{a}_{\,\,b}\) and \(-1\) to \(\bar{\chi}^{\,\,a}_{\mu}\). An anti-ghost field corresponding to the global ghost is unnecessary since it is decoupled from the physical system in the inertial space.
In this study, we denote the BRST transformation for gravitational fields as \(\delta^{\mathcal{G}\!R}_{\mathrm{B}}[\bullet]\), and require rules introduced by Nakanishi[23] as follows: The BRST transformation of the coordinate vector in \(\mathscr{M}\) should obey the general linear transformation as follows:
\[\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[x^{\mu}\right]=\chi^{\mu}. \tag{13}\]
In addition, we require the postulate given in [23], such as
\[\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[\partial_{\mu}X\right]=\partial_{ \mu}\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[X\right]-\left(\partial_{\mu} \delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[x^{\nu}\right]\right)\partial_{\nu} X=\partial_{\mu}\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[X\right]-\left( \partial_{\mu}\chi^{\nu}\right)\partial_{\nu}X, \tag{14}\]
where \(X\) is any field defined in \(T\mathscr{M}\); thus, the BRST transformation acts on a one-form object in \(T\mathsf{*}\mathscr{M}\) as
\[\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[dx^{\mu}\right]=\left(\partial_{ \nu}\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[x^{\mu}\right]\right)dx^{\nu}=d \left(\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[x^{\mu}\right]\right)=\ d\chi^{\mu}.\]
Consequently, the BRST transformation and the exterior derivative commute with each other, i.e.,

\[\left[\delta^{\mathcal{G}\!R}_{\mathrm{B}},d\right]\bullet=\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[d\bullet\right]-d\left(\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[\bullet\right]\right)=0.\]
We define the BRST transformations of auxiliary and ghost/anti-ghost fields by reference to those of the Yang-Mills theory as
\[\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[\beta_{\mu}^{\,\,a} \right] =\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[\chi^{\mu}\right]\ =\ 0, \tag{15a}\] \[\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[\chi^{a}_{\,\,b}\right] =c_{\mathcal{G}}\chi^{\,\,a}_{\,\,c}\chi^{\,\,b}_{\,\,b},\] (15b) \[\delta^{\mathcal{G}\!R}_{\mathrm{B}}\left[\bar{\chi}^{\,\,a}_{\, \,\mu}\right] =L_{\mathrm{p}}\beta_{\mu}^{\,\,a}. \tag{15c}\]
Here, we have set the BRST transformations to preserve the physical dimension of each field. The BRST transformation of the position vector given in (13) gives the global ghost field a length dimension. The auxiliary field has the same physical dimension as the spin-connection field, i.e., \(L^{-1}\). The ghost field must be a null-dimension object owing to (15b). We also set the anti-ghost fields to have a null physical dimension; thus, we introduce the Planck length in (15c) to adjust the physical dimension.
The BRST transformation satisfies the following Leibniz rule:
\[\delta^{GR}_{\mathrm{B}}\left[XY\right]=\delta^{GR}_{\mathrm{B}} \left[X\right]Y+\epsilon_{X}X\delta^{GR}_{\mathrm{B}}\left[Y\right], \tag{16}\]
where the signature \(\epsilon_{X}=-1\) for \(X\in\{\chi^{\mu},\chi_{a}^{\;b},\bar{\chi}_{\mu}^{\;a}\}\), and \(\epsilon_{X}=+1\) otherwise. The BRST transformations of vierbein and spin-connection forms are defined as
\[\delta^{GR}_{\mathrm{B}}\left[\mathfrak{e}^{a}\right] :=c_{\mathcal{G}}\chi^{a}_{\;\circ}\mathfrak{e}^{\circ},\] \[\delta^{GR}_{\mathrm{B}}\left[\mathfrak{w}^{ab}\right] :=d\chi^{ab}-c_{\mathcal{G}}\left(\mathfrak{w}^{a}_{\;\circ} \chi^{\circ b}+\mathfrak{w}^{b}_{\;\circ}\chi^{a\circ}\right),\]
where \(\chi^{ab}:=\chi^{a}_{\;\circ}\eta^{\circ b}\). The local ghost field is anti-symmetric owing to the above definition. They induce the BRST transformations of vierbein and spin-connection fields as
\[\delta^{GR}_{\mathrm{B}}\left[\mathcal{E}^{a}_{\mu}\right]=-\mathcal{E}^{a}_{\nu}\,\partial_{\mu}\chi^{\nu}+c_{\mathcal{G}}\,\chi^{a}_{\;\circ}\mathcal{E}^{\circ}_{\mu}, \tag{17a}\]
\[\delta^{GR}_{\mathrm{B}}\left[\omega^{\;ab}_{\mu}\right]=\mathcal{E}^{\;\circ}_{\mu}\partial_{\circ}\chi^{ab}+\left(\partial_{\mu}\chi^{\nu}\right)\omega^{\;ab}_{\nu}-c_{\mathcal{G}}\left(\omega^{\;a}_{\mu\;\circ}\chi^{\circ b}+\omega^{\;b}_{\mu\;\circ}\chi^{a\circ}\right). \tag{17b}\]
The BRST transformation of the vierbein inverse is obtained after simple calculations as
\[\delta^{GR}_{\mathrm{B}}\left[\mathcal{E}^{\mu}_{a}\right] =\mathcal{E}^{\;\nu}_{a}\partial_{\nu}\chi^{\mu}-c_{\mathcal{G}} \chi^{\;\circ}_{\;a}\mathcal{E}^{\mu}_{\circ}, \tag{17c}\]
where \(\delta^{GR}_{\mathrm{B}}\left[\mathcal{E}^{a}_{\mu}\,\mathcal{E}^{\nu}_{a} \right]=\delta^{GR}_{\mathrm{B}}\left[\delta^{\nu}_{\mu}\right]=0\) is used.
We introduce one-form objects of auxiliary and ghost/anti-ghost fields as
\[\mathfrak{b}^{a} :=\left(L_{\mathrm{p}}\,\beta^{\;a}_{\mu}-\bar{\chi}^{\;a}_{\nu} \,\partial_{\mu}\chi^{\nu}\right)dx^{\mu}, \tag{18a}\] \[\mathfrak{c}^{a} :=\chi^{a}_{\;\circ}\mathfrak{e}^{\circ},\] (18b) \[\bar{\mathfrak{c}}^{a} :=\bar{\chi}^{\;a}_{\mu}\,dx^{\mu}=\bar{\chi}^{\;a}_{\;\circ} \mathfrak{e}^{\circ}, \tag{18c}\]
where \(\bar{\chi}^{\;a}_{b}:=\bar{\chi}^{\;a}_{\mu}\,\mathcal{E}^{\mu}_{b}\). They have a physical dimension as \(\left[\mathfrak{b}^{a}\right]=\left[\mathfrak{c}^{a}\right]=\left[\bar{ \mathfrak{c}}^{a}\right]=L\), and their BRST transformations are
\[\delta^{GR}_{\mathrm{B}}\left[\mathfrak{b}^{a}\right]=\delta^{GR}_{\mathrm{B}}\left[\mathfrak{c}^{a}\right]=0\;\;\text{ and }\;\;\delta^{GR}_{\mathrm{B}}\left[\bar{\mathfrak{c}}^{a}\right]=\mathfrak{b}^{a}.\]
The auxiliary form has a representation in the inertial space as

\[\left(\text{18a}\right)=\left(L_{\mathrm{p}}\beta^{\;a}_{\circ}-\bar{\chi}^{\;a}_{\star}\,\partial_{\circ}\chi^{\star}\right)\mathfrak{e}^{\circ}\in T^{\star}\mathcal{M},\;\;\text{ where }\;\;\beta^{\;a}_{b}:=\beta^{\;a}_{\mu}\,\mathcal{E}^{\mu}_{b},\;\;\chi^{a}:=\mathcal{E}^{a}_{\mu}\chi^{\mu}.\]
We represent the auxiliary form in \(T^{\star}\mathcal{M}\) as \(\mathfrak{b}^{a}=B^{a}_{\;\circ}\mathfrak{e}^{\circ}\) using components defined as
\[B^{a}_{b}:=\left(L_{\mathrm{p}}\beta^{\;a}_{b}-\bar{\chi}^{\;a}_{\circ}\, \partial_{b}\chi^{\circ}\right).\]
Since we define the Lagrangian form in \(T^{\star}\mathcal{M}\), Romanised forms appear in the Lagrangian. We provide the BRST transformation of (14) and (17b) in the inertial space for future convenience:
\[\delta^{GR}_{\mathrm{B}}\left[\partial_{a}X(\xi)\right] =\partial_{a}\left(\delta^{GR}_{\mathrm{B}}\left[X\right](\xi) \right)-c_{\mathcal{G}}\chi^{\;\circ}_{\;a}\left(\partial_{\circ}X(\xi) \right),\] \[\delta^{GR}_{\mathrm{B}}\left[\omega^{\;ab}_{c}\right] =\partial_{c}\chi^{ab}-c_{\mathcal{G}}\left(\omega^{\;a}_{c\; \circ}\chi^{\circ b}+\omega^{\;b}_{c\;\circ}\chi^{a\circ}+\omega^{\;ab}_{\; \circ}\chi^{\;\circ}_{\;c}\right).\]
We note that the BRST transformation on the object defined in the inertial space does not include the global ghost field, as shown above. Consequently, the global ghost field does not appear in the ghost Lagrangian defined in the inertial space.
Direct calculations show that the BRST transformations are nilpotent for all forms and fields. Consequently, the classical Lagrangian \(\mathcal{L}_{GR}\) is BRST-invariant. In this study, we simplified definitions of auxiliary and ghost/anti-ghost fields from those in Ref.[22]. Although calculations are almost identical to those in Ref.[22], we provide proof of nilpotency for all necessary forms in **Appendix A** for completeness.
### 3.3 Ghost and gauge fixing Lagrangian
This section introduces gauge fixing and ghost Lagrangians to the bare quantum Lagrangian according to the standard prescription: Prepare the necessary number of functions \(F[\mathscr{A},B,\chi,\bar{\chi}]\), which includes the same number of ghost and anti-ghost fields. Then, add a term
\[\mathcal{L}_{\bullet;g\text{fir}}+\mathcal{L}_{\bullet;gh}:=\delta^{\bullet}_{ \text{B}}[\bar{\chi}F]\]
to the Lagrangian. As a result, the total Lagrangian is BRST-invariant, and after fixing all gauge degrees of freedom, the Lagrangian still keeps the BRST symmetry. We omit the suffix "\((\circ)\)" for the bare fields in this section, too.
#### 3.3.1 Yang-Mills theory
For the \(SU(N)\) gauge part, we introduce the scalar field as
\[H^{I}:=\eta^{\circ\circ}\partial_{\circ}\mathscr{A}^{I}_{\ \circ}+\frac{1}{2}\xi_{A}B^{I},\]
which gives the gauge fixing and ghost Lagrangians, such that
\[\mathcal{L}_{\mathcal{SU};g\text{fir}}+\mathcal{L}_{\mathcal{SU};gh}:=\sum_{I= 1}^{N}\delta^{\mathcal{SU}}_{\text{B}}[\bar{\chi}^{I}H^{I}].\]
It is decomposed into
\[\mathcal{L}_{\mathcal{SU};g\text{fix}}:=\sum_{I=1}^{N}\delta^{\mathcal{SU}}_{\text{B}}[\bar{\chi}^{I}]H^{I}=\sum_{I=1}^{N}\left(B^{I}\eta^{\circ\circ}\partial_{\circ}\mathscr{A}^{I}_{\ \circ}+\frac{\xi_{A}}{2}\,B^{I}B^{I}\right), \tag{19a}\]
\[\mathcal{L}_{\mathcal{SU};gh}:=\sum_{I=1}^{N}\bar{\chi}^{I}\delta^{\mathcal{SU}}_{\text{B}}[H^{I}]=\bar{\chi}^{I}\eta^{\circ\circ}\partial_{\circ}\left(\partial_{\circ}\chi^{I}+c_{\mathcal{SU}}f^{I}_{\ JK}\mathscr{A}^{J}_{\ \circ}\chi^{K}\right). \tag{19b}\]
We can write the gauge fixing Lagrangian as
\[\text{(19a)}=\sum_{I=1}^{N}\frac{1}{2\xi_{A}}\left\{\left(C^{I} \right)^{2}-\left(\eta^{\circ\circ}\partial_{\circ}\mathscr{A}^{I}_{\ \circ}\right)^{2}\right\}, \tag{19c}\]
where
\[C^{I}:=\eta^{\circ\circ}\partial_{\circ}\mathscr{A}^{I}_{\ \circ}+\xi_{A}B^{I}.\]
Therefore, scalar field \(C^{I}\) (and thus \(B^{I}\), too) is decoupled from the other parts, and the standard gauge fixing term
\[\mathcal{L}_{\mathcal{SU};g\text{fix}}:=-\frac{1}{2\xi_{A}}\sum_{I=1}^{N}\left(\eta^{\circ\circ}\partial_{\circ}\mathscr{A}^{I}_{\ \circ}\right)^{2}, \tag{20}\]
and interaction vertices among gauge bosons and ghosts remain in the Lagrangian. The gauge fixing condition provides one constraint on the four components of \(\mathscr{A}^{a}\). Together with an on-shell (massless) condition, two dynamical degrees of freedom (transverse polarizations) remain for an \(SU(N)\) gauge boson.
Owing to definitions, we can confirm that the Lagrangian is BRST-invariant:
\[\delta^{\mathcal{SU}}_{\text{B}}\left[\mathcal{L}_{\mathcal{SU}}+\mathcal{L}_{\mathcal{SU};g\text{fix}}+\mathcal{L}_{\mathcal{SU};gh}+\mathcal{L}_{\text{MT}}\right]=0.\]
#### 3.3.2 General relativity
This section provides gauge fixing and ghost Lagrangians for a gravity part according to the same prescription used for the Yang-Mills theory. We introduce a one-form object
\[\mathfrak{H}^{a}:=\mathfrak{ww}^{a}+\frac{1}{2L_{\mathrm{p}}^{\,2}}\xi_{\omega}\,\mathfrak{b}^{a},\ \mathrm{with}\ \ \mathfrak{ww}^{a}:=\left(\partial_{\circ}\omega_{\bullet}^{\,a\circ}\right)\mathfrak{e}^{\bullet}.\]

We note that \(\mathfrak{ww}^{a}\) has a length-inverse dimension and \(\mathfrak{b}^{a}\) has a length dimension; thus, \(\mathfrak{H}^{a}\) has a length-inverse dimension. We define the gauge fixing and ghost Lagrangians using this one-form object as

\[\mathfrak{L}_{\mathit{GR};\mathit{gfix}}+\mathfrak{L}_{\mathit{GR};\mathit{gh}}:=\frac{1}{L_{\mathrm{p}}^{\,2}}\delta^{\mathit{GR}}_{\mathrm{B}}\left[\bar{\mathfrak{c}}^{\circ}\wedge\mathfrak{H}^{\circ}\wedge\mathfrak{S}_{\circ\circ}\right],\]

where the Planck length appears to adjust the Lagrangian forms to a null dimension. Owing to the definition and the nilpotency of each field, this is BRST invariant. We decompose it into the gauge fixing and ghost parts, such that:

\[L_{\mathrm{p}}^{\,2}\,\mathfrak{L}_{\mathit{GR};\mathit{gfix}}:=\delta^{\mathit{GR}}_{\mathrm{B}}\left[\bar{\mathfrak{c}}^{\circ}\right]\wedge\mathfrak{H}^{\circ}\wedge\mathfrak{S}_{\circ\circ}=\mathfrak{b}^{\circ}\wedge\left(\mathfrak{ww}^{\circ}+\frac{1}{2L_{\mathrm{p}}^{\,2}}\xi_{\omega}\,\mathfrak{b}^{\circ}\right)\wedge\mathfrak{S}_{\circ\circ}, \tag{21a}\]
\[L_{\mathrm{p}}^{\,2}\,\mathfrak{L}_{\mathit{GR};\mathit{gh}}:=-\bar{\mathfrak{c}}^{\circ}\wedge\delta^{\mathit{GR}}_{\mathrm{B}}\left[\mathfrak{H}^{\circ}\wedge\mathfrak{S}_{\circ\circ}\right]. \tag{21b}\]
First, we look at the gauge fixing Lagrangian. We define a new one-form object
\[\mathfrak{C}^{a}:=\mathfrak{ww}^{a}+\frac{\xi_{\omega}}{L_{\mathrm{p}}^{\,2}}\,\mathfrak{b}^{a}\]
yielding
\[\mathfrak{L}_{\mathit{GR};\mathit{gfix}}=\frac{1}{2\xi_{\omega}}\left(\mathfrak{C}^{\circ}\wedge\mathfrak{C}^{\circ}-\mathfrak{ww}^{\circ}\wedge\mathfrak{ww}^{\circ}\right)\wedge\mathfrak{S}_{\circ\circ}.\]

Therefore, the one-form \(\mathfrak{C}^{a}\) (and thus \(\mathfrak{b}^{a}\), too) is decoupled from the other parts, and the standard gauge fixing term

\[\mathfrak{L}_{\mathit{GR};\mathit{gfix}}:=-\frac{1}{2\xi_{\omega}}\,\mathfrak{ww}^{\circ}\wedge\mathfrak{ww}^{\circ}\wedge\mathfrak{S}_{\circ\circ} \tag{25a}\]

remains in the Lagrangian, which imposes the gauge fixing condition

\[\mathfrak{ww}^{a}=\left(\partial_{\circ}\omega_{\bullet}^{\;a\circ}\right)\mathfrak{e}^{\bullet}=0. \tag{25b}\]
Equation (25b) provides four constraints on \(\omega_{a}{}^{b}\). In addition, there are six on-shell conditions; thus, two physical degrees of freedom remain in the theory. The graviton is the quantized spin-connection field in our theory and is a massless spin-two particle. The little group concerning the graviton in four-dimensional space-time is \(SO(2)\), which has two degrees of freedom and is consistent with the above discussion.
Next, we construct the ghost Lagrangian, which provides the interaction terms among the ghosts and physical fields. We require a further simplification of the ghost/anti-ghost fields. The role of the Faddeev-Popov ghost is to preserve the Ward identity and to ensure the unitarity of gravitational scattering amplitudes. The quantum theory requires as many ghost fields as the dimension of the gauge group; that of the Lorentz group is six. However, the six components of the graviton are not all independent; it has only two independent components, as discussed above. Thus, we need one scalar field each for the ghost and anti-ghost fields.
Owing to the BRST transformation of the spin-connection field, the local ghost is anti-symmetric concerning its two suffixes. We parameterize the local ghost field as
\[\eta_{a\circ}\chi_{\;b}^{\circ}(\xi)=\chi_{ab}(\xi):=\chi(\xi)\,c_{a}c_{b}, \tag{26a}\]

where \(c_{a}:=(c_{0},c_{1},c_{2},c_{3})\) is a Grassmannian vector. On the other hand, we set the anti-ghost field to be a symmetric tensor:

\[\bar{\chi}_{b}^{\;a}(\xi)=\bar{\chi}(\xi)\,\delta_{b}^{a}. \tag{26b}\]

Both \(\chi(\xi)\) and \(\bar{\chi}(\xi)\) are Grassmannian scalar fields. The auxiliary form also has the same degree of freedom and the same convention as that of the anti-ghost field, such that

\[B_{\;b}^{a}(\xi)=B(\xi)\,\delta_{b}^{a}. \tag{26c}\]
In the following calculations, we keep all components of the spin-connection field without requiring the torsion-less condition.
The ghost Lagrangian splits into two parts owing to the Leibniz rule of the BRST transformation, such that:
\[-(21b)=\bar{\mathfrak{c}}^{\circ}\wedge\delta_{\rm B}^{GR}\left[\mathfrak{H}^ {\circ}\right]\wedge\mathfrak{S}_{\circ\circ}+\bar{\mathfrak{c}}^{\circ} \wedge\mathfrak{H}^{\circ}\wedge\delta_{\rm B}^{GR}\left[\mathfrak{S}_{\circ \circ}\right]. \tag{27}\]
Simple calculations provide the BRST transformation of the surface form and \(\mathfrak{H}^{a}\), respectively, as
\[\delta_{\rm B}^{GR}\left[\mathfrak{S}_{ab}\right]=\frac{1}{2}\epsilon_{ab\circ\circ}\,\delta_{\rm B}^{GR}\left[\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\right]=-c_{\mathcal{G}}\,\epsilon_{ab\circ\star}\,\chi\,c^{\circ}c_{\times}\,\mathfrak{e}^{\times}\wedge\mathfrak{e}^{\star},\]
\[\delta_{\rm B}^{GR}\left[\mathfrak{H}^{a}\right]=\left\{\left(\partial_{\circ}\partial_{\times}\chi\right)c^{a}c^{\circ}+c_{\mathcal{G}}\left(\omega_{\times\;\star}^{\;a}\left(\partial_{\circ}\chi\right)+\left(\partial_{\circ}\omega_{\times\;\star}^{\;a}\right)\chi\right)c^{\star}c^{\circ}\right\}\mathfrak{e}^{\times},\]

where \(\delta_{\rm B}^{GR}\left[\mathfrak{b}^{a}\right]=0\) is used.
## 4 QGED Feynman rules
This section formulates Feynman rules concerning the QGED Lagrangian. Hereafter, we treat only the \(U(1)\) gauge group and one spinor field (the electron field). Thus, the theory is electrodynamics, and the dimensionless coupling constant is the electric charge; \(c_{\mathcal{SU}}=e\). For curved space-time, the momentum space, in which Feynman rules are defined, is not trivial. In this study, we utilise the definition of the momentum space introduced in Ref.[24] and denote it by \(T^{*}\tilde{\mathcal{M}}\). We follow the phase-convention of the Feynman rules in Ref.[25].
Feynman rules for the bare fields are common to those for the renormalised fields. Therefore, continuing from the previous section, this section also omits the suffix "\((\circ)\)" for the bare fields.

### 4.1 Green's function of vierbein and spin-connection fields

The polarization vectors \(\mathcal{E}^{(a)}_{\mu}\) of the vierbein field are components of a spin-one vector defined in \(T^{*}\mathscr{M}\) such that
\[\mathfrak{e}^{(a)}:={\cal E}^{(a)}_{\mu}(x)dx^{\mu},\]
where \(a=1,2\) labels the two independent polarization vectors. We interpret the subscript as representing a vector component concerning the global space-time manifold, and the superscript as pointing to an inner degree of freedom corresponding to the two polarization vectors. When we interpret a vierbein field \(\mathcal{E}^{a}_{\mu}\) as a component of the global vector defined in \(T^{*}\mathscr{M}\) concerning the subscript \(\mu\), the local \(SO(1,3)\) symmetry is an internal degree of freedom concerning the superscript \(a\), which is interpreted as a polarization degree of the vierbein field. We can construct a spin-two polarization vector of the metric tensor using the coupling of angular momenta[24].
### 4.2 External field
The QGED Lagrangian consists of four external physical fields: the vierbein field, the spin-connection field, the photon, and the electron. In addition, the QGED has ghost/anti-ghost fields. Among them, photons and electrons are defined purely in the inertial space. The spin-connection and local ghost/anti-ghost fields are also objects defined in the inertial space after the Romanisation of the subscript. The spin-connection field propagates in the inertial bundle as a wave according to equation (30). On the other hand, the vierbein field \(\mathcal{E}^{a}_{\mu}(x\in\mathscr{M})\) is an object essentially defined in the global space-time and propagating in it according to the wave equation (33). The role of the vierbein is twofold in the QGED: one is a dynamic field propagating in global space-time, and the other is a transformation function converting a Greek index into a Roman index. An electron and a photon do not couple to the vierbein field directly through the interaction Lagrangian. Therefore, this study treats the vierbein field as a transformation function between the global and inertial spaces and does not discuss an external vierbein field in curved space-time.
#### 4.2.1 Electromagnetic part
spinor:According to the standard perturbation method, we introduce electron and positron wave functions as follows: the spinor fields of the electron and positron, \(u(p,s)\) and \(v(p,s)\), are the positive- and negative-frequency parts of the plane-wave solution of the Dirac equation with momentum \(p_{a}=(p_{0},\vec{p})\in V^{1}(T^{*}\tilde{\mathscr{M}})\) and spin \(s\):
\[\hbar^{3/2}\psi(\xi)=:\frac{1}{(2\pi\hbar)^{3}}\int\frac{d^{3}\vec{p}}{\sqrt{2 E_{p}}}\sum_{s=1,2}\left(a_{e}(p,s)u(p,s)e^{-i\xi\cdot p/\hbar}+b_{e}^{\dagger}(p,s)v (p,s)e^{+i\xi\cdot p/\hbar}\right), \tag{35}\]
where \(E_{p}:=\sqrt{\vec{p}\cdot\vec{p}+m^{2}}\) is the on-shell energy; \(a_{e}(p,s)\) annihilates an electron, and \(b_{e}^{\dagger}(p,s)\) creates a positron. The canonical anti-commutation relations of the creation/annihilation operators are
\[\left\{a_{e}(\vec{p},s),a_{e}^{\dagger}(\vec{p}^{\prime},s^{\prime})\right\}= \left\{b_{e}(\vec{p},s),b_{e}^{\dagger}(\vec{p}^{\prime},s^{\prime})\right\}= \hbar^{3}\delta(\vec{p}-\vec{p}^{\prime})\delta_{ss^{\prime}}, \tag{36}\]
and zero otherwise. The electron creation and annihilation operators have an \(L^{3/2}\) dimension owing to the anti-commutation relation (36); thus, the spinors \(u(p,s)\) and \(v(p,s)\) have a physical dimension of \(E^{1/2}\). The spinors are normalised as
\[\bar{u}(p,s)\,\gamma_{0}\,u(p,s^{\prime})=2E_{p}\,\delta_{ss^{\prime}},\ \mbox{ and }\ \bar{v}(p,s)\,\gamma_{0}\,v(p,s^{\prime})=2E_{p}\,\delta_{ss^{\prime}}.\]
The polarization sums of an electron and a positron are
\[\sum_{s}u(p,s)\bar{u}(p,s)=\not{p}+m,\ \mbox{and}\ \ \sum_{s}v(p,s)\bar{v}(p,s)= \not{p}-m. \tag{37}\]
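The completeness relations (37) are easy to verify numerically. The following minimal sketch (an illustrative addition, assuming the Dirac representation of the \(\gamma\)-matrices and the normalisation \(\bar{u}\,\gamma_{0}\,u=2E_{p}\) stated above) checks both sums for an arbitrary on-shell momentum:

```python
import numpy as np

# Dirac representation of the gamma matrices
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gam = [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sig]

m, p = 1.0, np.array([0.3, -0.5, 0.7])       # arbitrary mass and three-momentum
E = np.sqrt(p @ p + m**2)                    # on-shell energy E_p
sp = sum(pi * si for pi, si in zip(p, sig))  # sigma . p

def u(s):  # electron spinor, positive-frequency part
    chi = np.eye(2)[:, s]
    return np.concatenate([np.sqrt(E + m) * chi, (sp @ chi) / np.sqrt(E + m)])

def v(s):  # positron spinor, negative-frequency part
    chi = np.eye(2)[:, s]
    return np.concatenate([(sp @ chi) / np.sqrt(E + m), np.sqrt(E + m) * chi])

pslash = E * g0 - sum(pi * gi for pi, gi in zip(p, gam))
sum_u = sum(np.outer(u(s), u(s).conj() @ g0) for s in (0, 1))  # sum_s u ubar
sum_v = sum(np.outer(v(s), v(s).conj() @ g0) for s in (0, 1))  # sum_s v vbar
assert np.allclose(sum_u, pslash + m * np.eye(4))  # eq. (37), electron
assert np.allclose(sum_v, pslash - m * np.eye(4))  # eq. (37), positron
```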
photon:The Fourier transformation of transversely polarised \(U(1)\) gauge field \(\mathscr{A}_{a}^{T}(\xi)\) provides a polarization vector of a physical photon, such that:
\[\hbar^{-1/2}\,\mathscr{A}_{a}^{T}(\xi)=:\frac{1}{(2\pi\hbar)^{3}} \int\frac{d^{3}\vec{k}}{\sqrt{2k_{0}}}\sum_{\lambda=1,2}\epsilon_{a}^{(\lambda) }\left(a_{\gamma}(k,\lambda)e^{-i\xi\cdot k/\hbar}+a_{\gamma}^{\dagger}(k, \lambda)e^{+i\xi\cdot k/\hbar}\right),\]
where \(k_{0}\) is the on-shell energy such that \(k_{0}^{2}-\vec{k}\cdot\vec{k}=0\), and \(a_{\gamma}(k,\lambda)\) and \(a_{\gamma}^{\dagger}(k,\lambda)\) are, respectively, the annihilation and creation operators of a photon with momentum \(k\) and polarization \(\lambda\). The creation and annihilation operators have an \(L^{3/2}\) physical dimension owing to the canonical quantisation conditions, e.g.,
\[\left[a_{\gamma}(\vec{k},\lambda),a_{\gamma}^{\dagger}(\vec{k}^{ \prime},\lambda^{\prime})\right]=\hbar^{3}\delta^{3}\Big{(}\vec{k}-\vec{k}^{ \prime}\Big{)}\,\delta_{\lambda^{\prime}\lambda}. \tag{38}\]
Consequently, the polarization vector \(\epsilon_{a}^{(\lambda)}\) has a null physical dimension. The photon polarization sum is normalised as
\[\sum_{\lambda=1,2}\epsilon_{a}^{(\lambda)}(k)\epsilon_{b}^{( \lambda)}(k)=-\eta_{ab}+(1-\xi_{A})\,\frac{k_{a}k_{b}}{\eta_{\circ\circ}k^{ \circ}k^{\circ}}, \tag{39}\]
in the covariant gauge (20).
#### 4.2.2 Gravitational part
vierbein:We consider an asymptotic state of the vierbein field in the asymptotically flat space-time with \(g_{\mu\nu}=\mathrm{diag}\,(1,-1,-1,-1)\). The Fourier transformation of the vierbein field of the fundamental solution (34) provides a polarization vector in the momentum space, such that:
\[\hbar^{-1}\,\mathcal{E}_{\mu}^{a}(x)=:\frac{1}{(2\pi\hbar)^{4}}\int d^{4}p\,\tilde{\mathcal{E}}_{\mu}^{(\lambda)}\left(a_{\mathcal{E}}(p,\lambda)e^{-ix\cdot p/\hbar}+a_{\mathcal{E}}^{\dagger}(p,\lambda)e^{+ix \cdot p/\hbar}\right)\biggr{|}_{\lambda=a}\,. \tag{40}\]
We note that the vacuum concerning the vierbein field is not a flat metric in general, and the Fourier transformation and the momentum space in the curved space-time is not trivial[24]. The vierbein fields appear in the integral kernel of the Fourier transformation as
\[x\cdot p:=g_{\mu\nu}(x)\,x^{\mu}p^{\nu}=\eta_{\circ\circ}\mathcal{E}_{\mu}^{ \circ}(x)\mathcal{E}_{\nu}^{\circ}(x)x^{\mu}p^{\nu},\]
and thus, the integration (40) is not well-defined in the curved space-time. We consider the Fourier transformation only in the asymptotic frame assumed to be flat.
The equation of motion of the free vierbein field in the flat space-time shown in (33) is the same as that of a photon; thus, the Fock space of a free (asymptotic) vierbein operator is the same as that of a photon. The canonical quantization condition requires an equal-time commutation relation on creation and annihilation operators of the physical vierbein fields, such as[10]
\[\left[a_{\mathcal{E}}(p,\lambda),a_{\mathcal{E}}^{\dagger}(p^{ \prime},\lambda^{\prime})\right]=\hbar^{4}\delta\big{(}p_{0}-p_{0}^{\prime} \big{)}\,\delta^{3}\big{(}\vec{p}-\vec{p}^{\prime}\big{)}\,\delta_{\lambda^{ \prime}\lambda}. \tag{41}\]
Consequently, annihilation and creation operators of the vierbein field have the length squared dimension, and a polarization vector in the momentum space has a null physical dimension.
We introduce four polarization vectors of the asymptotic vierbein field propagating along, e.g., the \(x^{1}\)-axis, such as
\[\tilde{\mathcal{E}}_{\mu}^{(\lambda)}:=\frac{1}{\sqrt{2}}\left\{ \begin{array}{ccc}(1,&i,&0,&0),&(\lambda=0),\\ (i,&1,&0,&0),&(\lambda=1),\\ (0,&0,&1,&1),&(\lambda=2),\\ (0,&0,&1,-1),&(\lambda=3).\end{array}\right.\]
with a circular polarization. The vierbein field is constructed from polarization vectors, such as
\[\mathcal{E}_{\mu}^{a}:= \tilde{\mathcal{E}}_{\mu}^{(\lambda=a)},\]
which provides a flat metric tensor
\[g_{\mu\nu}\ = \eta_{\circ\diamond}\mathcal{E}_{\mu}^{\circ}\mathcal{E}_{\nu}^{ \diamond}=\operatorname{diag}\left(1,-1,-1,-1\right).\]
This relation implies the normalisation of the polarization vectors as
\[\sum_{\lambda,\lambda^{\prime}}\eta_{\lambda\lambda^{\prime}}\tilde{\mathcal{E}}_{\mu}^{(\lambda)}(p)\tilde{\mathcal{E}}_{\nu}^{(\lambda^{\prime})}(p)=g_{\mu\nu}. \tag{42}\]
Each vierbein polarization vector represents the spin-one state, and a symmetric product (42) concerning two Lorentz indexes constructs a spin-two state of the quantised metric tensor.
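The reconstruction of the flat metric from the four polarization vectors, (42), can be checked directly; a minimal numerical sketch (illustrative only):

```python
import numpy as np

s = 1 / np.sqrt(2)
# the four vierbein polarization vectors listed above (rows: lambda = 0..3)
E = s * np.array([[1, 1j, 0, 0],
                  [1j, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 1, -1]], complex)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# eq. (42): g_{mu nu} = sum_{lambda,lambda'} eta_{lambda lambda'} E^(lambda)_mu E^(lambda')_nu
g = np.einsum('lk,lm,kn->mn', eta, E, E)
assert np.allclose(g, np.diag([1, -1, -1, -1]))  # flat metric recovered
```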
Two polarization vectors,
\[\mathfrak{p}^{(\lambda=+)}:=\tilde{\mathcal{E}}_{\mu}^{(2)}dp^{\mu}=\frac{1}{\sqrt{2}}\left(dp^{2}+dp^{3}\right)\text{ and }\quad\mathfrak{p}^{(\lambda=\times)}:=\tilde{\mathcal{E}}_{\mu}^{(3)}dp^{\mu}=\frac{1}{\sqrt{2}}\left(dp^{2}-dp^{3}\right),\]
carry dynamical degrees of freedom and are physically observable in the quantum field theory, similar to the polarization vector of the transversely polarised photon in the QED. We use the notations \(\lambda=2=:+\) and \(\lambda=3=:\times\) according to the standard convention. The polarization vectors
\[\tilde{\mathcal{E}}_{\mu}^{(s)}:=\frac{1}{\sqrt{2}}\left(\tilde{\mathcal{E}}_ {\mu}^{(0)}-i\tilde{\mathcal{E}}_{\mu}^{(1)}\right),\text{ and }\ \tilde{\mathcal{E}}_{\mu}^{(l)}:=\frac{1}{\sqrt{2i}}\left(\tilde{\mathcal{E}}_{ \mu}^{(0)}+i\tilde{\mathcal{E}}_{\mu}^{(1)}\right),\]
correspond to a scalar vierbein and a longitudinally polarised vierbein, respectively. These polarization vectors satisfy the covariant gauge condition for the on-shell momentum \(\tilde{p}_{\mu}=(\tilde{p},0,0,\tilde{p})\), namely
\[g^{\mu\nu}\tilde{p}_{\mu}\tilde{\mathcal{E}}_{\nu}^{(\lambda)}=0\]
for \(\lambda=+,\times,s\).
spin-connection:The Fourier transformation of the spin-connection field provides a polarization vector in the momentum space, such that:
\[\hbar^{-1/2}\,\omega{}_{\dot{a}}{}^{b}(\xi)=:\frac{1}{(2\pi\hbar)^{3}}\int\frac{d^{3}\vec{p}}{\sqrt{2p_{0}}}\sum_{\lambda=1,2}\tilde{\omega}^{(\lambda)}{}_{\dot{a}}{}^{b}(p)\left(a_{\omega}(p,\lambda)e^{-i\xi\cdot p/\hbar}+a_{\omega}^{\dagger}(p,\lambda)e^{+i\xi\cdot p/\hbar}\right).\]
In contrast to the vierbein field defined on the global manifold, the spin-connection field is defined on the local inertial manifold after Romanising the subscripts. Its Fourier transform is, therefore, well-defined in the inertial space. The equation of motion for the free spin-connection field \(\omega_{\diamond}{}^{\circ a}\) shown in (30) is the same as that of a photon; thus, the Fock space of a free (asymptotic) spin-connection operator is also the same as that of a photon. We set a canonical quantisation condition of an equal-time commutation relation on the creation and annihilation operators of the physical spin-connection fields as
\[\left[a_{\omega}(p,\lambda),a_{\omega}^{\dagger}(p^{\prime},\lambda^{\prime}) \right]\bigr{|}_{p_{0}=p_{0^{\prime}}}=\hbar^{3}\,\delta^{3}\Bigl{(}\vec{p}- \vec{p}^{\prime}\Bigr{)}\,\delta_{\lambda^{\prime}\lambda}.\]
Consequently, a polarization vector of the spin-connection field in the momentum space has a null physical dimension.
We can construct polarization vectors of the spin-two field for the spin-connection in the momentum space. For a spin-connection momentum \(p_{a}:=(p_{0},p\sin\vartheta\cos\varphi,p\sin\vartheta\sin\varphi,p\cos\vartheta)\), we introduce three linearly polarised vectors, such that:
\[\tilde{\varepsilon}_{a}^{(\lambda=1)} :=(0,\cos\vartheta\cos\varphi,\cos\vartheta\sin\varphi,-\sin\vartheta)\,,\] \[\tilde{\varepsilon}_{a}^{(\lambda=2)} :=(0,-\sin\varphi,\cos\varphi,0)\,,\] \[\tilde{\varepsilon}_{a}^{(\lambda=3)} :=\frac{1}{\sqrt{p_{0}^{2}-p^{2}}}\left(p,p_{0}\sin\vartheta\cos\varphi,p_{0}\sin\vartheta\sin\varphi,p_{0}\cos\vartheta\right).\]
The polarizations \(\lambda=1,2\) are the transverse components of the physical spin-connection, and \(\lambda=3\) corresponds to the longitudinal one appearing in the virtual state in the propagator. The circularly polarised vectors are constructed from them as
\[\tilde{\varepsilon}_{a}^{(+)}:=\frac{1}{\sqrt{2}}\left(\tilde{\varepsilon}_{a}^{(1)}+i\tilde{\varepsilon}_{a}^{(2)}\right), \tag{43a}\] \[\tilde{\varepsilon}_{a}^{(-)}:=\frac{1}{\sqrt{2}}\left(\tilde{\varepsilon}_{a}^{(1)}-i\tilde{\varepsilon}_{a}^{(2)}\right),\] (43b) \[\tilde{\varepsilon}_{a}^{(l)}:=\tilde{\varepsilon}_{a}^{(3)}. \tag{43c}\]
They satisfy the covariant gauge fixing condition \(\eta^{\circ\circ}p_{\circ}\tilde{\varepsilon}_{\circ}^{(\lambda)}=0\).
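The transversality holds for an arbitrary, even off-shell, momentum in this parametrisation; a minimal numerical sketch (illustrative only):

```python
import numpy as np

p0, p, th, ph = 1.3, 0.7, 0.4, 2.1  # arbitrary (off-shell) momentum parameters
st, ct, sph, cph = np.sin(th), np.cos(th), np.sin(ph), np.cos(ph)
p_a = np.array([p0, p * st * cph, p * st * sph, p * ct])

eps1 = np.array([0.0, ct * cph, ct * sph, -st])
eps2 = np.array([0.0, -sph, cph, 0.0])
eps3 = np.array([p, p0 * st * cph, p0 * st * sph, p0 * ct]) / np.sqrt(p0**2 - p**2)

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for eps in (eps1, eps2, eps3):
    # covariant gauge condition: eta^{ab} p_a eps_b = 0
    assert abs(p_a @ eta @ eps) < 1e-12
```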
We can construct polarization vectors of the spin-connection fields owing to spin-one polarization vectors (43a)\(\sim\)(43c) as
\[\tilde{\omega}^{(0)}{}_{\dot{a}\dot{b}}:=\tilde{\varepsilon}_{a}^{(+)}\tilde{\varepsilon}_{b}^{(-)}-\tilde{\varepsilon}_{a}^{(-)}\tilde{\varepsilon}_{b}^{(+)}, \tag{44a}\] \[\tilde{\omega}^{(+)}{}_{\dot{a}\dot{b}}:=\tilde{\varepsilon}_{a}^{(+)}\tilde{\varepsilon}_{b}^{(l)}-\tilde{\varepsilon}_{a}^{(l)}\tilde{\varepsilon}_{b}^{(+)},\] (44b) \[\tilde{\omega}^{(-)}{}_{\dot{a}\dot{b}}:=\tilde{\varepsilon}_{a}^{(-)}\tilde{\varepsilon}_{b}^{(l)}-\tilde{\varepsilon}_{a}^{(l)}\tilde{\varepsilon}_{b}^{(-)}, \tag{44c}\]
where the value of a dotted index equals that of the corresponding undotted one. Here, we utilise the convention
\[\tilde{\omega}^{(\lambda)}{}_{\dot{a}\dot{b}}:=\tilde{\omega}^{(\lambda)}{}_{\dot{a}}{}^{\circ}\,\eta_{\circ b}\big{|}_{b\to\dot{b}}\ \ \text{and}\ \ \tilde{\omega}^{(\lambda)}{}_{\dot{a}\dot{b}}:=\tilde{\omega}^{(\lambda)}{}^{\circ}{}_{\dot{b}}\,\eta_{\circ a}\big{|}_{a\to\dot{a}}\]
to distinguish two different polarization vectors concerning two subscripts. They are anti-symmetric concerning two suffixes and satisfy the covariant gauge condition as
\[\tilde{\omega}^{(\lambda)}{}_{\dot{a}\dot{b}}=-\tilde{\omega}^{(\lambda)}{}_ {\dot{b}\dot{a}}\ \text{and}\ \ \eta^{\circ\circ}p_{\circ}\tilde{\omega}^{(\lambda)}{}_{\dot{a}\dot{\circ}}=0.\]
The polarization vectors \(\tilde{\omega}^{(+)}{}_{\dot{a}\dot{b}}\), \(\tilde{\omega}^{(0)}{}_{\dot{a}\dot{b}}\) and \(\tilde{\omega}^{(-)}{}_{\dot{a}\dot{b}}\) provide the spin-two polarization tensors with helicity states \(-1\), \(0\) and \(+1\), respectively.
Polarization vectors (44a)\(\sim\)(44c) provide the polarization sum such that
\[\sum_{\lambda=-1,0,1}\tilde{\omega}^{(\lambda)}{}_{\dot{a}\dot{b}}\,\tilde{\omega}^{(\lambda)}{}_{\dot{c}\dot{d}} =\left(\eta_{ac}+\left(1-\xi_{\omega}\right)\frac{p_{a}p_{c}}{\eta^{\circ\circ}p_{\circ}p_{\circ}}\right)\left(\eta_{bd}+\left(1-\xi_{\omega}\right)\frac{p_{b}p_{d}}{\eta^{\circ\circ}p_{\circ}p_{\circ}}\right)\] \[-\left(\eta_{ad}+\left(1-\xi_{\omega}\right)\frac{p_{a}p_{d}}{\eta^{\circ\circ}p_{\circ}p_{\circ}}\right)\left(\eta_{bc}+\left(1-\xi_{\omega}\right)\frac{p_{b}p_{c}}{\eta^{\circ\circ}p_{\circ}p_{\circ}}\right) \tag{45}\]
in the covariant gauge fixing (25a) with the unitary gauge (\(\xi_{\omega}=0\)).
ghosts:Although the ghost field for the \(U(1)\) gauge does not appear in scattering processes, that for gravity is indispensable in the QGED. We define generalised four-momenta of the ghost/anti-ghost fields as
\[\pi_{\chi}^{a}:=\frac{\delta\mathcal{L}_{GR;gh}}{\delta\left(\partial_{a}\chi\right)}=-\eta^{a\circ}\partial_{\circ}\bar{\chi}\ \ \text{and}\ \ \pi_{\bar{\chi}}^{a}:=\frac{\delta\mathcal{L}_{GR;gh}}{\delta\left(\partial_{a}\bar{\chi}\right)}=\eta^{a\circ}\partial_{\circ}\chi-c_{\rm gr}\,\omega_{\circ}{}^{a\circ}\chi,\]
owing to ghost Lagrangian (28). The Fourier transformation of ghost/anti-ghost fields provides those in the momentum space, such that:
\[\left(\kappa_{\mathrm{E}}\hbar\right)^{-1}\chi(\xi)=:\frac{1}{\left(2\pi\hbar\right)^{4}}\int d^{4}p\,\tilde{\chi}(p)\left(a_{\chi}(p)e^{-i\xi\cdot p/\hbar}+a_{\chi}^{\dagger}(p)e^{+i\xi\cdot p/\hbar}\right), \tag{46}\] and similarly for the anti-ghost with \(\chi\to\bar{\chi}\) and \(a_{\chi}\to a_{\bar{\chi}}\).
The equation of motion for ghost/anti-ghost fields are the Klein-Gordon equation; thus, the Fock space of a free ghost/anti-ghost operators is the same as that of the standard scalar field. Commutation relations for creation and annihilation operators of ghost/anti-ghost fields are
\[\left\{a_{\chi}(p),a_{\chi}^{\dagger}(p^{\prime})\right\}=\left\{a_{\tilde{\chi }}(p),a_{\tilde{\chi}}^{\dagger}(p^{\prime})\right\}=\hbar^{4}\delta\!\left(p _{0}-p_{0}^{\prime}\right)\delta^{3}\!\left(\vec{p}-\vec{p}\,^{\prime}\right).\]
Consequently, ghost/anti-ghost fields in the momentum space have a null physical dimension. We normalise the ghost/anti-ghost fields in the momentum space such that
\[\tilde{\chi}(p)^{2}=\tilde{\bar{\chi}}(p)^{2}=1. \tag{47}\]
Fields appearing in the QGED and their physical dimensions are summarised in Table 2.
### Vertices
In our convention, we define the Feynman rule for an interaction vertex owing to the Fourier transform of the Lagrangian without any extra phase (imaginary unit). The vertex rule provides a multiplicative factor after truncating wave functions and creation/annihilation operators. The gravitational interaction vertex of an electron, \(\widetilde{V}_{\omega\psi}\), is read off from the electron–spin-connection coupling term in the interaction Lagrangian and is proportional to \(-i\,c_{\rm gr}\) times the tensor (spin) coupling matrix of the spin-connection to the electron current,
where the factor \(L_{p}^{-2}\) is omitted since an overall factor of the Lagrangian does not contribute to the Feynman rules. This vertex term is defined in the inertial space. The vierbein functions are factored out and give the determinant \(\det[\mathcal{E}]\), which is absorbed into a part of the volume form. After the Fourier transformation, the second term of (49) is
\[\eta_{\circ\circ}\,\delta^{b}_{c}\,\delta^{a}_{d}\;\tilde{\omega}^{\dagger}{}_{a}{}^{c\circ}\,\tilde{\omega}{}_{b}{}^{d\circ}=\sum_{\dot{a}}\tilde{\omega}^{\dagger}{}_{\dot{a}\circ}\tilde{\omega}_{\dot{a}\circ}\,\eta^{\circ\circ},\]
which provides a summation of the normalization constant of the connection field; thus, it does not contribute to scattering amplitudes.
We apply a further modification to the Lagrangian using the torsion-less condition. Given the vierbein field, we can algebraically solve the torsion-less equation (31) using our choice of independent spin-connection components (24); the solution expresses the spin-connection components algebraically in terms of the vierbein field and its first derivatives.
### Propagators

The time-ordered two-point function of a free field \(\phi\), \(\langle 0|T^{*}\phi(\xi)\phi(0)|0\rangle\), provides the _Feynman propagator_ (51). The Feynman propagator naturally appears in some extended Riemannian metric (the \(\theta\)-metric[24]). The classical counterpart of a quantum propagator is Green's function for the equation of motion of the free gauge field, which is a wave equation; thus, the Fock space of its creation and annihilation operators is the same as that of the QED gauge field.
The Fourier transformation of (51) provides a quantum propagator in the momentum space such that:
\[D^{\phi}(p)=\frac{i}{(2\pi\hbar)^{2}}\int_{T^{\bullet}\mathcal{ M}}\mathfrak{v}\,e^{-ip\cdot\xi/\hbar}\langle 0|T^{\bullet}\phi(\xi)\phi(0)|0 \rangle=\frac{\sum_{\lambda}\tilde{\phi}^{(\lambda)}(p)\tilde{\phi}^{(\lambda) }(p)}{-\eta_{\circ\circ}p^{\circ}p^{\circ}-i\delta}, \tag{52}\]
where \(p\) is a momentum of the field and \(\tilde{\phi}^{(\lambda)}(p)\) is a wave function in the momentum space with polarization \(\lambda\). A contraction of vectors \(p\in T^{*}\tilde{\mathscr{M}}\) and \(\xi\in T^{*}\mathscr{M}\) is \(p\cdot\xi:=\eta^{\circ\circ}p_{\circ}\xi_{\circ}\).
We obtain propagators for QGED fields owing to (52) as follows:
* Photon propagator: \[D^{\mathcal{A}}_{\,ab}(p)=\frac{\sum_{\lambda}\epsilon^{(\lambda) }_{a}(p)\epsilon^{(\lambda)}_{b}(p)}{-\eta_{\circ\circ}p^{\circ}p^{\circ}-i\delta}\] (53a)
* Electron propagator: \[S_{\psi}(p)=\frac{\sum_{s}u(p,s)\bar{u}(p,s)}{-\eta_{\circ\circ}p^{\circ}p^{ \circ}+m^{2}-i\delta}\] (53b)
* Vierbein propagator: \[D^{\mathcal{E}}_{\mu\nu}(p)=\frac{\sum_{\lambda}\tilde{\mathcal{E}}^{(\lambda )}_{\mu}(p)\tilde{\mathcal{E}}^{(\lambda)}_{\nu}(p)}{-\eta_{\circ\circ}p^{ \circ}p^{\circ}-i\delta}\] (53c)
* Spin-connection propagator: \[D^{\omega}_{\,ab;cd}(p)=\frac{\sum_{\lambda}\tilde{\omega}^{(\lambda)}{}_{ab}(p)\,\tilde{\omega}^{(\lambda)}{}_{cd}(p)}{-\eta_{\circ\circ}p^{\circ}p^{\circ}-i\delta}\] (53d)
### Integration measure and S-matrix
Integration measures for each loop momentum \(l_{j}\) and each outgoing external momentum \(k_{j}\) are, respectively, taken to be
\[\prod_{j}\left(-1\right)^{\sigma_{j}}\left(\mu_{R}^{2}\right)^{\varepsilon_{UV}}\frac{d^{D}l_{j}}{i(2\pi)^{D}}\ \ \text{and}\ \ \prod_{j}\frac{d^{3}k_{j}}{(2\pi)^{3}}\frac{1}{2k_{j}^{0}}, \tag{54}\]
where \(k_{j}^{0}\) is the zeroth component of the \(j\)'th outgoing momentum \(k_{j}\). In the dimensional regularisation method, we need to put an arbitrary parameter with energy dimension, \(\mu_{R}\), to adjust the physical dimension of the loop contribution. We set \(\sigma_{j}=1\), if a \(j\)'th loop particle is an electron or a ghost; otherwise, \(\sigma_{j}=0\). An imaginary unit in the loop integration is eliminated owing to the analytic continuation (the Wick rotation) concerning the zeroth component \(dl_{j}^{0}\).
The \(S\)-matrix is defined owing to the interaction Lagrangian as
\[S:=T^{\ast}\exp{\left[i\int d^{4}\xi\mathcal{L}_{int}\left(\xi \right)\right]},\]
and its perturbative expansion is
\[S=1+\sum_{N=1}^{\infty}\frac{i^{N}}{N!}\int d^{4}\xi_{1}\cdots \int d^{4}\xi_{N}T^{\ast}\left[\mathcal{L}_{int}\left(\xi_{1}\right)\cdots \mathcal{L}_{int}\left(\xi_{N}\right)\right].\]
The scattering matrix, namely \(T\)-matrix, is defined owing to the \(S\)-matrix as
\[S=1+iT. \tag{55}\]
Matrix elements \(S_{fi}:=\langle f|S|i\rangle\) are expressed as
\[S_{fi}=\delta_{fi}+i(2\pi)^{4}\delta^{4}(p_{f}-p_{i})\,T_{fi},\]
where \(|i\rangle\) and \(|f\rangle\) are, respectively, initial and final states, and \(p_{i}\) and \(p_{f}\) are their total momentum. Our convention of Feynman rules, including the gravitational bosons, gives a factor of the scattering matrix in total, such as
\[i^{N}=i\times i^{N+L-1}\times i^{-L},\]
where the second factor is absorbed into the propagators and the last factor into loop integrals; thus, the first \(i\) gives a correct factor of \(iT\).
## 5 Renormalisation of the QGED
This section demonstrates the renormalisation of the QGED at the one-loop order owing to the Feynman rules provided in the previous section. In the following calculations, we exploit the Feynman gauge for both the photon and spin-connection fields, such that \(\xi_{A}=\xi_{\omega}=1\).
Figure 3: Pictorial representations of the QGED propagators. A solid circle shows the interaction vertex. Equation numbers indicate the propagator function and field normalisation of the corresponding field.
### Power counting
Before starting the one-loop renormalisation of the QGED, we apply power counting to confirm its divergence degree. For details of power counting, see, e.g., Ref.[26]. For each vertex in the interaction Lagrangian, we can judge the ultraviolet divergence degree using
\[\rho=D-\frac{D-1}{2}N_{f}-\frac{D-2}{2}N_{b}-N_{\delta},\]
where \(N_{f}\), \(N_{b}\) and \(N_{\delta}\) are the numbers of electrons, bosons and derivative couplings in the vertex, respectively. When the degree exceeds zero, it induces a power-like ultraviolet divergence, and the theory is unrenormalisable. On the other hand, when the degree is equal to zero, it induces a logarithmic divergence, and the theory is possibly renormalisable. The degrees of the vertices in Figure 2 in four-dimensional space-time are
\[\rho_{a}=\rho_{b}=4-\frac{3}{2}\times 2-1-0=0,\quad\rho_{c}=\rho_{d}=4- \frac{3}{2}\times 0-3-1=0;\]
thus, the QGED may be renormalisable.
We can assign the superficial degree of divergence \(d\) to any Feynman diagram. When a diagram has \(d\geqslant 0\), it gives an ultraviolet-divergent integral. We denote the numbers of loops, external/internal fields and vertices as shown in Table 3. In \(D\)-dimensional space-time, the superficial degree of divergence is provided as
\[d=DL-2I_{\gamma}-2I_{\omega}-2I_{\chi}-2I_{\mathcal{E}}-I_{\psi}+V_{\mathcal{E }}+V_{\chi}. \tag{56}\]
We can express the superficial degree of divergence only by the number of external particles using identities such as
\[V_{\omega}+V_{\gamma} =I_{\psi}+\frac{1}{2}E_{\psi},\] \[V_{\gamma} =2I_{\gamma}+E_{\gamma},\] \[V_{\mathcal{E}} =I_{\mathcal{E}}+\frac{1}{2}E_{\mathcal{E}},\] \[V_{\chi} =I_{\chi},\] \[E_{\omega}+2I_{\omega} =V_{\mathcal{E}}+V_{\omega}+V_{\chi},\] \[L-1 =I_{\gamma}+I_{\psi}+I_{\mathcal{E}}+I_{\omega}+I_{\chi}-V_{\mathcal{E}}-V_{\chi}-V_{\omega}-V_{\gamma}.\]
\begin{table}
\begin{tabular}{c c c} Type & \multicolumn{2}{c}{the number in diagrams} \\ \hline & electron & \(E_{\psi}\) \\ External lines & photon & \(E_{\gamma}\) \\ & vierbein & \(E_{\mathcal{E}}\) \\ & spin-connection & \(E_{\omega}\) \\ \hline & electron & \(I_{\psi}\) \\ Internal lines & photon & \(I_{\gamma}\) \\ & vierbein & \(I_{\mathcal{E}}\) \\ & spin-connection & \(I_{\omega}\) \\ & ghost/anti-ghost & \(I_{\chi}\) \\ \hline & (electron)\({}^{2}\)-(photon) & \(V_{\gamma}\) \\ Vertex & (electron)\({}^{2}\)-(spin-connection) & \(V_{\omega}\) \\ & (vierbein)\({}^{2}\)-(spin-connection)*) & \(V_{\mathcal{E}}\) \\ & (ghost)-(anti-ghost)-(spin-connection)*) & \(V_{\chi}\) \\ \hline Loops & & \(L\) \\ & & \\ \end{tabular}
*) derivative coupling
\end{table}
Table 3: The number of ingredients of the Feynman diagrams.
Consequently, we obtain the superficial degree of divergence in four-dimensional space-time, such that:
\[d=4-E_{\gamma}-E_{\mathcal{E}}-E_{\omega}-\frac{3}{2}E_{\psi}.\]
Therefore, ultraviolet-divergent diagrams (\(d\geqslant 0\)) have external particles satisfying
\[E_{\gamma}+E_{\mathcal{E}}+E_{\omega}+\frac{3}{2}E_{\psi}\leqslant 4;\]
thus, only a finite number of diagrams gives an ultraviolet divergence, which can be eliminated by a finite number of counter terms.
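Both counting rules lend themselves to a quick mechanical check; the Python sketch below (an illustrative addition, with vertex field contents read off Table 3) evaluates \(\rho\) for the four QGED vertices and enumerates small external-field contents with \(d\geqslant 0\):

```python
from itertools import product

def rho(Nf, Nb, Ndelta, D=4):
    """Vertex divergence degree: rho = D - (D-1)/2 Nf - (D-2)/2 Nb - Ndelta."""
    return D - (D - 1) / 2 * Nf - (D - 2) / 2 * Nb - Ndelta

# (electrons, bosons, derivative couplings) for the four QGED vertices
vertices = {
    "(electron)^2-(photon)":             (2, 1, 0),
    "(electron)^2-(spin-connection)":    (2, 1, 0),
    "(vierbein)^2-(spin-connection)":    (0, 3, 1),
    "(ghost)-(anti-ghost)-(spin-conn.)": (0, 3, 1),
}
for name, content in vertices.items():
    print(f"rho[{name}] = {rho(*content):g}")  # all vanish: possibly renormalisable

def d(E_gamma, E_vierbein, E_omega, E_psi):
    """Superficial degree: d = 4 - E_gamma - E_E - E_omega - 3/2 E_psi."""
    return 4 - E_gamma - E_vierbein - E_omega - 1.5 * E_psi

divergent = [c for c in product(range(5), range(5), range(5), (0, 2))
             if d(*c) >= 0]
print(len(divergent), "ultraviolet-divergent external-field contents")
```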
### One-loop renormalisation
This section provides the renormalisation constants and counter terms of the QGED at the one-loop level. Although the QED renormalisation procedure is established and well known, we repeat it as a guide for the renormalisation of the gravitational part. We exploit the on-shell renormalisation conditions such that
1. the pole position of a propagator should be located at the physical mass,
2. residues of propagators at the pole should be unity,
and the charge renormalisation utilised in Refs.[25, 27, 28]. This report refers to the charge renormalisation as the coupling-constant renormalisation. Owing to the second on-shell condition, we can omit self-energy correction diagrams on external lines.
Throughout this study, we exploit dimensional regularisation to regulate the ultraviolet divergences and introduce fictitious masses for massless bosons to treat the infrared divergences.
#### 5.2.1 Renormalisation constants
For the QED part, the renormalisation constants \(Y_{\mathscr{A}}\), \(Z_{\psi\mathscr{A}}\), \(Z_{\mathscr{A}}\) and \(\delta m_{e}^{\mathscr{A}}\) are introduced into the Lagrangian. In the QGED, the additional renormalisation constants \(Y_{\omega}\), \(Z_{\omega}\), \(Z_{\mathcal{E}}\) and \(\delta m_{e}^{\omega}\) appear. Bare objects in the Lagrangian, marked by a superscript \(\mathcal{C}\), are replaced as
\[\psi^{\mathcal{C}} =Z_{\psi\mathscr{A}}{}^{1/2}\,Z_{\psi\omega}{}^{1/2}\,\psi,\] \[\mathscr{A}^{\mathcal{C}}{}_{a} =Z_{\mathscr{A}}{}^{1/2}\,\mathscr{A}_{a},\quad\omega^{\mathcal{C}}{}_{c}{}^{ab}=Z_{\omega}{}^{1/2}\,\omega_{c}{}^{ab},\quad\mathcal{E}^{\mathcal{C}}{}_{\mu}^{a}=Z_{\mathcal{E}}{}^{1/2}\,\mathcal{E}^{a}_{\mu},\] \[e^{\mathcal{C}} =Y_{\mathscr{A}}\,e,\quad c_{\rm gr}^{\mathcal{C}}=Y_{\omega}\,c_{\rm gr},\] \[m^{\mathcal{C}}_{e} =m_{e}+\left(\delta m^{\mathscr{A}}_{e}+\delta m^{\omega}_{e}\right).\]
In addition, there are several mixed corrections at \(\mathcal{O}(e\,c_{\rm gr})\). Then, we add the counter-term Lagrangian to the renormalised one.
#### 5.2.2 Photon and spin-connection renormalisation
First, we provide the gauge-boson renormalisation in the QGED. Vacuum polarisation diagrams in the QGED are depicted in Figures 4 and 5; the former shows the electron-loop diagrams, and the latter shows the boson-loop diagrams. We calculate the vacuum polarization at \(\mathcal{O}(e^{2})\), \(\mathcal{O}(c_{\rm gr}^{2})\) and \(\mathcal{O}(e\,c_{\rm gr})\).
Owing to the Lorentz invariance of the photon wave function, the vacuum polarization diagram of a photon shown in Figure 4-(a) can be expressed as
\[\text{Figure 4-(a)}=:\Pi^{(\gamma e\gamma)}_{ab}(q^{2})=\left(\eta_{ab}-\frac{q_{a}q_{b}}{q^{2}}\right)a_{\gamma e\gamma}(q^{2})+\frac{q_{a}q_{b}}{q^{2}}b_{\gamma e\gamma}(q^{2})\] (57a) in general, where \(q_{a}\) is a photon momentum. After standard calculations, we obtain the results as \[a_{\gamma e\gamma}(q^{2}) =-\frac{\alpha}{4\pi}\frac{4}{3}q^{2}\left(C_{UV}-\log\frac{m^{2}_{e}}{\mu^{2}_{R}}\right), \tag{57b}\] \[b_{\gamma e\gamma}(q^{2}) =0, \tag{57c}\]
where \(C_{UV}:=1/\varepsilon_{UV}-\gamma_{E}+\log 4\pi\) and \(\alpha:=e^{2}/4\pi\). The vacuum polarisation fulfils a current conservation such as
\[\eta^{\circ\circ}q_{\circ}\Pi^{(\gamma e\gamma)}_{\circ\circ}(q^{2})=0.\]
We obtain the renormalisation constant for a photon wave function as
\[Z_{\mathscr{A}}{}^{1/2}=1-\frac{1}{2}\left.\frac{\partial}{\partial q^{2}}a_{\gamma e\gamma}(q^{2})\right|_{q^{2}=0}=1-\frac{1}{2}\frac{\alpha}{4\pi}\frac{4}{3}\left(C_{UV}-\log\frac{m_{e}^{2}}{\mu_{R}^{2}}\right). \tag{58}\]
The electron-loop contribution to the spin-connection vacuum polarization, Figure 4-(b), defines \(a_{\omega e\omega}(q^{2};m_{e})\) in the same decomposition. The boson-loop diagrams, Figures 5-(a) and 5-(b), provide the remaining contributions; a factor of one-half in the vierbein loop is due to the symmetry factor of two identical particles in the loop:
\[a_{\omega\mathcal{E}\omega}(q^{2}) =0,\quad a_{\omega\chi\omega}(q^{2})=-\frac{\alpha_{\rm gr}}{4\pi}q^{2}\,\frac{1}{6}\left(C_{UV}+\frac{5}{3}-\log\frac{-q^{2}}{\mu_{R}^{2}}\right),\] \[b_{\omega\mathcal{E}\omega}(q^{2}) =-b_{\omega\chi\omega}(q^{2})=-\frac{\alpha_{\rm gr}}{4\pi}q^{2}\,\frac{1}{4}\left(C_{UV}+2-\log\frac{-q^{2}}{\mu_{R}^{2}}\right)\implies b_{\omega\mathcal{E}\omega}(q^{2})+b_{\omega\chi\omega}(q^{2})=0.\]
Thus, the vacuum polarisation of boson loops fulfils current conservation after summing up two contributions, even though each of them does _NOT_.
Lastly in this section, we provide the vacuum polarization of a vierbein field depicted in Figure 5-(c). After calculations following the Feynman rules, we obtain
\[\text{Figure 5-(c)}=:\Pi_{ab}^{(\mathcal{E}\omega\mathcal{E})}(q^{2}) =c_{\rm gr}{}^{2}\left(\mu_{R}^{2}\right)^{\epsilon_{UV}}\int\frac{d^{D}k}{i(2\pi)^{D}}\frac{\eta^{\circ\circ}q_{\circ}(k_{\circ}+q_{\circ})\eta_{ab}-q_{a}(q_{b}+k_{b})}{k^{2}(k-q)^{2}},\] \[=\left(\eta_{ab}-\frac{q_{a}q_{b}}{q^{2}}\right)a_{\mathcal{E}\omega\mathcal{E}}(q^{2})+\frac{q_{a}q_{b}}{q^{2}}b_{\mathcal{E}\omega\mathcal{E}}(q^{2}),\]
where
\[a_{\mathcal{E}\omega\mathcal{E}}(q^{2}) =-\frac{\alpha_{\rm gr}}{4\pi}q^{2}\left(\frac{1}{2}C_{UV}+1-\frac{1}{2}\log\frac{-q^{2}}{\mu_{R}^{2}}\right),\] \[b_{\mathcal{E}\omega\mathcal{E}}(q^{2}) =0,\]
which fulfils the current conservation by itself.
We introduce into the QGED a total of \(N_{f}\) fermions, of which \(N_{cf}\) are electromagnetically charged,
Figure 5: Spin-connection and vierbein vacuum-polarization diagrams with a boson loop.
and summarise the renormalisation constants for the bosonic fields:
\[Z_{\mathscr{A}}{}^{1/2}=1-\frac{1}{2}\left.\frac{\partial}{\partial q^{2}}\sum_{i=1}^{N_{cf}}a_{\gamma e\gamma}(m_{i};q^{2})\right|_{q^{2}=0}=1-\frac{1}{2}\frac{\alpha}{4\pi}\frac{4}{3}\left(C_{UV}-\sum_{i=1}^{N_{cf}}\log\frac{m_{i}^{2}}{\mu_{R}^{2}}\right), \tag{59a}\]
\[Z_{\omega}{}^{1/2}=1-\frac{1}{2}\left.\frac{\partial}{\partial q^{2}}\left(\sum_{i=1}^{N_{f}}a_{\omega e\omega}\left(q^{2};m_{i}\right)+a_{\omega\chi\omega}(q^{2})+a_{\omega\mathcal{E}\omega}(q^{2})\right)\right|_{q^{2}=-\mu_{R}^{2}}=1-\frac{1}{2}\frac{\alpha_{\rm gr}}{4\pi}\frac{1}{12}\left(\left(1-8N_{f}\right)C_{UV}-\frac{1}{3}+24N_{f}+8\sum_{i=1}^{N_{f}}\log\frac{m_{i}^{2}}{\mu_{R}^{2}}\right). \tag{59b}\]
The mixed constant \(Z_{\mathscr{A}\omega}{}^{1/2}\) at \(\mathcal{O}(e\,c_{\rm gr})\) (59c) and the vierbein constant \(Z_{\mathcal{E}}{}^{1/2}\) at \(\mathcal{O}(c_{\rm gr}^{2})\) (59d) are obtained in the same manner; the fictitious boson mass regulating the infrared divergence is set to zero in the rightmost expressions. The Feynman parameter integration of the two-point function \(F_{n}\) is defined as
\[F_{n}(\mu_{1},\mu_{2},\mu_{3}):=\int dx\,x^{n}\log\left(\mu_{1}\left(1-x\right)+ \mu_{2}\,x-\mu_{3}\,x(1-x)\right).\]
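For reference, \(F_{n}\) can be evaluated numerically straight from this definition; a minimal sketch (illustrative only, scipy assumed; the real-valued quadrature is valid only while the argument of the logarithm stays positive, i.e., below threshold):

```python
from scipy.integrate import quad
import numpy as np

def F(n, mu1, mu2, mu3):
    """F_n(mu1, mu2, mu3) = int_0^1 dx x^n log(mu1 (1-x) + mu2 x - mu3 x (1-x))."""
    integrand = lambda x: x**n * np.log(mu1 * (1 - x) + mu2 * x - mu3 * x * (1 - x))
    value, _ = quad(integrand, 0.0, 1.0)
    return value

# sanity checks: for mu1 = mu2 = mu and mu3 = 0, F_n = log(mu) / (n + 1)
print(F(0, 2.0, 2.0, 0.0))  # log(2)   ~ 0.6931
print(F(1, 2.0, 2.0, 0.0))  # log(2)/2 ~ 0.3466
```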
Owing to these results, we can obtain the derivatives of these functions as
\[\partial\Sigma^{1}_{\mathscr{A}}(m_{e}^{2}) :=\left.\frac{\partial}{\partial\not{p}}\Sigma^{1}_{\mathscr{A}} (p^{2})\right|_{p^{2}=m_{e}^{2}}=\quad\frac{\alpha}{4\pi}\frac{1}{m_{e}} \left(8+4\log\frac{\lambda^{2}}{m_{e}^{2}}\right),\] \[\partial\Sigma^{\gamma}_{\mathscr{A}}(m_{e}^{2}) :=\left.\frac{\partial}{\partial\not{p}}\Sigma^{\gamma}_{\mathscr{ A}}(p^{2})\right|_{p^{2}=m_{e}^{2}}=-\frac{\alpha}{4\pi}\frac{1}{m_{e}^{2}} \left(6+2\log\frac{\lambda^{2}}{m_{e}^{2}}\right).\]
The fictitious photon mass will be eliminated from scattering amplitudes by adding real-photon radiation diagrams. The QED renormalisation constants of the electron field and mass are obtained using the above functions, such that
\[Z_{\psi\mathscr{A}}{}^{\dagger 2} =1-\frac{1}{2}\left(\Sigma^{\gamma}_{\mathscr{A}}\left(m_{e}^{2} \right)+m_{e}\left(m_{e}\,\partial\Sigma^{\gamma}_{\mathscr{A}}\left(m_{e}^{2} \right)+\partial\Sigma^{1}_{\mathscr{A}}\left(m_{e}^{2}\right)\right)\right),\] \[=1-\frac{1}{2}\frac{\alpha}{4\pi}\left(C_{UV}+4-\log\frac{m_{e}^ {2}}{\mu_{R}^{2}}+2\log\frac{\lambda^{2}}{m_{e}^{2}}\right)\] (60a) and \[\delta m_{e}^{\mathscr{A}} =m_{e}\Sigma^{\gamma}_{\mathscr{A}}\left(m_{e}^{2}\right)+\Sigma ^{1}_{\mathscr{A}}\left(m_{e}^{2}\right),\] \[=-\frac{\alpha}{4\pi}m_{e}\left(3C_{UV}+4-3\log\frac{m_{e}^{2}}{ \mu_{R}^{2}}\right). \tag{60b}\]
Similarly, we obtain the electron self-energy in the QGED as
\[\text{Figure 6-(b)}=:\Sigma_{\omega}(\not{p})=\not{p}\,\Sigma_{\omega}^{\gamma}\!\left(p^{2}\right)+\Sigma_{\omega}^{1}\!\left(p^{2}\right),\]
whose explicit one-loop expressions, evaluated with the spin-connection propagator in the Feynman gauge, provide the electron wave-function renormalisation constant \(Z_{\psi\omega}{}^{1/2}\) of (61a) and the mass counter term
\[\delta m_{e}^{\omega} =m_{e}\,\Sigma_{\omega}^{\gamma}\left(m_{e}^{2}\right)+\Sigma_{\omega}^{1}\left(m_{e}^{2}\right)=-\frac{\alpha_{\rm gr}}{4\pi}\,9\left(C_{UV}-\frac{4}{3}-\log\frac{m_{e}^{2}}{\mu_{R}^{2}}\right). \tag{61b}\]
#### 5.2.4 Coupling constant renormalisation
Coupling-constant renormalisation constants are provided using the vertex diagrams in Figures 7 and 8. These diagrams define the vertex renormalisation constants \(Z^{V}_{\mathscr{A}\mathscr{A}}\), \(Z^{V}_{\mathscr{A}\omega}\), \(Z^{V}_{\omega\mathscr{A}}\) and \(Z^{V}_{\omega\omega}\), from which each coupling renormalisation follows as \(Y=Z^{V}\,Z_{\psi}{}^{-1}\,Z_{\rm boson}{}^{-1/2}\) with the electron and boson wave-function constants of the corresponding vertex.
To complete the one-loop renormalisation of the QGED, vertex renormalisation constants are needed. However, current conservation induces the Ward-Takahashi identity[29, 30], such that
\[Z^{V}_{\mathscr{A}\mathscr{A}} =Z_{\psi\mathscr{A}}, \tag{62a}\] \[Z^{V}_{\mathscr{A}\omega} =\left(1-\varepsilon_{UV}\right)Z_{\psi\omega},\] (62b) \[Z^{V}_{\omega\mathscr{A}} =\left(1-\varepsilon_{UV}\right)Z_{\psi\mathscr{A}}, \tag{62c}\]
in the QGED; thus, we immediately obtain the coupling renormalisation as
\[Y_{\mathscr{A}\mathscr{A}} =1+\frac{\alpha}{4\pi}\,\frac{2}{3}\left(C_{UV}-\sum_{i=1}^{N_{cf}}\log\frac{m_{i}^{2}}{\mu_{R}^{2}}\right), \tag{63a}\] \[Y_{\mathscr{A}\omega} =1+\frac{\left(\alpha\,\alpha_{\rm gr}\right)^{1/2}}{4\pi}\,\frac{8}{3}\left(C_{UV}-\frac{1}{4}-\log\frac{m_{e}^{2}}{\mu_{R}^{2}}+3\log\frac{\lambda^{2}}{m_{e}^{2}}\right),\] (63b) \[Y_{\omega\mathscr{A}} =1-\frac{\left(\alpha\,\alpha_{\rm gr}\right)^{1/2}}{4\pi}\,\frac{4}{3}\left(C_{UV}-\frac{5}{4}-\log\frac{m_{e}^{2}}{\mu_{R}^{2}}+3\log\frac{\lambda^{2}}{m_{e}^{2}}\right),\] (63c) \[Y_{\omega\omega} =1+\frac{\alpha_{\rm gr}}{4\pi}\,\frac{1}{12}\left(\left(8N_{f}-1\right)C_{UV}-24N_{f}+\frac{1}{3}-8\sum_{i=1}^{N_{f}}\log\frac{m_{i}^{2}}{\mu_{R}^{2}}\right). \tag{63d}\]
Tadpole renormalisation:Finally, we note that the QGED has no fundamental scalar particle; thus, tadpole diagrams do not appear in physical processes.
#### 5.2.5 Determination of coupling constants and electron mass
We provide all renormalisation constants appearing in one-loop diagrams in the QGED and summarise them in Table 4. All divergent loop corrections are encapsulated in three physical parameters in the bare Lagrangian, \(e^{\mathcal{C}}\), \(c_{\rm gr}^{\mathcal{C}}\) and \(m_{e}^{\mathcal{C}}\), which must be fixed by measurements. For the electron mass, we assume a microscopic equivalence principle stating that _"a pole mass defined in the quantum field theory is equivalent to the gravitational mass in general relativity"_. Experimental results, e.g., that the proton pole mass equals its gravitational mass, support the microscopic equivalence principle.
We cannot measure the gravitational mass and coupling constant simply using Newton's law of universal gravity, since we formulate a perturbative approach in the inertial system, where gravity is eliminated locally. Consider an experiment measuring the gravitational coupling constant using the Earth's gravity, other than Newton's law of universal gravity. We propose to utilise the _gravitomagnetic effect_[34, 35]. The spin-connection couples to the electron through a tensor coupling; thus, the gravitational field interacts directly with the electron spin. This effect is measurable using existing[36] and future[37] experimental apparatus. In reality, it gives a possible resolution of the muon \(g_{\mu}\!-\!2\) anomaly and provides a measurement of the gravitational coupling constant \(c_{\rm gr}\).
The renormalisation group equations provide the running of the electromagnetic and gravitational effective couplings. The energy dependence of the gravitational and electromagnetic effective couplings with the threshold effect is shown in Figure 9. For the gravitational coupling, the running effect concerning only an electron is also shown in the figure. For the QED, an electron is the lightest charged particle; thus, the coupling stays constant below \(\Lambda_{\rm QED}\) and, above that, increases with the energy scale. On the other hand, light neutral fermions, i.e. neutrinos, the vierbein boson and the ghosts also contribute to the gravitational \(\beta\)-function. Consequently, the effective coupling decreases from the renormalisation energy scale down to the threshold of neutrino pair creation and then increases with the energy. The number of particles contributing to the gravitational \(\beta\)-function is larger than that for the electromagnetic one; thus, the gravitational coupling runs faster than the electromagnetic one. In reality, the renormalisation group equation for the gravitational interaction with all standard model particles has a Landau pole around 10 MeV, and the perturbative QGED calculation loses its validity above this energy, even though the perturbative QED is extendable beyond the Planck energy. As mentioned above, this does not mean a breakdown of the QGED itself. In the perturbative QGED, the Planck energy plays no significant role. Although the Landau pole lies in a relatively low energy region, we still have many applications of the perturbative QGED. When we treat only one species of fermion, the Landau pole of the QGED is above the Planck energy.
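The qualitative statements above follow from the one-loop renormalisation group equation \(d\alpha/d\ln\mu=(b_{0}/2\pi)\,\alpha^{2}\), whose solution develops a Landau pole at \(\mu_{L}=\mu_{0}\exp\!\left(2\pi/(b_{0}\,\alpha(\mu_{0}))\right)\). The sketch below is purely illustrative: the coefficients `b0` and the reference values are hypothetical placeholders, not the QGED \(\beta\)-function coefficients derived here; it only demonstrates how a larger number of light species (a larger \(b_{0}\)) pulls the Landau pole down to low scales:

```python
import numpy as np

def alpha_run(mu, mu0, alpha0, b0):
    """One-loop running coupling; diverges at the Landau pole."""
    return alpha0 / (1.0 - b0 * alpha0 / (2.0 * np.pi) * np.log(mu / mu0))

def log10_landau_pole(mu0, alpha0, b0):
    """log10 of the pole scale mu_L = mu0 * exp(2 pi / (b0 alpha0))."""
    return np.log10(mu0) + 2.0 * np.pi / (b0 * alpha0) / np.log(10.0)

mu0, alpha0 = 0.511e-3, 1.0 / 137.036   # GeV; coupling at the electron mass
print(alpha_run(91.19, mu0, alpha0, 2.0 / 3.0))   # slow logarithmic growth
print(log10_landau_pole(mu0, alpha0, 2.0 / 3.0))  # ~558: far above Planck
print(log10_landau_pole(mu0, alpha0, 60.0))       # ~2.9: pole near 1 TeV
```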
## 6 Particle creation in the strong field
This section treats particle production due to the strong electromagnetic field or the strong gravitational field as an application of the perturbative QGED. The former is referred to as the Schwinger effect, and the latter is nothing other than the Hawking radiation[40].
### Schwinger effect in the QED
The Schwinger effect, or the Sauter-Schwinger effect, is a phenomenon in which a strong electric field creates a pair of charged particles[41, 42, 43], e.g., an electron-positron pair. A strong electric field separates the created pair from each other, preventing the pair from annihilating back into the vacuum. The Schwinger effect has not been confirmed experimentally to date. The effective action of the Schwinger effect, \(S_{\rm QED}\), is provided
\begin{table}
\begin{tabular}{c c c c} & renormalisation constant & order of coupling(s) & equation number \\ \hline \multirow{6}{*}{Field renormalisation} & \(Z_{\mathscr{A}}{}^{1/2}\) & \({\cal O}(e^{2})\) & (58) \\ & \(Z_{\omega}{}^{1/2}\) & \({\cal O}(c_{\rm gr}^{2})\) & (59b) \\ & \(Z_{\mathscr{A}\omega}{}^{1/2}\) & \({\cal O}(e\,c_{\rm gr})\) & (59c) \\ & \(Z_{\mathcal{E}}{}^{1/2}\) & \({\cal O}(c_{\rm gr}^{2})\) & (59d) \\ & \(Z_{\psi\mathscr{A}}{}^{1/2}\) & \({\cal O}(e^{2})\) & (60a) \\ & \(Z_{\psi\omega}{}^{1/2}\) & \({\cal O}(c_{\rm gr}^{2})\) & (61a) \\ \hline \multirow{2}{*}{Mass renormalisation} & \(\delta m_{e}^{\mathscr{A}}\) & \({\cal O}(e^{2})\) & (60b) \\ & \(\delta m_{e}^{\omega}\) & \({\cal O}(c_{\rm gr}^{2})\) & (61b) \\ \hline \multirow{3}{*}{Vertex renormalisation} & \(Z^{V}_{\mathscr{A}\mathscr{A}}\) & \({\cal O}(e^{2})\) & (62a) \\ & \(Z^{V}_{\mathscr{A}\omega}\) & \({\cal O}(e\,c_{\rm gr})\) & (62b) \\ & \(Z^{V}_{\omega\mathscr{A}}\) & \({\cal O}(e\,c_{\rm gr})\) & (62c) \\ \hline \multirow{4}{*}{Coupling constant renormalisation} & \(Y_{\mathscr{A}\mathscr{A}}\) & \({\cal O}(e^{2})\) & (63a) \\ & \(Y_{\mathscr{A}\omega}\) & \({\cal O}(e\,c_{\rm gr})\) & (63b) \\ & \(Y_{\omega\mathscr{A}}\) & \({\cal O}(e\,c_{\rm gr})\) & (63c) \\ & \(Y_{\omega\omega}\) & \({\cal O}(c_{\rm gr}^{2})\) & (63d) \\ \hline \end{tabular}
\end{table}
Table 4: Table of renormalisation constants
as[42, 44]
\[S_{\text{QED}}=\frac{\mathcal{F}_{e}^{2}}{\left(2\pi\right)^{3}}\sum_{n=1}^{ \infty}\frac{1}{n^{2}}\exp{\left(-n\pi\frac{\tilde{m}_{e}^{2}}{\mathcal{F}_{e}} \right)},\]
with
\[\mathcal{F}_{e}:=\frac{eE}{\hbar}\ \ \text{and}\ \ \ \tilde{m}_{e}=\frac{m_{e}}{ \hbar},\]
where \(E\) is an electric-field strength and \(\mathcal{F}_{e}\) is the electric force on an electron. We note on the physical dimensions that \([\mathcal{F}_{e}]_{\text{p.d.}}=L^{-2}\) and \([\tilde{m}_{e}]_{\text{p.d.}}=L^{-1}\) in our units; thus, the argument of the exponential has a null physical dimension, and the action is dimensionless after the four-dimensional space-time integration.
Consider the Schwinger effect owing to the Coulomb-type electric field. The critical electric-field strength for electron-positron pair creation owing to the Schwinger effect is \(V_{c}=1\times 10^{18}\) V/m[43], yielding the critical radius (\(r_{c}\)) and the critical energy (\(\mathcal{E}_{c}\)) of the Coulomb potential as
\[V_{c}\,=\frac{1}{4\pi\epsilon_{0}}\frac{e}{r_{c}^{2}}\implies r_{c}\simeq 37.9\ \text{fm}\ \implies\ \ \mathcal{E}_{c}=\frac{\hbar}{r_{c}}\simeq 5.19\ \text{MeV},\]
where the critical radius is larger than the proton charge radius; it is smaller than the atomic scale and larger than the nuclear scale. An isolated elementary charged particle cannot undergo the Schwinger effect owing to energy-momentum conservation. However, this simple estimation suggests that the Schwinger effect may occur around a charged composite particle if the composite system can supply enough energy to create an electron-positron pair.
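The quoted numbers follow from elementary constants; a minimal numerical check (SI values, \(\hbar c\simeq 197.327\) MeV fm) reproduces them up to rounding:

```python
import math

e = 1.602176634e-19       # elementary charge, C
k = 8.9875517923e9        # 1/(4 pi eps0), N m^2 / C^2
Vc = 1.0e18               # critical field strength, V/m
hbarc = 197.3269804       # MeV fm

r_c = math.sqrt(k * e / Vc)          # from V_c = e / (4 pi eps0 r_c^2), in m
r_c_fm = r_c * 1e15
E_c = hbarc / r_c_fm                 # critical energy hbar c / r_c, in MeV
print(f"r_c = {r_c_fm:.1f} fm, E_c = {E_c:.2f} MeV")  # ~37.9 fm, ~5.2 MeV
```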
The infinite sum of vacuum-to-vacuum transition diagrams gives the effective action of the Schwinger effect in the QED[43]. We re-calculate the vacuum-decay rate of the Coulomb electric field at a one-loop order to compare with the Hawking radiation owing to the QGED. We write the vacuum-to-vacuum transition amplitude in the QED as
\[\langle 0_{\text{out}}|0_{\text{in}}\rangle=e^{iS_{\text{eff}}},\ \ \ \text{ yielding}\ \ P_{\text{v.d.}}:=|\langle 0_{\text{out}}|0_{\text{in}}\rangle|^{2}=e^{-2\,\text{Im}[S_{ \text{eff}}]},\]
where \(P_{\text{v.d.}}\) is the vacuum decay probability. The effective action of the vacuum-to-vacuum transition, denoted as \(S_{\text{eff}}\), is related to the vacuum polarization (57a)\(\sim\)(57c) such that[45]
\[\Pi_{ab}^{(\gamma\varepsilon\gamma)}(q_{i}^{2})\delta q_{i}^{2}=\frac{ \delta^{2}S_{\text{eff}}(\mathscr{A})}{\delta\mathscr{A}^{a}\delta\mathscr{A }^{b}},\]
Figure 9: The figure shows effective electric and gravitational coupling constant inverse as a function of the renormalisation scale. The effective couplings owing to single fermion species and owing to all standard model particles are shown for the gravitational one.
which has a solution
\[S_{\rm eff}(\mathscr{A}^{\prime})=\left(\mathscr{A}^{\circ}\,\Pi^{( \gamma e\gamma)}_{\circ\circ}\mathscr{A}^{\circ}\right)\left(q_{i}^{2}\right) \delta q_{i}^{2}, \tag{64}\]
where \(\delta q_{i}^{2}\) is the possible energy spectrum due to the Schwinger effect. We consider the Coulomb electric field that provides the electric potential in the momentum space as
\[\mathscr{A}^{a}=\left(\frac{1}{\tilde{q}^{2}},0,0,0\right), \tag{65}\]
where \(\tilde{q}:=|\vec{q}|\) is a virtual photon three-momentum. We exploit an ansatz for the virtual photon four-vector such that \(\eta_{\circ\circ}q^{\circ}q^{\circ}=q^{2}=\vec{q}^{2}\), yielding
\[q^{a}=\left(\sqrt{2}\,\tilde{q},0,0,\tilde{q}\right).\]
Consequently, we obtain the effective action as
\[-2\,{\rm Im}[S_{\rm eff}(\mathscr{A})] =-2\times(4\pi)\sum_{i}{\rm Im}\left[\left(\mathscr{A}^{\circ}\,\Pi^{(\gamma e\gamma)}_{\circ\circ}\mathscr{A}^{\circ}\right)\left(q_{i}^{2}\right)\right]\delta q_{i}^{2},\] \[\simeq-2\times(4\pi)\int_{4m_{e}^{2}}^{E_{\rm Max}^{2}}{\rm Im}\left[\left(\mathscr{A}^{\circ}\,\Pi^{(\gamma e\gamma)}_{\circ\circ}\mathscr{A}^{\circ}\right)\left(q^{2}\right)\right]dq^{2},\] \[=\int_{2m_{e}}^{E_{\rm Max}}\Pi_{\gamma^{\sharp}\to e^{+}e^{-}}\left(\tilde{q}\right)d\tilde{q},\]
with
\[\Pi_{\gamma^{\sharp}\to e^{+}e^{-}}\left(\tilde{q}\right):=\frac{16\pi\alpha}{3}\,\frac{\sqrt{\tilde{q}^{2}-4m_{e}^{2}}\left(\tilde{q}^{2}+2m_{e}^{2}\right)}{\tilde{q}^{4}},\]
where the prefactor follows from the angular integration and the imaginary part of (57b) above threshold.
We consider the effect on the electromagnetic decay of the \(\Delta^{+}\) baryon in Figure 10. The critical radius is larger than the proton and \(\Delta^{+}\) baryon charge radii; thus, the Schwinger effect may occur due to the Coulomb electric field induced by the total charge of the \(\Delta^{+}\) baryon rather than by the individual quarks. We estimate the
Figure 10: The figure depicts a schematic view of the \(\Delta^{+}\to pe^{+}e^{-}\) decay. A photon interacts with the hadronic current rather than the individual quarks. A spin flip can occur in any one of three valence quarks.
effective action with the \(\Delta^{+}\to p\,e^{+}e^{-}\) decay parameters as
\[-2\,{\rm Im}\big{[}S_{\rm eff}\big{(}\Delta^{+}\to p\,e^{+}e^{-}\big{)}\big{]} =\int_{2m_{e}}^{E_{\rm Max}}\Pi_{\gamma^{\sharp}\to e^{+}e^{-}}(\tilde{q})\,d\tilde{q},\] \[\qquad=\frac{16\pi\alpha}{3}\left\{\log\left(\frac{E_{\rm Max}+\sqrt{E_{\rm Max}^{2}-4m_{e}^{2}}}{2m_{e}}\right)-\frac{\sqrt{E_{\rm Max}^{2}-4m_{e}^{2}}\left(5E_{\rm Max}^{2}+4m_{e}^{2}\right)}{6E_{\rm Max}^{3}}\right\},\] \[\qquad\simeq 6.75\times 10^{-1},\]
where \(E_{\rm Max}:=m_{\Delta}-m_{p}-2m_{e}\).
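The quoted value can be cross-checked by numerically integrating \(\Pi_{\gamma^{\sharp}\to e^{+}e^{-}}\) and comparing with the closed form; a minimal sketch (scipy assumed; the inserted baryon and lepton masses are standard values, an assumption for illustration):

```python
import math
from scipy.integrate import quad

alpha = 1.0 / 137.035999
m_e, m_p, m_D = 0.51099895, 938.2721, 1232.0   # MeV
E_max = m_D - m_p - 2.0 * m_e                  # ~292.7 MeV

def Pi(q):  # (16 pi alpha / 3) sqrt(q^2 - 4 m^2) (q^2 + 2 m^2) / q^4
    return (16.0 * math.pi * alpha / 3.0) * math.sqrt(q * q - 4 * m_e**2) \
        * (q * q + 2 * m_e**2) / q**4

numerical, _ = quad(Pi, 2.0 * m_e, E_max)
s = math.sqrt(E_max**2 - 4 * m_e**2)
closed = (16.0 * math.pi * alpha / 3.0) * (
    math.log((E_max + s) / (2.0 * m_e))
    - s * (5 * E_max**2 + 4 * m_e**2) / (6 * E_max**3))
print(f"numerical: {numerical:.4f}, closed form: {closed:.4f}")  # both ~0.675
```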
A Feynman-diagrammatic derivation of the \(\Delta^{+}\) decay is as follows: the decay amplitude of the \(\Delta^{+}\to p\,e^{+}e^{-}\) decay is
\[{\cal M}\left(\Delta^{+}\to p\,e^{+}e^{-}\right)=J_{\Delta^{+}\to p+\gamma^{ \sharp}}\left(q_{\gamma^{\sharp}}^{2}\right)\frac{1}{q_{\gamma^{\sharp}}^{2} }J_{\gamma^{\sharp}\to e^{+}e^{-}}\left(q_{\gamma^{\sharp}}^{2}\right)dq_{ \gamma^{\sharp}}^{2},\]
where \(J_{\Delta^{+}\to p+\gamma^{\sharp}}(q_{\gamma^{\sharp}}^{2})\) and \(J_{\gamma^{\sharp}\to e^{+}e^{-}}(q_{\gamma^{\sharp}}^{2})\) are hadronic and electromagnetic currents such that
\[J_{\Delta^{+}\to p+\gamma^{\sharp}}:=e\,\bar{\psi}_{p}\Gamma_{\mu}\psi_{ \Delta}\epsilon_{\gamma^{\sharp}}^{\mu},\ \ \mbox{and}\ \ \ J_{\gamma^{\sharp}\to e^{+}e^{-}}:=e\,\bar{\psi}_{e} \gamma_{\mu}\psi_{e}\,\epsilon_{\gamma^{\sharp}}^{\mu},\]
respectively, where \(\bar{\psi}_{p}\Gamma_{\mu}\psi_{\Delta}\) is the hadronic current including an electric form factor. They have a physical dimension of \([J]_{\rm p.d.}=E\). We obtain the decay width using the optical theorem and the decoupling approximation such that
\[\Gamma\left(\Delta^{+}\to p\,e^{+}e^{-}\right) =\frac{1}{2m_{\Delta}}\int\big{|}{\cal M}\left(\Delta^{+}\to p\,e^{+}e^{-} \right)\big{|}^{2}\,d\Omega,\] \[\simeq\frac{1}{2m_{\Delta}}\left|J_{\Delta^{+}\to p+\gamma^{ \sharp}}\,dq_{\gamma^{\sharp}}^{2}\right|^{2}2\,{\rm Im}\big{[}S_{\rm eff} \big{(}\gamma^{\sharp}\to e^{+}e^{-}\big{)}\big{]}\,, \tag{66}\]
where \(d\Omega\) is a phase space integration of the three body decay. \(q_{\gamma^{\sharp}}^{-2}\) is absorbed by \({\mathscr{A}}^{2}\) (see (65)) in \(S_{\rm eff}\). For the hadronic part, we do not have any detailed information. We treat the hadronic current as a source of particle pair creation owing to the strong electric field; thus, we replace it simply with the critical energy of the Schwinger effect of the Coulomb electric field, such that:
\[\big{|}J_{\Delta^{+}\to p+\gamma^{\sharp}}\,dq_{\gamma^{\sharp}}^{2}\big{|}^{ 2}\simeq{\cal E}_{c}^{2},\]
due to the typical energy scale of the \(e^{+}e^{-}\) pair creation; thus, we obtain
Inserting these estimates into (66) provides a numerical estimate of the decay width \(\Gamma\left(\Delta^{+}\to p\,e^{+}e^{-}\right)\).

### Hawking radiation

A strong gravitational field near the event horizon of a black hole can create a particle pair from the vacuum; one
of the particles in the pair falls into the hole, allowing the rest to reach the asymptotic observer without annihilation. This phenomenon is known as the _Hawking radiation_. We estimate the thermodynamical temperature of the Hawking radiation from the Schwarzschild black hole utilising the same method used for the Schwinger effect.
We start from the effective action (64) for the spin-connection vacuum polarization, which is
\[S_{\text{eff}}(\omega)=\left(\tilde{\omega}^{\circ}\,\Pi^{(\omega e\omega)}_{\circ\circ}\,\tilde{\omega}^{\circ}\right)\left(q_{i}^{2}\right)\delta q_{i}^{2},\]
where \(\Pi^{(\omega e\omega)}_{ab}(q^{2})\) is the electron-loop contribution given in **section 5.2.2** and \(\tilde{\omega}^{a}\) is a polarization vector of the spin-connection in the momentum space, which for the Schwarzschild metric in the frame fixed on the hole is given in (B.2a) and (B.2b). In this section, we set the spin-connection and the momentum vector as
\[\left.\tilde{\omega}^{\text{S}}_{a}(\tilde{q})\right|_{\text{lab}} =\left(\left.\tilde{\omega}^{\text{S}}_{E}(\tilde{q})\right|_{ \text{lab}},\left.\tilde{\omega}^{\text{S}}_{A}(\tilde{q})\right|_{\text{lab} },0,0\right),\] \[q^{a} =\left(\sqrt{2}\tilde{q},\tilde{q}\sin\theta\cos\phi,\tilde{q} \sin\theta\sin\phi,\tilde{q}\cos\theta\right),\]
where the laboratory frame is defined as in **Appendix B** concerning the hole instead of the Earth. The Hawking radiation is possible only at the event horizon; thus, the momentum must be
\[\tilde{q}^{\text{S}}=\frac{\hbar}{R_{\bullet}^{\text{S}}}\ \ \text{with}\ \ \ R_{\bullet}^{\text{S}}=\frac{\kappa_{\text{E}}M_{\bullet}}{\hbar},\]
where \(R_{\bullet}^{\text{S}}\) is the Schwarzschild radius and \(M_{\bullet}\) is a hole mass. Consequently, the effective action is
\[-2\operatorname{Im}[S_{\text{eff}}(\omega)]=-2\int_{-1}^{1}d\cos\theta\int_{0}^{2\pi}\!d\phi\;2\tilde{q}\,\operatorname{Im}\left[\omega^{\circ}\Pi^{(\omega e\omega)}_{\circ\circ}\omega^{\circ}\right]\Bigr{|}_{\tilde{q}=\tilde{q}^{\text{S}}}\,\delta\tilde{q},\]
with
\[\operatorname{Im}\left[\omega^{\circ}\Pi^{(\omega\omega\omega)}_{\circ\circ}\omega^{\circ}\right]=\alpha_{\rm gr}\,\frac{\sqrt{q^{2}-4m_{e}^{2}}\left(q^{2}+2m_{e}^{2}\right)}{3\left(q^{2}\right)^{1/2}}\left(\left.\tilde{\omega}^{\rm S}_{\circ}(\tilde{q})\right|_{\rm lab}\left(\eta^{\circ\circ}-\frac{q^{\circ}q^{\circ}}{q^{2}}\right)\left.\tilde{\omega}^{\rm S}_{\circ}(\tilde{q})\right|_{\rm lab}\right).\]

Evaluating the imaginary part at \(\tilde{q}=\tilde{q}^{\rm S}\) and identifying the resulting pair-creation rate with a thermal spectrum provides the thermodynamical temperature of the Hawking radiation, which turns out to be proportional to the hole mass \(M_{\bullet}\) for the Schwarzschild black hole.

## Summary

This report proposed a perturbative formulation of quantum gravity, the QGED, in which the quantisation target is the spin-connection; the expectation value of the quantised fields then provides the metric tensor, instead of a solution of the classical equation of motion. The space-time manifold itself is not a quantisation target; thus, it is a smooth manifold even after quantisation. To be brief, we quantise not the space-time but the ruler to measure it. The author has discussed a space whose metric tensor is provided as the expected value of a stochastic process in Ref.[47].
To quantise both general relativity and the Yang-Mills theory simultaneously, the current author reformulated both theories geometrically on an equal footing in a previous study[12]. Moreover, the author applied the BRST quantisation non-perturbatively to pure gravity in the Heisenberg picture[22]. This report takes these works further and provides a perturbative extension of quantum gravity. Although the quantisation method utilised for general relativity in this report is faithful to the standard BRST quantisation of the Yang-Mills theory, it is not standard compared with traditional methods of quantum general relativity in some aspects: we identify the spin-connection, instead of the metric tensor, as the gauge boson, and we utilise a covariant differential that includes a gravitational coupling constant. The unconventional gravitational coupling constant provides several benefits to the theory:
1. We can quantise general relativity completely parallel to the Yang-Mills theory.
2. The bare gravitational coupling constant can absorb the ultraviolet divergence of a scattering vertex.
3. Free gravitational fields with vanishing coupling constant provide a linearised Einstein equation, which is also parallel to the Yang-Mills theory.
An old quantum-theoretical consideration provides evidence that the spin-connection is the gauge boson [8], and a gravitational coupling constant naturally appears when quantising the gauge theory[22]. We then proved that all fields are nilpotent and that the Lagrangian is BRST invariant, which ensures that the renormalisation with ghosts is anomaly-free. Moreover, after defining the physical states of gravitation appropriately[22], it ensures the unitarity of the physical amplitude.
We extracted a set of Feynman rules based on the QGED Lagrangian with gauge fixing and ghost parts. Propagators in the momentum space for the curved space-time are not trivial because the Fourier transformation kernel includes the metric tensor. This report defined all Feynman rules in the local inertial frame in which the gravity is eliminated locally; thus, the transformation kernel has a flat metric, and the Fourier transformation is well-defined[24]. Utilising Feynman rules of the QGED prepared here, we provided all renormalisation constants and showed that the theory is perturbatively renormalisable at a one-loop level. In the renormalisation theory, we must replace infinite-valued bare objects with experimentally measured ones. We showed that the gravitational coupling constant is measurable experimentally[38].
Above all, the existence of the gravitational coupling constant allows us to discuss the running effect of the gravitational coupling. The Einstein (Newtonian) gravitational constant is a fundamental constant of Nature and does not change depending on the energy scale we are looking at. On the other hand, the effective coupling may depend on the energy scale of observables through the renormalisation group equation. Since a boson loop contributes with the opposite sign to a fermion loop, boson loops decrease the effective coupling as the energy scale increases, whereas fermion loops increase it. In the QED, only charged fermions contribute to the beta function and provide an effective coupling that increases with the energy scale. In general, the number of fermions contributing to the gravitational \(\beta\)-function is larger than that for the electromagnetic one; thus, the gravitational coupling runs faster than the electromagnetic one. Moreover, \(\alpha_{\rm gr}\) is larger than \(\alpha\) at the energy scale \(\mu_{R}\simeq m_{e}\); thus, the two effective couplings never coincide at any energy scale.
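To make the running quantitative, the sketch below integrates a generic one-loop equation \(d\alpha/d\log\mu=b\,\alpha^{2}\) and locates its Landau pole. The coefficient \(b\) and the initial condition are placeholders (QED-like, one Dirac fermion), not the QGED values derived in the text.

```python
import numpy as np

def landau_pole(alpha0: float, mu0: float, b: float) -> float:
    """One-loop running alpha(mu) = alpha0 / (1 - b*alpha0*log(mu/mu0));
    the pole sits where the denominator vanishes."""
    return mu0 * np.exp(1.0 / (b * alpha0))

# Placeholder numbers for illustration (QED-like, one fermion species):
alpha0 = 1.0 / 137.035999     # coupling at the reference scale
mu0    = 0.511e-3             # reference scale in GeV (electron mass)
b      = 2.0 / (3.0 * np.pi)  # one-loop QED coefficient for a single Dirac fermion

print(f"Landau pole ~ {landau_pole(alpha0, mu0, b):.3e} GeV")  # -> ~1.9e+277 GeV
```

A larger \(b\) (more species) and a larger initial coupling both pull the pole down exponentially; this is the mechanism behind the much lower gravitational pole quoted below once all standard-model particles contribute.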
The gravitational Landau pole of the QGED with one fermion species lies above the Planck energy; with all standard-model particles included, it lies at around 10 MeV. Although the perturbative QGED calculation loses its validity above the Landau pole, this does not mean a breakdown of the QGED itself. For the QCD, even though perturbative QCD fails below \(\Lambda_{\rm QCD}\), a non-perturbative treatment works appropriately, i.e., the lattice QCD. One of the primary candidates for a non-perturbative quantum theory of gravity is loop quantum gravity (see, e.g., Refs.[48, 49]). Although loop quantum gravity has common aspects with the lattice QCD, one of the main differences is that the former does not assume the existence of a smooth manifold as a background. In contrast, the latter takes the continuum limit of the lattice spacing to simulate a smooth space-time. Loop quantum gravity keeps a size parameter finite, which protects the theory from UV divergence. On the other hand, the lattice QCD provides finite results in the short-distance limit, since the QCD is asymptotically free and non-perturbative QCD is well-defined at zero distance. A lattice approximation of the QGED, if it exists, would differ from both theories. For such a lattice approximation, we would have to take the short-distance limit to obtain realistic physical quantities, since the QGED is constructed on a smooth manifold. However, the QGED is not an asymptotically free theory; thus, it may retain a divergence. We need a formulation of the non-perturbative quantum theory other than one utilising discrete space-time.
This report also provided a perturbative estimation of the Hawking radiation using the QGED, with reference to the Schwinger effect of the QED. The Hawking radiation and the Schwinger effect are particle-pair creation owing to strong static fields; the former is due to the gravitational field, and the latter is due to the electric field. The Schwinger effect owing to the electric force has yet to be observed experimentally, since the critical field strength is enormous. On the other hand, quark-pair creation owing to the strong force is an indispensable part of the hadronisation models in jet-simulation programs, e.g., PYTHIA[50] and HERWIG[51]. These hadronisation models describe jet structure quite well; thus, the Schwinger effect owing to the QCD has experimental support. We provided a possible interpretation of the \(\Delta^{+}\to pe^{+}e^{-}\) decay owing to the Schwinger effect. In contrast to the electric force, which can be attractive or repulsive, the gravitational force is attractive only. Consequently, although a strong electric field separates a created charged-particle pair, in the gravitational case the pair recombines immediately. One possible loophole to this problem is the black hole's event horizon, i.e., Hawking radiation. The perturbative QGED shows that the hole temperature is proportional to the hole mass for the Schwarzschild black hole.
Ultimately, we emphasise that the proposed perturbative QGED is an experimentally testable theory, e.g., through future high-precision measurements of the muon anomalous magnetic moment. Other applications, such as state-of-the-art measurements of atomic energy spectra and the Rydberg constant, are also possible test benches for the QGED. A quest for a non-perturbative approach to the QGED is also an essential direction of future study.
## Acknowledgements
I would like to thank Dr Y. Sugiyama, Prof. J. Fujimoto and Prof. T. Ueda for their continuous encouragement and fruitful discussions.
## Appendix A Nilpotency of the fields and forms

**Coordinate vector**

The coordinate vectors \(x^{\mu}\) are fundamental vectors on \(T\mathscr{M}\) and are nilpotent, as
\[\delta_{\mathrm{B}}^{GR}\left[\delta_{\mathrm{B}}^{GR}\left[x^{\mu}\right] \right]=\delta_{\mathrm{B}}^{GR}\left[\chi^{\mu}\right]=0.\]
**Ghost field**
Ghost \(\chi^{\mu}\) is trivially nilpotent. For \(\chi^{a}{}_{b}:=\eta_{bc}\,\chi^{ac}\), we obtain
\[\delta_{\mathrm{B}}^{GR}\left[\delta_{\mathrm{B}}^{GR}\left[\chi^{a}{}_{b}\right]\right]=\delta_{\mathrm{B}}^{GR}\left[\chi^{a}{}_{c}\,\chi^{c}{}_{b}\right]=\chi^{a}{}_{c_{1}}\chi^{c_{1}}{}_{c_{2}}\chi^{c_{2}}{}_{b}-\chi^{a}{}_{c_{1}}\chi^{c_{1}}{}_{c_{2}}\chi^{c_{2}}{}_{b}=0,\]
due to the anticommutativity of the ghost field. The tensor \(\partial_{\mu}\chi^{\nu}\) is also nilpotent, as
\[\delta_{\mathrm{B}}^{GR}\left[\delta_{\mathrm{B}}^{GR}\left[\partial_{\mu}\chi^{\nu}\right]\right]=-\delta_{\mathrm{B}}^{GR}\left[\partial_{\mu}\chi^{\rho}\,\partial_{\rho}\chi^{\nu}\right]=-\partial_{\mu}\chi^{\rho_{1}}\,\partial_{\rho_{1}}\chi^{\rho_{2}}\,\partial_{\rho_{2}}\chi^{\nu}+\partial_{\mu}\chi^{\rho_{1}}\,\partial_{\rho_{1}}\chi^{\rho_{2}}\,\partial_{\rho_{2}}\chi^{\nu}=0.\]
**Vierbein form**
\[\delta_{\mathrm{B}}^{GR}\left[\delta_{\mathrm{B}}^{GR}\left[\mathfrak{e}^{a}\right]\right]=-c_{\rm gr}\,\delta_{\mathrm{B}}^{GR}\left[\mathfrak{e}^{b}\chi^{a}{}_{b}\right]=c_{\rm gr}^{2}\left(\mathfrak{e}^{b_{1}}\chi^{b_{2}}{}_{b_{1}}\chi^{a}{}_{b_{2}}-\mathfrak{e}^{b_{1}}\chi^{b_{2}}{}_{b_{1}}\chi^{a}{}_{b_{2}}\right)=0.\]
**Spin form**
\[\delta^{GR}_{\rm B}\left[\delta^{GR}_{\rm B}\left[\mathfrak{n}^{ab}\right]\right]=\delta^{GR}_{\rm B}\left[d\chi^{ab}-c_{\rm gr}\left(\mathfrak{n}^{a}{}_{\circ}\chi^{\circ b}+\mathfrak{n}^{b}{}_{\circ}\chi^{a\circ}\right)\right],\] \[=c_{\rm gr}\,d\left(\chi^{a}{}_{\circ}\chi^{\circ b}\right)-c_{\rm gr}\left\{\left(d\chi^{a}{}_{\circ}\right)\chi^{\circ b}+\left(d\chi^{b}{}_{\circ}\right)\chi^{a\circ}-c_{\rm gr}\,\mathfrak{n}^{\circ}{}_{\bullet}\left(\chi^{a\bullet}\chi^{b}{}_{\circ}+\chi^{b\bullet}\chi^{a}{}_{\circ}\right)\right\},\] \[=0.\]
**Surface form**
Applying the BRST transformation twice, one gets
\[\delta^{GR}_{\rm B}\left[\delta^{GR}_{\rm B}\left[\mathfrak{S}_{ab}\right]\right]=c_{\rm gr}\,\epsilon_{abc_{1}c_{2}}\,\delta^{GR}_{\rm B}\left[\chi^{c_{1}}{}_{c_{3}}\,\mathfrak{e}^{c_{3}}\wedge\mathfrak{e}^{c_{2}}\right],\] \[=c_{\rm gr}^{2}\,\epsilon_{abc_{1}c_{2}}\Big\{\chi^{c_{1}}{}_{c_{4}}\chi^{c_{4}}{}_{c_{3}}\,\mathfrak{e}^{c_{3}}\wedge\mathfrak{e}^{c_{2}}-\chi^{c_{1}}{}_{c_{3}}\chi^{c_{3}}{}_{c_{4}}\,\mathfrak{e}^{c_{4}}\wedge\mathfrak{e}^{c_{2}}-\chi^{c_{1}}{}_{c_{3}}\chi^{c_{2}}{}_{c_{4}}\,\mathfrak{e}^{c_{3}}\wedge\mathfrak{e}^{c_{4}}\Big\},\] \[=0,\]
because the first term is the same as the second term, and the third term is symmetric under the exchange of \(c_{1}\) and \(c_{2}\), while \(\epsilon_{abc_{1}c_{2}}\) is antisymmetric.
**Volume form**
The volume form is a global scalar, and its BRST transformation is expected to vanish, which can be confirmed as
\[\delta^{GR}_{\rm B}\left[\mathfrak{v}\right]=\frac{1}{4!}\epsilon_{\circ \circ\circ}\delta^{GR}_{\rm B}\left[\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{ \circ}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\right]=\frac{1}{3!} \epsilon_{a_{1}\circ\circ\circ}\delta^{GR}_{\rm B}\left[\chi^{a_{1}}_{\ \ \ a_{2}} \mathfrak{e}^{a_{2}}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ} \wedge\mathfrak{e}^{\circ}\right]=0,\]
due to \(\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\wedge\mathfrak{e}^{\circ}\propto\epsilon^{\circ\circ\circ\circ}\) and \(\chi^{a_{1}}{}_{a_{2}}=0\) when \(a_{1}=a_{2}\).
**Ghost forms**
The BRST transformation of \(\mathfrak{c}^{a}\) is given by
\[\delta^{GR}_{\rm B}\left[\mathfrak{c}^{a}\right]=\delta^{GR}_{\rm B}\left[\chi^{a}{}_{b}\,\mathcal{E}^{b}{}_{\mu}\,dx^{\mu}\right],\] \[=\chi^{a}{}_{b_{1}}\chi^{b_{1}}{}_{b_{2}}\,\mathcal{E}^{b_{2}}{}_{\mu}\,dx^{\mu}-\chi^{a}{}_{b_{1}}\,\mathcal{E}^{b_{2}}{}_{\mu}\,\chi^{b_{2}}{}_{b_{1}}\,dx^{\mu}+\chi^{a}{}_{b}\left(\partial_{\mu}\chi^{\nu}\right)\mathcal{E}^{b}{}_{\nu}\,dx^{\mu}-\chi^{a}{}_{b}\,\mathcal{E}^{b}{}_{\mu}\,d\chi^{\mu}=0.\]
**Other forms**
The nilpotency of the other forms is trivial, and the proofs are omitted here.
**Gravitational Lagrangian**
The quantum Lagrangian must be BRST-null. The gauge-fixing and Faddeev-Popov Lagrangians are constructed to satisfy the BRST-null condition in section 6-2. Only the nilpotency of the gravitational Lagrangian is shown here. The BRST transformation of the gravitational Lagrangian is provided as
\[\delta^{GR}_{\rm B}\left[\mathfrak{L}_{G}\right]=\frac{1}{2}\,\delta^{GR}_{\rm B}\left[\left(d\mathfrak{n}^{\circ\circ}+c_{\rm gr}\,\mathfrak{n}^{\circ}{}_{\bullet}\wedge\mathfrak{n}^{\bullet\circ}\right)\wedge\mathfrak{S}_{\circ\circ}-\frac{\Lambda}{3!}\mathfrak{v}\right].\]
The BRST transformation of the volume form vanishes by itself. For the derivative term,
\[\delta^{GR}_{\rm B}\left[d\mathfrak{n}^{\circ\circ}\wedge\mathfrak{S}_{\circ\circ}\right]=\epsilon_{abc_{2}c_{3}}\,\chi^{b}{}_{c_{1}}\,d\mathfrak{n}^{ac_{1}}\wedge\mathfrak{e}^{c_{2}}\wedge\mathfrak{e}^{c_{3}}+\epsilon_{abc_{2}c_{3}}\,\mathfrak{n}^{ac_{1}}\wedge d\chi^{b}{}_{c_{1}}\wedge\mathfrak{e}^{c_{2}}\wedge\mathfrak{e}^{c_{3}}\] \[\qquad+\epsilon_{abc_{1}c_{2}}\,\chi^{c_{1}}{}_{c_{3}}\,d\mathfrak{n}^{ab}\wedge\mathfrak{e}^{c_{3}}\wedge\mathfrak{e}^{c_{2}}\] \[=2\,\mathfrak{n}^{ac_{1}}\wedge d\chi^{b}{}_{c_{1}}\wedge\mathfrak{S}_{ab},\]
where the first and third terms cancel each other. The remaining term is transformed as
\[\delta^{GR}_{\rm B}\left[\mathfrak{n}^{\circ}{}_{\bullet}\wedge\mathfrak{n}^{\bullet\circ}\wedge\mathfrak{S}_{\circ\circ}\right]=\epsilon_{abc_{2}c_{3}}\,\chi^{c_{2}}{}_{c_{4}}\,\mathfrak{n}^{ac_{1}}\wedge\mathfrak{n}{}_{c_{1}}{}^{b}\wedge\mathfrak{e}^{c_{4}}\wedge\mathfrak{e}^{c_{3}}+\epsilon_{abc_{3}c_{4}}\,\chi^{c_{2}}{}_{c_{1}}\,\mathfrak{n}^{ac_{1}}\wedge\mathfrak{n}{}_{c_{2}}{}^{b}\wedge\mathfrak{e}^{c_{3}}\wedge\mathfrak{e}^{c_{4}}\] \[\qquad+\epsilon_{abc_{3}c_{4}}\,\chi^{b}{}_{c_{2}}\,\mathfrak{n}^{ac_{1}}\wedge\mathfrak{n}{}_{c_{1}}{}^{c_{2}}\wedge\mathfrak{e}^{c_{3}}\wedge\mathfrak{e}^{c_{4}}-2c_{\rm gr}\,\mathfrak{n}^{ac_{1}}\wedge d\chi^{b}{}_{c_{1}}\wedge\mathfrak{S}_{ab},\] \[=-2c_{\rm gr}\,\mathfrak{n}^{ac_{1}}\wedge d\chi^{b}{}_{c_{1}}\wedge\mathfrak{S}_{ab}.\]
On the r.h.s. of the first line, the second term vanishes by itself, and the first and third terms cancel each other. Therefore, one can confirm that \(\delta_{\rm B}^{GR}\left[\mathfrak{L}_{G}\right]=0\) after summing up all terms.
Using the following remark, we can give simpler proofs for the above forms.
**Remark**
If two fields \(\alpha\) and \(\beta\) are both nilpotent, then \(\alpha\beta\) is also nilpotent.
_Proof:_
If a field \(X\) is nilpotent, the signs appearing in the Leibniz rule satisfy \(\epsilon_{X}=-\epsilon_{\delta X}\), owing to \(\delta_{\rm B}^{GR}[\delta_{\rm B}^{GR}[X]]=0\) and (16), where \(\epsilon_{X}\) (\(\epsilon_{\delta X}\)) is the sign of \(X\) (\(\delta_{\rm B}^{GR}[X]\)), respectively. Therefore,
\[\delta_{\rm B}^{GR}\left[\delta_{\rm B}^{GR}\left[\alpha\beta\right]\right]=\epsilon_{\alpha}\,\delta_{\rm B}^{GR}\left[\alpha\right]\delta_{\rm B}^{GR}\left[\beta\right]+\epsilon_{\delta\alpha}\,\delta_{\rm B}^{GR}\left[\alpha\right]\delta_{\rm B}^{GR}\left[\beta\right]=0.\]
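For instance, assuming the surface form is proportional to \(\epsilon_{abcd}\,\mathfrak{e}^{c}\wedge\mathfrak{e}^{d}\) (the normalisation is immaterial for the argument), the remark with \(\alpha=\mathfrak{e}^{c}\) and \(\beta=\mathfrak{e}^{d}\) gives a one-line proof of the surface-form nilpotency shown explicitly above:

\[\delta^{GR}_{\rm B}\left[\delta^{GR}_{\rm B}\left[\mathfrak{S}_{ab}\right]\right]\propto\epsilon_{abcd}\,\delta^{GR}_{\rm B}\left[\delta^{GR}_{\rm B}\left[\mathfrak{e}^{c}\wedge\mathfrak{e}^{d}\right]\right]=0,\]

since each vierbein form is nilpotent.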
## Appendix B Measurement of the gravitational coupling constant
We consider an electron scattering off the Earth's gravity, shown in Figure 11, to measure the gravitational coupling constant.
Morishima, Futamase and Shimizu[52] proposed a possible resolution to the muon \(g_{\mu}-2\) anomaly owing to the static gravitational potential of the Earth using the post-Newtonian approximation. Visser[53] immediately criticised their proposal as contradicting Einstein's equivalence principle, and also pointed out that they missed the effect of the Sun's and the Galaxy's static potentials, which are much stronger than the Earth's. However, a frame fixed on the Earth is an inertial system with respect to the Sun's and the Galaxy's gravity. Owing to Einstein's equivalence principle, a static gravitational potential does not contribute to any local observable obtained in an inertial system. On the other hand, experimental apparatus on the Earth is not in an inertial system but in a system accelerating with respect to the Earth's gravity. A finite effect on the muon anomalous coupling owing to the Earth's gravity is measurable[38].
Schwarzschild spin-connection:The Schwarzschild solution provides the gravitational field induced by the Earth. We first calculate the classical spin-connection of the Schwarzschild solution in the inertial space. We set the Earth at rest in the global manifold and the origin of the standard basis at the Earth's centre, as shown in Figure 12. We denote the standard basis in the global manifold as \(dx^{\mu}=(dt,dr,d\theta,d\phi)\) in the polar coordinate. We equate the configuration space to the inertial space and utilise the polar coordinate \(d\xi^{a}=(d\tau,d\rho,d\vartheta,d\varphi)\) as the local standard basis. The local inertial manifold is located at the Earth's surface at time \(t=\tau=0\); it has a standard basis whose spatial axes are parallel to the global ones. At first, we set the origin of the local standard basis at the Earth's centre. After the Fourier transformation, we set the origin of the momentum coordinate at the Earth's surface (see Figure 12). The Schwarzschild spin-connection in the global manifold is given as[9]
\[\omega^{\rm S\ tr}{}_{t}=-\omega^{\rm S\ rt}{}_{t}=-\frac{R_{\oplus}^{\rm S}}{2r^{2}},\qquad\omega^{\rm S\ r\theta}{}_{\theta}=-\omega^{\rm S\ \theta r}{}_{\theta}=f_{\rm S}(r),\] \[\omega^{\rm S\ r\phi}{}_{\phi}=-\omega^{\rm S\ \phi r}{}_{\phi}=f_{\rm S}(r)\sin\theta,\qquad\omega^{\rm S\ \theta\phi}{}_{\phi}=-\omega^{\rm S\ \phi\theta}{}_{\phi}=\cos\theta,\]
Figure 11: The figure depicts a Feynman diagram of a gravitational interaction of an electron with the Earth’s gravity.
otherwise zero, where
\[f_{\mathrm{S}}(\rho):=\sqrt{1-\frac{R_{\oplus}^{\mathrm{S}}}{\rho}},\qquad\quad R_{\oplus}^{\mathrm{S}}:=2G_{\mathrm{N}}M_{\oplus},\]
and \(M_{\oplus}\) and \(R_{\oplus}^{\mathrm{S}}\) are the mass and the Schwarzschild radius of the Earth, respectively. The spin-connection in the inertial space is provided using \(\omega_{c}^{\mathrm{S}\,ab}(\xi)=\omega_{\mu}^{\mathrm{S}\,ab}(x(\xi))\,\mathcal{E}_{c}^{\mathrm{S}\,\mu}(x(\xi))\); after the Fourier transformation with kernel \(e^{-iq_{e}\cdot\xi}\), its non-vanishing components reduce to the two functions of \(\bar{q}_{\text{lab}}\) given in (B.2a) and (B.2b) below,
where \(q_{e}\) is the momentum transfer between the electron and the Earth through the spin-connection. After changing from the polar to the Cartesian coordinate, we obtain the Schwarzschild spin-connection as
\[\tilde{\omega}^{a}(\bar{q}_{\text{lab}}) =\left(\tilde{\omega}_{\tau}^{\rho\tau}(\bar{q}_{\text{lab}}),\, \bar{q}_{\text{lab}}\,\tilde{\omega}_{\theta}^{\rho\vartheta}(\bar{q}_{\text{ lab}}),0,0\right),\]
with
\[\tilde{\omega}^{0}(\bar{q}_{\text{lab}})=\frac{\pi}{2}\left(\frac{R_{\oplus}^{\text{S}}}{\hbar}\right)^{2}\left(\frac{\pi}{\bar{q}_{\text{lab}}}-2\log\frac{\bar{q}_{\text{lab}}}{2}-2\gamma_{E}+\mathcal{O}\left(\bar{q}_{\text{lab}}\right)\right),\] (B.2a) \[\tilde{\omega}^{1}(\bar{q}_{\text{lab}})=\frac{\pi}{2}\left(\frac{R_{\oplus}^{\text{S}}}{\hbar}\right)^{2}\left(\frac{2}{\bar{q}_{\text{lab}}}+\bar{q}_{\text{lab}}\log\frac{\bar{q}_{\text{lab}}}{2}-\pi+\mathcal{O}\left(\bar{q}_{\text{lab}}\right)\right).\] (B.2b)
We set them to have an inverse energy-squared dimension, such as \([\tilde{\omega}^{0}]_{\text{p.d.}}=[\tilde{\omega}^{1}]_{\text{p.d.}}=E^{-2}\), which is the same as the spin-connection propagator. The Schwarzschild spin-connection has only an index type \(\tilde{\omega}_{\bullet}^{\rho\pi}\) yielding a non-zero value, consistent with the observation in (23); thus, the spin-connection has a vector representation, even if it is a tensor object.
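For orientation, the Earth quantities entering (B.2a) and (B.2b) can be evaluated numerically. The short sketch below uses SI values with factors of \(c\) restored (so \(R_{\oplus}^{\rm S}=2G_{\rm N}M_{\oplus}/c^{2}\)); it also reproduces the dimensionless surface potential \(G_{\rm N}M_{\oplus}/(r_{\oplus}c^{2})\simeq6.95\times10^{-10}\) used for \(\bar{q}_{\oplus}\) later in this appendix.

```python
# Numerical values of the Earth quantities used in this appendix (SI units).
G_N     = 6.67430e-11      # m^3 / (kg s^2)
c       = 2.99792458e8     # m / s
M_earth = 5.9722e24        # kg
r_earth = 6.371e6          # m (mean radius)

R_S = 2 * G_N * M_earth / c**2          # Schwarzschild radius of the Earth
f_S = (1 - R_S / r_earth) ** 0.5        # f_S evaluated at the Earth's surface
phi = G_N * M_earth / (r_earth * c**2)  # dimensionless surface potential

print(f"R_S = {R_S * 1e3:.2f} mm")      # -> ~ 8.87 mm
print(f"1 - f_S = {1 - f_S:.2e}")       # -> ~ 7.0e-10
print(f"phi = {phi:.3g}")               # -> ~ 6.96e-10
```

The smallness of \(R_{\oplus}^{\rm S}/r_{\oplus}\) justifies the weak-field expansions used throughout this appendix.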
To estimate the gravitomagnetic effect quantitatively for the \(c_{\rm gr}\) measurement, we utilise the Breit frame, in which \(p_{\text{Br}}+p_{\text{Br}}^{\prime}=(2m_{e},0,0,0)\) up to \(\mathcal{O}(q_{e}/m_{e})\), such that:
\[p_{\text{Br}}=\left(m_{e}-\frac{\sqrt{E_{e}^{2}-m_{e}^{2}}}{2m_{e}}q_{e},\ \ \frac{E_{e}^{2}-m_{e}^{2}}{2E_{e}m_{e}}q_{e},\,0,\,-\frac{q_{e}}{2}\right),\] \[p_{\text{Br}}^{\prime}=\left(m_{e}+\frac{\sqrt{E_{e}^{2}-m_{e}^{2}}}{2m_{e}}q_{e},\,-\frac{E_{e}^{2}-m_{e}^{2}}{2E_{e}m_{e}}q_{e},\,0,\ \ \frac{q_{e}}{2}\right),\] \[q_{\text{Br}}=p_{\text{Br}}^{\prime}-p_{\text{Br}}.\]
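These kinematics can be verified symbolically; the following short check (in Python with sympy, purely illustrative) confirms that the sum reproduces \((2m_{e},0,0,0)\) identically in \(q_{e}\).

```python
import sympy as sp

m_e, E_e, q_e = sp.symbols('m_e E_e q_e', positive=True)

X = sp.sqrt(E_e**2 - m_e**2) / (2 * m_e)   # energy-component coefficient
Y = (E_e**2 - m_e**2) / (2 * E_e * m_e)    # x-component coefficient

p  = sp.Matrix([m_e - X * q_e,  Y * q_e, 0, -q_e / 2])
pp = sp.Matrix([m_e + X * q_e, -Y * q_e, 0,  q_e / 2])

print(sp.simplify(p + pp))   # -> Matrix([[2*m_e], [0], [0], [0]])
```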
We ignore effects due to the centrifugal and Coriolis forces in the following calculations. In the Breit frame, the spin-connection of the Schwarzschild solution in the momentum space is
\[\tilde{\omega}^{0}(\bar{q}_{\text{Br}})=\gamma_{e}\tilde{\omega}^{0}(\bar{q}_ {\text{lab}})-\gamma_{e}\beta_{e}\tilde{\omega}^{1}(\bar{q}_{\text{lab}}), \quad\tilde{\omega}^{1}(\bar{q}_{\text{Br}})=\gamma_{e}\tilde{\omega}^{1}( \bar{q}_{\text{lab}})-\gamma_{e}\beta_{e}\tilde{\omega}^{0}(\bar{q}_{\text{ lab}})\]
after the Lorentz boost, where \(\beta_{e}=p_{e}/E_{e}\) and \(\gamma_{e}=E_{e}/m_{e}\). In the following, we discuss the scattering amplitude in the Breit frame and omit the subscript "Br" for simplicity.
Scattering amplitude:We formulate the scattering amplitude by applying the Feynman rule (50b) with setting \(\varepsilon_{UV}=0\) to Figure 11, such that:
Scattering amplitude (continued): applying the rule gives
\[i\tau_{\oplus}^{\rm S}:=\text{Figure 11},\] (B.3a)
whose explicit form, obtained by contracting the vertex (50b) with the Schwarzschild spin-connection (B.2a) and (B.2b), decomposes into a scalar-coupling and a tensor-coupling term, (B.3b). Here \(u(p)\) denotes the electron spinor and \(I_{2}\) is the \((2\times2)\) unit matrix. We expand \(u(p)\) in \(q_{e}/m_{e}\) up to \(\mathcal{O}(q_{e}^{2}/m_{e}^{2})\), since \(\tilde{\omega}^{\star}\) has an \(\mathcal{O}(q_{e}^{-1})\) term. The two-component spinor \(\boldsymbol{\xi}\) is normalised as \(\sum_{\lambda}\boldsymbol{\xi}^{\lambda}\cdot\boldsymbol{\xi}^{\lambda\dagger}=\boldsymbol{1}\) owing to (35).
Consider the physical interpretation of the tensor-coupling terms of the amplitude. The first term of (B.3b), up to \(\mathcal{O}(q_{e}^{2}/m_{e}^{2})\), is
\[\tilde{\omega}^{0}(\bar{q})\,\bar{u}(p^{\prime})u(p)\simeq\tilde{\omega}^{0}( \bar{q})\left(2+\frac{3q_{e}^{2}}{4m_{e}^{2}}\right)m_{e}\boldsymbol{\xi}^{ \prime\dagger}\boldsymbol{\xi},\]
which does not couple to the electron spin; thus, we ignore it. The second term can be written as
\[iq_{e}\bar{u}(p^{\prime})\frac{\tilde{\omega}^{0}(\bar{q})\sigma^{03}-\tilde {\omega}^{1}(\bar{q})\sigma^{13}}{2m_{e}}u(p)\simeq i\tilde{\omega}^{x}(\bar{ q})\left(2+\frac{q_{e}^{2}}{m_{e}^{2}}\right)\boldsymbol{\xi}^{\prime\dagger} \frac{\sigma^{2}}{2}\boldsymbol{\xi},\] (B.4)
yielding, in the high-energy limit \(\gamma_{e}\to\infty\) and for \(\bar{q}\to0\), the spin-dependent amplitude
\[i\tau^{\rm S}_{\oplus}\simeq-i(2m_{e})\,\boldsymbol{\xi}^{\prime\dagger}\left(-\frac{c_{\rm gr}}{m_{e}}\,g_{\rm gr}\,\frac{\sigma^{2}}{2}\,B_{\rm gr}\right)\boldsymbol{\xi},\] (B.6)
where \(B_{\rm gr}\) denotes the strength of the Earth's static gravitational field. Hence, a finite momentum transfer \(\bar{q}\) (i.e., a finite
energy transfer from the Earth) may rotate an electron spin to an upper direction in a coordinate frame fixed on the Earth's surface.
The amplitude \(\tau^{\rm S}_{\oplus}\) provides the Born approximation for the scattering of an electron off the potential due to the Earth's gravity:
\[i\tau_{\rm Born}=-i\left(2m_{e}\right)\boldsymbol{\xi}^{\prime\dagger}\tilde{V}(\bar{q})\,\boldsymbol{\xi}\ \xrightarrow{\ \text{Figure 11}\ }\ i\tau^{\rm S}_{\oplus}=-i(2m_{e})\,\boldsymbol{\xi}^{\prime\dagger}\tilde{V}_{\rm gr}(\bar{q})\,\boldsymbol{\xi}.\] (B.7)
By comparing (B.6) with (B.7), we obtain
\[\tilde{V}_{\mathscr{gr}}(0)=-\frac{c_{\mathscr{gr}}}{m_{e}}g_{\mathscr{gr}}\, s^{y}B_{\mathscr{gr}}\ \ \mbox{with}\ \ g_{\mathscr{gr}}=2,\]
in the Breit frame.
The coefficient of the angular-momentum operator \(s^{y}=\boldsymbol{\xi}^{\prime\dagger}(\sigma^{2}/2)\boldsymbol{\xi}\) gives the rotation angle of an electron spin; thus, the amplitude (B.6) generates a counter-clockwise spin rotation around the \(y\)-axis, rotating an initial horizontal spin vector (along the \(x\)-axis) toward the \(z\)-axis (upward). The gravitational force \(B_{\rm gr}\) induces an orbital motion with the orbital angular frequency \(\omega_{O}^{2}=|B_{\rm gr}|\). On the other hand, the angular frequency of the spin rotation \(\omega_{S}\) is provided using the relation between the rotational energy of an angular momentum and its angular frequency, such that:
\[\frac{1}{2}|\vec{\mu}_{\mathscr{gr}}|\omega_{S}^{2}=c_{\mathscr{gr}}g_{ \mathscr{gr}}\,|B_{\mathscr{gr}}|\rightarrow\omega_{S}=\pm\omega_{O}.\]
In a clockwise orbital motion of an electron, this provides a \(2\pi\) counter-clockwise rotation to the spin in the inertial system during one cycle. On the other hand, the electron spin points in the same direction during the orbital motion in the global system, which is fixed on the Earth. The spin rotation in the inertial system is therefore spurious, an artefact of observing in a rotating coordinate system. The first-order approximation under the weak gravitational field provides \(g_{\rm gr}=2\), inducing no spin precession. This first-order result is consistent with the classical spin precession of a gyroscope in the weak Schwarzschild gravitational field of the Earth[54]. It is known that higher-order corrections induce a geodesic precession[54].
For the finite-momentum-transfer case with \(\bar{q}\neq0\), the second term in (B.3b), proportional to \(\boldsymbol{\xi}^{\prime\dagger}(\sigma^{2}/2)\boldsymbol{\xi}\), provides a gravitomagnetic interaction of the Earth's gravity with the electron spin, which gives a gravitomagnetic moment slightly different from \(g_{\rm gr}=2\) owing to the gravitomagnetic effect. More precisely, this term generates a counter-clockwise spin rotation around the \(y\)-axis, rotating an initial horizontal spin vector (along the \(x\)-axis) toward the negative \(z\)-axis. We define the anomalous gravitomagnetic moment as
\[a_{\mathscr{gr}}:=\frac{g_{\mathscr{gr}}-2}{2}=\frac{1}{2}\left(g_{\mathscr{gr }}\left(\bar{q}\right)-g_{\mathscr{gr}}\left(0\right)\right),\ \ \mbox{with}\ \ g_{\mathscr{gr}}(\bar{q}):=-\left(\frac{c_{\mathscr{gr}}}{m_{e}}\,s^{y}B_{ \mathscr{gr}}\right)^{-1}\tilde{V}_{\mathscr{gr}}(\bar{q}).\] (B.8)
The anomalous magnetic-moment measurements may include an effect of the above contribution.
Experimental measurements:In the electron anomalous magnetic-moment measurements[55, 56], the electron has a small Lorentz factor \(\gamma_{e}\) and a small \(\gamma_{e}\beta_{e}\). Moreover, these experiments utilise free-falling electrons with \(\bar{q}\simeq0\). Thus, the spin precession owing to the gravitomagnetic effect is negligible compared with the magnetic one.
On the other hand, the BNL-FNAL type \(g_{\mu}\!-\!2\) measurements use the _magic momentum_[57, 36] of \(p=4.094\) GeV, corresponding to a Lorentz factor \(\gamma=29.4\) and \(\beta=0.999421\), to eliminate the spin precession due to the focusing electric field; thus, we can expect a sizable precession owing to the gravitomagnetic moment in the Earth's gravity. The electrostatic quadrupole (ESQ) covers \(13/30\) of the muon storage ring[58]. Muons in the storage ring are kept horizontal on average by the electric field of the ESQ. Therefore, we estimate that each muon receives a momentum transfer of \(\bar{q}_{\oplus}\simeq G_{\rm N}M_{\oplus}/r_{\oplus}\simeq6.95\times10^{-10}\) on average, which induces the spin precession owing to (B.8), such that:
\[a_{\mathscr{gr}}/c_{\mathscr{gr}}=257\,858\times 10^{-11}.\]
On the other hand, the muon anomalous magnetic moment is estimated theoretically[59] as
\[a_{\text{SM}}:=\frac{g_{\text{SM}}-2}{2}=116\,591\,810\times 10^{-11}.\]
The total precession effect is estimated as
\[a_{T}^{2}:=a_{\text{SM}}{}^{2}+\left(c_{\text{gr}}\,a_{\text{gr}}\right)^{2}+2c _{\text{gr}}\,a_{\text{SM}}\,a_{\text{gr}}\cos\theta_{a},\]
where \(\theta_{a}\) is the angle between the two precession axes. The precession owing to the anomalous gravitomagnetic moment is around the \(y\)-axis, and that owing to the anomalous magnetic moment is around the \(z\)-axis in the lab frame. Consequently, \(\theta_{a}=\pi/2\), and the gravitomagnetic contribution to the anomalous magnetic moment is
\[\delta a_{\text{gr}}:=\left.\left(\,a_{T}-a_{\text{SM}}\right)\right|_{c_{ \text{gr}}=1}=2.8\times 10^{-9}.\]
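This number can be checked directly from the values quoted above; the snippet below evaluates \(a_{T}\) with \(c_{\rm gr}=1\) and \(\theta_{a}=\pi/2\).

```python
import math

a_SM = 116_591_810e-11   # anomalous magnetic moment (theory, quoted above)
a_gr = 257_858e-11       # anomalous gravitomagnetic moment (c_gr = 1, quoted above)
theta_a = math.pi / 2    # angle between the two precession axes

a_T = math.sqrt(a_SM**2 + a_gr**2 + 2 * a_SM * a_gr * math.cos(theta_a))
delta_a_gr = a_T - a_SM

print(f"delta a_gr = {delta_a_gr:.2e}")  # -> ~2.85e-09, consistent with 2.8e-9
```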
The measured muon anomalous magnetic moment is reported[36] as
\[a_{\rm Exp}=116\,592\,061\times 10^{-11},\]
yielding
\[\delta a:=a_{\rm Exp}-a_{\text{SM}}=(2.51\pm 0.41\pm 0.43)\times 10^{-9},\]
which suggests the gravitational coupling constant is consistent with unity.
|
2308.03495 | Balanced Face Dataset: Guiding StyleGAN to Generate Labeled Synthetic
Face Image Dataset for Underrepresented Group | For a machine learning model to generalize effectively to unseen data within
a particular problem domain, it is well-understood that the data needs to be of
sufficient size and representative of real-world scenarios. Nonetheless,
real-world datasets frequently have overrepresented and underrepresented
groups. One solution to mitigate bias in machine learning is to leverage a
diverse and representative dataset. Training a model on a dataset that covers
all demographics is crucial to reducing bias in machine learning. However,
collecting and labeling large-scale datasets has been challenging, prompting
the use of synthetic data generation and active labeling to decrease the costs
of manual labeling. The focus of this study was to generate a robust face image
dataset using the StyleGAN model. In order to achieve a balanced distribution
of the dataset among different demographic groups, a synthetic dataset was
created by controlling the generation process of StyleGAN and annotated for
different downstream tasks. | Kidist Amde Mekonnen | 2023-08-07T11:42:50Z | http://arxiv.org/abs/2308.03495v1 | Balanced Face Dataset: Guiding Stylegan to Generate Labeled Synthetic Face Image Dataset for Underrepresented Group
###### Abstract
For a machine learning model to generalize effectively to unseen data within a particular problem domain, it is well-understood that the data needs to be of sufficient size and representative of real-world scenarios. Nonetheless, real-world datasets frequently have overrepresented and underrepresented groups. One solution to mitigate bias in machine learning is to leverage a diverse and representative dataset. Training a model on a dataset that covers all demographics is crucial to reducing bias in machine learning. However, collecting and labeling large-scale datasets has been challenging, prompting the use of synthetic data generation and active labeling to decrease the costs of manual labeling. The focus of this study was to generate a robust face image dataset using the StyleGAN model. In order to achieve a balanced distribution of the dataset among different demographic groups, a synthetic dataset was created by controlling the generation process of StyleGaN and annotated for different downstream tasks.
Kidist Amde Mekonnen, AIMS-AMMI, Rwanda ([email protected] \(|\) [email protected])

_Keywords:_ StyleGAN, Fairness, Representation, Representation Bias, Imbalanced Dataset, Generative Adversarial Networks, Synthetic images.
## 1 Introduction
Deep learning has proven to be a successful approach in several machine learning domains, such as computer vision, natural language processing, and speech processing [1, 2, 3, 4, 5]. However, one of the limitations of deep learning is that it requires large amounts of data, often in the form of labeled datasets [6, 7, 8, 9, 10].
Advancements in high-performance hardware have made it feasible to train large deep-learning models. These models generally consist of a large number of trainable parameters and require vast datasets for their training. In the domain of facial images, the large-scale datasets currently employed are frequently gathered without considering the demographic distribution, leading to biased data. The training datasets used to train these large models often lack transparency and careful curation, resulting in a lack of geodiversity and inadvertently introducing gender, ethnic, and cultural biases. When we disregard how data is collected, processed, and organized, the result is an uneven distribution and bias in the dataset, ultimately producing a biased model when trained on that dataset [11, 12, 13, 14, 15].
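A first, low-cost diagnostic for such representation bias is simply to audit the demographic distribution of a labeled dataset before training. The sketch below assumes a hypothetical metadata table with `gender` and `ethnicity` columns; the file name and column names are illustrative, not from any specific dataset.

```python
import pandas as pd

# Hypothetical metadata for a face-image dataset (file and columns are assumptions),
# e.g. columns: file, gender, ethnicity, age.
meta = pd.read_csv("face_dataset_metadata.csv")

for attr in ["gender", "ethnicity"]:
    share = meta[attr].value_counts(normalize=True)
    print(f"\n{attr} distribution:\n{share.round(3)}")
    # A perfectly balanced dataset has share ~ 1/n_groups for every group.
    imbalance = share.max() / share.min()
    print(f"max/min group ratio: {imbalance:.1f}")
```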
Generative Adversarial Network (GAN)-based models have been used to generate synthetic datasets [16, 17]. GANs [18] are deep generative models consisting of two networks, a generator and a discriminator, which compete against each other. The task of the generator network is to generate synthetic data that imitates real data in order to trick the discriminator, while the role of the discriminator network is to distinguish fake images from real ones. However, traditional GANs are limited to small dataset sizes and low-resolution image generation, so generating high-resolution images has been a challenging task. Controlling the attributes of the generated image poses another challenge.
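The adversarial game described above can be written down compactly. The following is a minimal, illustrative PyTorch sketch of the alternating generator/discriminator updates; the network sizes, learning rates, and data format are placeholders, not the architecture of any specific paper.

```python
import torch
import torch.nn as nn

z_dim = 64
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, 784) tensor of images
    batch = real.size(0)
    z = torch.randn(batch, z_dim)
    fake = G(z)

    # Discriminator: push real images toward label 1, generated images toward 0.
    loss_D = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: fool the discriminator into outputting 1 on fakes.
    loss_G = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```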
StyleGAN [19] is an extension of the GAN architecture that advances the generator model. StyleGAN tackled the traditional GAN challenges by building a style-based generator architecture that generates high-quality images and disentangles the latent factors of variation. StyleGAN has been trained on the Flickr-Faces-HQ dataset and has shown that it can generate high-quality (1024 x 1024), realistic synthetic human face images. Disentangling the latent factors of variation in StyleGAN has enabled the guided generation of images.
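Guided generation with a style-based model typically amounts to manipulating the intermediate latent space. The sketch below is schematic: it assumes a pretrained generator object exposing `mapping` and `synthesis` networks and a `w_avg` average latent (names loosely follow common StyleGAN implementations but are assumptions here), plus a precomputed attribute direction in \(w\)-space.

```python
import torch

# Assumed handles (illustrative): a pretrained StyleGAN-like generator with a
# mapping network z -> w and a synthesis network w -> image, plus a unit vector
# `direction` in w-space associated with some attribute (e.g. age or skin tone).
def generate_controlled(generator, direction, strength=2.0, truncation=0.7):
    z = torch.randn(1, generator.z_dim)            # sample in the input latent space
    w = generator.mapping(z)                       # disentangled intermediate latent
    w = generator.w_avg + truncation * (w - generator.w_avg)  # truncation trick
    w_edit = w + strength * direction              # move along the attribute direction
    return generator.synthesis(w_edit)             # image with the shifted attribute

# Sweeping `strength` over a range yields faces that vary along one attribute
# while other factors of variation stay (approximately) fixed.
```

Sweeping such directions per demographic attribute is one way to steer the generation process toward a balanced synthetic dataset, which is the strategy this study pursues.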
This research aims to investigate how the distribution of datasets across different demographic groups impacts model bias. Specifically, we seek to answer the following research questions:
* How does an uneven distribution of datasets across different demographic groups contribute to model bias?
* What are the cost-effective ways to create an evenly distributed dataset across different demographic groups?
* Can generating synthetic datasets help reduce the bi |
2310.19687 | Sentiment Analysis in Digital Spaces: An Overview of Reviews | Sentiment analysis (SA) is commonly applied to digital textual data,
revealing insight into opinions and feelings. Many systematic reviews have
summarized existing work, but often overlook discussions of validity and
scientific practices. Here, we present an overview of reviews, synthesizing 38
systematic reviews, containing 2,275 primary studies. We devise a bespoke
quality assessment framework designed to assess the rigor and quality of
systematic review methodologies and reporting standards. Our findings show
diverse applications and methods, limited reporting rigor, and challenges over
time. We discuss how future research and practitioners can address these issues
and highlight their importance across numerous applications. | Laura E. M. Ayravainen, Joanne Hinds, Brittany I. Davidson | 2023-10-30T16:04:35Z | http://arxiv.org/abs/2310.19687v1 | # Sentiment Analysis in Digital Spaces: An Overview of Reviews
###### Abstract
Digital data generated via social media have become a rich resource for sentiment analysis (SA) researchers seeking to understand individuals' feelings, attitudes, and emotions. Numerous systematic reviews have sought to synthesize the diverse contexts, media, and methodologies studied across a variety of applications. While these reviews are useful for synthesizing _what_ work has been conducted, they rarely address questions about the validity of SA methods or critically assess scientific practices. Our overview of 38 reviews comprises 2,275 primary studies using SA to dissect online digital data. We provide a high-level overview of current applications, methods, outcomes, and common challenges in SA research. A bespoke quality assessment framework was designed to assess the rigor and quality of systematic review methodologies and reporting standards. We found diverse applications, methods, outcomes, and persistent challenges outlined in the reviews, which we discuss in relation to the validity of SA research. Importantly, the methodological rigor of the reviews was limited, which we discuss in light of how existing systematic reviews might influence our understanding of the field and impact the subsequent decisions that researchers and practitioners may take. We discuss how future research can address these issues and highlight their importance across numerous societal applications.
CCS CONCEPTS: \(\bullet\) Computing methodologies \(\rightarrow\) Artificial intelligence \(\rightarrow\) Natural language processing
**Additional Keywords and Phrases:** Sentiment analysis, review methodology, psychology, digital data, social media
**ACM Reference Format:**
## 1 Introduction
The proliferation of the internet, social media, and digital devices have resulted in the generation of vast amounts of data. For example, language, or textual data published across websites, online communities, news sites, and social media platforms can reflect individuals' subjective states, such as emotions, opinions, and attitudes (e.g., Giachanou & Crestani,
2016; Qiu et al. 2010; Xu et al. 2012). Since the early 2000's the popularity of sentiment analysis (SA) has grown substantially; where researchers can access sentiment data from diverse populations (i.e., individuals that may be difficult to access otherwise and from countries around the world) at scale. Such data can offer vast insights into feelings about, and public opinions on the societal issues of our time. For example, investigations into people's sentiments during infectious disease outbreaks, epidemics, and pandemics including Ebola and Covid-19 have contributed to a deeper understanding of people's sentiments toward unfolding events, the spread of misinformation, and the prevalence of fake news (e.g., Bangyal et al. 2021; Iwendi et al. 2022). As other examples, research of student feedback in massive open online courses (MOOCS) has been used to leverage insights into dropout rates and influence policies in higher education (e.g., Crossley et al. 2016; Kastrati et al. 2021) and investigations into emotions derived from social media posts have been used to predict behavior on the stock market (e.g., Bollen et al. 2011; Pagolu et al. 2016).
The rapid rise of SA applications has prompted multiple systematic reviews seeking to synthesize existing work in particular domains. Such reviews have covered topics including online reviews and public opinion (Jain et al. 2021; Kumar & Sharma, 2017; Skoric et al. 2020), Covid-19 (e.g., Aljedaani et al., 2022; Alamoodi et al., 2021), and SA in different languages (e.g., Handayani et al. 2018; Ghallab et al. 2020; Obiedat et al. 2021; Osorio Angel et al. 2021). A common approach applied involves reviewing what methods have been employed in previous research, for instance those used for data collection, data processing, and the models used for prediction (e.g., Asghar et al. 2014; Yadav & Vishwakarma, 2020; Zad et al. 2021). While such approaches are often useful for summarizing methods and for creating taxonomies of the findings across specific applications or sub-disciplines of SA (e.g., Alamoodi et al. 2021; Ghallab et al. 2020), they rarely provide insights into or critical assessment of the validity of different SA methods, such as inspection of how the construct of interest ('sentiment') is defined and operationalized in extant literature, or the extent to which proposed SA systems can be used beyond their original target population, context, or time period. Further, the guidance on performing and reporting systematic reviews in computer science is limited; researchers often consult guidelines provided from medicine (such as the Cochrane Collaboration, e.g., Chalmers, 1993, the EQUATOR network, [https://www.equator-network.org/](https://www.equator-network.org/)) or software engineering (Kitchenham, 2004; Kitchenham & Charters, 2007), which may not be entirely appropriate or applicable to SA contexts. Consequently, research may be inadequately synthesized, analyzed, and reported, which could weaken the evidence presented, impacting subsequent research decisions and directions. Insights could also misdirect policy and implementations of SA-based tools if synthesized findings are unreliable.
### Sentiment analysis in digital spaces: the need for an overview of reviews
Digital spaces encompass a vast range of media from which researchers aim to infer individuals' emotions and attitudes, via social media, blogs, online communities, and news sites. Each of these platforms introduces unique contextual nuances; for instance, the way users communicate and present themselves on Instagram may greatly differ from their interactions on TikTok (Davidson & Joinson, 2021). These disparities are shaped by community norms, established conventions (e.g., De Souza & Preece 2004; Preece & Shneiderman, 2009), and design affordances that influence users' interactions (e.g., Chen et al. 2019; Norman, 1998; Zhao et al. 2013). For example, users of online health communities may be motivated to gain reputation points, prompting them to write informative and supportive posts (e.g., Chen et al. 2019), while users on X (formerly known as Twitter) may seek to generate viral content by creating highly contentious messages (e.g., Pressgrove et al. 2018). The digital landscape also facilitates multimodal communication, where SA can span diverse sources such as conversational/audio data from videos, alongside textual data, likes, emojis and so forth (e.g., Abdu et al. 2021; Perdana & Pinandito, 2018; Shiha & Ayvaz, 2017). Indeed, these distinctive contexts and data modalities diverge significantly from offline or alternative forms of media such as essays (Rude et al. 2004), stories (Alm et al. 2005),
and conversations (Huddar et al., 2021; Shenoy and Sardana, 2020), which comprise different styles of writing or speech and tend to focus on language only. The characteristics of online content make it a rich source of data for SA, while also posing specific challenges, such as variation in language use (e.g., misspellings, slang). As such, our overview targets online content, due to its popularity, potential, and persisting challenges for SA.
Although the majority of SA research tends to revolve around discussions embedded within computer science communities, the study of language and emotions in psychology (for example, see the work by Pennebaker and colleagues; Chung and Pennebaker, 2011; Pennebaker et al., 2003; Tausczik and Pennebaker, 2010) could provide valuable insights into the way researchers conceptualize and operationalize individuals' sentiments. As such, we explore how researchers define sentiment in systematic reviews that investigate (and search for evidence on) particular topics, because SA has been applied widely across applied disciplines (e.g., politics) and is increasingly used as a tool within mental health diagnostics (e.g., Rajput, 2020; Wang et al., 2020; Yeow and Chua, 2022), which suffer from their own inconsistencies in definitions, and therefore in measurement, of mental health symptomologies (Fried, 2017). Hence, definitions are critical for research because they lay the foundations for scientific inquiry and are influenced by prior evidence, theory, or even cultural differences (Fried, 2017; Lim, 2016); this is critical in clinical settings or when working with vulnerable people.
Thus, without formal definitions and consensus regarding how we understand a construct, such as sentiment, the ways to measure it across contexts (e.g., verbal cues, body language, in text, audio, video) will likely be inconsistent and incomparable. Further, a lack of 'ground truth' for sentiment (or emotion or feeling more generally) is an example of issues related to a latent construct, wherein lies difficulty in measuring something inherently unobservable. Therefore, our conceptualizations, definitions, and measurement of sentiment rely on natural language (akin to much of psychological constructs), meaning that our consistency of measurement remains questionable and thus will have a knock-on impact on reproducing, replicating, and generalizing from current work (Bollen, 2002; Yarkoni, 2020).
Systematic reviews are often considered to be the most valid approaches for evaluating research findings, according to the so-called "hierarchy of evidence" that appraises different methodologies in terms of their effectiveness, appropriateness, and feasibility (e.g., Evans, 2003; Kitchenham, 2004). By providing a "complete" picture of evidence on a given topic, readers can evaluate the strengths and weaknesses of existing research to make informed decisions about future actions (MacKenzie et al., 2012). Systematic reviews can therefore be highly informative for the study of SA in digital spaces because they can educate readers on the most effective techniques and strategies for analysis and can identify gaps in current findings, guiding future researchers to address them. This is especially pertinent given the many varied data modalities and platforms that evolve rapidly amidst technological advances and unfolding events. Thus, systematic reviews can provide cutting-edge insights into public opinion and sentiments surrounding societal and global issues. They can inform policy makers' decisions and responses to public events, such as political developments, natural disasters, the spread of misinformation and hate speech. Systematic reviews can also direct technical decisions, designs, and implementations based on a collection of evidence that should provide direction on what has worked well and what has been less successful, thus providing evidence to help research focus on fruitful avenues of work. An overview of reviews can therefore synthesize a broad array of evidence and identify key trends, gaps and persistent challenges across an entire field that could not be identified otherwise. We therefore investigate the topics, methods, findings, and challenges identified in systematic reviews of SA in digital spaces.
It is critical that systematic reviews are performed meticulously and comprehensively in order to effectively use their findings in evidence-based decision making and in subsequent research (note - this also applies to the insights derived from an overview of reviews). In the fields of medicine and healthcare (from where systematic reviews originated, Cochrane,
1972; Chalmers, 1993) it is widely expected that researchers will adhere to strict protocols when performing systematic reviews (MacKenzie, 2012; Moher et al., 2015; Page et al., 2021). However, fields outside of medicine often have no (or limited) discipline-specific guidelines for conducting systematic reviews, creating situations where researchers commonly consult guidelines from medicine and adjust them to suit their project needs. Moreover, researchers performing systematic reviews of SA often possess expertise in SA or computer science methodologies rather than systematic reviewing techniques. This stands in contrast to medical researchers who are typically specialists in systematic reviews. Consequently, SA researchers might lack the experience and expertise required to formulate new protocols tailored toward reviewing SA in digital spaces. For instance, a researcher conducting a risk of bias assessment may attempt to deconstruct the reporting of methodological procedures such as the model set up, tuning processes, data processing and evaluation to investigate the factors that may influence the validity of the findings, and whether publication bias may exist. However, they may not have systematic reviewing expertise to formulate and apply such criteria in this context. Similarly, SA experts may not have the expertise, awareness, or motivation to document their methods and findings in a way that supports the detail and transparency required to perform a systematic review. Researchers conducting systematic reviews may therefore misinterpret the methods and results or incorrectly apply systematic reviewing protocols to existing work. As a result, the quality and rigor of the procedures applied may be compromised, leading to inconsistent interpretation and application of existing frameworks. Alternatively, some researchers might even resort to devising their own reviewing protocols, or in some cases overlook the application of protocols entirely.
Adhering to procedures that are well-defined and structured ensures that the review is systematic, as well as replicable and free from bias (e.g., Page & Moher, 2017; Sarkis-Onofre et al., 2021). Effective systematic reviews are therefore reliant on the detailed and thorough reporting of the research, from the conceptualization of the research questions through to the findings and conclusions. Concerns over the inaccurate, misleading, or incomplete reporting of machine learning models are increasingly being raised in fields such as medicine (e.g., Christodoulou et al., 2019; DeMassi et al., 2017; Faes et al., 2020; Yusuf et al., 2020). These have been reflected in increasing calls for computer scientists to document their methods transparently, for instance see "model cards for model reporting" (a framework for documenting the performance characteristics of models in detail, Mitchell et al., 2019) and "datasheets for datasets" (a framework for documenting the provenance, creation, and use of machine learning datasets to avoid discriminatory outcomes, Gebru et al., 2021), amidst changes in journal policies to promote computational reproducibility (the sharing of data and code to support the replication of the published findings, Stodden et al., 2018). However, we do not currently know the extent to which systematic reviewing procedures are followed, or how transparently methods and findings are reported, within the context of SA in digital spaces.
This article therefore aims to address this gap by presenting an overview of reviews, i.e., a synthesis of existing systematic reviews. As such, we first investigate the current state of the art by collating the diverse range of topics, methods, findings, and challenges of SA research. We then develop a bespoke quality assessment framework designed to assess the rigor and quality of the systematic review methodologies and reporting standards of existing research. This evaluation is further complemented with an inspection of how the construct of interest ('sentiment') is defined and the extent to which open science practices are followed in current research. By evaluating the quality of research, we discuss the strength of existing findings and identify directions for future research. Our findings provide a comprehensive resource that can aid researchers, practitioners, data scientists, and policy makers in making decisions surrounding research design and the application of SA in digital spaces in future work.
## 2 Method
An overview of reviews is a synthesis of results from several systematic reviews and meta-analyses on a particular topic. The purpose of an overview of reviews is to summarize and evaluate research evidence at a broad level, and to provide an "entry point" for readers to access more detailed findings reported in the systematic reviews and primary studies (Caird et al., 2015; Hunt et al., 2018). Accordingly, to synthesize existing systematic reviews, we followed the Preferred Reporting Items for Overviews of Reviews (PRIOR) guidelines (Gates et al., 2022), which are akin to the PRISMA protocol (Page et al., 2021). The PRIOR checklist outlining where the present overview meets each of the criteria is available on the Open Science Framework (OSF, here), along with all supplementary materials associated with this project.
**Protocol and Pre-registration.** The research protocol was pre-registered before data extraction and analysis (available here). During our research, we found it necessary to update our original pre-registered protocol and to deviate from it in a number of ways. These modifications were essential to extract more detailed and suitable information from the reviews. A detailed overview of these deviations is provided in a separate document on OSF (file Protocol_Amendments.xlsx, here).
**Eligibility criteria.** Only systematic reviews, meta-analyses or mapping studies were included, which we defined as studies that report a reproducible search strategy implemented in at least one database, including year range, clearly outlined search terms and inclusion/exclusion criteria for the primary studies. Additionally, the inclusion criteria for the reviews were that they (i) evaluate, summarize or compare SA tools, (ii) include primary studies that use digital data (e.g., from online platforms or devices) and (iii) include empirical studies. During our full-text eligibility assessment, we enforced an additional requirement: the study's search terms had to align with one of our search terms for sentiment analysis (see below in Search Strategy) or a synonym of these terms. This criterion was included to ensure that we capture only studies that aimed to review SA methods, rather than studies with a broader focus on natural language processing (NLP) or social media analytics. All studies reported in English and available on the final date of search (31st of January 2023) were included. The eligibility of articles was not restricted by publication status; both published and unpublished documents were included (e.g., peer-reviewed journal articles, preprints, and dissertations). Studies examining any human populations, regardless of age, gender, or geographic location were included.
**Information sources.** We consulted four electronic databases in performing our search. These included: ACM Digital Library, IEEE Xplore, PsychInfo (including PsychExtra for unpublished papers) and Scopus. This combination of databases enabled us to perform an exhaustive search, and to capture a breadth of literature on SA across the computational (ACM Digital Library and IEEE Xplore) and psychological (PsychInfo and PsychExtra) sciences as well as across other disciplines that may have conducted such work (Scopus).
**Search Strategy.** Our first step was to search for all overviews of reviews of SA across all online domains and to note the general topics and findings reported. This was to ensure that we did not duplicate existing overviews of reviews that had already been conducted. Our findings here informed any changes to our pre-registration and allowed us to focus more specifically on certain aspects of SA. This process resulted in the identification of one previous overview of reviews of SA by Ligthart et al. (2021), which focused on systematic reviews of text-based SA. Our overview differs from theirs in that (1) we synthesized only systematic reviews conducting SA with online data, (2) we did not restrict data modality to text-based SA and (3) we focused on the methodological and reporting quality of the reviews by developing a quality assessment framework.
The final search string was arrived at after pilot-testing several search string variations. The aim of this exercise was to capture as many relevant documents as possible, while excluding documents that were irrelevant for our research aims. For instance, the preliminary searches revealed a significant number of survey papers. Browsing through several of these documents led us to exclude 'survey' as one of our search terms because these articles tended to be non-systematic in
nature (e.g., narrative reviews, editorials, or commentaries). This exercise also revealed that several studies used the terms "sentiment analysis" and "opinion mining" interchangeably. Thus, we included both terms in our search (see section 4.1.1 for discussion). After iterative testing of different search term combinations, our final search string consisted of three clauses of search terms, which focused on sentiment analysis, online contexts, and types of reviews, as follows:
"sentiment analysis" OR "sentiment detection" OR "sentiment classification" OR "opinion mining" AND online OR platform OR community OR forum OR "social network"" OR "social media" OR facebook OR twitter AND SLR OR "Systematic review" OR "systematic literature review" OR "systematic mapping" OR "mapping study" OR "meta analysis" OR "meta-analysis" OR "taxonomy" OR "scoping review"
Papers with titles or abstracts that contained at least one term listed in each search clause were included (e.g., a paper including "sentiment analysis", "social network" and "systematic review" in its title or abstract). Finally, we conducted a further search for systematic reviews by extracting the citations of systematic reviews analyzed in other overviews of reviews. This resulted in searching through the systematic reviews studied in the one overview study identified (i.e., by Ligthart et al., 2021).
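To illustrate how this three-clause rule operates, the sketch below (in R, the language we used for the overlap analysis reported later) retains a record only when its title or abstract contains at least one term from each clause. The function names and the example record are ours, for illustration only; in practice the matching was performed by the databases' own search engines.

```r
# Three search clauses mirroring the search string above (lower-cased; the
# "*" wildcard in "social network*" is handled by matching the bare prefix
# as a substring).
clause_sa     <- c("sentiment analysis", "sentiment detection",
                   "sentiment classification", "opinion mining")
clause_online <- c("online", "platform", "community", "forum",
                   "social network", "social media", "facebook", "twitter")
clause_review <- c("slr", "systematic review", "systematic literature review",
                   "systematic mapping", "mapping study", "meta analysis",
                   "meta-analysis", "taxonomy", "scoping review")

# TRUE if the text contains at least one of the terms (case-insensitive).
matches_clause <- function(text, terms) {
  any(vapply(terms, function(t) grepl(t, tolower(text), fixed = TRUE),
             logical(1)))
}

# A record is included only if all three clauses are matched.
include_record <- function(title_abstract) {
  all(vapply(list(clause_sa, clause_online, clause_review),
             function(cl) matches_clause(title_abstract, cl), logical(1)))
}

include_record("A systematic review of sentiment analysis on Twitter")  # TRUE
```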
### Selection process
The documents identified via the literature search were imported into Zotero and duplicates were removed. The documents were then screened for inclusion, first based on titles and abstracts, and then based on the full text of the documents. Figure 1 depicts a PRISMA flowchart of the study selection process.
**Document inclusion.** In the title-abstract phase, the first author (LA) screened all 210 documents. A sample of 82 documents was then screened by the other two reviewers (second and third authors, henceforth JH and BID), 41 unique documents each. Half of these 82 documents were categorized as 'maybe' by LA and were used as a basis for the calibration discussions. The remaining 41 documents were used to assess inter-rater agreement. Cohen's κ = 0.72 for the 21 documents screened by LA and JH and κ = 0.60 for the 20 documents screened by LA and BID indicated substantial and moderate agreement between raters, respectively (Landis & Koch, 1977). Discrepancies were discussed and addressed by the reviewer team, after which the remaining documents were screened again by LA.
In the full text phase, 94 documents (including five additional documents from Ligthart et al., 2021, see Figure 1) were screened by LA. A sample of 54 documents was then screened by JH and BID, where again 20 documents marked as 'maybe' by LA were used as a calibration exercise (5 screened by all three reviewers, 8 by LA and JH and 7 by LA and BID) and the remaining 34 documents were used to assess inter-rater agreement. Cohen's κ = 0.88 for the 17 documents screened by LA and JH and κ = 0.76 for the 17 documents screened by LA and BID indicated almost perfect and substantial agreement between raters, respectively (Landis & Koch, 1977). Discrepancies from the screening were discussed by the reviewer team before the rest of the documents were screened again by LA. 41 documents were included in the data extraction phase, but three of these were later deemed not eligible during the data extraction process, after a discussion between LA, JH, and BID: closer examination revealed that two of them were not strictly systematic based on our criteria and one did not focus on SA. Hence, we proceeded with 38 documents. Figure 2 displays the screening process, and the full list of excluded studies with reasons for exclusion is available on OSF (file Full_Text_Exclusion.xlsx, here).
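For reference, the inter-rater agreement statistic used above can be computed from two reviewers' include/exclude decisions as below. This is a minimal sketch of the standard Cohen's κ calculation; the example vectors are invented rather than taken from our screening data.

```r
# Cohen's kappa: chance-corrected agreement between two raters.
cohens_kappa <- function(rater1, rater2) {
  tab <- table(rater1, rater2)
  n   <- sum(tab)
  po  <- sum(diag(tab)) / n                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
  (po - pe) / (1 - pe)
}

r1 <- c("include", "include", "exclude", "exclude", "include", "exclude")
r2 <- c("include", "exclude", "exclude", "exclude", "include", "exclude")
cohens_kappa(r1, r2)  # ~0.67, "substantial" on the Landis & Koch scale
```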
**Overlap of reviews.** The list of primary studies included in each of the reviews was extracted and compared by LA. Overall and pairwise overlap percentages, estimated as the corrected covered area (CCA), were then calculated using the ccaR package (Bougioukas et al., 2023) in R (version 4.2.0). The overlap was assessed for all primary studies that could be confirmed from the reviews. A total of 127 primary studies were missing due to insufficient reporting in the reviews, including all of the primary studies for one review; thus, 37 reviews were included in this assessment. The analysis comprised 2,148 primary studies, of which 1,935 were unique. Primary studies were matched across reviews based on first author, year, and title. Citation strings were unified by removing special characters and typographical errors and by capitalizing the author-year-title strings.
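As a worked example, applying the corrected covered area formula of Pieper et al. (2014) to the counts reported above reproduces the overall overlap estimate reported in section 3.1; the ccaR package performs the same calculation for each review pair.

```r
# CCA = (N - r) / (r * c - r), where N = total primary-study occurrences
# across reviews, r = unique primary studies, and c = number of reviews.
cca <- function(N, r, c) (N - r) / (r * c - r)
cca(N = 2148, r = 1935, c = 37)  # ~0.003, i.e., the 0.3% overall overlap
```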
Figure 1: PRISMA flow chart of study selection
### Extraction and coding of study data
**Data collection process.** The data from the studies were extracted by LA, and a sample of 10 reviews was validated by JH and BID. In the event of discrepant data for the same primary study presented in different reviews, we consulted the original primary study for each review (e.g., the version of the primary study used for data extraction), and if necessary, contacted the authors of the reviews reporting discrepant data.
**Data items.** The following basic descriptors of the records were extracted: Authors, year, title of the study, publication status, journal name (for published articles) and DOI. Additionally, the following characteristics were extracted from the reviews to summarize the key findings regarding SA methods, applications and challenges discussed in the reviews:
* Data source, modality, and language used in the primary studies (Table 3)
* Pre-processing steps and features extracted in the primary studies (Table 4)
* Sentiment classifiers and lexicons used in the primary studies (Table 4)
* Applications and outcomes of SA in the primary studies (Table 5)
* Challenges discussed in the reviews (section 3.3)
Figure 2: The process of document screening
All data extraction was performed by LA. Extracted data regarding SA methods, applications and outcomes were then summarized and tabulated. For the extracts about challenges discussed in the reviews, the discussion points were first inspected and grouped into themes before tabulation.
### Quality assessment framework
To assess the methodological quality of the reviews, we developed a framework that assessed the design, method, implementation, and reporting presented in each article. The need to improve reporting and data sharing has been widely discussed in fields including medicine and psychology (for example see Patzer et al., 2021; Roche et al., 2015; Towse et al., 2021; Yusuf et al., 2020). However, related discussions or recommendations for reporting research are not prevalent within computer science communities (to our knowledge). Similarly, there are no guidelines or frameworks for assessing the risk of bias or the methodological quality of computer science reviews. We therefore consulted a range of frameworks and checklists from different disciplines and applied or adapted relevant criteria to the present study. These included Hinds et al.'s (2021) framework that assesses reporting standards and the practical utility of automated approaches to predicting personality, as well as the PRISMA (Page et al., 2020) and PRIOR (Gates et al., 2022) guidelines, which delineate protocols for reporting systematic reviews and overviews of reviews, respectively. For each criterion, we extracted associated data and developed a scoring system that assessed whether each criterion was reported (yes/no, or yes/partly/no; Table 1 provides a breakdown of the scoring system applied). To explore the influence of review guidelines on the quality of the reviews, we also compared the quality of reviews based on whether and which review guidelines were used in each review.
### Open science, reproducibility, and replicability
We further inspected how the term 'sentiment' was defined in the reviews, to gauge the level of consensus about this core concept of the field, as unclear definitions may lead to inconsistent operationalization of constructs of interest, thus hindering the replicability of research. This involved extracting any definitions provided in the reviews and grouping this information based on whether a definition was provided and how sentiment was defined (i.e., which terms were used to define it). We also tracked the accessibility of the reviews (i.e., whether the reviews were open access or otherwise freely available) and any related materials (e.g., whether any protocols or code were shared), as additional indicators of open science practices in SA research. By nature, systematic reviews (and systematic methods such as meta-analyses more generally) should be open and reproducible; this is critically important to ensure that the way in which the project was composed and executed is credible and verifiable. Hence, we deemed it important to capture how open, reproducible, and replicable the reviews are. The full list of extracted data is presented in the supplementary materials on OSF (file Data_Extraction.xlsx, here).
| Quality criterion | Yes | Partly | No |
| --- | --- | --- | --- |
| Guidelines | The systematic review is informed by review guidelines (such as those written by Kitchenham, 2004, Tranfield et al., 2003, or the PRISMA framework, Page et al., 2020). | The systematic review refers to guidelines or their sources but does not use any specific guidelines in performing the review. | The systematic review cites neither guidelines nor their sources in performing the review. |
| Rationale | A clear and well supported rationale for conducting the systematic review is provided. | A rationale for performing the systematic review is provided but is limited in terms of the importance of the topic or need for a systematic review. | |
| Objective | A clear list of research questions/aims is provided. | A clear list of research questions is missing but the general aim of the review is expressed. | |
| Inclusion/Exclusion | Inclusion/exclusion criteria for the systematic review are specified. | | Inclusion/exclusion criteria for the systematic review are not specified. |
| Search Strategy: Study Identification | An exhaustive search strategy to identify studies is reported. This includes the adoption of multiple approaches, which may include (some combination of) searching multiple databases, forward/backward searching, announcements (making calls for papers), and manual searching. | A search strategy is employed but may not be exhaustive based on the detail reported. For instance, a systematic review may report a single database search with no further searches. | |
| Search Strategy: Search Terms | A clear and exhaustive list of search terms is reported, including specification of the relationship of the terms (e.g., use of Boolean operators). | Search terms are clearly reported but their relationship (e.g., Boolean operators) is unclear. | A complete list of search terms is missing from the systematic review. |
| Search Strategy: Search Dates | The date of the search is reported and includes the month and year that the search was performed. | The date of the search is reported and includes only the year. | The date of the search is not reported. |

Table 1: Scoring criteria of the quality assessment framework
## 3 Results
### 3.1 Characteristics of the reviews
Our final sample comprised 38 articles in total, which included 33 systematic reviews, two scoping reviews, two systematic mapping studies and one meta-analysis (we refer to these collectively as reviews). The key information for each review is provided in Table 2 and the full reference list of each included review is provided in Appendix A.
As seen in Table 2, the majority of the 38 reviews were published in 2020 or later; 70% of these were published between 2020-2022, and the earliest review was published in 2013. The temporal coverage of the reviews ranged from two to 13 years, and the year 2013 was most often covered in different reviews (10 reviews). Based on the subject areas listed for each journal in Scopus, the most common disciplines in which the reviews were published (including multidisciplinary journals/conferences) were computer science (85%) and engineering (40%). Five reviews were excluded from these calculations (Agustiningsih et al., 2021; Elnagar et al., 2021; Handayani et al., 2018; Pinto et al., 2021; Tho et al., 2020), as these were published in conferences for which the subject area was not listed in Scopus.
The overlap of primary studies across the reviews was estimated as overall and pairwise CCA. The overall CCA across the 37 reviews was 0.3%, which is considered slight according to the thresholds of Pieper et al. (2014). Four review pairs shared a moderate overlap (6-10%) and one review pair reached an overlap of 28.7%, which is considered very high. All the review pairs with moderate or higher overlap were in the health domain. See Figure 3 for a heatmap depicting each pairwise CCA.
| Study | Study ID | Primary studies | Year range of included studies | Review type | Article type | Discipline | Inclusion |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Abdullah & Rusli 2021 | S1 | 45 | 2011-2018 | SLR | Journal | AgriBioSci; ChemEng; CompSci; EnvSci | Studies using multilingual SA, published 2010-2019 |
| Agustiningsih et al. 2021 | S2 | 21 | 2020-2021 | SLR | Conference | - | Studies using SA on Covid-19 vaccine posts in Twitter, published 2020-2021 |
| Ahmad et al. 2018 | S3 | 8 | 2013-2016 | SLR | Journal | CompSci | Studies using SVM for SA, published 2012-2017 |
| Alamodi et al. 2021a | S4 | 33 | 2017-2021 | SLR | Journal | CompSci; Med | Studies using SA on vaccine hesitancy, published 2010-2021 |
| Alamodi et al. 2021b | S5 | 28 | 2013-2020 | SLR | Journal | CompSci; Eng | Studies using SA on infectious diseases, published 2010-2020 |
| Aljedami et al. 2022 | S6 | 47 | 2020-2022 | SLR | Journal | CompSci; Eng; Math | Studies using AI techniques for SA about Covid-19 vaccines, published 2020-2021 |
| Al-Moslmi et al. 2017 | S7 | 28 | 2010-2016 | SLR | Journal | CompSci; Eng; MatSci | Studies using cross-domain SA, published 2010-2016 |
| Cortis & Davis 2021 | S8 | 485 | 2010-2018 | SLR | Journal | A&H; CompSci; SocSci | Studies using opinion mining or SA on social media data, published 2007-2018 |
| Dalipi et al. 2021 | S9 | 40 | 2014-2021 | SLR | Journal | CompSci | Studies using SA on MOOC student feedback, published 2015-2021 |
| de Oliveira Lima et al. 2018 | S10 | 29 | 2013-2018 | SM | Conference | CompSci; Math | Studies using opinion mining on online hotel reviews, published 2013-2017 |
| Elnagar et al. 2021 | S11 | 60 | 2012-2020 | SLR | Conference | - | Studies using ML for Arabic SA, published 2012-2020 |
| Ghallab et al. 2020 | S12 | 108 | 2013-2018 | SLR | Journal | CompSci; Eng | Studies using Arabic SA, published 2013-2018 |
| Hajjali 2020 | S13 | 23 | 2014-2019 | SLR | Journal | CompSci; Math | Studies using IgA and SA, published 2005-2019 |
| Handayani et al. 2018 | S14 | 10 | 2013-2017 | SLR | Conference | - | Studies using Malay SA, published 2008-2018 |
| He et al. 2021 | S15 | 89 | 2011-2019 | SLR | Journal | Med | Studies using computational SA methods on English social media about health topics, published 2010-2019 |
| Ibrahim & Salim 2013 | S16 | 65 | 2009-2013 | SLR | Journal | CompSci; Math | Studies using SA on Twitter, published 2003-2013 |

Table 2: Key information of the reviews
### 3.2 Specific sentiment analysis methods
SA methods were summarized and compared in varied levels of detail in the selected reviews. Furthermore, most reviews focused on a particular domain or method/s, such as SA in health or SA using deep learning methods. Table 3 summarizes the foci of the reviews, along with the most common sources, languages and modalities of the data studied.
Figure 3: The pairwise corrected covered area (CCA) of the reviews
Table 4 summarizes the most common characteristics of the SA methods, such as the pre-processing steps and classifiers used. Due to the variability in the level of detail provided, and with the aim to provide a concise overview of the key characteristics in SA methods (e.g., data sources, classifiers), we summarized the top 3 most common characteristics within each review. Similar choices for summarizing the diverse field of SA have been made in previous literature syntheses, for instance, Cortis and Davis (2021) report the top 6 most used lexicons, stating that 55 unique lexicons across studies were used, and an additional 19 studies created their own lexicons. We opted for summarizing the top 3 characteristics (e.g., data sources, classifiers), rather than the top 6, top 10 etc., due to a large variety of methods and the small number of primary studies in some of the reviews, as this would result in listing methods that were only used in a single or a handful of studies. These summaries were created to the best of our understanding, given the varying level of detail provided in the reviews. Finally, Table 5 contains brief summaries of the applications and outcomes of SA, when available. Further, see Appendix B for definitions of the technical terms used in the following summaries.
The key insights from Table 3 reveal that the reviews approach the SA literature from a variety of perspectives; the most common categories were reviews of SA in a particular language (11 reviews), reviews of specific SA methods (10 reviews) and reviews in the health domain (8 reviews). The most common data sources in the reviews, quantified as those occurring in the top 3 sources within each review, were Twitter (22 reviews), Facebook (11 reviews) and user reviews (5 reviews), excluding reviews with a particular data source as an inclusion criterion. The languages of the data most often in the top 3 were English (7 reviews) and Chinese (5 reviews). Notably, Arabic was an inclusion criterion for seven reviews. The modality of the data was predominantly text, when specified or implied in the reviews (6 reviews). A further seven reviews only included studies using textual data.
The key insights from Table 4 are that pre-processing steps in SA were not reported in sufficient detail in the majority of the reviews (29 reviews). When these were reported, the most frequently mentioned in the top 3 were tokenization (6 reviews) and stop word removal (4 reviews). The most common features extracted for SA were TF-IDF (6 reviews), BoW, n-grams and WE (5 reviews each), whereas features were reported in insufficient detail in 26 reviews. The ML classifiers that were most frequently reported in the top 3 were SVM (24 reviews), NB (22 reviews) and KNN (7 reviews), while the most common deep learning methods were LSTM and CNN (6 reviews each) and Bi-LSTM (3 reviews). Finally, lexicons were often not reported in detail (27 reviews); however, when reported, a variety of different lexicons, including language-specific lexicons, were described. The lexicons most often in the top 3 were SentiWordNet (6 reviews), LIWC, VADER and TextBlob (3 reviews each).
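To make the lexicon-based approach that recurs across the reviews concrete, the toy sketch below sums token polarities from a lexicon and maps the total to a class. The miniature lexicon is invented for illustration; real resources such as SentiWordNet or VADER are vastly larger and also handle negation, intensifiers, and context, all of which this sketch ignores.

```r
# A tiny polarity lexicon (illustrative only, not a real resource).
lexicon <- c(good = 1, great = 2, love = 2, bad = -1, awful = -2, hate = -2)

lexicon_sentiment <- function(text) {
  # Strip punctuation, lower-case, and split into tokens.
  tokens <- strsplit(tolower(gsub("[[:punct:]]", "", text)), "\\s+")[[1]]
  score  <- sum(lexicon[tokens], na.rm = TRUE)  # unknown tokens contribute 0
  if (score > 0) "positive" else if (score < 0) "negative" else "neutral"
}

lexicon_sentiment("I love this great phone")    # "positive"
lexicon_sentiment("Awful battery, bad screen")  # "negative"
```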
The key insights from Table 5 are that SA has been applied in several different domains, and a considerable number of reviews (15 reviews) did not provide systematic summaries of the applications. The applications in reviews focusing on health-related SA were more homogeneous, revolving around opinions and experiences about epidemics, health services and treatments (including different medicines and vaccines). By contrast, the applications for language-focused reviews were often not specified in detail (7/11 reviews), and the remaining reviews listed varied application areas, such as economy, social and politics, or SA method development in a given language. In the reviews of specific SA methods, SA was applied in diverse settings, most notably marketing and sales, and method assessment or development. The remaining reviews were in the domains of education, finance, politics, and services and tourism, with too few reviews per domain to arrive at a meaningful summary of the applications. The outcomes of SA were either summarized as the conclusions drawn from SA in the subject area (e.g., whether public opinion on Covid-19 vaccines was generally positive or negative) or as comparisons or strengths and weaknesses of different SA methods, with 20 reviews not providing a synthesis of the outcomes. Method comparisons appeared particularly fragmented, such that several reviews did not provide a synthesis of the varied approaches taken in the primary studies (see section 3.3).
| Study ID | Review focus | Data source | Modality | Language |
| --- | --- | --- | --- | --- |
| S1 | Language | - | - | 1. English + others 2. English + Chinese + others* |
| S2 | Health | Twitter (required) | - | 1. English 2. Indonesian |
| S3 | SA methods | Twitter, User reviews, Student comments | - | - |
| S4 | Health | 1. Twitter 2. Facebook 3. Scholarly journals | - | 1. English |
| S5 | Health | 1. Twitter 2. Facebook 3. Sina Weibo | 1. Text 2. Image | 1. English 2. Arabic 3. Chinese |
| S6 | Health | 1. Twitter 2. Online survey 3. Reddit | - | 1. English 2. Chinese, Japanese, Hinglish (Hindi-English), Turkish |
| S7 | SA methods | 1. Amazon reviews | - | Turkish |
| S8 | SA methods | 1. Twitter 2. Sina Weibo 3. Facebook | 1. Text 2. Text & image | 1. English 2. Chinese 3. Norwegian |
| S9 | Education | 1. Twitter | 1. Text | - |
| S10 | Services & tourism | - | Text only (required) | - |
| S11 | Language | 1. Twitter 2. Text corpus 3. User reviews | - | Arabic (required) |
| S12 | Language | 1. Twitter 2. User reviews 3. Facebook | - | Arabic (required) |
| S13 | SA methods | - | Text only (required) | - |
| S14 | Language | 1. Twitter 2. Facebook | - | Malay (required) |
| S15 | Health | 1. Twitter 2. Health-specific online communities 3. Facebook | Text only (required) | English (required) |
| S16 | Language | Twitter (required) | - | Arabic (required) |
| S17 | Services & tourism | 1. Tripadvisor 2. Yelp.com 3. Online surveys | 1. Text | English (required) |
| S18 | Education | 1. Surveys & questionnaires 2. MOOC platforms | Text only (required) | English, Chinese |
| S19 | SA methods | 1. Twitter 2. Imdb 3. Amazon | - | - |
| S20 | SA methods | Twitter (required) | Text only (required) | English (required) |
| S21 | Political | 1. Twitter* | Text only (required) | - |
| S22 | Services & tourism | 1. Tripadvisor and Dadoon.com 2. Twitter and Sina Weibo 3. Flickr* | - | - |

Table 3: Characteristics of SA data reported in the selected reviews
| Study ID | Pre-processing | Features | ML | DL | Transfer | Lexicons |
| --- | --- | --- | --- | --- | --- | --- |
| S1 | 1. Machine translation 2. Tokenization 3. N-gram | - | 1. SVM 2. NB 3. KNN | - | - | - |
| S2 | 1. Stop word removal 2. Punctuation and link removal 3. Case folding / 3. Tokenization | 1. BoW / 1. TF-IDF | 1. NB 2. SVM 3. RF | 1. Bi-LSTM 2. LSTM / 2. CNN | 1. BERT | - |
| S3 | - | - | SVM (required) | - | - | - |
| S4 | - | - | - | - | - | - |
| S5 | - | - | - | - | - | - |
| S6 | - | 1. BoW / 1. TF-IDF 2. Word2Vec | 1. SVM / 1. RF 2. NB 3. KNN / 3. LR | 1. BERT 2. Bi-LSTM | - | 1. VADER 2. TextBlob 3. LIWC / 3. Amazon Comprehend |
| S7 | - | - | - | - | - | - |
| S8 | Tokenization, Stemming, Lemmatization, NER, Dictionaries for stop words, acronyms and slang words | WE | 1. NB 2. SVM 3. LR | 1. LSTM 2. CNN 3. RNN | - | 1. SentiWordNet 2. Hu & Liu 3. AFINN / 3. SentiStrength |
| S9 | - | - | 1. NN 2. NB 3. SVM | - | - | 1. VADER 2. TextBlob 3. SentiWordNet |
| S10 | - | - | 1. SVM 2. NB 3. LDA | - | - | 1. SentiWordNet |
| S11 | - | - | 1. SVM 2. NB 3. KNN | 1. LSTM 2. CNN | - | - |
| S12 | 1. Stemming 2. Normalization 3. Tokenization | 1. N-grams 2. TF-IDF 3. Information gain / 3. WE | 1. SVM 2. NB 3. KNN | - | - | - |
| S13 | - | - | - | - | - | - |
| S14 | - | - | 1. KNN 2. SVM / 2. NB | - | - | 1. Opinion corpus for Malay |
| S15 | - | 1. BoW 2. WE / 2. Linguistic features (e.g., PoS and post length) | 1. SVM / 1. NB 2. LR 3. AdaBoost | - | - | 1. LIWC 2. SentiStrength 3. LabMT |
| S16 | - | 1. N-grams 2. Combined features 3. TF-IDF / 3. BoW | 1. SVM 2. Combined classifiers 3. NB | - | - | - |
| S17 | - | - | 1. Regression 2. SVM 3. NB / 3. LR | - | - | - |
| S18 | Tokenization, PoS, Normalization, Text cleaning | - | 1. NB 2. SVM 3. DT / 3. NN | - | - | 1. VADER 2. SentiWordNet 3. TextBlob / 3. Semantria |
| S19 | - | - | 1. SVM 2. LB including hybrid 3. NB | - | - | - |
| S20 | - | - | 1. SVM 2. NB 3. Ensemble methods | - | - | - |
| S21 | - | - | - | - | - | - |
| S22 | - | - | - | - | - | - |
| S23 | - | 1. WE | - | 1. CNN 2. LSTM 3. Bi-LSTM | - | - |
| S24 | - | - | - | - | - | - |
| S25 | 1. Tokenization / 1. Normalization 2. Stop word removal 3. Stemming | 1. PoS 2. N-grams 3. NER / 3. WE | 1. SVM 2. RNN 3. CNN / 3. NB / 3. KNN / 3. DT | - | - | 1. Arabic Sentiment Lexicon 2. SentiWordNet / 2. ABRL lexicon |
| S26 | 1. Tokenization 2. PoS 3. Stop word removal | 1. N-grams 2. TF-IDF | 1. SVM 2. Multinomial NB | 1. CNN 2. LSTM | - | - |
| S27 | - | - | 1. SVM 2. DL 3. NN | - | - | - |
| S28 | - | - | - | - | - | - |
| S29 | - | - | 1. SVM 2. NB 3. Maximum entropy / 3. Unsupervised information extraction / 3. Relaxation labelling | - | - | - |
| S30 | - | - | - | 1. LDA 2. Probabilistic latent semantic analysis | - | - |
| S31 | - | 1. PoS 2. Frequency / 2. Syntax 3. Negation | 1. Dictionaries 2. SVM 3. NB | - | - | - |
| S32 | - | - | 1. SVM 2. NB 3. KNN | 1. CNN 2. LSTM | - | - |
| S33 | Tokenization | - | 1. SVM 2. AdaBoost | - | - | 1. Hu & Liu's opinion lexicon |
| S34 | - | - | - | - | - | - |
| S35 | - | - | - | - | - | - |
| S36 | 1. Tokenization 2. Emoticon processing / 2. URL removal | 1. N-grams | 1. SVM 2. LR / 2. NB | - | - | - |
| S37 | - | 1. TF-IDF / 1. Absolute frequency | - | - | - | - |
| S38 | - | - | 1. SVM 2. NB 3. DT | - | - | 1. SentiWordNet 2. Opinion lexicon 3. WordNet-Affect / 3. Multi-Perspective Question Answering |

Note. 1./2./3. = the most / the second most / the third most common characteristic of the SA method; when characteristics are listed without ranking numbers, their frequency was not clear from the review. '-' = no mention / insufficient or unsystematic description. (required) = the given SA method was an inclusion criterion for the review.

Table 4: Characteristics of SA methods reported in the selected reviews
Table 5: Applications and outcomes of SA reported in the selected reviews
### 3.3 Challenges discussed in the reviews
Diverse discussion points relating to challenges and areas of improvement for future studies on SA were raised in our selected reviews. Here we outline the most discussed issues. Firstly, the characteristics of natural language data, such as slang, negation, and spelling errors, were recognized as a challenge in 12 reviews. In particular, sarcasm and irony were considered an issue for SA systems (6 reviews). The general need for SA resources was also voiced in several reviews, such as the need for larger and publicly available datasets (6 reviews) and lexicons for specific languages and domains (5 reviews). Another common theme discussed was the need for more studies and resources in languages other than English (10 reviews), including research efforts on SA in multilingual or code-switched (i.e., alternating between two or more languages in a conversation) settings.
Importantly, 15 reviews touched on the topic of difficulties in SA method comparisons, either by acknowledging factors that influence method performance and how these factors vary across studies, by noting that performance metrics were reported rarely or in varied ways, or by suggesting solutions for improving method comparison in the future. More specifically, the need for standardization in SA research was discussed (13 reviews), particularly in terms of the datasets used for evaluating SA methods (6 reviews). Other concerns relating to standardization focused on reporting (6 reviews), especially regarding performance metrics: several primary studies did not report these at all, or the variability of the metrics used across studies made quantitative comparisons between SA systems difficult. These commonly recognized issues in method comparisons and standardization may explain why the outcomes of SA studies regarding methods were rarely reported or summarized in the included reviews.
### 3.4 Quality of the selected reviews
We assessed the methodological and reporting quality of the selected reviews against our quality assessment framework (Table 1). We also inspected which guidelines are commonly followed in conducting SA literature reviews, and whether the use of guidelines was related to the quality of the reviews. Based on our quality assessment of the selected reviews (Table 6; scoring for each individual review is available on OSF, file Quality_Assessment.xlsx, here), the methodological rigor in synthesizing SA literature is limited. This is seen most acutely in the validity of screening and data extraction, the quality assessment of primary studies, the critical assessment of the review itself (i.e., limitations), sampling, and the pre-registration of the review. These methodological steps were either not conducted, not reported at all, or not reported in sufficient detail in over 60% of the reviews. Conversely, research objectives, inclusion/exclusion criteria and search terms were generally reported adequately, in over 65% of the reviews. Below, we outline the most critical areas of review quality and discuss these further in section 4.3.
| Quality Assessment Criterion | Yes | Partly | No |
| --- | --- | --- | --- |
| Guidelines | 24 | 8 | 6 |
| Rationale | 15 | 13 | 10 |

Table 6: Summary of quality assessment of the reviews
#### 3.4.1 Guidelines followed in the reviews
A variety of guidelines for conducting a review were followed in the selected studies. The most mentioned guideline (13 reviews) was that of Kitchenham (2004) and its updated version, Kitchenham & Charters (2007), which outlines the steps in conducting a systematic review in software engineering. A further eight reviews applied the PRISMA protocol, and three studies followed other guidelines (Tranfield et al., 2003; Denyer & Tranfield, 2009; Arksey & O'Malley, 2005). We inspected visually whether using a guideline was associated with improved quality of the reviews and, if so, whether the type of guideline used was important. To this end, the reviews were grouped into those using a guideline (Yes) and those that did not (Partly and No, see Table 1). The reviews following a guideline were further grouped into those following Kitchenham's guidelines and those following the PRISMA framework, the most common guidelines adopted in our sample. The quality of the reviews in each of these four groups ("Yes", "Partly and No", "Kitchenham", and "PRISMA", see Figure 4) was quantified as the proportion of reviews scored as 'yes' in our quality assessment. As seen in Figure 4, while following existing guidelines generally resulted in higher review quality than not using guidelines, this pattern was not completely consistent, and the quality of the reviews remained low for several items even when guidelines were followed. Furthermore, review quality was not clearly associated with the particular review guideline followed (Kitchenham or PRISMA).
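The grouped comparison plotted in Figure 4 amounts to the tabulation sketched below; the data frame stands in for our quality assessment scores (file Quality_Assessment.xlsx on OSF), and the column names and rows are illustrative rather than our actual data.

```r
# For each quality criterion, compute the proportion of reviews scored
# "yes", split by the guideline group the review belongs to.
scores <- data.frame(
  guideline_group = c("Kitchenham", "PRISMA", "None", "Kitchenham", "PRISMA"),
  search_terms    = c("yes", "yes", "partly", "yes", "no")
)
tapply(scores$search_terms == "yes", scores$guideline_group, mean)
#> Kitchenham       None     PRISMA
#>        1.0        0.0        0.5
```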
#### 3.4.2 Search strategy and included studies
The majority of the reviews reported appropriate search strategies and clearly defined search terms. However, the date on which the search was performed was not clearly reported in 19 of the reviews, which hinders attempts to replicate these literature searches. Furthermore, identifying some of the included primary studies was not possible from 10 of the reviews, which prevents verifying, assessing or replicating these reviews in full.
#### 3.4.3 Validity of screening and data extraction
Only nine reviews clearly reported the measures taken to ensure valid study screening, and only two studies included a measure of inter-rater reliability of this process. As study screening has an element of subjectivity to it, valid screening approaches (such as calibration exercises or involvement of multiple reviewers) are essential for minimizing errors and bias in the study selection process (e.g., Higgins et al., 2023; Kitchenham & Charters, 2007; Xiao & Watson, 2019). As these procedures were generally reported inadequately or not at all in our selected reviews, it is unclear how representative the samples of the primary studies are in these reviews. Furthermore, only three reviews provide a clear description of measures taken to ensure validity of data extraction from the primary studies. As data extraction is also a process that is prone to errors and bias, it should involve calibration exercises or the involvement of multiple reviewers (e.g., Higgins et al., 2023; Kitchenham & Charters, 2007; Xiao & Watson, 2019). As this step of the review process was generally neglected, the reliability of the outcomes in the selected reviews may be compromised due to this shortcoming.
#### 3.4.4 Quality assessment and limitations
Quality assessment or risk of bias assessment of the selected primary studies can serve several functions, and it is recommended as the final stage of study screening or as a guide for interpreting findings (e.g., Higgins et al., 2023; Kitchenham & Charters, 2007; Xiao & Watson, 2019). Whether quality assessment was used for gaining a better picture of the potential limitations of the included studies or as a final screening for inclusion, this step of the review process was reported in only 14 reviews. This finding, coupled with the general lack of critical evaluation of the reviews themselves (reported in 10 reviews), indicates that methodological considerations do not receive enough attention in SA literature syntheses. As such, assessing the trustworthiness of the conclusions from these reviews is compromised.
Figure 4: Proportion of reviews scored as 'yes' for each quality assessment criterion by guideline use (refer to Appendix C for a table containing the information in this bar plot)
#### 3.4.5 Sampling and pre-registration
Only seven reviews contained some discussion of sampling, primarily in terms of data representativeness of online data. However, more critical considerations, such as risk of bias due to samples used in the primary studies were not discussed. Finally, pre-registration was not mentioned in any of the 38 reviews. As such, it is unclear how systematic the review processes were, particularly regarding whether methodological choices were pre-determined or made on an ad hoc basis. One of the benefits of pre-registering a review protocol is that it reduces the risk of bias during the review, for example in primary study inclusion (e.g., Higgins et al., 2023). Developing a protocol is also recommended in disciplines outside of medicine, although pre-registration is not considered mandatory (Kitchenham & Charters, 2007; Xiao & Watson, 2019).
### 3.5 Open science, reproducibility, and replicability
In addition to our quality assessment framework, we considered three important aspects of the reviews - the definitions of the core concept in the reviews (i.e.,'sentiment'), accessibility of the reviews and any shared material. Although these are not strictly required in some review guidelines (e.g., Kitchenham & Charters, 2007, cf. Page et al., 2021 for publicly available materials), they are nevertheless a crucial part of good scientific practices, as clear definitions of the concept of interest directly relates to construct validity of SA research (see section 4.1.1) and accessibility of the reviews and related materials promotes transparency and inclusivity of the information provided in the reviews.
#### 3.5.1 Definitions of sentiment
Inspecting whether and how the core concept of sentiment was defined in the selected reviews, we found that this term was defined explicitly in only seven reviews (e.g., Kumar & Garg, 2020; Osorio Angel et al., 2021; Qazi et al., 2017). Although these definitions differed from one another, sentiment was typically described as a term closely related to opinion, attitude, emotion, feeling etc., while pointing out that these terms have subtle differences in their meanings. In 20 reviews, the meaning of sentiment was implied via a description of what is analyzed in SA (e.g., Dalipi et al., 2021; Ghallab et al., 2020; Shamsi & Abdallah, 2022). In these types of definitions, what was being analyzed differed across reviews, but was often equated with opinions (17 reviews), emotions (12 reviews), attitudes (9 reviews) or feelings (5 reviews); for instance, that SA is used "to analyze people's opinions and feelings about organizations, services provided, and products." (Shamsi & Abdallah, 2022). The remaining 11 reviews provided uninformative, circular or no definitions for sentiment (e.g., Ahmad et al., 2018; de Oliveira Lima et al., 2018; Zunic et al., 2020); for instance, the aim of SA was described as "to automatically classify the sentiment expressed in a free text." (Zunic et al., 2020). Overall, our sample of reviews does not demonstrate consensus regarding how sentiment is defined. Explicit definitions of the term were rare, and the definitions (explicit or implied) differed from one another. However, several reviews appeared to equate sentiment with opinion, emotion, or attitude (see section 4.1.1 for discussion). These variances in the definition and comprehension of what 'sentiment' actually is have a knock-on impact on how we operationalize and thus measure sentiment, which is important for the inferences and conclusions made about people and data (Davidson, 2022).
#### 3.5.2 Accessibility of the reviews and related materials
Out of 38 reviews, 20 were accessible from the publisher's website, and an additional four reviews were available from another source, e.g., as a pre-print. Full access to the remaining 14 articles was paywalled or subscription based. Only three reviews provided related material, such as details about the screening process or data extraction sheets. These were available upon request from authors for two reviews (Dalipi et al., 2021; Sharma et al., 2020) and provided as supplementary material for one review (He et al., 2021), which, however, was not an open access article. Overall, open science practices
were widely neglected in our sample of reviews. Nearly 40% of the reviews were not freely accessible, and over 90% of the reviews lacked any supporting material that would allow full assessment and replication of the reviews.
## 4 Discussion
In this overview, we synthesized the key applications, methods, outcomes, and common challenges in SA research in digital spaces, and evaluated the quality of reviews in the field. Below, we consider our findings relative to the validity of SA research, expand on commonly raised challenges in SA research, and discuss the quality of literature syntheses in the SA field. This is followed by recommendations for future research in SA and NLP more broadly. We then outline the limitations of this overview and end this section with concluding remarks.
### 4.1 Validity in SA research
For a method to be useful, it must be valid. For the purposes of discussing validity in SA research, we focus on construct validity, i.e., whether the method measures the phenomenon or construct it was intended to measure (e.g., Cronbach & Meehl, 1955; Bollen, 2002), concurrent validity, i.e., whether the outcome of the method correlates with that of another accepted criterion (Cronbach & Meehl, 1955), and external validity, i.e., whether the method performance can be generalized to different settings and populations (e.g., Bracht & Glass, 1968; Campbell & Stanley, 2015).
#### 4.1.1 Construct validity
The central concept SA systems aim to detect and classify is sentiment, which is not directly measurable (Bollen, 2002), but which should be adequately operationalized using measurable indicators of sentiment. Determining such indicators requires a clear definition of the concept of interest (Sjoberg & Bergersen, 2022). This has been largely neglected in the SA literature, as no consensus on the definition of the core concepts exists. This is exemplified by our finding of missing, insufficient, and diverse definitions of sentiment in the selected reviews (section 3.5.1). The lack of consensus in the terms used not only creates further incongruity in primary studies but also impacts synthesizing evidence in the field.
Munezero et al. (2014) provide an in-depth examination of concepts referring to subjectivity, such as affect, emotion, feeling, sentiment and opinion. These concepts often share synonyms in dictionary definitions, making it difficult to capture the exact differences and similarities of the phenomena these concepts refer to. Munezero and colleagues (2014) describe how in the psychological literature more fine-grained definitions are suggested, which outline differences in the level of consciousness, duration of time and cultural influence associated with the subjective states. For example, sentiments are characterized as longer lasting and requiring an object, whereas emotions are shorter in duration and might be general rather than directed towards a particular object (e.g., waking up feeling sad for no clear reason). While these two subjective states are described as influenced by cultural context, opinions, by contrast, are defined as "personal interpretations of information that may or may not be emotionally charged" (Munezero et al., 2014). Compared to these distinctions in psychological and related disciplines, coarser measures of subjectivity appear to be adopted in NLP research, as demonstrated, for example, by the commonly interchangeable use of the terms sentiment and opinion. This is why we also used both terms during our literature search, to capture as much of the relevant literature as possible.
The distinction between subjective states is not trivial when SA is used for predicting behavior, because different subjective states are associated with behavior in different ways. For example, stable attitudes are more predictive of behavior than those with less temporal stability (Doll & Ajzen, 1992; Glasman & Albarracin, 2006). Expressions of negative and positive mood online have also been shown to vary by circadian rhythm (Golder & Macy, 2011). Thus, operationalizing sentiment as any indication of subjectivity becomes problematic, as it adds noise to the analysis. For
instance, if both transient expressions of emotions and longer-lasting sentiments towards a political party are equated with sentiment in SA for political forecasting, this can lead to limited predictive power and misleading results. Hence, it is critical to have agreed-upon definitions and conceptualizations of latent constructs, such as sentiment, to ensure these are measured consistently. However, akin to much of psychology, which relies heavily on natural language for theory, measurement and conceptualization, the ability to generalize findings leaves much to be desired (Yarkoni, 2020). This has a domino effect when we start to look at data from people digitally and at scale, because the way in which people use and behave across various digital devices and services changes (Davidson & Joinson, 2021) and the affordances associated with digital engagement influence behavior (Brown et al., 2022). This complicates whether and how we can capture one's sentiment consistently and accurately.
In addition to coarse definitions of sentiment, operationalizing sentiment has largely consisted of binary (positive, negative) or three-class (positive, neutral, negative) approaches. However, finer-grained classification of subjectivity is necessary for many applications. For instance, investigations on the public view of the COVID-19 vaccines would benefit from more detailed categories of emotions, such as fear and anticipation, to inform the best strategies for increasing vaccine acceptance and trust, as also discussed in Karafillakis and colleagues (2021). As such, developing more valid and effective SA systems requires a clear understanding of what exactly these systems classify, and what level of detail is needed for the systems to effectively serve their purpose.
#### 4.1.2 Concurrent validity
It is customary to compare the classification of SA systems to that of human annotators for a particular dataset, which can be considered an evaluation of the concurrent validity of the system. However, the accepted external criterion for SA method evaluation (i.e., the dataset) is not well-established in the field. Instead, varied datasets are used for testing SA systems, which severely compromises comparisons of different SA methods (as frequently discussed in our selected reviews, section 3.3). Lack of standardization in method evaluations ultimately hinders progress in SA research, as the relative success of different SA systems is crucial for informing further method development. Overlooking the importance of standardized method assessment risks misplaced trust in a system's capabilities as a result of inappropriate method evaluation, and misleading conclusions if unsuitable SA systems are applied in real-world settings.
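A minimal sketch of this customary evaluation is given below: system predictions are compared against gold (human-annotated) labels and summarized as accuracy and per-class F1. Both label vectors are invented for illustration and do not come from any of the reviewed studies.

```r
gold <- c("pos", "neg", "neu", "pos", "neg", "neu")  # human annotations
pred <- c("pos", "neg", "neu", "pos", "neu", "neg")  # system output

accuracy <- mean(gold == pred)  # ~0.67

# Per-class F1 from precision and recall.
f1_class <- function(class) {
  tp        <- sum(pred == class & gold == class)
  precision <- tp / sum(pred == class)
  recall    <- tp / sum(gold == class)
  2 * precision * recall / (precision + recall)
}
sapply(c("pos", "neg", "neu"), f1_class)  # pos = 1.0, neg = 0.5, neu = 0.5
```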
Apart from a lack of consensus on which evaluation datasets to use, the quality of these datasets also varies, which further amplifies the differences in SA method assessments. That is, some datasets used as ground truth are more reliable than others. For instance, user-annotated datasets (such as star ratings of product reviews) or crowdsourced datasets where each item is labelled by a single annotator are less reliable than agreement-based, multi-annotator datasets, where only unanimous or majority annotations for each item are included (Van Atteveldt et al., 2021). Even with multiple, trained annotators, the inter-annotator agreement likely remains low for more complex constructs (e.g., distinguishing anxious writing from angry, sad or worried writing; much like how hard sarcasm is to identify for those outside an in-group). If labelled data are not inherently reliable, any issues with these labels will feed into later training of SA systems. Further, it is important to note that cultural differences could easily come into play when labelling datasets, as the way in which emotions are described and expressed can differ between cultures (Jackson et al., 2019; Mesquita et al., 2016).
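To make the point concrete, annotation reliability can be quantified before labels are promoted to ground truth. The sketch below is a minimal illustration, assuming two hypothetical annotators and toy labels; it computes scikit-learn's chance-corrected agreement measure and then retains only unanimously labelled items.

```python
# A minimal sketch of checking annotation reliability before labels are used
# as ground truth; the two annotators and their labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg"]
annotator_b = ["pos", "neg", "pos", "pos", "neu", "pos", "neu", "neg"]

# Chance-corrected agreement between the two annotators.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Keep only unanimously labelled items for an agreement-based gold standard.
gold = [(i, a) for i, (a, b) in enumerate(zip(annotator_a, annotator_b)) if a == b]
print(f"{len(gold)} of {len(annotator_a)} items retained")
```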
Even though several benchmark datasets for different SA tasks exist, such as the SemEval series (e.g., Pontiki et al., 2016; Rosenthal et al., 2017), Stanford Sentiment Treebank (Socher et al., 2013), and STS-Gold (Saif et al., 2013), researchers may opt not to use them for several reasons, such as their unsuitability for the specific task, language or domain of a particular SA system, or because the research targets a specific group online. This issue can be further exacerbated by the
increased difficulty of obtaining digital data from some online platforms, where APIs are being monetized and data access restricted (Davidson et al., 2023). Further, it is not uncommon for digital platforms to restrict the use of data. For example, Reddit does not allow employing user-generated content as training data for machine learning, while X no longer allows inferring characteristics (e.g., political affiliation, health statuses, sexual orientation) on the individual level using their data (Davidson et al., 2023).
#### 4.1.3 External validity
This dimension of validity, relating to the generalizability of a method, is also largely neglected in SA research (Wijnhoven and Bloemen, 2014). A focus on external validity is particularly crucial when a method is applied in domains in which it was not developed or evaluated (van Atteveldt et al., 2021). For instance, investigations on general-purpose SA classifiers applied in the medical field point to high inconsistencies in method validity (He and Zheng, 2019; Weissman et al., 2019). While developing a general SA system may not even be a feasible goal due to differences in domains, cultural expressions of subjectivity (section 4.2.3) and linguistic variation in online communities (section 4.2.2), external validity of SA systems requires much more attention from the research community, so that the scope of each system is well understood and each system is applied only in appropriate settings.
Our overview shows a reliance on homogeneous online data sources (Twitter), language (English) and modality (text), which likely hinders the generalizability of findings from method assessments. This is because user demographics differ across social media platforms (e.g., Auxier and Anderson, 2021; Duggan and Brenner, 2013) and linguistic norms differ across online communities (Lucy and Bamman, 2021). As such, a well-performing SA system in one linguistic context may not be suitable in another (see section 4.2.2). Therefore, more comprehensive method evaluations are needed, especially regarding the types of datasets used. As SA systems are typically not tested with a variety of datasets and across domains, this likely leads to overestimation of the systems' success. Similar observations have been made about evaluation of NLP systems more generally. For instance, evaluation datasets are often "clean" (e.g., without misspellings), and as such do not resemble the type of data systems should handle in real-life scenarios (Belinkov and Bisk, 2018; Rychalska et al., 2019).
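A cross-domain evaluation of the kind argued for here can be sketched in a few lines. The snippet below is a toy illustration, with hypothetical texts and domain names, showing the basic pattern: train once, then report performance per evaluation domain rather than as a single aggregate score.

```python
# A minimal sketch of per-domain evaluation of one SA system; all texts,
# labels, and domain names are hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = ["great phone", "terrible battery", "love it", "awful screen"]
train_labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Report performance per domain instead of one aggregate score.
domains = {
    "product_reviews": (["great camera", "awful battery"], ["pos", "neg"]),
    "health_forum":    (["love this clinic", "terrible care"], ["pos", "neg"]),
}
for name, (texts, labels) in domains.items():
    preds = clf.predict(texts)
    print(name, f1_score(labels, preds, pos_label="pos"))
```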
### Challenges in SA research
The most commonly mentioned challenges for SA research in our selected reviews were specific characteristics of natural language (e.g., sarcasm, variations in spelling), the need for more non-English resources (such as publicly available lexicons and datasets) and, most importantly, the need for standardization in SA research. The lack of standardization was discussed in terms of method evaluation metrics, evaluation datasets and reporting. Below, we discuss these challenges.
#### 4.2.1 Sarcasm
A common challenge recognized in SA research is detecting sentiment from utterances using figurative language, such as sarcasm (e.g., Kastrati et al., 2021; Kumar and Garg, 2020; Poria et al., 2023). This is because sarcastic comments often express the opposite sentiment to what a literal interpretation of the comment might suggest. For instance, the comment "I love this phone, it only worked for two hours!" only contains positive words, yet the sentiment it conveys is negative. The background knowledge that phones are expected to last longer than two hours suggests that the comment was intended to ridicule the quality of the phone rather than praise it. The human ability to correctly interpret sarcastic expressions often relies on this type of background knowledge, or world knowledge (i.e., extra-linguistic information about entities, concepts, relationships, and facts about the world). Drawing from what is known about human language processing is likely beneficial for developing more efficient sarcasm detection in NLP systems. For example, the influence of world
knowledge in human language processing is demonstrated in psychological research on human sarcasm comprehension, which points to an involvement of a variety of contextual cues, such as social-cultural stereotypes regarding gender (Colston & Lee, 2004) and occupation (Katz & Pexman, 1997; Pexman & Olineck, 2002). These stereotypes are used as an aid in interpreting potentially sarcastic messages, e.g., that males are more likely to use sarcasm, or that comedians are more likely to use irony than doctors. This type of world knowledge, formed based on personal and cultural experience, is typically not available in the world knowledge implemented in NLP systems via knowledge bases, such as Wikidata (Vrandecic & Krotzsch, 2014), ConceptNet (Liu & Singh, 2004) and Atomic (Sap et al., 2019).
Apart from world knowledge, accurate interpretation of sarcasm often requires wider context than the comment or sentence of interest. For instance, determining whether the comment "he sure played well!" is sarcastic requires further context, such as "his team lost today". Sarcasm detection is particularly challenging in online content such as tweets, where contextual cues facilitating correct interpretation are often lacking due to the limited length of the content.
Thus, development of more accurate NLP systems could mean 1) additional, psychologically grounded contextual/user information utilized in enhancing the reliability of sarcasm detection and 2) world knowledge representations that capture more nuanced relationships between concepts, such as those acquired in a particular cultural context.
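As a concrete illustration of point 1), conversational context can be supplied to a classifier alongside the target utterance. The sketch below is a toy example with hypothetical data, pairing the utterance from the example above with a context sentence; any real system would need far more data and richer representations.

```python
# A minimal sketch of feeding conversational context together with the target
# utterance to a sarcasm classifier; all examples and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts   = ["his team lost today", "his team won today"]
utterances = ["he sure played well!", "he sure played well!"]
labels     = ["sarcastic", "literal"]

# Concatenate context and utterance with a separator token, so the model can,
# in principle, pick up on the mismatch between the two parts.
inputs = [c + " [SEP] " + u for c, u in zip(contexts, utterances)]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(inputs, labels)
print(clf.predict(["his team lost today [SEP] what a great game!"]))
```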
#### 4.2.2 Linguistic variation
Language use in digital spaces is diverse and varies across different online platforms and communities (Lucy & Bamman, 2021; Nguyen et al., 2016). For instance, different subreddits contain specific communication norms and specialized lingo that might be difficult to comprehend for individuals outside of the community (Roberts, 2018). Such community-specific language and specialized terms pose a challenge to accurate sentiment extraction if inappropriate SA resources are used. For instance, untrained or untuned language models would likely be less effective in these settings, leading to inaccurate predictions.
Apart from online language being diverse, it is also dynamic. That is, language and linguistic norms within online language communities are constantly evolving (Al-Kadi et al., 2018; Danescu-Niculescu-Mizil et al., 2013). It is therefore important to consider when the SA resources used were developed and whether the linguistic style or norms have changed since then in the intended target population. Updating resources such as lexicons and training data is an important consideration in SA research and a relevant aspect to report for the adequate evaluation of SA studies.
Analyzing online content is also challenging because it contains non-standard spelling, vocabulary and syntax. Increasing the accuracy of automatic text analysis in the presence of these variations of language has been approached with normalization, that is, converting non-standard language into standard language (e.g., "imma" into "I'm going to"). However, it is not straightforward to determine where the boundaries of non-standard language lie and what exactly the standard language is that online content should conform to. For instance, should "f lvr" be converted into "flavor" or "flavour"? (Eisenstein, 2013). It is also worth asking whether the process of normalization results in excluding information embedded in linguistic style that may carry meaning, such as signaling identity (Nguyen et al., 2016).
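In its simplest form, normalization is a dictionary lookup, as in the sketch below; the mapping entries and the choice of a target spelling variety are assumptions that a real system would need to justify, which is exactly where the ambiguities discussed above surface.

```python
# A minimal sketch of dictionary-based normalization; the mapping and the
# target spelling variety are assumptions, not an established standard.
import re

NORMALIZATION_MAP = {
    "imma": "I'm going to",
    "u": "you",
    "gr8": "great",   # choosing "great" already fixes one variety of English
}

def normalize(text: str) -> str:
    # Tokenize into word-like chunks and punctuation, then map known forms.
    tokens = re.findall(r"\w+|\S", text.lower())
    return " ".join(NORMALIZATION_MAP.get(tok, tok) for tok in tokens)

print(normalize("imma buy it, u will love it. gr8!"))
```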
Indeed, some commonly adopted approaches in SA research may in fact hinder the ultimate goal of detecting subjective states from the content of interest. For example, stop word removal is a frequently performed pre-processing step, during which function words, such as articles (a, the), pronouns (I, he, she) and prepositions (in, on, at) are removed from the data. However, removing function words excludes potentially valuable information for the task at hand. For instance, there is evidence to suggest that the frequency of using the first person singular ("I") is a better indicator of a depressed state than analysis of negative emotion words (Chung & Pennebaker, 2011), consistent with psychological evidence linking depression/negative affect with self-focus (e.g., Pyszczynski & Greenberg, 1987; Mor & Winquist, 2002). Drawing from
literature in psychological research on the purpose of function words and what they can reveal about subjective states of language users could thus be highly informative. Before function words are disregarded as noise, it would be wise to reconsider in which situations they might add to the efficiency and accuracy of NLP systems.
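One way to act on this is to turn function words into explicit features instead of stripping them, as in the sketch below; the pronoun list, example texts, and single-feature design are toy assumptions for illustration.

```python
# A minimal sketch of keeping function-word information as a feature rather
# than removing it as a stop word; texts and the pronoun set are toy choices.
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Fraction of tokens that are first person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(tok in FIRST_PERSON_SINGULAR for tok in tokens) / len(tokens)

print(first_person_rate("I feel like I always let everyone down"))   # 0.25
print(first_person_rate("The weather was nice at the beach"))        # 0.0
```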
#### 4.2.3 Non-English language resources.
Our work suggests that SA has mainly been applied to English online content and, more recently, to Arabic. A focus on English is a common finding in other literature syntheses on SA (e.g., Cui et al., 2023; Ligthart et al., 2021), as well as in NLP research more generally (e.g., Blasi et al., 2022; Joshi et al., 2020). As NLP technologies occupy an increasingly significant place in today's societies, neglecting low-resource languages in this development risks silencing less central communities and hindering representation of diverse voices. For example, research in the field of international relations is largely reduced to Western- and Anglo-centric investigations, due to limited non-English language resources (Windsor, 2022). As another example, when NLP solutions provide more accurate results in resource-rich languages, the utility of technological innovation mostly serves the members of the larger language communities, creating inequality in access to economic opportunities (Weidinger et al., 2021).
In SA research, one attempt to bypass the lack of resources in certain languages has been to use machine translation, although there is some evidence suggesting that this approach decreases the quality of SA compared to language-specific SA (e.g., Balamurali et al., 2013; Saadany and Orasan, 2020). Furthermore, the availability of language resources remains an issue in machine translation, particularly for low-resource language pairs, hindering the development of quality translation as well as its evaluation (see Haddow et al., 2022 for a survey). A related challenge is that concepts and their relationships are culture-specific and language-specific. As relationships between concepts differ across languages, translation equivalents may be used in different situations across languages, including concepts communicating emotion (e.g., Kollareth et al., 2018; Jackson et al., 2019). This highlights the complexity of SA across languages and suggests that mere parallel datasets for machine translation are not enough, as these mostly lack the culture-specific aspects of the given language (e.g., Akinade et al., 2023; Yao et al., 2023).
Whether increasing language diversity in NLP research is attempted through more developed machine translation or through language-specific resources, these endeavors require more involvement from agents with expertise in the target languages. One promising approach is Nekoto and colleagues' (2020) participatory research in African languages, in which the participants contributed to different stages of the language resource development pipeline, such as dataset curation and benchmarking. Importantly, this project also included knowledge sharing and mentorship between international researchers and the participants with expertise in the local languages. Together, these elements of the project addressed both the need for NLP technology know-how in low-resource language communities as well as the need for language and cultural expertise at each point of the language resource development.
#### 4.2.4 Lack of standardization in method evaluation.
Apart from a lack of consensus on the evaluation datasets that should be used in SA research (discussed in section 4.1.2), the field needs standardized procedures for method evaluation in general. This includes the performance metrics used and the level of detail in reporting in SA studies (section 3.3). Inconsistent evaluation procedures and insufficient reporting of SA systems prevent not only appropriate comparison of these systems, but also the accumulation of knowledge, as replication of different studies, or further development of different systems, is rendered impossible in the absence of sufficient information. This is exemplified by our finding that, for instance, pre-processing steps were not reported in sufficient detail in the majority of the reviews, which may be indicative of neglecting the importance of this information, or
insufficient reporting in the primary studies. These issues are prevalent in the NLP field more broadly (e.g., Escartin et al., 2021; Fokkens et al., 2013; Van Miltenburg et al., 2021).
Furthermore, a deeper understanding of how different NLP systems perform and where their weaknesses lie requires method evaluation that goes beyond commonly used performance metrics (e.g., precision, recall, accuracy, or F1-score). This is needed both for more realistic assessment of the systems' capabilities as well as more informative diagnostics of the systems (Ribeiro et al., 2020). For instance, detailed error analysis can offer valuable insights into when NLP systems fail, which areas of method improvement should be focused on next, and whether the errors produced by an NLP system are qualitatively similar to errors made by humans (for error analysis in SA research, see, for example, Xing et al., 2020; Zimbra et al., 2018). Apart from error analysis, more comprehensive evaluation frameworks have been developed, with focus on general linguistic capabilities of systems (Ribeiro et al., 2020) or their robustness for handling naturally occurring noise in real-life data (Rychalska et al., 2019). These types of evaluation processes should be adopted as a default approach in NLP studies, in order to achieve more reliable method assessment and to maximize the information value from method evaluations.
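To illustrate what going beyond aggregate metrics can look like in practice, the sketch below reports per-class metrics, a confusion matrix, and the misclassified items themselves; the predictions stand in for the output of some hypothetical SA system.

```python
# A minimal sketch of evaluation beyond a single aggregate score: per-class
# metrics, a confusion matrix, and inspection of errors. Toy data assumed.
from sklearn.metrics import classification_report, confusion_matrix

texts  = ["love it", "hate it", "it is ok", "not bad", "so boring"]
y_true = ["pos", "neg", "neu", "pos", "neg"]
y_pred = ["pos", "neg", "neg", "neg", "neg"]   # output of some SA system

print(classification_report(y_true, y_pred, zero_division=0))
print(confusion_matrix(y_true, y_pred, labels=["pos", "neu", "neg"]))

# Inspect the errors themselves, not just the counts.
for text, t, p in zip(texts, y_true, y_pred):
    if t != p:
        print(f"MISCLASSIFIED: {text!r} (true={t}, predicted={p})")
```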
### Methodological quality of literature reviews in SA
With the large and increasing volume of literature on SA, systematic literature reviews in this area are undoubtedly needed. However, the reviews should be conducted with adequate methodological rigor and transparency to avoid inaccurate representation of the accumulated knowledge in the field. The methodological rigor required in systematic reviews is essential for reducing bias (Higgins et al., 2023) and for allowing evaluation and replication of the reviews (Kitchenham and Charters, 2007; Xiao and Watson, 2019).
The methodological shortcomings we found in SA literature reviews (reported in section 3.4) raise concerns about the extent of bias introduced at different stages of the review process, such as sampling primary studies. These shortcomings also cast doubt on the reliability of the findings in the reviews, making interpretation of the conclusions from these reviews problematic. Furthermore, the issues of methodological transparency may render replication attempts impossible, thus impeding scientific progress and leading to wasted research efforts and resources (sections 3.4 and 3.5).
As researchers and policy makers rely on high quality overviews of the current state-of-the-art, unreliable systematic reviews risk misplaced research efforts and misguided policies. For instance, employing SA methods to inform government decision making without accurate knowledge of the capabilities and limitations of these methods can lead to initiatives that do not yield meaningful outcomes, thus wasting taxpayers' money and government resources. In public health, misguided policies can have direct health consequences. For example, they may lead to delayed or inappropriate responses to health crises, inadequate support service allocation, or insufficient public health measures, potentially causing harm or loss of life. These examples highlight the potential consequences of low methodological quality in SA literature reviews, and why reliable literature syntheses are paramount for evidence-based decisions on if and when to employ SA in real-life applications.
We found methodological shortcomings in several reviews, regardless of whether or which guidelines (e.g., PRISMA, Kitchenham and Charters, 2007) were followed in conducting the review. Kitchenham and colleagues (2009) also found that the quality of literature reviews in software engineering was not related to whether these reviews cited review guidelines or not. It is unclear why the quality of many reviews in these fields appears to not benefit from consulting review guidelines. This may be due to lack of expertise in review methodology or difficulties in adapting quality criteria from guidelines in different disciplines. We acknowledge that different guidelines contain different requirements for conducting a review, and that our framework for quality assessment borrows heavily from medical and health research standards (e.g.,
PRISMA). However, many of these standards of quality could be applied in SA research, either directly or with minor adaptations. It is also worth noting that the most common issues identified via application of our quality assessment framework are requirements included in review guidelines for different disciplines, such as medical and health research (Higgins et al., 2023), software engineering (Kitchenham and Charters, 2007) and planning research (Xiao and Watson, 2019).
Different publication outlets also differ in their requirements for methodological rigor, which adds to the variability in the quality of published reviews. Some obstacles to full reporting of reviews may stem from journals' restrictions on word count or supplementary materials (Page and Moher, 2017). In fields like SA research, the issue of review quality is likely also related to the lack of widely accepted standards for reviews. Even in the case of the PRISMA reporting standards, which are widely adopted in the health and medical fields, adherence was still found to be suboptimal in MEDLINE publications (Page and Moher, 2017). To remedy the situation, Page and Moher suggested updated and clearer guidelines as well as more intense endorsement of the PRISMA statement by journals. Similarly, Stevens and colleagues (2014) point out that how journals endorse the use of guidelines is unknown, and that one possibility for ensuring adherence to guidelines would be to require reviewers to check this during the peer-review process. Review and reporting guidelines have a longer history and widespread acceptance in health and medical fields, yet adherence to these guidelines still has room for improvement. Research in SA clearly has even further to go, but some of the ideas outlined above for improving adherence to review guidelines could also be implemented in the SA context, such as stricter methodological requirements evaluated as part of the peer-review process.
### Future directions
We have discussed SA in digital spaces particularly regarding insufficient focus on different types of validity of SA research, specific challenges in SA and limited methodological and reporting quality of literature syntheses on SA.
Future work in SA should draw more heavily from psychology and related fields for clearer and more suitable definitions and operationalization of subjectivity terms of interest. Several challenges in SA due to the nature of natural language should also be re-considered from a more psychologically grounded perspective - for instance, what information is lost when normalization and stop word removal are applied to data, or how sarcasm detection could be improved. Approaching SA problems from a multidisciplinary perspective would mean not only that development of SA systems should draw from psychological research, but also that psychological research should target questions relevant to SA system development. These joint efforts would at best lead to advancement of the goals in each field. We further urge researchers to consider the variability in online data, both in terms of platform, group, and cultural differences, and in terms of change over time. At the very least, this should involve documenting these aspects of data, to allow more informed decisions regarding the appropriateness of their use in different contexts.
There is a need for commonly accepted benchmark datasets, which should be from varied sources and representative of the real-world online data (i.e., data from different online platforms and language communities, which contain misspellings and other commonly occurring characteristics of natural language in digital spaces). If suitable evaluation sets are not available, then appropriate, multi-rater annotated datasets should be created and made publicly available to allow further comparisons and thus aid moving the field forward rather than creating further fragmentation in the field.
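A minimal version of such agreement-based dataset construction is sketched below, assuming three hypothetical annotators: a label is retained only when a strict majority agrees, so ambiguous items never enter the benchmark.

```python
# A minimal sketch of building an agreement-based gold standard from multiple
# annotators; the annotations below are hypothetical toy data.
from collections import Counter

annotations = [
    ["pos", "pos", "neg"],   # majority: pos
    ["neu", "pos", "neg"],   # no majority -> item discarded
    ["neg", "neg", "neg"],   # unanimous: neg
]

gold = []
for item_labels in annotations:
    label, count = Counter(item_labels).most_common(1)[0]
    if count > len(item_labels) / 2:   # strict majority required
        gold.append(label)

print(gold)  # ['pos', 'neg']
```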
Research in SA is plagued with non-standard and often shallow evaluations of the SA systems. For the field to move towards better understanding of the best performing and appropriate SA methods, far more focus on comprehensive and informative testing of SA systems is needed. As some examples, this could include detailed error analysis as a part of any SA study, as well as stricter requirements for full reporting and open data/code, endorsed by publication outlets.
At present, as no broadly recognized guidelines exist specifically for SA research, or for NLP research more generally, the development of such guidelines is paramount for improving the methodological and reporting quality in these fields. This type of guideline should include items relevant particularly for NLP work, such as detailed requirements for how to report datasets, pre-processing procedures, characteristics of the algorithms and ground truth. In the absence of a comprehensive guideline, a starting point for these improvements would be to follow existing recommendations for dataset documentation (Gebru et al., 2021) and model documentation (Mitchell et al., 2019). Guidelines for literature reviews should also take the specific characteristics of the field into account, to provide clear and easily applied methodology for researchers who may not be experts in literature synthesis. Another avenue for improving the quality of reviews (and primary studies) in NLP research is stricter methodological requirements as part of the peer-review process, as suggested for health research by Stevens and colleagues (2014).
### Limitations
Systematic reviews arise when there is a need or a certain amount of evidence that warrants performing a review on a particular topic. As our overview only included systematic reviews, the topics covered here are limited to the areas of SA that have been reviewed thus far. As such, we cannot provide a complete picture of SA, particularly regarding application areas, for which we recommend the reader to consult recent reviews with particular focus on application areas (e.g., Cui et al., 2023; Kumar & Sharma, 2017; Mantyla et al., 2018). Similarly, this overview does not represent all tasks in SA, which, as noted by Poria et al. (2023) include most of the core problems tackled by NLP in general, such as fake review detection, negation processing and domain adaptation, to name a few.
The balance between inclusion of high-quality reviews and synthesizing a representative sample of the literature is often difficult to find. This was also the case in the current overview, where our criteria for the quality of the reviews occasionally resulted in difficult decisions regarding the inclusion of reviews - some otherwise interesting works were excluded due to this and may be worth considering in broader overviews in this area. Some such reviews are Adak and colleagues' (2022) review focusing on deep learning and explainable AI used for SA on customer reviews, Kumar and colleagues' (2021) review in text mining, which included automated analyses to investigate the application domains and thematic foci of the reviews, and the review by Genc-Nayebi and Abran (2017) on text mining in mobile app store reviews.
Similarly, our focus on SA in online spaces resulted in exclusion of reviews or surveys that may still be relevant (e.g., Alshuwaier et al., 2022; Alyami et al., 2022; Mehraliyev et al., 2022), especially given that a large portion of SA research today is conducted using online content (Ligthart et al., 2021; Palanivinayagam et al., 2023), which may lead to authors not explicitly mentioning online data in the article's title or abstract.
### Conclusion
Sentiment analysis research is a dynamic field, characterized by its diverse methodologies and wide-ranging application domains, with great promise for harnessing the rich information available in digital environments. Before we can fully embrace the potential of sentiment analysis, however, various challenges need to be addressed. These include thorough evaluation and comparison of methods, careful consideration of the scope of the methods (such as the appropriate domain and populations the methods are applied to) and commitment to sound methodological procedures and transparent reporting in both primary studies and literature reviews. These improvements should facilitate progress in the field and generate SA methodology that can be used in real-world settings with confidence and accountability.
## Acknowledgments
This work has been funded by the UK Government awarded to BID & JH. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
|
2302.07870 | Astrophysical properties of 15062 Gaia DR3 gravity-mode pulsators:
pulsation amplitudes, rotation, and spectral line broadening | Gravito-inertial asteroseismology saw its birth thanks to high-precision
CoRoT and Kepler space photometric light curves. So far, it gave rise to the
internal rotation frequency of a few hundred intermediate-mass stars, yet only
several tens of these have been weighed, sized, and age-dated with high
precision from asteroseismic modelling. We aim to increase the sample of
optimal targets for future gravito-inertial asteroseismology by assessing the
properties of 15062 newly found Gaia DR3 gravity-mode pulsators. We also wish
to investigate if there is any connection between their fundamental parameters
and dominant mode on the one hand, and their spectral line broadening measured
by Gaia on the other hand. After re-classifying about 22% of the F-type
gravity-mode pulsators as B-type according to their effective temperature, we
construct histograms of the fundamental parameters and mode properties of the
15062 new Gaia DR3 pulsators. We compare these histograms with those of 63
Kepler bona fide class members. We fit errors-in-variables regression models to
couple the effective temperature, luminosity, gravity, and oscillation
properties to the two Gaia DR3 parameters capturing spectral line broadening
for a fraction of the pulsators. We find that the selected 15062 gravity-mode
pulsators have properties fully in line with those of their well-known Kepler
analogues, revealing that Gaia has a role to play in asteroseismology. The
dominant g-mode frequency is a significant predictor of the spectral line
broadening for the class members having this quantity measured. We show that
the Gaia vbroad parameter captures the joint effect of time-independent
intrinsic and rotational line broadening and time-dependent tangential
pulsational broadening. Gaia was not designed to detect non-radial oscillations,
yet its homogeneous data treatment allows us to identify many new gravity-mode
pulsators. | Conny Aerts, Geert Molenberghs, Joris De Ridder | 2023-02-15T18:59:56Z | http://arxiv.org/abs/2302.07870v1 | # Astrophysical properties of 15062 Gaia DR3 gravity-mode pulsators
###### Abstract
Context:Gravito-inertial asteroseismology saw its birth thanks to high-precision CoRoT and _Kepler_ space photometric light curves. So far, it gave rise to the internal rotation frequency of a few hundred intermediate-mass stars, yet only several tens of these have been weighed, sized, and age-dated with high precision from asteroseismic modelling.
Aims:We aim to increase the sample of optimal targets for future gravito-inertial asteroseismology by assessing the properties of 15062 newly found Gaia DR3 gravity-mode pulsators. We also wish to investigate if there is any connection between their fundamental parameters and dominant mode on the one hand, and their spectral line broadening measured by Gaia on the other hand.
Methods:After re-classifying about 22% of the F-type gravity-mode pulsators as B-type according to their effective temperature, we construct histograms of the fundamental parameters and mode properties of the 15062 new Gaia DR3 pulsators. We compare these histograms with those of 63 _Kepler_ bona fide class members. We fit errors-in-variables regression models to couple the effective temperature, luminosity, gravity, and oscillation properties to the two Gaia DR3 parameters capturing spectral line broadening for a fraction of the pulsators.
Results:We find that the selected 15062 gravity-mode pulsators have properties fully in line with those of their well-known _Kepler_ analogues, revealing that Gaia has a role to play in asteroseismology. The dominant g-mode frequency is a significant predictor of the spectral line broadening for the class members having this quantity measured. We show that the Gaia vbroad parameter captures the joint effect of time-independent intrinsic and rotational line broadening and time-dependent tangential pulsational broadening.
Conclusions:While the Gaia mission was not designed to detect non-radial oscillation modes, its multitude of data and homogeneous data treatment allow us to identify a vast amount of new gravity-mode pulsators having fundamental parameters and dominant mode properties in agreement with those of such _Kepler_ bona fide pulsators. This large new sample of Gaia DR3 pulsators can be followed up with dedicated high-precision photometric or high-resolution spectroscopic instruments to embark upon asteroseismic modelling.
## 1 Introduction
The Gaia space mission of the European Space Agency (Gaia Collaboration et al., 2016) is currently revolutionising the entire field of astrophysics. Even though Gaia is in the first place an astrometric mission, it also delivers the largest homogeneous survey of broad-band photometric and medium-resolution spectroscopic data (Gaia Collaboration et al., 2016). While the Gaia mission was not at all designed to deliver input for the research field of asteroseismology (Aerts et al., 2010), it does contribute important information for that recent emerging topic within stellar astrophysics. Indeed, aside from stellar luminosities deduced from the high-precision parallaxes (Bailer-Jones et al., 2018), the Gaia instrumentation also delivers years-long photometric time-series data at milli-magnitude (mmag) precision in the Gaia G band. Although these Gaia G light curves are only sparsely sampled, they do allow us to populate a wide range of stellar variability classes (see Eyer & Mowlavi, 2008, for a description of the "variability tree"). In particular, Gaia data allows us to study the classes of pulsating variables (cf., Aerts et al., 2010, Chapter 2) with unprecedented numbers of membership. Rimoldini et al. (2022) classified more than 12 million variables, among which are RR Lyr stars (Clementini et al., 2022), Cepheids (Ripepi et al., 2022), young stellar objects (Marton et al., 2022), and long period variables (Lebzelter et al., 2022).
In this work, we focus on stars observed by Gaia and classified from its Data Release 3 (DR3; Gaia Collaboration et al., 2021, 2022) as gravity-mode (g-mode hereafter) pulsators by Coordination Unit 7 treating variable stars (Holl et al., 2018; Gaia Collaboration et al., 2019; Eyer et al., 2022). In their Gaia DR3 Performance Verification Paper (PVP),
Gaia Collaboration et al. (2022a, hereafter termed Paper I) assigned the stars we revisit in the present study to the classes of the Slowly Pulsating B (SPB stars, Waelkens, 1991; De Cat & Aerts, 2002) or \(\gamma\) Doradus stars (\(\gamma\) Dor, Kaye et al., 1999; Handler, 1999). These main-sequence g-mode pulsators are the best laboratories for asteroseismic probing of the deep interior of dwarfs with a mass between 1.3 M\({}_{\odot}\) and 9 M\({}_{\odot}\) (Aerts, 2021, for a general review of asteroseismology of such stars). By now, hundreds of single and binary dwarfs with a convective core have their rotation measured just outside their core from series of consecutive radial-order dipole g-mode oscillations (Kurtz et al., 2014; Triana et al., 2015; Keen et al., 2015; Saio et al., 2015; Van Reeth et al., 2016; Moravveji et al., 2016; Murphy et al., 2016; Papics et al., 2017; Zwintz et al., 2017; Van Reeth et al., 2018; Li et al., 2019, 2020; Sekaran et al., 2021). Both the \(\gamma\) Dor and SPB pulsators reveal time-dependent spectral line variations due to the tangential velocity fields at the stellar surface (Aerts et al., 1999; De Cat & Aerts, 2002; Aerts et al., 2004; De Cat et al., 2006). These pulsators are intermediate-mass dwarfs in the core-hydrogen burning phase without strong stellar winds.
The Gaia DR3 light curves analysed in Paper I resulted in an order-of-magnitude increase in the population of the two classes of non-radial g-mode main-sequence pulsators. The position of these new candidate SPB and \(\gamma\) Dor pulsators in the Hertzsprung-Russell diagram was compared in Paper I with theoretically predicted instability strips, each of which is based on the dominant excitation mechanism for one particular choice of input physics, leading to coherent g modes with infinite lifetime. It was found that many of the Gaia g-mode pulsators occur outside the borders of such instability strips for these two classes of g-mode pulsators. This was ascribed to inaccuracies in the Gaia effective temperature, their fast rotation and/or different input physics or (past) binarity not treated in instability predictions, in addition to too low opacities of heavy elements such as iron and nickel in the excitation layers, as is well known from previous excitation studies of SPB pulsators (Moravveji, 2016; Daszynska-Daszkiewicz et al., 2017; Szewczuk & Daszynska-Daszkiewicz, 2017). Moreover, aside from coherent eigenmodes with long lifetime driven by opacity layers or at the bottom of the outer convective envelope, internal gravity waves with short lifetimes excited at the interface between the convective core and/or the convective outer envelope and the radiative zones have been suggested from multi-dimensional hydrodynamical simulations mimicking dwarfs in the considered mass regime (Rogers et al., 2013; Grassitelli et al., 2015a,b; Edelmann et al., 2019; Horst et al., 2020). All these predicted g modes and internal gravity waves act together in the stellar interior and those reaching the stellar surface with detectable amplitude give rise to complex light curves and time-dependent spectral line profile variations. This is in agreement with modern time-resolved space photometric large surveys delivering \(\mu\)mag precision and highlighting a continuous coverage of observed intermediate-mass pulsating dwarfs along the main sequence (Uytterhoeven et al., 2011; Bowman et al., 2019; Pedersen et al., 2019; Antoci et al., 2019; Bowman et al., 2020; Balona & Ozuyar, 2020, and Paper I), after earlier similar findings from the high-resolution ground-based time-resolved spectroscopy mentioned above.
Here, we study the astrophysical properties of the new Gaia DR3 g-mode pulsators found in Paper I. We consider all these pulsators having their luminosity, effective temperature, and gravity determined by the Gaia Data Processing Analysis Consortium (DPAC, Gaia Collaboration et al., 2016, 2021). We compare the properties of these Gaia DR3 g-mode pulsators with those of the sample of 63 best-known bona fide g-mode pulsators observed with the NASA _Kepler_ space telescope, for which high-resolution follow-up spectroscopy was assembled and interpreted. These _Kepler_ and spectroscopic data led to the identification of dipole g modes of consecutive radial order and to asteroseismic modelling for these 63 dwarfs (Mombarg et al., 2021; Pedersen et al., 2021).
In the current follow-up study of Paper I we consider the amplitude of the dominant frequency in the Gaia G band and relate it to the fundamental parameters for both the Gaia DR3 and _Kepler_ g-mode pulsators. Moreover, we consider the sub-samples of Gaia DR3 g-mode pulsators for which an estimation of the spectral line broadening is available from Gaia's Radial Velocity Spectrometer (RVS) within the large homogeneous Gaia DR3 data set (Creevey et al., 2022; Fremat et al., 2022). Our general aim is to search for relationships between the fundamental parameters and pulsational properties of g-mode pulsators. In particular, we investigate if there is any connection between the properties of the dominant g mode of the stars and their rotation and/or spectral line broadening. So far, similar studies have been hampered by small sample sizes (Aerts et al., 2014) or by separate and/or inhomogeneous treatment of statistical modelling based on the observables deduced from photometric and spectroscopic data (Simon-Diaz et al., 2017; Burssens et al., 2020). Even though the Gaia photometric and spectroscopic instruments were not designed to study non-radial oscillations nor optimised to deduce the line broadening of stars hotter than 7 000 K, DR3 does provide unprecedentedly large samples of homogeneously treated g-mode pulsators in terms of their line broadening and fundamental parameters compared to those available in the literature prior to Gaia DR3.
We discuss the sample selection for the current paper in Sect. 2 and consider the fundamental parameters and dominant variability characteristics of 15062 g-mode pulsators in Sect. 3 and Sect. 4, respectively. Section 5 focuses on the astrophysical interpretation of the measured spectral line broadening of the sample, based on the method of errors-in-variables and on multi-variable regression models constructed via the technique of backward selection. We discuss our findings and conclude in Sect. 6.
## 2 Sample selection
Paper I resulted in 106 207 Gaia DR3 main-sequence pulsators of spectral types O, B, A, or F fulfilling four criteria: 1) their Gaia DR3 G light curve consists of at least 40 epochs; 2) their dominant cycle frequency (denoted here as \(\nu\)) in the Gaia G light curve occurs in the range \([0.7,25]\) d\({}^{-1}\); 3) this frequency \(\nu\) differs from any of the instrumental frequencies 4, 8, 12, 16, 20, and 24 d\({}^{-1}\) by more than 0.05 d\({}^{-1}\); 4) \(\nu\) has a false alarm probability in the definition by Baluev (2008) below 0.001. Despite these already strict selection rules, additional restrictions on the frequency interval to which \(\nu\) had to belong for each of the four considered pulsation classes were imposed in Paper I to beat instrumental effects in Fourier space, because these occur at mmag level and interfere with the signal of non-radial oscillations, which also occurs at that level for dwarf stars of intermediate mass.
The following four classes of pulsators were considered in Paper I: \(\beta\) Cep stars, Slowly Pulsating B (SPB) stars, \(\delta\) Sct stars, and \(\gamma\) Dor stars (cf. Aerts et al., 2010, Chapter 2). We refer to Paper I and its literature references for the details of the additional selection rules imposed upon \(\nu\) based on common knowledge of the pulsational properties for these four well-known classes
of variables, but recall here that the dominant modes of \(\beta\) Cep and \(\delta\) Sct stars are p modes with observed frequencies typically above 3 d\({}^{-1}\), while SPB and \(\gamma\) Dor stars have dominant g modes with observed frequencies mostly below 3 d\({}^{-1}\), except for the fastest rotators.
Within the sample of 106 207 candidate pulsators assigned to the four pulsation classes in Paper I, those having frequencies above the spin frequency of the Gaia satellite are most affected by mmag-level instrumental effects, which may lead to spurious frequencies unrelated to the star. For this reason, we focus the current work on the two classes of main-sequence pulsators having their dominant frequency well below the 4 d\({}^{-1}\) spin frequency of the Gaia satellite. For now, with the relatively sparse DR3 light curves, this is the best approach to study the astrophysical properties of the Gaia DR3 g-mode pulsators without being contaminated by spurious instrumental frequencies.
Appendix B of Paper I discussed the results for the dominant frequency \(\nu\) in the Gaia DR3 light curves of the 63 bona fide g-mode pulsators (26 SPB and 37 \(\gamma\) Dor stars) whose entire amplitude spectrum is known with a level of precision better than about \(10^{-6}\) d\({}^{-1}\) in cyclic frequency and a few \(\mu\)mag in amplitude (Van Reeth et al. 2015; Pedersen et al. 2021). Aerts et al. (2021) relied on the mode identification for all these stars' detected and identified dipole modes of consecutive radial order to deduce the convective and wave Rossby numbers for these best-known _Kepler_ g-mode pulsators, covering the mass range from 1.3 M\({}_{\odot}\) up to about 9 M\({}_{\odot}\). All these 63 pulsators have a dominant Gaia G amplitude, \(A_{\nu}\), below 35 mmag and their dominant frequency occurs in the interval \(\nu\in[0.7,3.2]\) d\({}^{-1}\) (see Figs. 1 and 2 in Paper I). Some of the new g-mode pulsators identified from Gaia DR3 in Paper I have higher dominant frequencies. Moreover, some of the Gaia DR3 g-mode pulsators have frequencies that are hard to unravel from the Gaia instrument frequencies caused by the spinning of the satellite.
Guided by Figs. 1 and 2 in Paper I summarising the dominant frequency and amplitude for the 63 bona fide _Kepler_ g-mode pulsators, we further apply a fifth and sixth constraint in this work, in addition to the selection criteria of Paper I mentioned above, namely we demand that 5) \(\nu\in[0.7,3.2]\) d\({}^{-1}\) and 6) \(A_{\nu}\leq 35\) mmag. These two extra restrictions are applied to the SPB and \(\gamma\) Dor stars assigned to those two g-mode pulsator classes in Paper I. This is to ensure that we are dealing with non-radial oscillations rather than satellite frequencies. Moreover, we restrict these two samples to those pulsators having a measurement of log \(L\), log \(T_{\rm eff}\), and log \(g\) in the DR3 gspphot tables. We use those values in order to maximise the sample size of g-mode pulsators treated in one homogeneous way by DPAC routines, given that we need to cover temperatures from \(\sim\) 6 500 K all the way up to 25 000 K (see Paper I for details and Gaia Collaboration et al. 2022b; Creevey et al. 2022).
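For illustration, the combined cuts can be expressed as a single boolean mask over a candidate table. The sketch below uses hypothetical column names and toy values, not the actual DPAC data products.

```python
# A minimal sketch of the six selection criteria plus the gspphot requirement,
# applied to a hypothetical candidate table; column names are assumptions.
import numpy as np
import pandas as pd

cands = pd.DataFrame({
    "n_epochs": [55, 38, 70],
    "nu":       [1.3, 2.1, 4.02],     # dominant frequency [1/d]
    "fap":      [1e-5, 1e-4, 1e-6],
    "amp_mmag": [12.0, 8.0, 40.0],
    "has_gspphot": [True, True, True],
})

instrumental = np.arange(4, 25, 4)    # 4, 8, ..., 24 1/d
far_from_instr = np.all(
    np.abs(cands["nu"].to_numpy()[:, None] - instrumental) > 0.05, axis=1)

keep = (
    (cands["n_epochs"] >= 40)         # criterion 1
    & cands["nu"].between(0.7, 3.2)   # criteria 2 and 5 combined
    & far_from_instr                  # criterion 3
    & (cands["fap"] < 0.001)          # criterion 4
    & (cands["amp_mmag"] <= 35.0)     # criterion 6
    & cands["has_gspphot"]            # fundamental parameters available
)
print(cands[keep])
```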
A continuous coverage of pulsating B, A, and F stars along the main sequence was found in Paper I, in agreement with _Kepler_ and TESS results (e.g., Balona & Ozuyar 2020). Since the variability classification used in Paper I relied only on the Gaia G-band DR3 light curves, it cannot distinguish between B- and F-type pulsators without spectroscopic information (cf., Pedersen et al. 2019; Gebruers et al. 2021). On the other hand, the _Kepler_ data clearly revealed that \(\gamma\) Dor and SPB pulsators have different astrophysical and pulsational properties (Van Reeth et al. 2015; Saio et al. 2018; Pedersen et al. 2021). Thus, we wish to treat them as two separate classes. We do so by relying on the Gaia DR3 effective temperature to reclassify the g-mode pulsators. Following the upper limit in effective temperature from the instability predictions by Xiong et al. (2016) for \(\gamma\) Dor stars as a threshold, we use \(T_{\rm eff}=8500\) K to distinguish between \(\gamma\) Dor and SPB candidates. In practice, we reclassify all \(\gamma\) Dor candidates as SPB if their effective temperature is above 8500 K and, vice versa, we re-assign all SPB stars with a temperature below 8500 K as \(\gamma\) Dor pulsators. This leads to a re-classification of 3244 \(\gamma\) Dor candidates as new SPB pulsators based on their Gaia DR3 effective temperature. This re-assignment yields fractional memberships of 29% SPB and 71% \(\gamma\) Dor pulsators, which is fully in line with a Salpeter-type initial mass function (IMF, Salpeter 1955) for the typical masses of 1.6 M\({}_{\odot}\) for \(\gamma\) Dor stars (Mombarg et al. 2021) and of 4 M\({}_{\odot}\) for SPB stars (Pedersen 2022b). As we discuss later in the paper, another choice of the threshold temperature to distinguish the two classes does not impact any of the results.
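The temperature-based re-assignment itself amounts to a one-line relabelling; the sketch below uses a hypothetical table to illustrate it.

```python
# A minimal sketch of the temperature-based re-assignment between the two
# g-mode classes; the DataFrame and its columns are hypothetical.
import pandas as pd

T_SPLIT = 8500.0   # K, upper edge of the gamma Dor instability predictions

stars = pd.DataFrame({
    "teff":  [7200.0, 9100.0, 15000.0, 8100.0],
    "label": ["GDOR", "GDOR", "SPB", "SPB"],
})

# Hotter than the threshold -> SPB; cooler -> gamma Dor.
stars["label"] = ["SPB" if t > T_SPLIT else "GDOR" for t in stars["teff"]]
print(stars["label"].value_counts(normalize=True))
```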
A critical aspect of the current study compared to other surveys of g-mode pulsating dwarfs is that all the DR3 data and observables were obtained in one homogeneous way in terms of data selection and analysis. This is in contrast to the treatment of ground-based photometry and spectroscopy obtained for the much smaller dedicated asteroseismology samples for these two classes so far. While Gaia DR3 can only deliver the dominant mode at this stage, it provides by far the largest homogeneous survey of \(\gamma\) Dor and SPB pulsators to date.
The radius distribution of the 3426 SPB stars is compatible with that of the 26 bona fide SPB stars (Fig. 4, right panel).
Overall, the distributions for the three fundamental parameters \(T_{\rm eff}\), log \(g\), and log \((L/{\cal L}_{\odot})\) of the Gaia \(\gamma\) Dor and SPB stars are in good agreement with the asteroseismic values of the bona fide _Kepler_ pulsators in these two classes, keeping in mind that the Gaia sample of SPB stars mainly contains cool class members. We conclude that Gaia DR3 gspphot values reveal proper distributions of radii for g-mode pulsators from their luminosity and effective temperature estimates when compared with the radii of the 63 bona fide pulsators from the best asteroseismic model for each of them.
Figure 1: Histograms (normalised to 100% occurrence) of the gspphot values for \(T_{\rm eff}\) for 11636 Gaia DR3 \(\gamma\) Dor stars (left, grey) and 3426 SPB stars (right, cyan). The width of the bars is according to the average error (150 K for \(\gamma\) Dor and 400 K for the SPB stars). Asteroseismic values obtained from _Kepler_ data are shown for 37 \(\gamma\) Dor (grey, hatched) and 26 SPB stars (black cross-hatched), respectively. For the right panel, 31 SPB stars with a temperature above 22 000 K in the Gaia DR3 sample were omitted for visibility reasons.
Figure 2: Same as Fig. 1 but for log \(g\).
Figure 3: Same as Fig. 1 but for log \((L/{\cal L}_{\odot})\).
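For reference, deducing a radius from \(\log\,(L/{\cal L}_{\odot})\) and \(T_{\rm eff}\) relies on the Stefan-Boltzmann law, \(L=4\pi R^{2}\sigma T_{\rm eff}^{4}\), which in solar units reads
\[\frac{R}{R_{\odot}}=\left(\frac{L}{L_{\odot}}\right)^{1/2}\left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{-2}\,,\]
with \(T_{\rm eff,\odot}\simeq 5772\) K.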
## 4 Pulsational properties of the dominant g modes
As already emphasised in Paper I, Gaia has good capacity to detect non-radial oscillation modes in main-sequence stars. The two Gaia g-mode pulsator samples treated here result from quite strict selection rules on the Gaia G photometric light curves, yet they are already an order of magnitude larger than the corresponding _Kepler_ samples. Despite the sparse Gaia sampling, it is to be anticipated that many more g-mode pulsators will be selected once the DR4 and DR5 data sets become available.
Figure 5 shows the distributions for the dominant frequency in the DR3 Gaia G band light curves. Overplotted are the distributions for the dominant frequency deduced from the far higher-precision, 4-yr uninterrupted, high-cadence _Kepler_ light curves taken from Van Reeth et al. (2015) and Pedersen et al. (2021) for the \(\gamma\) Dor and SPB stars, respectively. We recall that we used the dominant g-mode frequency range covered by these two samples of bona fide g-mode pulsators as a selection criterion, to restrict the Gaia DR3 samples to pulsators adhering to this same appropriate frequency range. Hence, by construction, we find compatible ranges. Yet the distributions themselves also agree reasonably well between the Gaia DR3 and _Kepler_ pulsators, keeping in mind the small sample sizes for the latter.
Figure 6 shows the histograms for the amplitude of the dominant frequency found in the Gaia G light curve. Some of the bona fide _Kepler_ pulsators did not survive our six selection criteria, mainly because they have fewer than 40 epochs in Gaia DR3 and/or their dominant frequency did not meet the FAP criterion. In order to be able to compare the amplitudes between the Gaia DR3 and _Kepler_ pulsators, and to exclude instrumental effects for the bona fide pulsators, we computed their Gaia G amplitude by imposing their dominant frequency found in their _Kepler_ light curve onto the Gaia G data, irrespective of the number of epochs in the latter and their FAP value. Both histograms in Fig. 6 visualise the current detection threshold to find g modes in the Gaia G light curves. It is seen that DR3 allows us to detect g-mode frequencies with an amplitude above 4 mmag. The _Kepler_ data delivered g modes with far lower amplitudes, as seen in the histogram, because the mission was designed to assemble \(\mu\)mag-precision uninterrupted 30-min cadence photometry for exoplanet hunting (Borucki et al. 2010) and for asteroseismology (Gilliland et al. 2010). We find that the g-mode amplitude distributions for Gaia DR3 and _Kepler_ hardly overlap for the class of the SPB pulsators, as most of the 26 bona fide _Kepler_ SPB stars have low dominant amplitudes outside Gaia's reach.
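The amplitude computation at a fixed, externally known frequency reduces to a linear least-squares fit of one sine and one cosine term. The sketch below illustrates this on synthetic data; the epochs, magnitudes, and frequency value are hypothetical and do not come from the actual Gaia archive.

```python
# A minimal sketch of estimating the amplitude of a sparsely sampled light
# curve at a fixed frequency via a linear harmonic fit; toy data assumed.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1000.0, 60))          # sparse epochs [d]
nu = 1.234                                         # fixed frequency [1/d]
mag = 12.0 + 0.010 * np.sin(2 * np.pi * nu * t) + rng.normal(0, 0.002, t.size)

# Model: mag = c + a*sin(2*pi*nu*t) + b*cos(2*pi*nu*t)
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * nu * t),
                     np.cos(2 * np.pi * nu * t)])
c, a, b = np.linalg.lstsq(X, mag, rcond=None)[0]
amplitude = np.hypot(a, b)
print(f"amplitude = {amplitude * 1e3:.1f} mmag")   # close to the injected 10 mmag
```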
## 5 Properties of spectral line broadening
Aside from photometric and astrometric data, the Gaia satellite also delivers spectroscopic data. Its spectrometer RVS has a median resolving power of 11 500 and is sensitive to the wavelength range from 846 to 870 nm. While it was built with the primary goal to measure the radial velocity of as many Gaia sources as possible (Katz et al. 2022), we use the RVS data to study the spectral line broadening of g-mode pulsators. Our aim is to investigate if, aside from rotational line broadening, there is any connection between the overall line broadening, the fundamental parameters, and the oscillation properties for the two large Gaia DR3 samples of g-mode pulsators, as suggested previously based on line-profile simulations (Aerts et al. 2009; Aerts & Rogers 2015). While the RVS resolving power corresponds to a velocity resolution of only \(\sim\) 26 km s\({}^{-1}\) on average, non-radial oscillations generate variations in the width and the skewness of spectral lines (Aerts & De Cat 2003) and these may affect the way that the line broadening values were determined (we refer to Fremat et al. 2022, for a detailed description omitted here).
Line-profile variations caused by the g modes of \(\gamma\) Dor and SPB stars occur at the level of several to tens of km s\({}^{-1}\) in the centroid of the line (e.g., Aerts et al. 1999; De Cat et al. 2000; Mathias et al. 2001; De Cat & Aerts 2002; Mathias et al. 2004; De Cat et al. 2006). High-resolution time-series spectroscopy of bright g-mode pulsators offers a powerful tool to identify the spherical wavenumbers \((l,m)\) of the dominant oscillation mode(s) provided that the oscillation cycle is well covered (Briquet et al. 2003; De Cat et al. 2005). Such applications couple the velocity field, computed from the theory of non-radial oscillations, to the observed line-profile variations to infer the radial and tangential components of the velocity vector due to each non-radial oscillation mode (Aerts et al. 2010, Chapter 6). This requires the spectroscopy to have high resolving power and
signal-to-noise ratio (SNR) (typically above 50 000 and 300, respectively; Aerts & De Cat 2003).
Figure 4: Same as Fig. 1 but for the radius of the stars deduced from \(\log\left(L/\mathcal{L}_{\odot}\right)\) and \(T_{\rm eff}\).
In the absence of high-quality spectroscopy, or in the case where only a few snapshot spectra are available, line-profile modelling by means of the proper time-dependent pulsational velocity field is impossible. In such a case, it is customary to approximate the overall line broadening due to oscillations and rotation together by a single time-independent function called macroturbulence (Simon-Diaz et al. 2010). Even though its functional form assumes a symmetrical line profile (cf. Aerts et al. 2014b, Fig. 9), the macroturbulence correlates strongly with quantities representing the line-profile variability (Simon-Diaz et al. 2017), such as the velocity moments (Balona 1986; Aerts et al. 1992).
Fitting time-resolved line-profile variations due to oscillations or spots artificially with a symmetrical macroturbulent profile leads to time-variability in the macroturbulence which is in excellent agreement with the mode frequencies or rotation periods of intermediate-mass dwarfs (Aerts et al. 2014b). This suggests that macroturbulence is merely a downgraded (and often poor) time-independent symmetrical simplification of the true spectral line profiles caused by oscillations and/or spots. Nevertheless, in the absence of time-resolved spectroscopy, it is a sensible approach to fit the line profiles of snapshot spectra with a synthetic time-independent macroturbulent broadening profile, particularly for large surveys of stars such as offered by Gaia.
The g modes have dominant tangential displacements, implying that their velocity at the limb of the star dominates the detected line-profile variability. Yet, a common attitude in the literature has been to rely on the ad-hoc assumption that the radial and tangential components of the macroturbulent broadening profile are equal (Simon-Diaz & Herrero 2014). This is the reason why unrealistic, often supersonic, values for the macroturbulent surface velocities are obtained. This in turn affects the estimation of the surface rotation (Aerts et al. 2014b). For that reason, it is essential to estimate the surface rotation first, independently from the macroturbulent broadening (Serebriakova et al. 2023).
In the following sections we investigate Gaia's capacity to shed light on the astrophysical cause(s) of its measurements of the spectral line broadening from RVS. We do so for the two classes of \(\gamma\) Dor and SPB stars, whose velocity field at the stellar surface due to their non-radial oscillations is dominantly tangential (De Cat & Aerts 2002; Aerts et al. 2004).
Figure 5: Same as Fig. 1 but for the dominant frequency in the Gaia G light curve.
Figure 6: Same as Fig. 1 but for the amplitude of the dominant frequency in the Gaia G light curve. For the bona fide \(\gamma\) Dor and SPB pulsators, we computed the amplitude by fitting a harmonic signal to the Gaia time series using the main frequency derived from the 4-yr _Kepler_ light curve.
### Gaia DR3 line broadening parameters
While Gaia's medium-resolution RVS was not built to assess line broadening by stellar oscillations combined with rotation, it offers unprecedentedly large stellar samples analysed with a common methodology. We test the behaviour of line broadening measured with RVS with respect to the acting velocity fields at the stellar surface, where we know that our two samples undergo the joint effect of time-independent rotational and time-dependent pulsational line broadening. To do so, we rely on two Gaia DR3 parameters offered as measurements of spectral line broadening: vbroad (Fremat et al., 2022) and vsini_esphs (Creevey et al., 2022). Fremat et al. (2022) already made a careful study of these two quantities to interpret spectral line broadening for more than 33 million stars having \(T_{\rm eff}\in[3.1;14.5]\) kK. This range fully encapsulates that of our \(\gamma\) Dor sample and largely overlaps with the SPB sample.
The Gaia DR3 parameter vbroad captures the overall line broadening after deconvolving the spectra with Gaia's RVS instrument Along-Scan Line Spread Function (Sartoretti et al., 2022). This parameter vbroad thus includes the joint effect of all possible astrophysical causes that give rise to spectral line broadening, such as oscillations, rotation, spots, turbulent convective velocities, etc. The catalogued value of vbroad is the median value obtained by the Multiple Transit Analysis (MTA) over at least six valid transits. The corresponding catalogued uncertainty is the standard deviation with respect to this median value. This implies that the uncertainty may be a measure of the line-profile variability, because its range captures the line broadening occurring at a minimum of six different epochs, aside from the contribution of the noise. For the current study, we have vbroad measurements for 1775 of the 11 636 \(\gamma\) Dor stars and for 190 of the 3426 SPB pulsators.
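For illustration, the construction of the catalogued value and its uncertainty can be emulated with a minimal Python sketch; the per-transit numbers below are hypothetical, since the individual MTA transit measurements are not published.
```python
import numpy as np

# Hypothetical per-transit line-broadening estimates (km/s) for one star;
# Gaia DR3 requires at least six valid transits for a catalogued vbroad.
per_transit = np.array([48.2, 55.9, 51.3, 60.1, 46.7, 53.5])

vbroad = np.median(per_transit)  # catalogued value: median over the transits
# catalogued uncertainty: standard deviation with respect to the median,
# mixing noise with genuine epoch-to-epoch line-profile variability
vbroad_err = np.sqrt(np.mean((per_transit - vbroad) ** 2))

print(f"vbroad = {vbroad:.1f} +/- {vbroad_err:.1f} km/s")
```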
Another estimate of the RVS line broadening, denoted as vsini_esphs, is obtained by the so-called Extended Stellar Parametrizer module developed within the Astrophysical ParameterS Inferences System APSIS (Creevey et al., 2022). APSIS is able to treat the parameters of hot stars and delivers vsini_esphs as an intermediate data product approximating time-independent rotational broadening. Its computation was done on the basis of the averaged values of BP and RP and the averaged RVS spectrum. Although this implies the variability to be filtered out from the quantity vsini_esphs to some level, it still contains a contribution from the oscillations (cf., De Pauw et al., 1993, for the theoretical expression of the line width due to non-radial oscillations). Nevertheless, vsini_esphs is a cleaner measurement of the time-independent projected surface rotation velocity than vbroad when rotation dominates the spectral line broadening, as is the case for g-mode pulsators. The error published for vsini_esphs is an approximation of the statistical error and does not represent a measurement of the time-variable line broadening like the standard deviation for vbroad. The quantity vsini_esphs deduced by APSIS results from an optimal RVS and BP/RP treatment for stars having a \(T_{\rm eff}\) above 7500 K, while it relies on BP/RP alone for all stars with \(T_{\rm eff}>7000\) K that were not observed by RVS. Values for vsini_esphs are available for 384 of the 11 636 \(\gamma\) Dor stars and for 1104 of the 3426 SPB stars.
The wavelength coverage of RVS was constructed so as to achieve optimal radial-velocity data for a broad range of stellar populations. Here we rely on its data for the purpose of studying spectral line broadening, for which the RVS wavelength domain is not optimal. This is particularly so for the hottest stars under study here. As a consequence, some of the vbroad and vsini_esphs measurements have large uncertainties. It is therefore essential to include these uncertainties, aside from the values themselves, in any proper astrophysical interpretation. We refer to Fremat et al. (2022) for a detailed and nuanced discussion about the quality of the vbroad and vsini_esphs measurements deduced from template spectra relying on the Gaia \(T_{\rm eff}\) estimates. In particular, Fremat et al. (2022) discussed in detail the correlations between these two parameters for stars covering a broad range of magnitudes and temperatures. Of particular relevance for the current work is Fig. 16 in Fremat et al. (2022), illustrating two Hertzsprung-Russell diagrams with density maps as a function of median vbroad values. That figure clearly reveals high vbroad values along the upper main sequence, where the \(\gamma\) Dor and SPB pulsators are situated (encompassing the p-mode dominated class of \(\delta\) Sct stars not treated here due to the much higher risk that their dominant frequency has an instrumental origin, as explained in Sect. 2). The figure reveals higher vbroad values for hotter stars but Fremat et al. (2022) do not provide any formal quantitative comparisons between vbroad and the stellar parameters.
We show the 100 \(\gamma\) Dor and 156 SPB pulsators having a measurement of both vbroad and vsini_esphs in Fig. 7, along with their uncertainties. It can be seen that the overall range of the two quantities is roughly the same for these \(\gamma\) Dor and SPB stars. For each of the stars in Fig. 7, the two plotted quantities have similar yet not always equal values according to the uncertainty estimates. We recall that similarities between the two samples as a whole also occur for their ranges of the dominant mode amplitude and mode frequency (cf. Figs 5 and 6). It is then a natural question whether the oscillation properties cause the time-variability measured by vbroad and its standard deviation. On the other hand, we investigate whether the decrease of observed mode amplitude for faster rotators, as found in Paper I for the dominant p modes of the Gaia DR3 \(\delta\) Sct stars, also occurs for g-mode pulsators.
In what follows we offer regression models accommodating errors-in-variables, which allow one to handle different measurements of the same astrophysical quantity (here the overall time-averaged spectral line broadening) having both star-specific and measurement-specific errors. These errors must be propagated properly when constructing the regression models and interpreting their outcome. This has been used in the context of Gaia data before, for example in a comparison between asteroseismic and astrometric parallaxes following DR1 (De Ridder et al., 2016). We first provide a general description of the methodology. Subsequently we apply it to the sample of the 37 bona fide \(\gamma\) Dor stars. For all these 37 stars we also have, in addition to their Gaia DR3 data, estimates of their "true" pulsational and rotational line broadening deduced from one homogeneous treatment of high-resolution high signal-to-noise ground-based spectroscopy taken with one instrument (such homogeneous spectroscopic information is not available for all 26 bona fide SPB pulsators). We use the results obtained for the 37 bona fide \(\gamma\) Dor stars to treat the Gaia DR3 g-mode samples optimally with the aim of interpreting their spectral line broadening properties.
Figure 7: Comparison between Gaia DR3 measurements of vsini_esphs and vbroad for the 100 \(\gamma\) Dor (grey triangles) and 156 SPB (cyan squares) stars having both quantities available. The full dotted line indicates the bisector while the coloured dashed lines represent the best linear regression models for both samples.
### Errors-in-variables model
Let us denote two observed quantities by \(Y_{i}\) and \(X_{i}\) and their true but unknown values by \(Y_{i}^{*}\) and \(X_{i}^{*}\). The errors-in-variables model is then specified by:
\[Y_{i}=Y_{i}^{*}+\varepsilon_{Y_{i}}, \tag{1}\]
\[X_{i}=X_{i}^{*}+\varepsilon_{X_{i}}, \tag{2}\]
\[Y_{i}^{*}=\beta_{0}+\beta_{1}X_{i}^{*}+\varepsilon_{i}, \tag{3}\]
with \(i\) indexing the stars in a sample. Here, \(\beta_{0}\) and \(\beta_{1}\) are fixed but unknown regression coefficients to be estimated from the data. The measurement error variances \(\mathrm{var}(\varepsilon_{Y_{i}})=\sigma_{Y_{i}}^{2}\) and \(\mathrm{var}(\varepsilon_{X_{i}})=\sigma_{X_{i}}^{2}\) are obtained from the observations. The residual error component \(\varepsilon_{i}\), with variance \(\sigma^{2}\), quantifies imperfection in the regression relationship.
If \(Y_{i}^{*}\) and \(X_{i}^{*}\) were nearly identical, then \(\beta_{0}\) would be expected to be close to 0 and \(\beta_{1}\) close to 1. If the regression relationship in Eq. (3) is very precise relative to the measurement error, then \(\sigma^{2}\) would be near 0. Expressions (1)-(3) yield the mean and variance relationships:
\[E(Y_{i})=\beta_{0}+\beta_{1}X_{i}, \tag{4}\]
\[\mathrm{Var}(Y_{i})=\beta_{1}^{2}\sigma_{X_{i}}^{2}+\sigma_{Y_{i}}^{2}+\sigma^{2}. \tag{5}\]
Assuming (approximate) normality, a fully parametric specification follows, thus enabling maximum likelihood estimation:
\[Y_{i}\sim N\!\left(\beta_{0}+\beta_{1}X_{i},\;\beta_{1}^{2}\sigma_{X_{i}}^{2}+\sigma_{Y_{i}}^{2}+\sigma^{2}\right). \tag{6}\]
The SAS procedure NLMIXED (SAS Institute Inc., 2014) was used for the maximum likelihood estimation.
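For readers without access to SAS, the fit is easy to reproduce with generic numerical optimisation. The following minimal Python sketch (our own analogue of the NLMIXED fit, not the code used for this paper) maximises the likelihood of Eq. (6); the residual variance \(\sigma^{2}\) is parametrised on a logarithmic scale to keep it positive.
```python
import numpy as np
from scipy.optimize import minimize

def fit_eiv(y, x, sig_y, sig_x):
    """Maximum likelihood fit of the errors-in-variables model of Eq. (6):
    Y_i ~ N(beta0 + beta1 * X_i, beta1**2 * sig_x**2 + sig_y**2 + sigma2)."""
    def nll(theta):
        beta0, beta1, log_s2 = theta
        var = beta1**2 * sig_x**2 + sig_y**2 + np.exp(log_s2)
        resid = y - beta0 - beta1 * x
        # negative log-likelihood of independent heteroscedastic Gaussians
        return 0.5 * np.sum(np.log(2.0 * np.pi * var) + resid**2 / var)

    # ordinary least squares provides sensible starting values
    b1, b0 = np.polyfit(x, y, 1)
    res = minimize(nll, x0=[b0, b1, 0.0], method="Nelder-Mead")
    beta0, beta1, log_s2 = res.x
    return beta0, beta1, np.exp(log_s2)
```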
Extension to multiple predictors \(X_{1i},\ldots,X_{pi}\) is straightforward, upon replacing Eq. (6) by:
\[Y_{i}\sim N\!\left(\beta_{0}+\sum_{j=1}^{p}\beta_{j}X_{ji},\;\sum_{j=1}^{p}\beta_{j}^{2}\sigma_{X_{ji}}^{2}+\sigma_{Y_{i}}^{2}+\sigma^{2}\right), \tag{7}\]
with obvious notation. It is convenient to write Eq. (7) in vector notation as:
\[Y_{i}\sim N\!\left(\mathbf{x}_{i}^{\prime}\mathbf{\beta},\;\mathbf{\beta}^{\prime}\Sigma_{x,i}\mathbf{\beta}+\sigma_{Y_{i}}^{2}+\sigma^{2}\right), \tag{8}\]
where \(\mathbf{\beta}=(\beta_{0},\beta_{1},\ldots,\beta_{p})^{\prime}\) and \(\Sigma_{x,i}\) is a diagonal matrix with \((0,\sigma_{X_{1i}}^{2},\ldots,\sigma_{X_{pi}}^{2})^{\prime}\) along the diagonal.
Model-based prediction of \(Y_{i}^{*}\) and its standard deviation can be expressed as:
\[\widehat{Y_{i}^{*}}=\widehat{\beta}_{0}+\sum_{j=1}^{p}\widehat{\beta}_{j}X_{ji}, \tag{9}\]
\[\widehat{\mathrm{s.d.}}\!\left(\widehat{Y_{i}^{*}}\right)=\sqrt{\widehat{\mathbf{\beta}}^{\prime}\,\Sigma_{x,i}\,\widehat{\mathbf{\beta}}+\mathbf{x}_{i}^{\prime}\,\widehat{\mathrm{var}}(\widehat{\mathbf{\beta}})\,\mathbf{x}_{i}+\sigma_{Y_{i}}^{2}+\widehat{\sigma^{2}}}, \tag{10}\]
where the unknown parameters have been replaced by their data-based estimates. Expressions (9)-(10) can be used to assess the quality of the model fit. The second term under the square root in Eq. (10) takes the uncertainty in the estimated regression coefficient into account.
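Continuing the sketch above, Eqs. (9)-(10) translate directly into code once estimates of the coefficients and of their covariance matrix are available; the argument names below are placeholders for the outputs of whichever fitting routine is used.
```python
import numpy as np

def predict_with_sd(beta_hat, cov_beta, x_row, sig_x_row, sig_y, s2_hat):
    """Point prediction (Eq. 9) and its standard deviation (Eq. 10) for one star.

    beta_hat  : (p+1,) fitted coefficients, intercept first
    cov_beta  : (p+1, p+1) covariance matrix of beta_hat
    x_row     : (p+1,) design vector (1, X_1i, ..., X_pi)
    sig_x_row : (p+1,) measurement errors of the predictors (0 for the intercept)
    """
    y_star = beta_hat @ x_row               # Eq. (9)
    sigma_x = np.diag(sig_x_row ** 2)       # diag(0, sigma_X1i^2, ..., sigma_Xpi^2)
    var = (beta_hat @ sigma_x @ beta_hat    # measurement error in the predictors
           + x_row @ cov_beta @ x_row      # uncertainty of the fitted coefficients
           + sig_y ** 2 + s2_hat)          # measurement plus residual variance
    return y_star, np.sqrt(var)             # Eq. (10)
```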
### Spectral line broadening for the 37 bona fide \(\gamma\) Dor stars
For the 37 bona fide \(\gamma\) Dor stars, we now add three more quantities to the dominant frequency \(\nu\) from _Kepler_ photometry and the Gaia DR3 values for \(\log\,T_{\mathrm{eff}}\), \(\log\,g\), \(\log\,(L/L_{\odot})\), and \(A_{\nu}\). Following their discovery as g-mode pulsators in the _Kepler_ data, Tkachenko et al. (2013) set up a ground-based spectroscopic campaign with the HERMES spectrograph attached to the 1.2m Mercator telescope situated at La Palma Observatory, Spain (Raskin et al., 2011). HERMES has a spectral resolution of 85 000 and covers wavelengths from 377 to 900 nm. The HERMES spectra allowed Van Reeth et al. (2015) to deduce the overall spectral line broadening, here denoted as \(\mathsf{vbroad_{H}}\), for the 37 stars and to unravel it into separate components stemming from time-independent rotational broadening (\(v\sin\,i_{\mathrm{H}}\)) and a broadening component due to the joint effect of microturbulence and the oscillation modes at the particular epoch of the observed spectrum. Microturbulence represents an artificial Gaussian line broadening needed to bring observed spectral lines into agreement with line predictions from 1-dimensional atmospheric models. This small broadening is needed to take into account the occurrence of small-scale turbulent motions in the line-forming region that are not included in atmosphere models. On the other hand, the velocities due to non-radial g-mode oscillations result in time-dependent line broadening. In the case of \(\gamma\) Dor stars, their joint net effect in the line-of-sight is of the order of a few km s\({}^{-1}\) (Aerts et al., 2004; De Cat et al., 2006). We therefore take the Gaussian line broadening determined by Van Reeth et al. (2015) as a good approximation for the overall pulsational broadening and denote it as \(\mathsf{vosc_{H}}\).
The two quantities \(v\sin\,i_{\mathrm{H}}\) and \(\mathsf{vosc_{H}}\) were derived from the observed spectra after ensuring that none of the 37 stars are spectroscopic binaries. In practice, Van Reeth et al. (2015) found the following ranges for these two parameters for the sample of 37 stars: \(\mathsf{vosc_{H}}\in[2.1;4.7]\,\mathrm{km\,s^{-1}}\) and \(v\sin\,i_{\mathrm{H}}\in[11;170]\,\mathrm{km\,s^{-1}}\). The values and ranges reveal that this sample of bona fide \(\gamma\) Dor stars consists of slow to moderate rotators (compared to their breakup velocity) and that their rotational velocity is typically an order of magnitude larger than their tangential g-mode and microturbulent velocity together, where we recall that both quantities are integrations across the visible stellar surface in the line-of-sight. Since these two velocity components add in quadrature in their contribution to the width of spectral lines, these ranges show that it is extremely challenging to unravel pulsational from rotational broadening, even for high-resolution spectroscopy (cf., Aerts et al., 2004, their Fig. 8). Moreover, since snapshot spectra cannot deliver proper time-dependent line-profile variations and only the line broadening is deduced, assuming a symmetrical line while ignoring its true shape, the relative uncertainties for \(\mathsf{vosc_{H}}\) are considerable.
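The quadrature addition makes this point immediately quantitative; the numbers in the short sketch below are illustrative values chosen within the quoted ranges rather than measurements of any particular star.
```python
import numpy as np

v_rot = 100.0  # km/s, projected rotational broadening (illustrative)
v_osc = 4.0    # km/s, pulsational plus microturbulent broadening (illustrative)

total = np.hypot(v_rot, v_osc)  # the two width contributions add in quadrature
print(f"total width: {total:.2f} km/s, "
      f"only {100.0 * (total - v_rot) / v_rot:.2f}% above rotation alone")
# -> 100.08 km/s, a < 0.1% effect: separating vosc from vsini in a single
#    snapshot spectrum requires extremely precise line modelling.
```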
We use the 37 measured values for \(\texttt{vbroad}_{\mathrm{H}}\), \(v\sin i_{\mathrm{H}}\), and \(\texttt{vosc}_{\mathrm{H}}\) to interpret the Gaia DR3 measurements of the overall line broadening (denoted as \(\texttt{vbroad}_{\mathrm{G}}\) for the bona fide \(\gamma\) Dor stars), realising that the RVS resolving power is in principle insufficient to unravel pulsational from rotational line broadening. DR3 delivered \(\texttt{vbroad}_{\mathrm{G}}\) for 27 of the bona fide \(\gamma\) Dor stars. We used these values to treat the following questions:
1. Is the quantity \(\texttt{vbroad}_{\mathrm{G}}\) obtained by Gaia RVS different from the independently obtained higher-precision quantities \(\texttt{vbroad}_{\mathrm{H}}\) or \(v\sin i_{\mathrm{H}}\)?
2. Can the variability in \(\texttt{vbroad}_{\mathrm{H}}\) measured from HERMES for the sample of the 37 bona fide \(\gamma\) Dor stars be predicted by the Gaia DR3 covariates \(\log\,T_{\mathrm{eff}}\), \(\log\,g\), \(\log\,(L/L_{\odot})\), \(\nu\), and \(A_{\nu}\) and if so with what kind of quality?
Answering these two questions will help to find an astrophysical interpretation of Gaia's \(\texttt{vbroad}\) values for the two new large samples of g-mode pulsators without being able to rely on measurements of line broadening deduced from high-resolution spectroscopy as we only have it available from a homogeneous data analysis for the 37 bona fide \(\gamma\) Dor pulsators.
To tackle the first question, we fit the statistical model in Eqn. (6) for four combinations of \(X\) and \(Y\). The parameter estimates and statistical properties of the regression models are presented in Table 1. We find that the residual variances \(\sigma^{2}\) are all extremely close to zero. None of the intercepts are significantly different from zero, and none of the slopes are significantly different from unity. This implies that all three quantities are essentially equal to each other, within the uncertainty limits specified by the measurement errors. The fractions of the variance explained by each of the four models range from 94% to 100%.
Given that \(\texttt{vbroad}_{\mathrm{G}}\) is missing for 10 of the 37 \(\gamma\) Dor stars, the models involving this variable were refitted after multiple imputation (Molenberghs & Kenward 2007) to examine the potential impact of missingness on the results. This well-known statistical technique was only recently introduced in astrophysics for the treatment of missing data, such as the multivariate stellar astrophysics study relating nine measured quantities to surface nitrogen by Aerts et al. (2014) and the time-series analysis of visual binaries by Claveria et al. (2019). The method was applied here as follows. First, based on a so-called imputation model, 100 copies of each missing value are drawn from the predictive distribution of what is missing, given what is observed. Second, each so-completed dataset is then analysed with the model that would be used had the data been complete. Third, the 100 results are combined into a single result using appropriate combination rules. Results were qualitatively very similar to those reported in Table 1, offering confidence that missingness does not play an important role in the relationships offered in Table 1. For this reason, and for simplicity, we proceed with the results presented in Table 1.
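As a minimal sketch of the third (pooling) step only, Rubin's combination rules for a single regression coefficient can be written as below; the per-imputation estimates are assumed to come from an imputation model that we do not reproduce here.
```python
import numpy as np

def pool_rubin(estimates, std_errors):
    """Combine M completed-data analyses of one parameter with Rubin's rules."""
    q = np.asarray(estimates)            # point estimate from each imputed data set
    u = np.asarray(std_errors) ** 2      # within-imputation variances
    m = len(q)
    q_bar = q.mean()                     # pooled point estimate
    w_bar = u.mean()                     # average within-imputation variance
    b = q.var(ddof=1)                    # between-imputation variance
    t = w_bar + (1.0 + 1.0 / m) * b      # total variance of the pooled estimate
    return q_bar, np.sqrt(t)
```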
Following up on the study by Van Reeth et al. (2015), we conclude from the bona fide \(\gamma\) Dor g-mode pulsators that single-epoch spectra, even if of high resolution and high SNR, cannot distinguish the overall line broadening from the line broadening caused by just rotation when working with a fudge parameter relying on the assumption of a time-independent symmetrical line profile. Given this spectral line modelling limitation, we find that Gaia RVS delivers good approximate values for spectral line broadening compared to those deduced from snapshot high-resolution spectroscopy for early F-type stars. Yet the uncertainties deduced from the HERMES spectra are lower, because its more suitable spectral coverage includes more spectral lines whose shape is determined by temperature rather than pressure broadening.
Figure 8: Quality of predictions of \(\texttt{vbroad}_{\mathrm{H}}\), by \(\texttt{vbroad}_{\mathrm{G}}\) (upper panel), \(\texttt{vsin}_{\mathrm{H}}\) (middle panel), and the set of covariates (lower panel) for the 37 bona fide _Kepler_\(\gamma\) Dor stars. The vertical bars are defined by the predicted value \(\pm\) its standard deviation, based on the errors-in-variables models in Table 2.
To address the second question, we examine the effect of the Gaia DR3 variables \(\log\left(L/\mathcal{L}_{\odot}\right)\), \(\log\ T_{\mathrm{eff}}\), \(\log\ g\), \(A_{\nu}\), and \(\nu\) on the independently obtained parameter \(\mathtt{vbroad_{H}}\). This can be done with or without adding \(\mathtt{vbroad_{G}}\) and with or without adding \(\mathtt{vosc_{H}}\) to the set of predictors. We proceed by backward selection, starting with the full set of predictors and then progressively removing the one with the highest \(p\)-value, until only significant effects remain (i.e., all \(p\leq 0.05\)). In both versions with \(\mathtt{vbroad_{G}}\) included in the predictor set, this is the only predictor remaining after model selection, and we recover the result already reported in Table 1, as expected.
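The selection loop itself is simple to express in code. The sketch below assumes a helper fit_pvalues (hypothetical, e.g. a wrapper around the errors-in-variables fit above) that returns one p-value per predictor currently in the model.
```python
def backward_select(fit_pvalues, predictors, alpha=0.05):
    """Drop the least significant predictor until all remaining ones satisfy
    p <= alpha; fit_pvalues(subset) must return {name: p_value}."""
    selected = list(predictors)
    while selected:
        pvals = fit_pvalues(selected)
        worst, p_max = max(pvals.items(), key=lambda kv: kv[1])
        if p_max <= alpha:
            break                  # every remaining predictor is significant
        selected.remove(worst)     # remove the predictor with the highest p-value
    return selected
```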
When \(\mathtt{vbroad_{G}}\) cannot be considered, as is the case for the majority of Gaia targets, the following insignificant predictors are removed by means of backward selection: first \(A_{\nu}\), second \(\log\left(L/\mathcal{L}_{\odot}\right)\), and third \(\mathtt{vosc_{H}}\). Since the latter variable is removed, whether or not it is included among the predictors to select from does not matter. Hence, only one additional model is obtained, the fit of which is presented in Table 2. This model explains about \(58\%\) of the variance present in \(\mathtt{vbroad_{H}}\) via the effective temperature, surface gravity, and dominant frequency as covariates, which are all delivered by Gaia DR3. We note that the ranges of the covariates are \([3.83;3.87]\) for \(\log\ T_{\mathrm{eff}}\), \([3.71;4.48]\) for \(\log\ g\), and \([0.78;3.01]\)d\({}^{-1}\) for \(\nu\), which is why they were linearly transformed as displayed in Table 2 to stabilise the model fit optimally.
In response to the second question, we find the dominant g-mode frequency \(\nu\) to be a significant predictor of the high-resolution spectroscopic line broadening, alongside the temperature and gravity of the star. This offers the opportunity to predict the line broadening for all the Gaia \(\gamma\) Dor stars without a Gaia measurement of the line broadening if these three covariates are available, as is the case for the majority of the Gaia DR3 \(\gamma\) Dor stars. Of course it should be borne in mind that some predictors exhibit mild to strong correlation, given their astrophysical meaning. In the particular application of the bona fide \(\gamma\) Dor stars, the strongest correlation among the predictors is the one between \(\log\left(L/\mathcal{L}_{\odot}\right)\) and \(\log\ g\), namely -0.70. The correlation between \(\log\ T_{\mathrm{eff}}\) and \(A_{\nu}\) is 0.40, while \(\nu\) correlates equally with \(\log\ T_{\mathrm{eff}}\) and with \(\log\ g\) with a moderate value of 0.32. All other correlations are much smaller. Hence, the regression coefficients in a model with multiple predictors should be interpreted as the effect of a change of one unit in a predictor, while all others remain constant, in this case for the three surviving predictors \(\log\ T_{\mathrm{eff}}\), \(\log\ g\), and \(\nu\). A graphical perspective on the predictions for \(\mathtt{vbroad_{H}}\) from \(\mathtt{vbroad_{G}}\), \(\mathtt{vsini_{H}}\), and the three covariates is shown in Fig. 8, using Eqns. (9)-(10).
### Results for the Gaia DR3 \(\gamma\) Dor pulsators
Armed with the knowledge that \(\mathtt{vbroad_{G}}\) and \(\mathtt{vbroad_{H}}\) are equal for the 37 bona fide \(\gamma\) Dor stars to within their measurement uncertainties from high-resolution and Gaia RVS spectra, and with the predictive model for these quantities given in Table 2, we now look at the sample of 11 636 Gaia DR3 \(\gamma\) Dor stars. For all of those, full information is available on \(\log\ T_{\mathrm{eff}}\), \(\log\ g\), \(\log\left(L/\mathcal{L}_{\odot}\right)\), \(\nu\), and \(A_{\nu}\), deduced in one homogeneous way from Gaia DR3 following Paper I. For 100 of these stars, both \(\mathtt{vbroad_{G}}\) (from now on again simplified to \(\mathtt{vbroad}\)) and \(\mathtt{vsini\_esphs}\) are recorded. These are the so-called completers of the Gaia DR3 \(\gamma\) Dor stars. For 1675 \(\gamma\) Dor stars there is a measurement of \(\mathtt{vbroad}\) but not of \(\mathtt{vsini\_esphs}\), while for 284 stars \(\mathtt{vsini\_esphs}\) information is available but \(\mathtt{vbroad}\) is missing. For the remaining 9577 stars both of these line broadening quantities are missing.
When considering the 100 completers only, we again conclude that \(\mathtt{vbroad}\) and \(\mathtt{vsini\_esphs}\) are identical within the bounds specified by the measurement errors, given that the slope parameter is roughly equal to unity and the residual variance is not different from zero (see Table 3). Figure 7 already showed that the two variables \(\mathtt{vbroad}\) and \(\mathtt{vsini\_esphs}\) are similar for the \(\gamma\) Dor stars with both estimates, while graphically revealing the different meaning of their uncertainty regions. The grey dashed line in that figure represents the regression model in Table 3.
\begin{table}
\begin{tabular}{l c c c c c} \hline & \(X\) & \(\mathtt{vbroad_{G}}\) & \(\mathtt{vbroad_{H}}\) & \(\mathtt{vsini_{H}}\) & \(\mathtt{vsini_{H}}\) \\ & \(Y\) & \(\mathtt{vbroad_{H}}\) & \(\mathtt{vbroad_{G}}\) & \(\mathtt{vbroad_{H}}\) & \(\mathtt{vbroad_{G}}\) \\ \hline Effect & Par. & \multicolumn{4}{c}{Estimates (s.e.)} \\ \hline Intercept & \(\beta_{0}\) & 4.8(2.4) & \(-\)2.5(2.5) & 0.53(0.97) & \(-\)2.3(2.4) \\ Slope & \(\beta_{1}\) & 1.02(0.04) & 0.94(0.04) & 0.99(0.02) & 0.94(0.04) \\ Res. var. & \(\sigma^{2}\) & 0.0000(0.0002) & 0.0000(0.0000) & 0.0000(0.0000) & 0.0000(0.0000) \\ \hline Effect & Par. & \multicolumn{4}{c}{95\% confidence intervals} \\ \hline Intercept & \(\beta_{0}\) & \([-\)0.16;9.74] & \([-\)7.63;2.55] & \([-\)1.44;2.49] & \([-\)7.19;2.56] \\ Slope & \(\beta_{1}\) & [0.94;1.11] & [0.86;1.02] & [0.95;1.03] & [0.86;1.01] \\ \hline \end{tabular}
\end{table}
Table 1: Estimates (and standard errors) for the model parameters of the four combinations of \(X\) and \(Y\), where the HERMES quantities are available for all 37 stars and \(\mathtt{vbroad_{G}}\) for 27 of them.
\begin{table}
\begin{tabular}{l c c c c} \hline Effect & Par. & Est. (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline Intercept & \(\beta_{0}\) & 95(4) & & [86;104] \\ \(100\cdot(\log\ T_{\mathrm{eff}}-3.85)\) & \(\beta_{1}\) & \(-\)28(5) & \(<\)0.0001 & [\(-\)37;\(-\)18] \\ \(\log\ g-4.0\) & \(\beta_{2}\) & 60(14) & 0.0001 & [32;88] \\ \(\nu-1.75\) & \(\beta_{3}\) & 57(6) & \(<\)0.0001 & [44;70] \\ Res. var. & \(\sigma^{2}\) & 70(152) & 0.3623 & [\(-\)238;378] \\ \hline \end{tabular}
\end{table}
Table 2: Estimates (standard errors) for the model parameters of the errors-in-variables model for \(\mathtt{vbroad_{H}}\), fitted to the bona fide \(\gamma\) Dor stars, based on backward selection from a set of predictors.
\begin{table}
\begin{tabular}{l c c c c} \hline Effect & Par. & Estimate (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline Intercept & \(\beta_{0}\) & \(-\)5(3) & & [\(-\)12;2] \\ Slope & \(\beta_{1}\) & 0.90(0.05) & \(<\)0.0001 & [0.81;0.99] \\ Res. var. & \(\sigma^{2}\) & 0(4) & 0.98 & [\(-\)7;7] \\ \hline \end{tabular}
\end{table}
Table 3: Estimates (standard errors) for the model parameters of the errors-in-variables model, relating \(\mathtt{vbroad}\) to \(\mathtt{vsini\_esphs}\), on the 100 completers within the Gaia DR3 set of \(\gamma\) Dor stars.
For the full Gaia DR3 \(\gamma\) Dor data set, the relationship between vbroad and vsini_esphs can also be assessed using all stars after performing multiple imputation. For this application we drew ten imputations using information on vbroad, vsini_esphs, their standard errors, and the covariates log \(T_{\rm eff}\), log \(g\), log \((L/{\cal L}_{\odot})\), \(\nu\), and \(A_{\nu}\). Given that less than 1% of stars have complete information, the relationship found from multiple imputation is rather different from the one found for the 100 completers. This is not surprising given the relatively large uncertainties of the two broadening parameters and the fact that they are rather weakly correlated with other information in the dataset. Moreover, the very large fraction of incomplete information destabilises the inference from multiple imputation. These issues taken together suggest that the results of multiple imputation are too unstable to provide trustworthy results. For these
Figure 9: Gaia DR3 measurements of vbroad versus each of the five covariates as indicated for the 1775 \(\gamma\) Dor (grey triangles) and 190 SPB (cyan squares) stars having these quantities available. The lower right panel shows the standard deviation of vbroad as a function of the dominant g-mode amplitude. When invisible, the errors are smaller than the symbol sizes.
reasons, further analysis will be based on completers only for each of the regression applications discussed below.
We now turn to the relationship between vbroad and the predictor variables, applying backward selection to the errors-in-variables model for the 1775 \(\gamma\) Dor stars having this quantity and the covariates log \(T_{\rm eff}\), log \(g\), \(\log\left(L/{\cal L}_{\odot}\right)\), \(\nu\), and \(A_{\nu}\) (whose regression coefficients we denote as \(\beta_{1},\beta_{2},\beta_{3},\beta_{4},\beta_{5}\), respectively). It turns out that all these covariates are significant except the amplitude of the dominant frequency, which has a \(p-\)value of 0.0662 and is thus borderline significant. This is why we present the regression models with and without this covariate in Table 4. Both these models explain 42% of the variance in the measurements of vbroad. It can be seen in Table 4 that keeping \(A_{\nu}\) in the model does not alter the regression coefficients of the other four covariates. We show the measurements of vbroad as a function of each of the covariates in Fig. 9.
As discussed in Sect. 5.1, vsini_esphs is based on the averaged BP/RP (and averaged RVS spectrum when available) and therefore has smaller uncertainty than vbroad whose uncertainty interval represents the time-dependent line broadening covered by at least six snapshot spectra. By construction, vsini_esphs is expected to be a better representative of the time-independent surface rotation velocity of the star than vbroad. Indeed, the latter quantity approximates the overall time-dependent spectral line broadening due to various phenomena acting together because it was computed as the median value from individual transits taken at different epochs and treated as such by the MTA.
To test if vsini_esphs and vbroad indeed capture different astrophysical information, we repeat the same backward model selection for vsini_esphs, considering the same covariates for the 384 \(\gamma\) Dor stars having a measurement of vsini_esphs. This leads to the successive removal of \(A_{\nu}\), \(\log(L/{\cal L}_{\odot})\), and log \(T_{\rm eff}\) as being insignificant. The coefficients of the resulting regression model are listed in Table 5, while the plots of the measurements of vsini_esphs as a function of each of the covariates are included in Appendix A (Fig. 12, to be compared with Fig. 9). We find that vsini_esphs does not depend on the effective temperature and the luminosity, while the surface gravity and dominant frequency remain significant covariates. These two covariates offer the same dependence for vsini_esphs as for vbroad, that is lower log \(g\) (a more evolved star) and higher \(\nu\) give larger line broadening. As far as \(\nu\) is concerned, this is well understood in terms of an asteroseismic interpretation and in agreement with the findings based on the HERMES spectroscopy by Van Reeth et al. (2015) for the bona fide \(\gamma\) Dor stars. Indeed, a higher dominant g-mode frequency in the inertial frame of an observer corresponds to a faster rotating star (Van Reeth et al., 2015, 2016; Papics et al., 2017, for galleries of _Kepler_ light curves and frequency spectra as a function of rotation frequency). Hence, higher asteroseismic \(\nu\) is a signature of faster stellar rotation and thus of larger line broadening, irrespective of whether one considers vsini_esphs or vbroad.
While the resulting regression model for vsini_esphs of 384 class members explains only 5% of the variance in that quantity, it is 42% for vbroad of the 1775 stars having this quantity. Thus the time-independent projected rotational velocity represented by vsini_esphs of the \(\gamma\) Dor stars is independent of their effective temperature and luminosity, while inversely proportional to (but only weakly dependent on) their gravity. On the other hand, the time-dependent quantity vbroad does connect to the effective temperature of the \(\gamma\) Dor stars, such that the hotter the star, the larger vbroad. Our astrophysical interpretation of these findings connects well to the excitation mechanisms and to the level of line broadening found for \(\gamma\) Dor stars in the literature. Indeed, since the _Kepler_ data allowed for detailed asteroseismic modelling, we know that the dominant modes of the bona fide g-mode pulsators are dipole prograde modes and that these stars occupy a narrow range in mass, namely [1.3; 1.9] M\({}_{\odot}\), while they cover the entire main sequence (Mombarg et al., 2019, 2021). This pulsation class thus has stars with quite a broad range of log \(g\) and radii (cf. Fig. 4). The variability in log \(T_{\rm eff}\) and log \(g\) revealed among the class members is thus mainly a signature of evolutionary status.
The regression models for vbroad and vsini_esphs reveal more evolved stars to have larger spectral line broadening, while maximal time-dependent line broadening occurs for \(T_{\rm eff}\) between 6500 and 7500 K (cf. the grey triangles in the upper left panel of Fig. 9). This is precisely the temperature range where Grassitelli et al. (2015a) found a maximal effect of turbulent pressure in the stellar envelope of evolved A- and F-type dwarfs, offering an additional mechanism to excite high-order eigenmodes in such objects, aside from the classical \(\kappa\) mechanism being active in the hotter \(\gamma\) Dor stars and flux blocking at the bottom of the convective envelope causing such g modes in the cool class members (Guzik et al., 2000; Dupret et al., 2005; Xiong et al., 2016). In addition, Tkachenko et al. (2020) already showed that ignoring the turbulent pressure in stellar atmosphere models affects estimation of microturbulent broadening and results in an overestimation of the effective temperature at the level of a few per cent. Moreover, the authors found this effect to get worse as the star evolves, that is for decreasing log \(g\). We thus conclude that we have found observational evidence from Gaia DR3 vbroad measurements that time-dependent macroturbulent spectral line broadening in these stars is connected with their excited g modes and/or surface gravity, in addition to surface rotation. The amplitude limitation from Gaia DR3 and the comparative distributions of the dominant amplitudes and frequencies between the Gaia DR3 and bona fide \(\gamma\) Dor pulsators (cf. left panels of Figs 5 and 6) suggest that the detected dominant frequencies are due to large-scale (i.e., low-degree) gravito-inertial modes. The interplay of the dominant g mode with the rotation of the star, along with variability in log \(T_{\rm eff}\) and log \(g\) due to poor treatment of
\begin{table}
\begin{tabular}{l c r r r} \hline \hline Effect & Par. & Estimate (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline \multicolumn{5}{c}{With \(A_{\nu}\)} \\ \hline Intercept & \(\beta_{0}\) & \(-478(262)\) & & [\(-993;\)36] \\ log \(T_{\rm eff}\) & \(\beta_{1}\) & \(238(87)\) & 0.0065 & [67;409] \\ log \(g\) & \(\beta_{2}\) & \(-100(19)\) & \(<\)0.0001 & [\(-138;\)-63] \\ log \((L/{\cal L}_{\odot})\) & \(\beta_{3}\) & \(-45(15)\) & 0.0022 & [\(-74;\)-16] \\ \(\nu\) & \(\beta_{4}\) & \(44(1)\) & \(<\)0.0001 & [41;46] \\ \(A_{\nu}\) & \(\beta_{5}\) & \(-192(104)\) & 0.0662 & [\(-397;\)13] \\ Res. var. & \(\sigma^{2}\) & \(397(20)\) & \(<\)0.0001 & [358;437] \\ \hline \multicolumn{5}{c}{Without \(A_{\nu}\)} \\ \hline Intercept & \(\beta_{0}\) & \(-478(260)\) & & [\(-988;\)32] \\ log \(T_{\rm eff}\) & \(\beta_{1}\) & \(237(87)\) & 0.0062 & [68;407] \\ log \(g\) & \(\beta_{2}\) & \(-100(19)\) & \(<\)0.0001 & [\(-138;\)-63] \\ log \((L/{\cal L}_{\odot})\) & \(\beta_{3}\) & \(-45(15)\) & 0.0023 & [\(-73;\)-16] \\ \(\nu\) & \(\beta_{4}\) & \(44(1)\) & \(<\)0.0001 & [41;47] \\ Res. var. & \(\sigma^{2}\) & \(395(20)\) & \(<\)0.0001 & [356;435] \\ \hline \end{tabular}
\end{table}
Table 4: Estimates (standard errors) for the parameters of the errors-in-variables model for vbroad for the 1775 Gaia DR3 \(\gamma\) Dor stars with measured values for this quantity, based on backward selection from the set of listed predictors. Both models explain 42% of the variance in vbroad.
turbulent pressure in the line-forming region, explain the overall spectral line broadening estimates from Gaia DR3.
We point out that the regression model for vbroad in Table 4 explains 42% of the measured variance in the spectral line broadening, while it was 58% for the bona fide pulsators. Both these results are readily understood given that we are dealing with multiperiodic g-mode pulsators. Indeed, \(\gamma\) Dor pulsators have tens of high-order low-degree g modes active simultaneously, whichever of the three excitation mechanisms is dominant (Van Reeth et al. 2015). The line broadening captures the collective effect of all these modes together (Aerts et al. 2009). Yet, the frequencies of the excited g modes in addition to the dominant one were not included in the regression model, because the Gaia light curves currently do not provide sufficient data to unravel the multiperiodic oscillations active in these stars. While the frequencies and amplitudes of the second strongest variability signal were determined in Paper I, it was found that quite a number of those frequencies cannot be distinguished from frequencies above 3 d\({}^{-1}\) that may result from instrumental effects. That is why we did not use these secondary frequencies from DR3. It is to be anticipated that improved regression models explaining a higher fraction of the variance in the spectral line broadening will become possible from DR4 and particularly DR5, because the longer time base and doubling of the number of epochs in the Gaia photometry will allow us to unravel several additional g-mode frequencies, particularly when combined with additional light curves dedicated to asteroseismology as illustrated from combined Hipparcos and TESS or ground-based data (cf. Waelkens et al. 1998; De Cat et al. 2007; Cuypers et al. 2009). Yet, Gaia's sampling is too sparse to deliver all the modes active in these multiperiodic g-mode pulsators, while they do contribute to the overall broadening of the spectral lines (cf. Mathias et al. 1994, for the theoretical expression of the spectral line width due to multiperiodic non-radial oscillations). The fraction of the variance explained by regression models relying on the fundamental parameters and the significant frequencies in Gaia light curves will therefore always be limited, even for the bona fide class members. In this sense, the 42% reached for the model in Table 4 is high.
### Results for the Gaia DR3 SPB pulsators
We now repeat the same analyses for the 3426 new Gaia DR3 SPB stars. Among these, both vbroad and vsini_esphs are recorded for 156 stars. For 34 of them there is a measurement of vbroad but not of vsini_esphs, while for 948 stars vsini_esphs is available but vbroad is missing. For the remaining 2288 SPB stars both of these are missing. In line with the arguments provided in Section 5.4, attention is restricted to an analysis of the completers to test relationships for vsini_esphs and vbroad.
The results of the test whether or not the two measures for the spectral line broadening are equal are given in Table 6 and shown graphically in Fig. 7 (cyan symbols). Just like for the Gaia DR3 \(\gamma\) Dor sample, we again have a relationship that is consistent with the hypothesis that vbroad and vsini_esphs are identical, keeping in mind the errors for vsini_esphs and the standard deviation for vbroad.
Next, the relationship between vbroad and the candidate predictors is examined relying on the 190 SPB stars having these data, once again applying backward selection to the errors-in-variables model. All covariates are again significant. The regression model in Table 7 explains 21% of the variance in vbroad. Backward selection applied to vsini_esphs for the 1104 SPB stars with this quantity reveals \(T_{\rm eff}\) and \(\nu\) to be significant predictors, with a regression model explaining 8% of the variance. For the regression coefficient of \(\nu\), we assign the same astrophysical interpretation as before that higher \(\nu\) corresponds to faster surface rotation following the SPB studies by Papics et al. (2017); Pedersen et al. (2021); Pedersen (2022a).
As for the role of the effective temperature in vbroad and vsini_esphs, Papics et al. (2017) already found evidence that hotter SPB stars are more massive and tend to rotate faster. The \(T_{\rm eff}\) dependence revealed is thus in the first instance a dependence on stellar mass rather than stellar evolution as we found for the \(\gamma\) Dor stars. This result is as expected, given that SPB stars cover a factor 3 in mass, from 3 M\({}_{\odot}\) to 9 M\({}_{\odot}\). Pedersen et al. (2021) placed the observed properties of the 26 bona fide _Kepler_ SPB stars included here and those studied from the ground by De Cat & Aerts (2002) into the context of stellar evolution theory. This showed large diversity of \(\nu\) and \(A_{\nu}\) in terms of the spectroscopic log \(T_{\rm eff}\) and log \(g\), as well as log(\(L/\mathcal{L}_{\odot}\)) from Gaia DR2. This diversity was interpreted as due to the range in mass and rotation rate, the latter covering from almost zero to almost critical rotation for the _Kepler_ sample (Aerts et al. 2021; Pedersen 2022a). Despite the limited predictive power of the regression model in Table 7, it reveals that larger line broadening occurs for hotter and/or more evolved SPB stars with higher dominant g-mode frequencies (cf. Fig. 9). This highlights that hotter younger stars have faster rotation, shifting their g modes further into the gravito-inertial regime of the frequency spectrum than those of slower rotators (see Aerts et al. 2019, for a discussion of the various frequency regimes of waves connected to the dominant restoring forces).
The luminosity, log(\(L/\mathcal{L}_{\odot}\)), now also has a significant contribution as predictor for vbroad with a \(p-\)value of 0.0105. The model reveals that less luminous SPB stars have higher line broadening but its regression coefficient is not very accurate. Moreover, the sample of SPB stars with a measurement of vbroad is an order of magnitude smaller than for the \(\gamma\) Dor stars and is skewed towards low-mass class members, limiting this interpretation to only a small part of the SPB instability region. This is graphically illustrated in Fig. 9, where trends reveal that more evolved and more luminous SPB stars have higher vbroad
\begin{table}
\begin{tabular}{l l r r r} \hline Effect & Par. & Estimate (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline Intercept & \(\beta_{0}\) & -11(5) & & [\(-\)20;\(-\)2] \\ Slope & \(\beta_{1}\) & 1.06(0.05) & \(<\)0.0001 & [0.97;1.15] \\ Res. var. & \(\sigma^{2}\) & 25(30) & 0.41 & [\(-\)35;84] \\ \hline \end{tabular}
\end{table}
Table 6: Estimates (standard errors) for the model parameters of the errors-in-variables model, relating vbroad to vsini_esphs, on 156 completers within the SPB sample.
\begin{table}
\begin{tabular}{l l r r r} \hline Effect & Par. & Estimate (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline Intercept & \(\beta_{0}\) & 198(56) & & [88;308] \\ log \(g\) & \(\beta_{2}\) & \(-\)37(15) & 0.0161 & [\(-\)66;\(-\)7] \\ \(\nu\) & \(\beta_{4}\) & 17(6) & 0.0047 & [5;28] \\ Res. var. & \(\sigma^{2}\) & 599(152) & 0.0002 & [296;901] \\ \hline \end{tabular}
\end{table}
Table 5: Estimates (standard errors) for the parameters of the errors-in-variables model for vsini_esphs for the 384 Gaia DR3 \(\gamma\) Dor stars with measured values for this quantity, based on backward selection from a set of predictors.
but are not well enough represented in membership to have an equally important effect on the regression model as the cool class members. Moreover, the luminosity panel in Fig. 9 reveals more of a quadratic than linear trend for the SPB stars. As already highlighted above, the luminosity of SPB pulsators is mostly determined by their mass rather than by their evolutionary stage, unlike for the \(\gamma\) Dor stars. This, along with the relatively large scatter for log \(g\) and for the effective temperature, as well as the lower fraction of the variance explained by the linear regression model for vbroad, makes the distillation of a simple astrophysical interpretation for vbroad more difficult for SPB stars. This conclusion is by itself fully in line with the diversity in pulsational behaviour occurring in the _Kepler_ sample of bona fide SPB pulsators (Pedersen et al., 2021; Pedersen, 2022b).
Finally, we stress that time-dependent macroturbulent spectral line broadening due to the g modes of SPB pulsators has already been found in several of the brightest class members (Aerts et al., 2014b), with values in agreement with those we find here for the new faint Gaia DR3 class members. Moreover, the density of modes excited by the \(\kappa\) mechanism peaks in the lower part of the instability strip, near 13 000 K (Papics et al., 2017). These pulsators have a mass regime where the interpretation of turbulent pressure exciting extra high-order g modes in addition to the classical \(\kappa\) mechanism does not hold (Grassitelli et al., 2015b). Macroturbulence in these pulsators has already been established as a time-independent downgraded quantity representing their dominant tangential pulsational velocity by Aerts et al. (2009) and Aerts et al. (2014a).
## 6 Discussion and conclusions
Thanks to the homogeneous treatment of its multitude of observations and its large-scale survey capacity, the Gaia mission has its role to play for gravito-inertial asteroseismology. First of all, its photometric light curves allow the discovery of thousands of new g-mode pulsators belonging to the classes of the F-type \(\gamma\) Dor stars or SPB-type stars. Secondly, its broadening parameter vbroad contains astrophysical information on stellar oscillations having mmag-level observed amplitudes. We obtained these results after reassigning a fraction of 22% of the \(\gamma\) Dor candidates as SPB pulsators according to their effective temperature being above 8500 K, a property not taken into account in the variability classifications used in Paper I.
We find the two samples of new Gaia DR3 g-mode pulsators to have similar fundamental parameters to those of bona fide class members, although the Gaia SPB pulsators only cover the cooler and less massive class members. We studied the astrophysical properties of the new \(\gamma\) Dor and SPB pulsators from regression models built upon the principle of errors-in-variables, with their fundamental parameters and dominant oscillation properties as predictors of the overall spectral line broadening. The Gaia DR3 quantity vsini_esphs offers a good estimate of the overall time-independent spectral line broadening, reflecting that the surface rotation of the stars in our samples is the dominant line broadening mechanism. All regression models revealed the dominant g-mode frequency to be a significant predictor of the Gaia DR3 vbroad parameter and its standard deviation, which together represent the overall time-dependent spectral line broadening.
We explicitly checked via re-analyses of all regression models that none of the astrophysical interpretations change if we use the effective temperature of 9500 K as threshold for the re-classifications among the \(\gamma\) Dor and SPB candidates. Such a threshold temperature follows from instability computations by Szewczuk & Daszynska-Daszkiewicz (2017) for the cool border of galactic rotating SPB stars instead of the adopted 8500 K based on the hot border for the \(\gamma\) Dor instability strip by Xiong et al. (2016). We find from the upper left panel of Fig. 9 that Gaia DR3 shows 9500 K to be a less natural and more abrupt threshold temperature between the two classes than the adopted 8500 K. Nevertheless, using 9500 K still gives compliance of class populations with the IMF and almost all the coefficients obtained for the regression models remain within the uncertainty ranges listed in the tables we provided here using 8500 K as threshold.
Despite the limiting resolution of the RVS spectroscopy, the line broadening of rotating g-mode pulsators offered by Gaia is in full agreement with results of well-known class members observed with high-precision space photometry and high-resolution spectroscopy. In particular, we find the dominant g-mode frequency to be a significant predictor of the overall line broadening. This supports earlier findings that macroturbulence is merely a simplified time-independent approximation for the true velocity fields at the stellar surface that cause line-profile variability for g-mode pulsators (Aerts et al., 2014b). We conclude that the combined effect of surface rotation and tangential velocities resulting from multiperiodic g modes can be estimated from vbroad for the case of main-sequence stars of intermediate mass. The regression models for vbroad are fully in line with various excitation predictions for g modes in \(\gamma\) Dor and SPB pulsators.
Given that the regression models based on the fundamental parameters and on the dominant pulsation mode presented in Sect. 5 explain part of the variability of vbroad, it is sensible to also consider the standard deviation of vbroad as a measured quantity and to check if its variance can be predicted by any of the covariates. Indeed, aside from being caused by noise, it may partially represent the time-dependence of the line broadening. From our regression analyses (presented in Appendix B)
\begin{table}
\begin{tabular}{l r r r r} \hline Effect & Par. & Estimate (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline Intercept & \(\beta_{0}\) & \(-\)2764(643) & & [\(-\)4033;\(-\)1495] \\ log \(T_{\rm eff}\) & \(\beta_{1}\) & 963(219) & \(<\)0.0001 & [532;1395] \\ log \(g\) & \(\beta_{2}\) & \(-\)213(51) & \(<\)0.0001 & [\(-\)312;\(-\)113] \\ log(\(L/\mathcal{L}_{\odot}\)) & \(\beta_{3}\) & \(-\)109(42) & 0.0105 & [\(-\)192;\(-\)26] \\ \(\nu\) & \(\beta_{4}\) & 34(8) & 0.0001 & [19;50] \\ \(A_{\nu}\) & \(\beta_{5}\) & \(-\)1492(715) & 0.0384 & [\(-\)2903;\(-\)81] \\ Res. var. & \(\sigma^{2}\) & 1909(300) & \(<\)0.0001 & [1317;2501] \\ \hline \end{tabular}
\end{table}
Table 7: Estimates (standard errors) for the model parameters of the errors-in-variables model for vbroad of the 190 SPB stars having a measurement for this quantity, based on backward selection from the set of predictors. The model explains 21% of the variability in the data.
\begin{table}
\begin{tabular}{l r r r r} \hline Effect & Par. & Estimate (s.e.) & \(p\)-value & 95\% conf. int. \\ \hline Intercept & \(\beta_{0}\) & \(-\)573(319) & & [\(-\)1204;58] \\ log \(T_{\rm eff}\) & \(\beta_{1}\) & 157(80) & 0.0496 & [0;315] \\ \(\nu\) & \(\beta_{4}\) & 20(6) & 0.0016 & [8;32] \\ Res. var. & \(\sigma^{2}\) & 745(154) & \(<\)0.0001 & [442;1049] \\ \hline \end{tabular}
\end{table}
Table 8: Estimates (standard errors) for the model parameters of the errors-in-variables model for vsini_esphs based on the 1104 SPB stars in the sample having a measurement for this quantity, based on backward selection from a set of predictors. The model explains 8% of the variability in the data.
we conclude that the noise contribution to the standard deviation of vbroad is dominant over intrinsic line-profile variability for the \(\gamma\) Dor pulsators. For the SPB stars, the standard deviation of vbroad does relate to their surface rotation, effective temperature, and g-mode frequency at the level of 20% variance reduction for the regression model based on these three covariates.
Finally, we conclude that our analyses of \(\sim 15\,000\) new Gaia DR3 g-mode pulsators bring the qualitative results on vbroad by Fremat et al. (2022) into full agreement with our quantitative assessments on macroturbulence in g-mode pulsators, as already suggested by the simulation study in Aerts et al. (2009).
###### Acknowledgements.
The research leading to these results has received funding from the KU Leuven Research Council (grant C16/18005: PARADISE). CA and JDR acknowledge support from the Belgian Federal Science Policy Office (BELSPO) through a PRODEX grant for the ESA space mission Gaia. CA acknowledges financial support from the Research Foundation Flanders under grant K802922N (sabbatical leave). CA and GM are grateful for the kind hospitality offered by the staff of the Center for Computational Astrophysics at the Flatiron Institute of the Simons Foundation in New York City during their work visit in the fall of 2022. The authors thank the referee for the suggestion to investigate the sensitivity of the results to the temperature threshold used to reclassify the g-mode pulsators. They also acknowledge Dominic Bowman and Andrew Tkachenko for valuable comments which helped to improve the manuscript.
|
2301.10684 | Consistency is Key: Disentangling Label Variation in Natural Language
Processing with Intra-Annotator Agreement | We commonly use agreement measures to assess the utility of judgements made
by human annotators in Natural Language Processing (NLP) tasks. While
inter-annotator agreement is frequently used as an indication of label
reliability by measuring consistency between annotators, we argue for the
additional use of intra-annotator agreement to measure label stability over
time. However, in a systematic review, we find that the latter is rarely
reported in this field. Calculating these measures can act as important quality
control and provide insights into why annotators disagree. We propose
exploratory annotation experiments to investigate the relationships between
these measures and perceptions of subjectivity and ambiguity in text items. | Gavin Abercrombie, Verena Rieser, Dirk Hovy | 2023-01-25T16:38:11Z | http://arxiv.org/abs/2301.10684v1 | # Consistency is Key: Disentangling Label Variation in
###### Abstract
We commonly use agreement measures to assess the utility of judgements made by human annotators in Natural Language Processing (NLP) tasks. While _inter_-annotator agreement is frequently used as an indication of label _reliability_ by measuring consistency between annotators, we argue for the additional use of _intra_-annotator agreement to measure label _stability_ over time. However, in a systematic review, we find that the latter is rarely reported in this field. Calculating these measures can act as important quality control and provide insights into _why_ annotators disagree. We propose exploratory annotation experiments to investigate the relationships between these measures and perceptions of subjectivity and ambiguity in text items.
## 1 Introduction
Agreement measures are commonly used to assess the utility of judgements made by human annotators for Natural Language Processing (NLP) tasks. Indeed, the reporting of _inter_-annotator agreement (or inter-rater reliability) has long been the standard to indicate dataset quality Carletta (1996) and frequently serves as an upper bound for model performance on a task Boguslav and Cohen (2017).
While inter-annotator agreement is frequently used in NLP to determine the _reliability_ of labels or the processes used to produce them Artstein (2017), _intra_-annotator agreement is rarely, if ever, reported. However, we can use it to measure the temporal consistency of the annotators who chose the labels and the _stability_ of the labels. Consistency and label stability are important because, without them, annotation schemes are unlikely to be repeatable or reproducible Teufel et al. (1999).1
Footnote 1: Although there may be situations in which annotation consistency is not expected, such as longitudinal studies of attitudinal change.
Such measures of the intra-rater agreement are frequently reported in areas of medicine such as physiotherapy (e.g. Bennell et al., 1998; Meseguer-Henarejos et al., 2018), and speech pathology (e.g. Capilouto et al., 2005; Rose and Douglas, 2003). Intra-rater measures are also reported in other fields as diverse as economics Hodgson (2008), software engineering Grimstad and Jorgensen (2007), and psychology Ashton (2000).
However, reporting intra-annotator agreement is so far extremely uncommon in NLP, as we show in the review in Section 2.
**Disagreement and label variation in NLP.** In addition, we argue that the use of inter- and intra-annotator agreement allows us to distinguish and measure different sources of observed label variation Rottger et al. (2022); Plank (2022). This is important as NLP researchers have increasingly recognised that, for many tasks, different points of view may be equally valid Abercrombie et al. (2022); Aroyo and Welty (2015); Basile et al. (2021); Plank (2022); Rottger et al. (2022), and that their aggregation can erase minority perspectives Basile et al. (2021); Blodgett (2021).
One of the main challenges in implementing this new paradigm is the interpretation of disagreement. Disagreement between annotators may be due to two sources: 1) genuine differences in their subjective beliefs/perspectives, which can be desirable under this paradigm, or 2) task difficulty, ambiguity, or annotator error, all of which are undesirable. While agreement measures _between_ annotators can give us an idea of task **subjectivity**, they provide little insight as to its **difficulty**, **ambiguity**, or the quality and attentiveness of the annotators themselves Rottger et al. (2022).
In the following, we propose the use of _intra_-annotator agreement as a measure of subjectivity.
**The reliability-stability agreement matrix.** What, then, does it mean when individual annotators' interpretations are not stable, i.e., internally consistent? In addition to providing an
additional layer of quality control, we suggest that measurement of label stability can help to interpret potential causes of _inter_-annotator disagreement. To this end, we propose the reliability-stability matrix, a framework for mapping and interpreting the relationship between inter- and intra-annotator agreement in labelled datasets (Table 1).
Under this framework, _inter_-annotator agreement and _intra_-annotator agreement, taken together, indicate the task's ambiguity or complexity and its subjectivity level. _Inter_-annotator agreement measures reliability, while _intra_-annotator agreement measures stability. The resulting axes form a confusion matrix that describes four cases.
If both measures are high, we assume the task is unambiguous and relatively easy, and the annotator group relatively homogenous. Presumably, the quality of the guidelines and textual data is also good [10]. In this scenario, the task or item should be relatively straightforward.
Where both agreement measures are low, we are likely to be faced with a highly ambiguous or difficult task or item (perhaps with multiple equally valid responses), or the annotation quality is poor.
If reliability is low, but consistency is high, the labels likely reflect the annotators' varied but potentially equally valid subjective perspectives.
We do not foresee many situations where reliability is high yet consistency is low. Any agreement between inconsistent annotators would presumably be purely by chance or mass random spamming, i.e., systematic errors. Exceptions could include population-level value shifts over longer time intervals arising from awareness-raising events such as the #MeToo [22] and #BLM [20] movements.
Our framework can be applied at the task or dataset level by computing widely used chance-corrected agreement metrics, such as Cohen's _kappa_, Krippendorff's _alpha_, or the intra-class correlation coefficient (ICC) for more than two raters [1], or by conducting qualitative analyses examining the reasons for label variation of individual instances.
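As a minimal illustration (assuming scikit-learn is installed; the label arrays below are invented placeholders), both axes of the matrix can be computed with the same chance-corrected metric:

```python
from sklearn.metrics import cohen_kappa_score

# First-pass labels from two annotators on the same five items.
ann_a_pass1 = ["off", "not", "off", "not", "off"]
ann_b_pass1 = ["off", "not", "not", "not", "off"]
# Annotator A's second pass on the same items, two weeks later.
ann_a_pass2 = ["off", "not", "off", "off", "off"]

reliability = cohen_kappa_score(ann_a_pass1, ann_b_pass1)  # inter-annotator
stability = cohen_kappa_score(ann_a_pass1, ann_a_pass2)    # intra-annotator
print(f"reliability (inter) = {reliability:.2f}, stability (intra) = {stability:.2f}")
```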
We illustrate this framework in the interpretation of the annotation experiments proposed in Section 3.
**Contributions.** We suggest the addition of intra-annotator consistency as a standard measure, and show how it complements existing reliability measures to distinguish reasons for label variation.
## 2 Intra-annotator agreement in the ACL Anthology
To get a snapshot of the extent to which intra-annotator agreement is reported in the NLP community, we conducted a systematic review of papers published in the _Anthology_ of the Association for Computational Linguistics (ACL).2 Here, we wish to discover for which tasks and what purposes NLP researchers collect and report on repeat annotations and evidence for how and when repeat items should be presented to annotators. Full details of the review methodology are available in Appendix A.
Footnote 2: [https://aclanthology.org/](https://aclanthology.org/)
**To what extent and why is intra-annotator agreement reported in NLP?** When we conducted our study, the search and filtering process returned only 56 relevant publications out of more than 80,000 papers listed in the Anthology. In other words, a tiny fraction (around 0.07%) of computational linguistics and NLP publications in the repository report measurement of intra-annotator agreement.3 Publication dates of included papers range from 2005 to 2022, with no trend towards increased reporting.
Footnote 3: We acknowledge that intra-annotator agreement is irrelevant to a large proportion of papers, but highlight that the number of publications which report it is nevertheless extremely low.
The only area of NLP in which intra-annotator agreement is somewhat regularly reported is machine translation (MT), which accounts for more than half of the included publications. Most of these were agreement measures on human evaluation of translation quality, with one on word alignment annotation for MT [10]. Several other publications on evaluating natural language generation also report measurement on human evaluation tasks (e.g. Belz and Kow, 2011; Belz et al., 2016, 2018; Jovanovic et al., 2005). Other included fields are semantics (e.g. Cao et al., 2022; Hengchen and Tahmasebi, 2021), syntax (e.g. Baldridge and Palmer, 2009; Lameris and Stymne, 2021), affective computing (including sentiment analysis Kiritchenko and Mohammad (2017) and emotion detection Vaassen and Daelemans (2011)), and automatic text grading Cleuren et al. (2008); Downey et al. (2011). There is also one paper on abusive language detection Cercas Curry et al. (2021).4

| | **High _intra_ (stable)** | **Low _intra_ (unstable)** |
| --- | --- | --- |
| **High _inter_ (reliable)** | Straightforward / good quality | Systematic errors / value changes |
| **Low _inter_ (unreliable)** | Variable perspectives (high subjectivity) | Ambiguous or difficult / poor quality |

Table 1: The reliability-stability matrix for _inter_- and _intra_-annotator agreement.
Footnote 4: We provide a list of included papers in Appendix B.
Where the authors motivate the collection of repeat annotations, they usually mention quality control or annotator consistency. Notably, no papers mention the possibility that intra-annotator inconsistency could be valid or informative beyond these factors, as we propose.
**Best practice for measuring intra-annotator agreement: how long should the label-relabel interval be?** When designing annotation tasks (such as ours in Section 3), it would be helpful to know when to present repeated items, thus avoiding annotators labelling from memory, which may not be an actual test of their consistency.
Over a quarter of the papers (15/56) do not provide enough information to determine the interval between initial and repeat annotations. In most other cases, either it can be inferred, or the authors explicitly state that re-annotations are conducted in the same session as the original annotation. Those that report more extended time before re-annotation leave intervals varying from a few minutes Kiritchenko and Mohammad (2017) to a year Cleuren et al. (2008); Hamon (2010).
Two papers do specifically investigate the effects of time on annotator consistency. Li et al. (2010) experimented with intervals of one week, two weeks, and one month, comparing intra-annotator agreement for these and finding that consistency on their word alignment annotation degraded steadily over time. Kiritchenko and Mohammad (2017) performed a similar study, comparing intra-annotator agreement on ratings (on a scale) conducted with intervals ranging from a few minutes to a few days between the initial and repeat judgements. They too found that inconsistencies increased as the interval grew.
## 3 Exploratory annotation experiments
We propose an exploratory annotation experiment to investigate the relationships between agreement measures and the possible reasons for disagreements and inconsistencies. We wish to examine how strongly inter- and intra-annotator agreement levels correlate with the reasons for label variation on different tasks. We also investigate whether, as is commonly believed, specific task types are generally more subjective than others.
**Hypotheses.**
At the individual annotation item level, for a given task and dataset:
1. _Subjective_ annotation items have lower _inter_-annotator agreement than straightforward items, but higher _intra_-annotator agreement than _ambiguous_ items.
2. _Ambiguous_ annotation items have lower _inter_- and _intra_-annotator agreement than both straightforward and _subjective_ items.
At the dataset/task level:
1. _Social_ tasks, such as offensive language detection and sentiment analysis, are more _subjective_ than _linguistic_ tasks, like textual entailment or anaphora resolution. That is, stability is higher for social tasks than linguistic tasks.
**Data.** We collect subsets of two English-language datasets for social tasks that are commonly assumed to be subjective, and two for linguistic tasks, thought of as objective Basile et al. (2021) (see Table 2).
These were selected because they (1) have limited label sets (of two or three classes), allowing for comparison across tasks; and (2) have been published with non-aggregated (i.e. annotator specific) labels, allowing us to include items with known inter-annotator disagreement in our subsamples.
**Methodology.** We recruit crowdworkers from Prolific5 to annotate a subset of items from each of the tasks/datasets. Based on the evidence from our review Li et al. (2010); Kiritchenko and Mohammad (2017), we leave an interval of two weeks before we recall the annotators to collect a second round of annotations in order to measure their consistency.
Footnote 5: [https://www.prolific.co/](https://www.prolific.co/)
We then recruit a second set of expert annotators6 to annotate the examples that demonstrate internal and/or external disagreement with rationalisations for these disagreements, using labels
based on Bhattacharya et al. (2019)'s Question Answer Differences dataset: _ambiguous_, _difficult_ (e.g. requires world knowledge), _subjective_.
We also assess stability for this (meta-)task by measuring intra-annotator agreement on a subset of this data.
**Evaluation.** We conduct a quantitative analysis of the correlation between items that exhibit low stability (_intra_-annotator disagreement) and the perceived reasons for the label differences. That is, we measure the Phi coefficient \(r_{\phi}\) between the following two dichotomous variables:
A. Label stability:
* _stable_: those on which annotators are always _consistent_
* _unstable_: those on which annotators are _inconsistent_
B. Rationalisation of instability-items labelled as:
* _subjective_
* _ambiguous/difficult_
using the formula:
\[r_{\phi}=\frac{(ad-bc)}{\sqrt{(a+b)(c+d)(a+c)(b+d)}}\]
where \(a,b,c,d\) are the number of annotation items in the corresponding cells in Table 3.
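A minimal helper for this computation, assuming the cell layout of Table 3, might look as follows (the counts in the usage line are invented for illustration):

```python
import math

def phi_coefficient(a: int, b: int, c: int, d: int) -> float:
    """Phi coefficient for the 2x2 table: a = stable & subjective,
    b = stable & ambiguous, c = unstable & subjective, d = unstable & ambiguous."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Toy counts in which stable items skew subjective and unstable items skew
# ambiguous, yielding a positive correlation (about 0.51).
print(phi_coefficient(a=30, b=10, c=8, d=25))
```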
We will also conduct a qualitative analysis of the data to investigate the types of items on which annotators are inconsistent across the four tasks.
## 4 Results
This version of the paper serves as a pre-registration placeholder (van Miltenburg et al., 2021): results will be published in the full version of this paper.
## 5 Conclusion
We have examined the role and use of intra-annotator agreement measures in NLP research. Calculation of such measures can act as an important quality control and could potentially provide insights into the reasons for disagreements between annotators. However, in a systematic review, we found that they are rarely reported in this field.
We have proposed a framework for the interpretation of inter- and intra-annotator agreement, the _reliability-stability agreement matrix_. We have also designed exploratory annotation experiments to investigate the relationships between these measures and perceptions of subjectivity and ambiguity in text items, which we will conduct in future research.
## 6 Ethical considerations
Because we recruit humans to work on data labelling, we will seek approval to undertake this study from our Institutional Review Board (IRB). Additionally, we will take the following measures:
**Compensation.** We will pay the annotators at least the living wage in our jurisdiction (higher than the minimum wage), as recommended as a minimum by Shmueli et al. (2021).
**Welfare.** As some of the data to be labelled includes offensive language, we will:

* avoid recruiting members of vulnerable groups by restricting annotators to those aged over 18, and by providing annotators with comprehensive warnings prior to consenting to participate
* allow annotators to leave the study at any time
* keep the annotation task short to avoid lengthy exposure to material which may exceed '_minimal risk_' (Shmueli et al., 2021).

| | **Task** | **Dataset** | **Labels** |
| --- | --- | --- | --- |
| **Social** | Offensive language detection | Leonardelli et al. (2021) | _Offensive / not offensive_ |
| **Social** | Sentiment analysis | Kenyon-Dean et al. (2018) | _Positive / negative / objective_ |
| **Linguistic** | Natural language inference / textual entailment | Williams et al. (2018) | _Entailment / contradiction / neutral_ |
| **Linguistic** | Anaphora resolution | Poesio et al. (2019) | _Referring / non-referring_ |

Table 2: Datasets used in the proposed annotation experiments.

| | subjective | ambiguous |
| --- | --- | --- |
| **stable** | a | b |
| **unstable** | c | d |

Table 3: Contingency table for the two variables: (A) label stability and (B) rationalisation of instability.
**Privacy.** All personal data of recruited annotators will be fully anonymised.
|
2310.17872 | User Association and Resource Allocation in Large Language Model Based
Mobile Edge Computing System over 6G Wireless Communications | In the rapidly evolving landscape of large language models (LLMs) and mobile
edge computing for 6G, the need for efficient service delivery to mobile users
with constrained computational resources has become paramount. Addressing this,
our paper delves into a collaborative framework for model training where user
data and model adapters are shared with servers to optimize performance. Within
this framework, users initially update the first several layers of the adapters
while freezing the other layers of them, leveraging their local datasets. Once
this step is complete, these partially trained parameters are transmitted to
servers. The servers, equipped with more robust computational capabilities,
then update the subsequent layers. After this training, they send the enhanced
parameters back to the users. This collaborative training approach ensures that
mobile users with limited computational capacities can still benefit from
advanced LLM services without being burdened by exhaustive computations.
Central to our methodology is the DASHF algorithm, which encapsulates the
Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR),
the Hungarian method, and a pioneering fractional programming technique from a
recent IEEE JSAC paper [1]. The crux of DASHF is its capability to reformulate
an optimization problem as Quadratically Constrained Quadratic Programming
(QCQP) via meticulously crafted transformations, making it solvable by SDR and
the Hungarian algorithm. Through extensive simulations, we demonstrate the
effectiveness of the DASHF algorithm, offering significant insights for the
advancement of collaborative LLM service deployments. | Liangxin Qian, Jun Zhao | 2023-10-27T03:20:49Z | http://arxiv.org/abs/2310.17872v3 | User Association and Resource Allocation in Large Language Model Based Mobile Edge Computing System over Wireless Communications
###### Abstract
In the rapidly evolving landscape of large language models (LLMs) and mobile edge computing, the need for efficient service delivery to mobile users with constrained computational resources has become paramount. Addressing this, our paper delves into a collaborative framework for model training where user data and model adapters are shared with servers to optimize performance. Within this framework, users initially update the first several layers of the adapters while freezing the other layers of them, leveraging their local datasets. Once this step is complete, these partially trained parameters are transmitted to servers. The servers, equipped with more robust computational capabilities, then update the subsequent layers. After this training, they send the enhanced parameters back to the users. This collaborative training approach ensures that mobile users with limited computational capacities can still benefit from advanced LLM services without being burdened by exhaustive computations. Central to our methodology is the DASHF algorithm, which encapsulates the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian method, and a pioneering fractional programming technique from a recent IEEE JSAC paper [1]. The crux of DASHF is its capability to reformulate an optimization problem as Quadratically Constrained Quadratic Programming (QCQP) via meticulously crafted transformations, making it solvable by SDR and the Hungarian algorithm. Through extensive simulations, we demonstrate the effectiveness of the DASHF algorithm, offering significant insights for the advancement of collaborative LLM service deployments.
Large language model, mobile edge computing, wireless communications, resource allocation.
## I Introduction
The proliferation of large language models (LLMs) marks a monumental leap in the realms of artificial intelligence and natural language processing. These models, with their deep structures and vast parameter sizes, offer capabilities that redefine the benchmarks of machine-human interactions [2]. However, the very nature of their size and intricacy means they cannot be effortlessly deployed, especially in constrained environments like mobile devices [3].
Mobile Edge Computing (MEC) environments, designed to bring computation closer to the data source, seem like a perfect fit for deploying LLMs. Still, they present their own set of challenges. Mobile devices are constrained by computational resources and battery life, making it strenuous to run these heavyweight LLMs efficiently [4]. Additionally, the unpredictability of wireless communication, with its fluctuating data rates and potential for high latency, complicates the seamless integration of LLMs [5].
To meet the growing demand for on-the-fly LLM services, there's a pressing need to address these issues. This involves optimizing the LLMs for constrained devices and innovating on the wireless communication front. A potential solution lies in a collaborative approach: a synergy where local computations on mobile devices are harmoniously complemented by offloading specific, intensive tasks to more capable servers. Such a paradigm can make the promise of LLMs in MEC environments a tangible reality.
### _Studied problem_
In this paper, we explore the LLM-driven MEC system and introduce the novel concept of the user service experience-cost ratio (ECR), represented as \(\frac{\text{experience score}}{\text{cost}}\). This metric eloquently captures the balance between user service experience scores and the crucial factors of delay and energy consumption within mobile computing environments. The user service experience score amalgamates a user's wireless and computational resources as perceived by the server. Cost consumption embodies the cumulative delay and energy expenditure of both users and servers. Given the computational challenges posed by LLMs, we hypothesize that users begin by training the initial layers of the adapters with their local data. Once this preliminary training is completed, users send these trained parameters to the servers. Servers, equipped with more computational resources, take over from this point, training the subsequent layers of the
adapters. Specifically, servers then assign both wireless and computational resources to each user. This includes bandwidth, user and server transmission power, and the GPU computing resources of users and servers. Upon completion, these refined parameters are then relayed back to the users by servers.
### _Main contributions_
**To the best of our knowledge, our paper is the first to explore user association and resource allocation in the LLM wireless communication scenario.** Our contributions include a novel joint optimization problem, the introduction of ECR, and a novel alternating optimization algorithm as follows:
\(\bullet\) Joint Optimization of User-Server Adapter Parameter Training Ratio and User Association: We propose a joint optimization problem that optimizes user adapter training offloading and user association for tailored LLM service to users.
\(\bullet\) Introduction of the User Service Experience-Cost Ratio: The concept of the user service experience-cost ratio (ECR) is introduced. ECR quantifies the balance between user service experience scores and the overall delay and energy consumption in the entire uplink and downlink communication. It provides a valuable metric for assessing the trade-off between user experience and resource efficiency.
\(\bullet\) Innovative Alternating Optimization Approach: We propose an innovative Alternating Optimization (AO) approach called DASHF, which represents the combination of the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian algorithm, and a novel fractional programming (FP) technique by [1] published in IEEE JSAC recently. The most challenging part of DASHF is to rewrite an optimization problem as Quadratically Constrained Quadratic Programming (QCQP) via carefully constructed transformations, in order to leverage SDR and the Hungarian algorithm to obtain a solution. Initially, it addresses the optimization of user connections and adapter parameter training ratios as a single QCQP problem. Subsequently, it delves into the optimization of communication-computation resource allocation (for bandwidth, transmit power of users and servers, and computing frequency of users and servers), providing an effective solution for the non-convex FP problem.
\(\bullet\) The simulation results substantiate the effectiveness of the proposed DASHF algorithm in achieving the joint optimization of user adapter parameter offloading, resource allocation, experience-cost ratio, and communication-computation resource allocation, demonstrating its practical applicability and benefits.
The rest of this paper is organized as follows. The system model and optimization problem formulation are presented in Section II. We propose a novel DASHF algorithm to solve the optimization problem in Section III. The numerical results are provided in Section IV. We conclude this paper in Section V.
## II LLM-Empowered MEC System and Optimization Problem Formulation
In this section, we first introduce the system scenario, then analyze the delay and energy consumption in the system model, then introduce the concept of user service experience-cost ratio (ECR), and formulate the optimization problem.
### _System scenario_
As presented in Fig. 1, the LLM-based mobile edge computing system contains multiple servers that distribute tailored LLM models or emulators to various mobile users. Given the computational constraints of some mobile users, a hybrid approach is adopted: users train the first few layers locally with their datasets while freezing the other layers. After training, users send these layer parameters of their adapters to the server. Once the server receives those layer parameters, it completes the training of the remaining layers while freezing the layers trained by the users. After training, the server sends the refined parameters back to the users. This collaborative mechanism ensures efficient and personalized LLM services, compensating for individual users' computational limitations.
### _System model_
We consider a system comprising \(N\) mobile users and \(M\) LLM servers. We use \(n\) and \(m\) as indices for a mobile user and an LLM server, respectively, where \(n\in\mathcal{N}:=\{1,2,\cdots,N\}\) and \(m\in\mathcal{M}:=\{1,2,\cdots,M\}\). We introduce indicator variables \(x_{n,m}\in\{0,1\}\) to characterize the connection between users and servers; specifically, \(x_{n,m}=1\) (resp., \(0\)) means that the \(n\)-th user is connected (resp., not connected) to the \(m\)-th server. Each user is connected to one and only one server, i.e., \(\sum_{m\in\mathcal{M}}x_{n,m}=1\). For example, if \(x_{n,m}=1\), the \(n\)-th user connects only to the \(m\)-th server and \(x_{n,m^{\prime}}=0\) for \(m^{\prime}\in\mathcal{M}\setminus\{m\}\).
Fig. 1: Optimizing the ECR of an LLM system with \(N\) users and \(M\) servers through joint optimization of user association, offloading, and resource allocation.
#### II-B1 Time consumption
We consider frequency-division multiple access (FDMA) so that communication among users and servers would not interfere. The transmission rate from user \(n\) to the chosen edge server \(m\) is \(r_{n,m}(b_{n,m},p_{n})=b_{n,m}\log_{2}(1+\frac{g_{n,m}p_{n}}{\sigma^{2}b_{n,m}})\), where \(\sigma^{2}\) is the noise power, \(b_{n,m}\) is the allocated bandwidth between user \(n\) and server \(m\), \(p_{n}\) is the transmit power of user \(n\), \(g_{n,m}\) is the channel gain and can be further expressed by \(g_{n,m}=h_{n,m}l_{n,m}\), where \(h_{n,m}\) is the large-scale slow-fading component capturing effects of path loss and shadowing and \(l_{n,m}\) is the small-scale Rayleigh fading.
In Parameter-Efficient Fine-Tuning (PEFT) strategies for large language models, the concept is to introduce "adapters" - smaller neural network components. These are placed within the model, enabling task-specific customization while largely keeping the pre-trained parameters unchanged. By doing this, there's no need for the exhaustive retraining of the complete model. If we consider inserting an adapter between two layers with dimensions \(d_{in}\) and \(d_{out}\), the design usually involves: 1. A down-projection from \(d_{in}\) to a reduced dimension \(d_{adapt}\). 2. An up-projection from \(d_{adapt}\) back to \(d_{out}\). Accounting for weights and biases in these transformations, the total size of the adapter's parameters, \(d\), can be captured as: \(d=d_{in}\times d_{adapt}+d_{adapt}\times d_{out}+d_{adapt}+d_{out}\). For any given user, represented by \(n\), the parameter size of the adapter, \(d_{n}\), can differ. This could be due to user-specific requirements or constraints. Hence, \(d_{n}\) links the inherent complexity of the LLM, the architecture of the adapter, and the unique data requirements of user \(n\).
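As a sanity check of this bookkeeping, the count can be computed directly; the layer sizes in the usage line below are illustrative choices, not values from the paper:

```python
def adapter_param_count(d_in: int, d_adapt: int, d_out: int) -> int:
    """Bottleneck adapter with biases: a down-projection (d_in x d_adapt
    weights + d_adapt biases) followed by an up-projection (d_adapt x d_out
    weights + d_out biases)."""
    return d_in * d_adapt + d_adapt * d_out + d_adapt + d_out

# e.g. a bottleneck of width 64 between two 4096-dimensional layers:
print(adapter_param_count(4096, 64, 4096))  # 528448 parameters
```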
**User training and sending adapter parameter phases.** Based on the above discussion, assume the total adapter parameter size at user \(n\) is \(d_{n}\). The adapter parameter size trained by user \(n\) is \(\varphi_{n}d_{n}\), \(\varphi_{n}\in[0,1]\): user \(n\) trains the first \(\varphi_{n}d_{n}\) layer parameters on its local dataset. The training time consumed is \(T_{n,m}^{(t_{1})}=\frac{t_{n}\varphi_{n}d_{n}e_{n}}{g_{n}F_{n}}\), where \(t_{n}\) is the number of FLOPs required, over all tokens, per adapter parameter of user \(n\); \(e_{n}\) is the number of local training epochs; \(g_{n}\) is the available GPU number of user \(n\); and \(F_{n}\) is the available GPU computation speed of user \(n\), measured in floating-point operations per second. After local training and the user-server connection algorithm (which can be as elaborate as choosing the nearest-neighbor server set, then choosing the server with the lowest transmission time, and finally completing all user-server connections), user \(n\) transmits the \(\varphi_{n}d_{n}\) adapter parameters to server \(m\). The transmission time from user \(n\) to server \(m\) is \(T_{n,m}^{(t_{2})}=\frac{x_{n,m}\varphi_{n}d_{n}\omega_{b}}{r_{n,m}}\), where \(\omega_{b}\) is the number of bits used to represent each parameter; for example, with "float32" floating-point numbers, \(\omega_{b}\) is 32.
**Server training and returning adapter parameter phases.** After receiving the partial adapter parameters \(\varphi_{n}d_{n}\) from user \(n\), server \(m\) trains the remaining part of the adapter with the shared user dataset, and the training delay is \(T_{n,m}^{(t_{3})}=\frac{t_{n}(1-\varphi_{n})x_{n,m}d_{n}e_{m}}{g_{m}F_{m}}\), where \(e_{m}\) is the number of server training epochs, \(g_{m}\) is the available GPU number of server \(m\), and \(F_{m}\) is the available GPU computation speed of server \(m\). Then, server \(m\) transmits the results back to user \(n\), and the delay is \(T_{n,m}^{(t_{4})}=\frac{x_{n,m}(1-\varphi_{n})d_{n}\omega_{b}}{r_{m,n}}\), where \(r_{m,n}=b_{n,m}\log_{2}(1+\frac{g_{n,m}p_{m}}{\sigma^{2}b_{n,m}})\). We assume the path loss and bandwidth of the downlink and uplink are the same. The time consumed on the server side is
\[T_{s,n,m}=T_{n,m}^{(t_{2})}+T_{n,m}^{(t_{3})}. \tag{1}\]
The time consumed on the user side is
\[T_{u,n,m}=T_{n,m}^{(t_{1})}+T_{n,m}^{(t_{4})}. \tag{2}\]
Therefore, the total delay will be
\[T_{total}=T_{s,n,m}+T_{u,n,m}=\frac{t_{n}\varphi_{n}d_{n}e_{n}}{g_{n}F_{n}}+\frac{x_{n,m}\varphi_{n}d_{n}\omega_{b}}{r_{n,m}}+\frac{t_{n}(1-\varphi_{n})x_{n,m}d_{n}e_{m}}{g_{m}F_{m}}+\frac{x_{n,m}(1-\varphi_{n})d_{n}\omega_{b}}{r_{m,n}}. \tag{3}\]
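The four delay terms of Eq. (3) can be transcribed directly into code; the sketch below assumes a single connected pair (\(x_{n,m}=1\)) and uses illustrative argument names rather than the paper's notation:

```python
import math

def rate(b, p, g, sigma2):
    """Shannon rate r = b * log2(1 + g * p / (sigma^2 * b)) in bit/s."""
    return b * math.log2(1.0 + g * p / (sigma2 * b))

def total_delay(phi, d, t, e_u, e_s, g_u, g_s, F_u, F_s,
                b, p_u, p_s, gain, sigma2, w_b=32):
    """Sum of the four phases of Eq. (3): local training, parameter upload,
    server training, and parameter download."""
    r_up, r_down = rate(b, p_u, gain, sigma2), rate(b, p_s, gain, sigma2)
    t1 = t * phi * d * e_u / (g_u * F_u)        # user trains the first layers
    t2 = phi * d * w_b / r_up                   # user -> server transfer
    t3 = t * (1 - phi) * d * e_s / (g_s * F_s)  # server trains the rest
    t4 = (1 - phi) * d * w_b / r_down           # server -> user transfer
    return t1 + t2 + t3 + t4
```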
#### II-B2 Energy consumption
Based on the delay discussion, we then compute the energy consumption in this system. The energy used by user \(n\) for training the adapter locally is \(E_{n,m}^{(t_{1})}=e_{n}\kappa_{n}\varphi_{n}d_{n}t_{n}F_{n}^{2}\) [6], where \(\kappa_{n}\) is the computational efficiency of user \(n\)'s GPUs, denoting the power growth rate corresponding to rising computing speeds. The energy used for transmitting data from the \(n\)-th user to server \(m\) is \(E_{n,m}^{(t_{2})}=p_{n}T_{n,m}^{(t_{2})}=p_{n}\frac{x_{n,m}\varphi_{n}d_{n}\omega_{b}}{r_{n,m}}\). The energy for the server training the remaining \((1-\varphi_{n})d_{n}\) adapter parameters is \(E_{n,m}^{(t_{3})}=e_{m}\kappa_{m}x_{n,m}(1-\varphi_{n})d_{n}t_{n}F_{m}^{2}\), where \(\kappa_{m}\) is the computational efficiency of server \(m\)'s GPUs. The energy caused by the \(m\)-th server transmitting the trained adapter parameters to user \(n\) is \(E_{n,m}^{(t_{4})}=p_{m}T_{n,m}^{(t_{4})}=p_{m}\frac{x_{n,m}(1-\varphi_{n})d_{n}\omega_{b}}{r_{m,n}}\). Thus, the total energy consumption can be formulated as follows:

\[E_{total}=\sum_{n\in\mathcal{N},m\in\mathcal{M}}(E_{u,n,m}+E_{s,n,m})=\sum_{n\in\mathcal{N},m\in\mathcal{M}}\big(E_{n,m}^{(t_{1})}+E_{n,m}^{(t_{2})}+E_{n,m}^{(t_{3})}+E_{n,m}^{(t_{4})}\big)=\sum_{n\in\mathcal{N},m\in\mathcal{M}}\Big(e_{n}\kappa_{n}\varphi_{n}d_{n}t_{n}F_{n}^{2}+p_{n}\frac{x_{n,m}\varphi_{n}d_{n}\omega_{b}}{r_{n,m}}+e_{m}\kappa_{m}x_{n,m}(1-\varphi_{n})d_{n}t_{n}F_{m}^{2}+p_{m}\frac{x_{n,m}(1-\varphi_{n})d_{n}\omega_{b}}{r_{m,n}}\Big). \tag{4}\]
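The corresponding energy terms of Eq. (4), again sketched for a single connected pair and with assumed argument names, are:

```python
def total_energy(phi, d, t, e_u, e_s, kappa_u, kappa_s,
                 F_u, F_s, p_u, p_s, r_up, r_down, w_b=32):
    """Sum of the four energy terms of Eq. (4) for one user-server pair."""
    e1 = e_u * kappa_u * phi * d * t * F_u ** 2        # local training
    e2 = p_u * phi * d * w_b / r_up                    # uplink transmission
    e3 = e_s * kappa_s * (1 - phi) * d * t * F_s ** 2  # server training
    e4 = p_s * (1 - phi) * d * w_b / r_down            # downlink transmission
    return e1 + e2 + e3 + e4
```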
#### II-B3 User service experience score
We denote the service experience score of user \(n\) that is connected to server \(m\) as:
\[v_{n,m}=\varpi_{1}\ln\Big[1+\varpi_{2}\Big(\frac{p_{m}}{p_{max}^{(m)}}+\frac{F_{n,m}}{F_{max}^{(m)}}+\frac{b_{n,m}}{b_{max}}\Big)\Big], \tag{5}\]

where \(\varpi_{1}\) determines the range of the function value and \(\varpi_{2}\) is used to normalize \(\big(\frac{p_{m}}{p_{max}^{(m)}}+\frac{F_{n,m}}{F_{max}^{(m)}}+\frac{b_{n,m}}{b_{max}}\big)\). This function is jointly concave in \(p_{m}\), \(F_{n,m}\), and \(b_{n,m}\) [7]. The score is effective and sensitive over the whole value range of its argument, so it can describe each user's objective experience of the communication and computing resources obtained from the server.
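Eq. (5) is a one-liner in code; the default values of \(\varpi_{1}\) and \(\varpi_{2}\) below are those used later in the simulation settings:

```python
import math

def experience_score(p_s, F_alloc, b_alloc, p_max, F_max, b_max,
                     w1=10000 / math.log(2), w2=1 / 3):
    """Eq. (5): concave score of the server resources granted to one user."""
    share = p_s / p_max + F_alloc / F_max + b_alloc / b_max
    return w1 * math.log(1.0 + w2 * share)
```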
### _Optimization Problem_
The decision variables are the user connection \(\mathbf{x}=(x_{n,m})|_{n\in\mathcal{N},m\in\mathcal{M}}\), the offloading ratios \(\mathbf{\varphi}=(\varphi_{n})|_{n\in\mathcal{N}}\), the bandwidth \(\mathbf{b}=(b_{n,m})|_{n\in\mathcal{N},m\in\mathcal{M}}\), the transmission powers \(\mathbf{p_{u}}=(p_{n})|_{n\in\mathcal{N}}\) and \(\mathbf{p_{s}}=(p_{m})|_{m\in\mathcal{M}}\), and the GPU computation speeds \(\mathbf{f_{u}}=(F_{n})|_{n\in\mathcal{N}}\) and \(\mathbf{f_{s}}=(F_{m})|_{m\in\mathcal{M}}\). Our goal is to maximize the user service experience-cost ratio (ECR):
\[\frac{\mathcal{V}}{\omega_{t}T_{total}+\omega_{e}E_{total}}=\frac{\sum_{n\in\mathcal{N},m\in\mathcal{M}}x_{n,m}v_{n,m}}{\omega_{t}T_{total}+\omega_{e}\sum_{n\in\mathcal{N},m\in\mathcal{M}}\big(E_{n,m}^{(t_{1})}+E_{n,m}^{(t_{2})}+E_{n,m}^{(t_{3})}+E_{n,m}^{(t_{4})}\big)}, \tag{6}\]
where \(\omega_{t}\) and \(\omega_{e}\) represent the weight values of delay and energy, respectively. Since the overall delay is determined by the slowest user-server pair, we linearize the implicit \(\max\) in \(T_{total}\) by adding an auxiliary variable \(T\) constrained to be greater than or equal to every \(T_{s,n,m}+T_{u,n,m}\). Besides, we utilize Dinkelbach's algorithm [8] by adding an additional variable \(y\), which is obtained from the ECR value in the previous iteration. Then, the fractional program defining the experience-cost ratio is transformed into the following problem:
\[\max_{\mathbf{x},\mathbf{\varphi},\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}},T}\ \sum_{n\in\mathcal{N},m\in\mathcal{M}}\big[x_{n,m}v_{n,m}-y\,\omega_{e}(E_{u,n,m}+E_{s,n,m})\big]-y\,\omega_{t}T \tag{7}\]

s.t.

\[x_{n,m}\in\{0,1\},\ \forall n,m \tag{8a}\]
\[\sum\nolimits_{m\in\mathcal{M}}x_{n,m}=1,\ \forall n \tag{8b}\]
\[\varphi_{n}\in[0,1],\ \forall n \tag{8c}\]
\[\sum\nolimits_{n\in\mathcal{N}}x_{n,m}b_{n,m}\leq b_{max},\ \forall m \tag{8d}\]
\[p_{n}\leq p_{max}^{(n)},\ \forall n \tag{8e}\]
\[\sum\nolimits_{n\in\mathcal{N}}x_{n,m}p_{n,m}\leq p_{max}^{(m)},\ \forall m \tag{8f}\]
\[F_{n}\leq F_{max}^{(n)},\ \forall n \tag{8g}\]
\[\sum\nolimits_{n\in\mathcal{N}}x_{n,m}F_{m}\leq F_{max}^{(m)},\ \forall m \tag{8h}\]
\[b_{n,m}\geq 0,\ p_{n}\geq 0,\ p_{n,m}\geq 0,\ F_{n}\geq 0,\ F_{m}\geq 0,\ \forall n,m \tag{8i}\]
\[T_{s,n,m}+T_{u,n,m}\leq T,\ \forall n,m \tag{8j}\]
Based on Dinkelbach's algorithm, we iteratively optimize \(y\) and problem (7). Specifically, at the \(i\)-th iteration, given \(y^{(i-1)}\), we first obtain \(\mathbf{x}^{(i)},\mathbf{\varphi}^{(i)},\mathbf{b}^{(i)},\mathbf{p_{u}^{(i)}},\mathbf{p_{s}^{(i)}},\mathbf{f_{u}^{(i)}},\mathbf{f_{s}^{(i)}},T^{(i)}\) by solving problem (7); then we calculate \(y^{(i)}\) with the given \(\mathbf{x}^{(i)},\mathbf{\varphi}^{(i)},\mathbf{b}^{(i)},\mathbf{p_{u}^{(i)}},\mathbf{p_{s}^{(i)}},\mathbf{f_{u}^{(i)}},\mathbf{f_{s}^{(i)}},T^{(i)}\). We repeat the above operations until the solutions converge. In the following section, we consider using the alternating optimization (AO) method to tackle the complex problem (7).
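The outer loop is the standard Dinkelbach iteration; a generic sketch (with `solve_parametric` assumed to wrap the two inner AO steps of Section III) is:

```python
def dinkelbach(solve_parametric, numerator, denominator,
               y0=0.0, tol=1e-6, max_iter=50):
    """Maximize numerator(z)/denominator(z), assuming denominator(z) > 0.
    solve_parametric(y) must return (an approximation of)
    argmax_z numerator(z) - y * denominator(z)."""
    y, z = y0, None
    for _ in range(max_iter):
        z = solve_parametric(y)
        if abs(numerator(z) - y * denominator(z)) < tol:
            break                      # F(y) = 0 at the optimal ratio
        y = numerator(z) / denominator(z)
    return z, y
```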
**Roadmap of the whole algorithm.** First, we decompose the outer fractional structure of the original ECR problem using Dinkelbach algorithm and sequentially optimize \(\mathbf{x},\mathbf{\varphi}\) and \(\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}}\) using the AO method. In the first step of AO, we fix \(\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}}\) and optimize \(\mathbf{x},\mathbf{\varphi},T\). We transform the optimization problem in the first step of AO into a Quadratically Constrained Quadratic Program (QCQP) and solve it using Semidefinite Relaxation (SDR) and the Hungarian algorithm. In the second step of AO, we fix \(\mathbf{x},\mathbf{\varphi}\) and optimize \(\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}},T\). During the optimization in the second step of AO, we propose a new fractional programming method to transform this non-convex problem into a convex one. Finally, we calculate \(y\) based on the obtained solutions and repeat the aforementioned process until \(y\) converges. In this algorithm, since we utilize Dinkelbach's algorithm, alternating optimization, semidefinite relaxation, Hungarian algorithm, and fractional programming, we refer to this algorithm as the **DASHF Algorithm**.
## III Our proposed DASHF Algorithm to solve the optimization problem
Assuming that \(y\) is given, we need to optimize \(\mathbf{x},\mathbf{\varphi},\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}},T\). In the outermost loop, we iteratively optimize \(y\); in the innermost loops, we iteratively optimize \(\mathbf{x},\mathbf{\varphi},\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}},T\). However, it is still difficult to optimize them in parallel. Thus, we consider operating two inner AO steps to solve it. At the \(i\)-th iteration,
1. Optimize \(\mathbf{x}\), \(\mathbf{\varphi}\), \(T\), given \(\mathbf{b}\), \(\mathbf{p_{u}}\), \(\mathbf{p_{s}}\), \(\mathbf{f_{u}}\), \(\mathbf{f_{s}}\). Assuming that \(\mathbf{b}^{(i-1)}\), \(\mathbf{p_{u}^{(i-1)}}\),\(\mathbf{p_{s}^{(i-1)}}\), \(\mathbf{f_{u}^{(i-1)}}\), \(\mathbf{f_{s}^{(i-1)}}\), \(y^{(i-1)}\) are given, we optimize \(\mathbf{x}^{(i)}\), \(\mathbf{\varphi}^{(i)}\), \(T^{(i)}\).
2. Optimize \(\mathbf{b}\), \(\mathbf{p_{u}}\), \(\mathbf{p_{s}}\), \(\mathbf{f_{u}}\), \(\mathbf{f_{s}}\), \(T\), given \(\mathbf{x}\), \(\mathbf{\varphi}\). Assuming that \(\mathbf{x}^{(i-1)}\), \(\mathbf{\varphi}^{(i-1)}\), \(y^{(i-1)}\) are given, we optimize \(\mathbf{b}^{(i)}\), \(\mathbf{p_{u}^{(i)}}\), \(\mathbf{p_{s}^{(i)}}\), \(\mathbf{f_{u}^{(i)}}\), \(\mathbf{f_{s}^{(i)}}\), \(T^{(i)}\).
### _AO Part 1: Optimizing \(\mathbf{x},\mathbf{\varphi},T\), given \(\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}}\)_
Given \(\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}}\), we optimize \(\mathbf{x},\mathbf{\varphi},T\). The optimization problem becomes:
\[\max_{\mathbf{x},\mathbf{\varphi},T}\ \sum_{n\in\mathcal{N},m\in\mathcal{M}}\big[x_{n,m}v_{n,m}-y\,\omega_{e}(E_{u,n,m}+E_{s,n,m})\big]-y\,\omega_{t}T \tag{9}\]

s.t. (8a), (8b), (8c), (8j).
We substitute the expression of \(E_{u,n,m}+E_{s,n,m}\) into problem (9) and convert the \(\max\) problem in problem (9) to a min problem (10).
\[\min_{\mathbf{x},\mathbf{\varphi},T}\ y\,\omega_{t}T+\sum_{n\in\mathcal{N},m\in\mathcal{M}}\big[y\,\omega_{e}(E_{u,n,m}+E_{s,n,m})-x_{n,m}v_{n,m}\big] \tag{10}\]

s.t. (8a), (8b), (8c), (8j), with \(E_{u,n,m}+E_{s,n,m}\) expanded as in (4).
\[\mathbf{P}_{7}=\left(\begin{array}{cc}\mathbf{0}_{(NM+N)\times(NM+N)}&\frac{1}{2}\mathbf{F}\mathbf{e}_{N+1,NM+N}\\ \frac{1}{2}(\mathbf{F}\mathbf{e}_{N+1,NM+N})^{\intercal}&-F_{max}^{(m)}\end{array}\right),\qquad\mathbf{P}_{8}=\left(\begin{array}{cc}\mathbf{P}^{(74)}&\frac{1}{2}(\mathbf{P}^{(75)}+\mathbf{P}^{(76)})\\ \frac{1}{2}(\mathbf{P}^{(75)}+\mathbf{P}^{(76)})^{\intercal}&0\end{array}\right).\]
The constraints \((13\text{a})\), \((13\text{b})\), \((13\text{c})\), \((13\text{d})\), \((13\text{e})\), \((13\text{f})\), \((13\text{g})\), \((13\text{h})\) in \(\mathcal{P}_{2}\) are transformed into the constraints \((14\text{a})\), \((14\text{b})\), \((14\text{c})\), \((14\text{h})\), \((14\text{d})\), \((14\text{e})\), \((14\text{f})\), \((14\text{g})\) in \(\mathcal{P}_{3}\), respectively. After dropping the constraint \(\text{rank}(\mathbf{S})=1\), the objective function and the constraints are all convex, so the resulting SDR problem can be solved in polynomial time by common convex solvers. Solving the SDR problem yields a continuous solution for \(\mathbf{Q}\); however, this is only a lower bound on the optimal value of the minimization problem and may not satisfy the constraint \(\text{rank}(\mathbf{S})=1\). Therefore, we use rounding techniques to recover a feasible solution. The last \(NM\) elements of \(\mathbf{Q}\) are the \(x_{n,m}\), for all \(n\in\mathcal{N},m\in\mathcal{M}\), meaning that user \(n\) is fractionally connected to server \(m\). Then, find all users \(n\) with \(\sum_{m\in\mathcal{M}}x_{n,m}>1\) and normalize their entries as \(x_{n,m}/\sum_{m\in\mathcal{M}}x_{n,m}\). Use the Hungarian algorithm [9] with augmented zero vectors to find the best matching with the maximum weight and denote this matching as a set \(\mathcal{X}_{matching}\). For nodes \(n\) and \(m\) in \(\mathcal{X}_{matching}\), let \(x_{n,m}=1\), else \(x_{n,m}=0\), and denote this integer association result as \(\mathbf{x_{\sharp}}\). Then substitute \(\mathbf{x_{\sharp}}\) into Problem (13) to obtain the optimal \(\mathbf{\varphi}\).
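A simplified sketch of this rounding step (using SciPy's assignment solver on a zero-padded square matrix in place of the paper's augmented Hungarian construction; users left unmatched by the padding would need a separate fallback) is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def round_association(x_frac):
    """Round a fractional N x M association from the SDR solution to binary.
    Rows are renormalized to sum to one, then a maximum-weight one-to-one
    matching is extracted on a square matrix padded with zeros."""
    x = x_frac / x_frac.sum(axis=1, keepdims=True)
    side = max(x.shape)
    padded = np.zeros((side, side))
    padded[:x.shape[0], :x.shape[1]] = x
    rows, cols = linear_sum_assignment(padded, maximize=True)
    x_bin = np.zeros_like(x)
    for r, c in zip(rows, cols):
        if r < x.shape[0] and c < x.shape[1]:
            x_bin[r, c] = 1.0
    return x_bin
```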
### _AO Part 2: Optimizing \(\mathbf{b,p_{u},p_{s},f_{u},f_{s},T}\), given \(\mathbf{x,\varphi}\)_
Given \(\mathbf{x}\) and \(\mathbf{\varphi}\), the remaining optimization problem is shown as (15),
\[\max_{\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}},T}\ \sum_{n\in\mathcal{N},m\in\mathcal{M}}x_{n,m}v_{n,m}-y\bigg\{\omega_{t}T+\omega_{e}\Big[\sum_{n\in\mathcal{N}}e_{n}\kappa_{n}\varphi_{n}d_{n}t_{n}F_{n}^{2}+\sum_{n\in\mathcal{N},m\in\mathcal{M}}\Big(p_{n}\frac{x_{n,m}\varphi_{n}d_{n}\omega_{b}}{r_{n,m}}+e_{m}\kappa_{m}x_{n,m}(1-\varphi_{n})d_{n}t_{n}F_{n,m}^{2}+p_{m}\frac{x_{n,m}(1-\varphi_{n})d_{n}\omega_{b}}{r_{m,n}}\Big)\Big]\bigg\} \tag{15}\]

s.t. (8d), (8e), (8f), (8g), (8h), (8i), (8j).
We first let

\[\mathcal{F}(\mathbf{b},\mathbf{p_{u}},\mathbf{p_{s}},\mathbf{f_{u}},\mathbf{f_{s}},T)=\sum_{n\in\mathcal{N},m\in\mathcal{M}}x_{n,m}v_{n,m}-y\,\omega_{t}T-y\,\omega_{e}\Big\{\sum_{n\in\mathcal{N}}e_{n}\kappa_{n}\varphi_{n}d_{n}t_{n}F_{n}^{2}+\sum_{n\in\mathcal{N},m\in\mathcal{M}}\Big(p_{n}\frac{x_{n,m}\varphi_{n}d_{n}\omega_{b}}{r_{n,m}}+e_{m}\kappa_{m}x_{n,m}(1-\varphi_{n})d_{n}t_{n}F_{n,m}^{2}+p_{m}\frac{x_{n,m}(1-\varphi_{n})d_{n}\omega_{b}}{r_{m,n}}\Big)\Big\},\]

i.e., the objective function of problem (15).
## IV Numerical Results

In the simulations, the maximum transmit power of mobile users \(p_{max}^{(n)}\) is 0.2 W. The maximum transmit power of servers \(p_{max}^{(m)}\) is 10 W. We assume the GPU resource utilization is \(0.55\) for users and servers. The maximum GPU computation speed of mobile users \(F_{max}^{(n)}\) is \(19.58\) TFLOPs with four GTX 1080 GPUs, and that of servers \(F_{max}^{(m)}\) is \(1372.8\) TFLOPs with eight A100 GPUs. The effective switched capacitance of mobile users and servers (\(\kappa_{n}\) and \(\kappa_{m}\)) is \(10^{-38}\). We refer to the adapter parameter sizes in [3] and [2]. The adapter parameter sizes of mobile users are randomly selected from \([1.2,14]\) M: pseudorandom values following a standard uniform distribution over the open interval \((0,1)\) are generated and then scaled to the range \([1.2,14]\) M to determine the specific adapter parameter size of each mobile user. The token data sizes of users are randomly selected from \([10,50]\) M bits. The weights of delay and energy consumption (\(\omega_{t}\) and \(\omega_{e}\)) are 0.5 and 0.005, respectively, to keep the two cost terms in the same order of magnitude. We set \(\varpi_{1}\) and \(\varpi_{2}\) as \(\frac{10000}{\ln 2}\) and \(\frac{1}{3}\), respectively, to keep the ECR sufficiently large. We consider the "float32" representation for floating-point numbers, so \(\omega_{b}\) is 32. We use the Mosek optimization tool in Matlab to conduct the simulations.
### _Convergence of proposed Algorithms_
In this section, we evaluate the convergence of the proposed algorithms. We consider two network topologies with (10 users, 2 servers) and (20 users, 3 servers) and keep other settings as default. The primal objective value is the solver's estimate of the objective when using Mosek to solve the optimization problem; when it converges, the algorithm has reached a stationary point. Fig. 2(a) plots the convergence of Algorithm AO-Part 1, which converges within 15 iterations. Fig. 2(b) plots the convergence of Algorithm AO-Part 2, which converges within 9 iterations. Fig. 2(c) plots the convergence of the DASHF Algorithm, which converges within 9 iterations. Thus, the proposed DASHF Algorithm is effective in finding a stationary point of problem (7).
### _Comparison with baselines_
In this section, we mainly consider four baselines to carry out the comparison experiments.
1. **Random user connection with average resource allocation (RUCAA)**. In this algorithm, one server is randomly selected for each user. The server equally allocates communication and computational resources among the users connected to it.
2. **Greedy user connection with average resource allocation (GUCAA)**. In this algorithm, each user selects the server currently serving the fewest users. The server distributes communication and computational resources evenly to the users connected to it.
3. **Average resource allocation with user connection optimization (AAUCO)**. In this algorithm, the communication and computation resources of each server are equally allocated to each user connected to it. Besides, **Algorithm III-A** is leveraged to operate user connection optimization.
4. **Greedy user connection with resource allocation optimization (GUCRO)**. In this algorithm, each user selects the server currently serving the fewest users. Besides, **Algorithm III-B** is leveraged to operate resource optimization.
5. **Proposed DASHF algorithm**. Joint optimization of user connection and resource allocation by utilizing the whole proposed DASHF algorithm.

Fig. 2: Convergence of the proposed Algorithms

Fig. 3: ECR comparisons under different cases
In Fig. 3(a), we compare the resource consumption and ECR of the proposed DASHF algorithm with the other baselines. The performances of RUCAA and GUCAA are worse since no optimization is utilized. GUCRO and AAUCO perform better than GUCAA, which confirms the effectiveness of the proposed Algorithms AO-Part 1 and AO-Part 2. Furthermore, the ECR of AAUCO is higher than that of GUCRO, which shows that user connection optimization is more effective than resource optimization in this case. The time consumption of the proposed DASHF algorithm is the lowest of the five methods, its energy consumption is also low (only higher than AAUCO), and its ECR is the highest. This results from the benefits of jointly optimizing user connection and resource allocation.
### _ECR versus the total bandwidth_
We consider the total bandwidth from 10 MHz to 100 MHz to test the ECR under different total bandwidths. Other parameters are fixed as default settings. Fig. 3(b) reveals distinct algorithmic performance trends, with the proposed DASHF method consistently outperforming GUCRO, AAUCO, RUCAA, and GUCAA in terms of the ECR. Notably, optimization algorithms (GUCRO and AAUCO) demonstrate superior or close performance compared to non-optimization algorithms (RUCAA and GUCAA). AAUCO employs user connection optimization strategies and performs better than RUCAA, GUCRO, and GUCAA.
### _Impact of cost weights on ECR_
Fig. 3(c) features various combinations of (\(\omega_{t}\), \(\omega_{e}\)) that signify the trade-off between delay-energy optimization and the experience score. As (\(\omega_{t}\), \(\omega_{e}\)) values shift, emphasizing either delay or energy, distinct performance outcomes are evident. For instance, when prioritizing energy efficiency (e.g., (\(\omega_{t}\), \(\omega_{e}\)) = (0.1, 0.009)), the system achieves lower energy consumption but at the expense of higher delay, resulting in a moderate ECR value. Conversely, balanced settings (e.g., (\(\omega_{t}\), \(\omega_{e}\)) = (0.5, 0.005)) lead to lower delay and slightly higher energy consumption, yielding a high ECR value. These findings underscore the sensitivity of the optimization process to parameter choices and emphasize the importance of tailoring (\(\omega_{t}\), \(\omega_{e}\)) values to meet specific application requirements while carefully considering the trade-offs between delay-energy cost and the experience score.
## V Conclusion
In this investigation into LLMs and MEC, we've delved into the intricacies of ensuring efficient LLM service delivery amidst the constraints of wireless communication systems. With their vast linguistic and computational capabilities, the promise of LLMs is now being actualized in real-world applications. Our contributions, as presented in this paper, lay the foundation for seamless collaborative training between mobile users and servers, addressing the challenges of limited computational and communication resources. We optimize resource utilization and ensure robust LLM performance by implementing a framework where initial layers are trained by users and subsequent layers by servers. In this context, the ECR measures collaboration efficiency and resource optimization. The DASHF algorithm, central to our methodology, solidifies these efforts. In conclusion, we anticipate a landscape where mobile edge computing enables ubiquitous and efficient access to advanced LLM services, harmonizing computational constraints with the ever-growing demands of modern applications.
|
2302.09685 | Intent Identification and Entity Extraction for Healthcare Queries in
Indic Languages | Scarcity of data and technological limitations for resource-poor languages in
developing countries like India poses a threat to the development of
sophisticated NLU systems for healthcare. To assess the current status of
various state-of-the-art language models in healthcare, this paper studies the
problem by initially proposing two different Healthcare datasets, Indian
Healthcare Query Intent-WebMD and 1mg (IHQID-WebMD and IHQID-1mg) and one real
world Indian hospital query data in English and multiple Indic languages
(Hindi, Bengali, Tamil, Telugu, Marathi and Gujarati) which are annotated with
the query intents as well as entities. Our aim is to detect query intents and
extract corresponding entities. We perform extensive experiments on a set of
models in various realistic settings and explore two scenarios based on the
access to English data only (less costly) and access to target language data
(more expensive). We analyze context specific practical relevancy through
empirical analysis. The results, expressed in terms of overall F1 score show
that our approach is practically useful to identify intents and entities. | Ankan Mullick, Ishani Mondal, Sourjyadip Ray, R Raghav, G Sai Chaitanya, Pawan Goyal | 2023-02-19T22:53:03Z | http://arxiv.org/abs/2302.09685v1 | # Intent Identification and Entity Extraction for Healthcare Queries in Indic Languages
###### Abstract
Scarcity of data and technological limitations for resource-poor languages in developing countries like India pose a threat to the development of sophisticated NLU systems for healthcare. To assess the current status of various state-of-the-art language models in healthcare, this paper studies the problem by initially proposing two different Healthcare datasets, Indian Healthcare Query Intent-WebMD and 1mg (IHQID-WebMD and IHQID-1mg), and one real-world Indian hospital query dataset in English and multiple Indic languages (Hindi, Bengali, Tamil, Telugu, Marathi and Gujarati), which are annotated with the query intents as well as entities. Our aim is to detect query intents and extract corresponding entities. We perform extensive experiments on a set of models in various realistic settings and explore two scenarios based on the access to English data only (less costly) and access to target language data (more expensive). We analyze context-specific practical relevancy through empirical analysis. The results, expressed in terms of overall F1 score, show that our approach is practically useful to identify intents and entities.
## 1 Introduction
Healthcare is a top priority for every country. People across the world ask millions of health-related queries, hoping to get a response from a domain expert Gebbia et al. (2020). These queries mostly deal with medical history of patients, possible drug interactions, disease related concerns, treatment protocols and so on. Conversational agents for healthcare play a pivotal role by facilitating useful information dissemination Li et al. (2020); Maniou and Veglis (2020). In order to understand these queries better, practical conversational systems for healthcare need to be developed. However, the primary obstacle in developing such technologies for low-resource languages is the lack of usable data Mehta et al. (2020); Daniel et al. (2019); Liu (2022).
India is a country with a diverse language-speaking population suffering from abject poverty and low economic status Mohanty (2010); Pande and Yazbeck (2003). This linguistic diversity and complex socio-economic situation in India certainly pose significant challenges in developing automatic healthcare systems, and there is a lack of linguistic resources specific to the medical domain. For example, situations such as the patient and the doctor speaking in different languages are not uncommon in rural India. These individuals are unable to avail the existing systems and facilities, which exist mainly in the English language. Recent efforts in developing automatic translation systems, even from extremely low-resource languages such as 'Mundari' and 'Gondi' Joshi et al. (2019), should ideally improve this situation, but there is no extensive study on that front.
In order to bridge this language barrier, massively Multilingual Transformer-based Language Models (MMLMs) Devlin et al. (2019); Lample and Conneau (2019) have made impressive advancements on a wide range of downstream applications. But the real-world implications of such advancements in the Indian healthcare system remain largely unexplored. In this paper, we aim to explore the scarcity of data and study the extent to which the existing language technologies can be leveraged to develop practically useful healthcare systems for the low-resource languages in developing countries.
With an aim to answer our research question, we create two different multilingual healthcare datasets, namely, IHQID-WebMD and IHQID-1mg. These datasets are created by crawling frequently asked questions from two healthcare websites, _WebMD_ and _1mg_. These datasets comprise frequently asked questions about drugs, diseases and treatment methods in seven different languages, namely English, Hindi, Bengali, Tamil, Telugu, Gujarati and Marathi. The queries are manually tagged with intent labels and entity tags by domain experts and translated by native speakers of the corresponding languages. We also collect real-world Indian hospital queries (annotated) in seven languages to check the empirical effectiveness of our approach. Fig. 1 shows an example of a health query belonging to the 'treatment' intent class, manually translated into three different languages. We then evaluate the performance of state-of-the-art language models (LMs), in both English and multilingual setups, on our datasets to answer the questions regarding their deployability and practicality. Various experimental configurations (Section 4) have been tried on these datasets, where we try to figure out the best ways of using these technologies through extensive experimentation in two real-world scenarios. First, we assume access to only English training queries (less costly), while the test queries are multilingual in nature. We observe that the translate-test setup on RoBERTa seems to be a reasonable choice of technology. Second, we assume access to manually written multilingual training and test queries in the target languages, which is indeed quite expensive in terms of data collection effort. However, back-translation of both train and test queries proves to be a reasonable choice if we have the budget for collecting data in target languages.
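As a sketch of the first (translate-test) scenario, the evaluation loop amounts to routing each target-language query through machine translation before an English-only classifier; here `translate` and `intent_model` are hypothetical stand-ins for an MT system and a fine-tuned classifier such as RoBERTa (neither name comes from the paper):

```python
def translate_test_predict(queries, src_lang, translate, intent_model):
    """Predict intents for target-language queries with an English-only model
    by machine-translating each query to English first."""
    english = [translate(q, src=src_lang, tgt="en") for q in queries]
    return [intent_model.predict(q) for q in english]
```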
In sum, our contributions are fourfold:
* We propose two intent and entity labelled Indian healthcare datasets (annotated by domain-experts) comprising of frequently asked questions from users.
* Even though large language models have proved their effectiveness in almost every NLU task, we want to determine how effective they are at intent detection and slot filling in practical, domain-specific healthcare scenarios in the Indian context. We intend to analyze how research and resource-building investments should be prioritized for economically disadvantaged countries with a largely multilingual population. This makes us aware of the best techniques for deploying language models in various scenarios, such as availability of English training data vs. multilingual training data. Keeping this in mind, all our experiments are carried out using both monolingual and multilingual setups of these models. Through our experiments, we point out the best possible language models and techniques to develop practically useful NLU solutions (a pipeline-based approach for intent detection and corresponding entity extraction from the queries).
* Through extensive experiments on the datasets, we recommend that the community use back-translation of test queries to English as a reasonable choice when only English training data is available. The same strategy can be applied to both train and test queries if we have the budget to collect data in the target languages.
* Our findings imply that the back-translation of queries using an intermediate bridge language proves to be a useful strategy in the intent recognition experiments.
Figure 1: Example of a query of ‘treatment’ intent category for different languages along with associated entities.
## 2 Related Work
We pivot our study of related works into the following buckets - generalised intent and entity detection, entity and intent detection in healthcare, health care in Indian languages and multi-lingual healthcare datasets.
**A) Generalised Intent and Entity Detection Approaches:**Sun et al. (2016); Wang et al. (2020); Mu et al. (2017) focus on detecting novel intents in the form of outlier detection. Mullick et al. (2022) explore intent classification on legal data. People also work on different detection approaches - few shot Xia et al. (2021), zero shot Xia et al. (2018), clustering frameworks Mullick et al. (2022). Yani et al. (2022); Sufi and Alsulami (2021); Zhao et al. (2021) all explore entity detection tasks. Vanzo et al. (2019) develop a hierarchical multi-task architecture for semantic parsing sentences for cross-domain spoken dialogue systems. Most of these approaches are very domain and language specific and thus not very useful for the healthcare domain in Indian languages.
**B) Entity and Intents in Health Care:** Zhou et al. (2021) solve different tasks in smart healthcare. Bao et al. (2020) build a chat-bot framework using user intents. Bai et al. (2022) aim at incremental medical intent detection. Razzaq et al. (2017); Amato et al. (2017) develop an e-Health application using intent-context relationships. Zhang et al. (2017) explore medical query intents. Most of these works target English and Chinese, and there is no proper architecture for Indian multilingual scenarios for intent and entity extraction.
**C) Health Care in Indian Languages:** Some researchers focus on Indian languages - the Hindi medical conversation system MedBot Bharti et al. (2020), detecting Hindi and English COVID-19 posts Patwa et al. (2021), Tamil health information Santosh (2019), a Bengali health-bot Badlani et al. (2021), and Telugu COVID-19 health information Vishwas et al. (2017). But none of these works target Indian health query datasets and model analysis. Mondal et al. (2022) highlight the gaps when using existing state-of-the-art commercial frameworks for NLU tasks in a few Asian and African low-resource languages, especially when the goal is to develop conversational agents for healthcare during COVID. In our work, we strengthen the claims made in their paper for generic healthcare-specific datasets in the Indian context, and highlight the potential drawbacks of the existing LMs.
**D) Multilingual Health Care Dataset:** Liu et al. (2020) develop MedDG (Medical Dialogue dataset of common Gastrointestinal diseases) in Chinese. Zeng et al. (2020) propose MedDialog, a Chinese and English medical dataset, and explore medical dialogue generation tasks. Zhang et al. (2021) build a medical intent evaluation dataset in Chinese and Kim et al. (2022) construct a Korean health intent dataset. Our work differs from the existing research in three ways: 1) we focus on developing a gold-standard healthcare NLU dataset in Indian languages, 2) we study cost- and availability-oriented usage of models for intent detection and entity extraction, and 3) we provide an end-to-end evaluation of state-of-the-art solutions for healthcare in both English and Indic languages, which leads to interesting implications and generates important future recommendations for the language community.
## 3 Dataset and Pre-Processing
### Necessity of a new dataset
India is a country with a diverse language-speaking population. There is an increasing population of users consuming Indian-language content. This linguistic diversity certainly poses significant challenges in healthcare setups, particularly when healthcare providers and patients speak different languages (also termed _language discordance_) Shamsi et al. (2020). Therefore, individuals with limited English proficiency are left behind and suffer from worse health outcomes than those who speak English with high proficiency. The growing need for the deployment of multilingual conversational agents in hospital and healthcare facilities in India, especially highlighted by the plight of healthcare workers during the COVID-19 pandemic, warrants a multilingual healthcare query intent dataset in Indian languages Daniel et al. (2019). Therefore, we create two novel **I**ndian **H**ealthcare **Q**uery **I**ntent **D**atasets (IHQID-WebMD and IHQID-1mg) and one real-world healthcare dataset from hospitals.
### Source of the dataset
Due to the unavailability of open-source multilingual NLU datasets in the healthcare setting, we sample frequently asked medical queries (FAQs) in English from two popular data sources:
**WebMD1:** It is an American website containing a large repository of healthcare data. The queries,
taken from the WebMD health forum, are asked by ordinary users regarding a wide range of problems. **1mg2:** 1mg is an Indian website, which is also a rich source of healthcare data, especially in the Indian context. The English queries are scraped from the FAQ sections of drug and disease pages.
Footnote 2: [https://www.1mg.com/](https://www.1mg.com/)
Although both the above datasets are curated from online forums where users post healthcare concerns, in order to evaluate our approach in a practical Indian context, we also develop a real-world healthcare query dataset in the Indian scenario. We collect real-world healthcare queries (asked by patients) from doctors in local hospitals. All queries are anonymized, without the identity or any details of the patients. For each language, we fetch 100 queries (some of which overlap) belonging to different categories.
### Dataset Sampling
The FAQs sampled from these data sources are unlabeled. Hence, for the purpose of supervised classification, it is necessary to categorize each query into a specific intent and a list of corresponding entities. We broadly categorize queries into four different intent types, namely, _'Disease'_, _'Drug'_, _'Treatment Plan'_ and _'Other'_. Each query is assigned one of the four intent labels. Two English-speaking medical graduate doctors annotate the intents of the English queries to prepare the datasets. Annotators also mark entities belonging to the three medical entity categories present in the datasets: _'Disease'_, _'Drug'_ and _'Treatment'_. The queries with their intent labels are retained where both annotators agree, and discarded otherwise. On average, this filtering led to a rejection of around 10% of the samples across all our setups and languages. The overall inter-annotator agreement (Cohen's \(\kappa\)) is 0.89.
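For reference, the reported agreement uses Cohen's \(\kappa\), the standard chance-corrected agreement statistic:

\[\kappa=\frac{p_{o}-p_{e}}{1-p_{e}},\]

where \(p_{o}\) is the observed fraction of queries on which the two annotators assign the same label and \(p_{e}\) is the agreement expected by chance from the annotators' marginal label distributions; the obtained value of 0.89 indicates near-perfect agreement.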
### Parallel Data Generation
In order to generate parallel corpora of these frequently asked questions, we choose six Indian languages in addition to English.
**Language Selection:** The language set includes English: USA version (EN-US) termed ('En'), Hindi ('Hi'), Bengali ('Bn'), Tamil ('Ta'), Telugu ('Te'), Gujarati ('Gu') and Marathi ('Mr'). The choice of languages was driven by (a) the number of native speakers of those languages in India, (b) the number of annotators available for creating the dataset, and (c) typological diversity amongst the languages: we choose languages from various language families. For instance, Bengali, Hindi, Gujarati and Marathi belong to the Indo-Aryan family, whereas Tamil and Telugu belong to the Dravidian group.
**Annotation and Quality Control:** Since gold-standard annotated queries are not available online in Indian languages, the English queries of 1mg and WebMD have to be manually translated. **After discussions with the doctors and different patients, we create the annotation guidelines.** Annotators are told to formulate the queries in their own regional languages with the help of the Bing Translator API3. Annotators are also asked to annotate the entities and their types (in their respective native languages) for each query, keeping in mind how common people of the corresponding native language generally phrase healthcare queries to doctors.
Footnote 3: [https://www.microsoft.com/en-us/translator/business/translator-api/](https://www.microsoft.com/en-us/translator/business/translator-api/)
Three annotators are selected per language after several discussions, subject to fulfilling several criteria: annotators should have native proficiency in their language of annotation and domain knowledge expertise, along with good working proficiency in English. Initial labeling is done by two annotators, and any annotation discrepancy is checked and resolved by the third annotator after discussing with the others. While formulating the
\begin{table}
\begin{tabular}{l|cc|cc|ccccccc} \hline \hline
 & \multicolumn{2}{c|}{**Intents**} & \multicolumn{2}{c|}{**Entities**} & \multicolumn{7}{c}{**Real World Hospital Query Data (\#Intent/\#Entity)**} \\ \hline
**Class** & \#**WebMD** & \#**1mg** & \#**WebMD** & \#**1mg** & \#**En** & \#**Hi** & \#**Bn** & \#**Ta** & \#**Te** & \#**Mr** & \#**Gu** \\ \hline
Disease & 283 (207+76) & 111 (87+24) & 629 (464+165) & 240 (185+55) & 28/37 & 31/35 & 29/37 & 27/35 & 31/39 & 28/35 & 29/35 \\
Drug & 234 (181+53) & 198 (144+54) & 400 (302+98) & 224 (166+58) & 34/44 & 33/43 & 31/37 & 30/35 & 32/38 & 34/40 & 32/37 \\
Treatment & 166 (127+39) & 67 (46+21) & 218 (165+53) & 64 (44+20) & 21/24 & 20/26 & 21/25 & 23/29 & 19/24 & 17/23 & 20/26 \\
Other & 278 (205+73) & 41 (28+13) & - & - & 17/- & 16/- & 19/- & 20/- & 18/- & 21/- & 19/- \\ \hline
**Total** & 961 (720+241) & 417 (305+112) & 1247 (931+316) & 528 (395+133) & 100/105 & 100/104 & 100/99 & 100/99 & 100/101 & 100/98 & 100/98 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Distributions of different types of intent and entity labels in the WebMD and 1mg datasets (IHQID) and the Real World Hospital Query Data. (- + -) represents the (train + test) division. \# denotes the count.
query manually on their own, the annotators also annotate the entities and their types (in their respective native languages) for each query. The above quality control measures ensure that the translated data is of high quality, resembling real-world data in the target language. In the case of a word such as a proper noun like '_Paracetamol_' (drug), which does not have a translation in the target vocabulary, the word is simply transliterated in the target language.
In order to prepare the real-world hospital query dataset in Indian healthcare contexts, we collect healthcare queries from the doctors of local hospitals. It also covers six different Indic languages along with English, with a hundred queries for each language. These queries have similar intent classes and entity categories, which are labelled by the doctors. During the collection of queries, we fix the minimum number of samples for each intent class across all languages.
In order to maintain the quality of the Indian language annotations, the annotators are directed to use the native language words and grammar, keeping the original interpretation of the query. All query logs, annotations and changes are recorded in order to conduct future verification and analysis. On completion of the translation process, the annotators are asked to exchange their work and check the quality of translation for fluency and semantic stability. Inaccuracies are noted, and the respective queries are rectified in the dataset.
In the end, we have three multilingual intent- and entity-labelled datasets - **IHQID-WebMD**, **IHQID-1mg** and a real-world hospital query test dataset - in seven different languages; their distributions are provided in Table 1. The first two datasets (IHQID-WebMD and IHQID-1mg) help to build the models, and the real-world hospital dataset is used to evaluate our approaches in real-world contexts. Table 1 also shows the statistical details across different intent classes ('Disease', 'Drug', 'Treatment' and 'Other') and corresponding entities (of the 'Disease', 'Drug' and 'Treatment' categories), along with the total counts and train-test divisions. It also shows the distribution of the hospital-collected practical healthcare queries across different languages (right part of the table).
## 4 Strategies of Evaluation
In this section, we illustrate the strategies of evaluating the state-of-the-art LMs on our dataset. Our evaluation of these models for Healthcare is scoped down to two fundamental NLU tasks:
a) Intent Recognition (Section 5.1)
b) Entity Extraction (Section 5.2)
**Evaluation Setup Description:** Our evaluation of the models is conducted keeping in mind the availability of human-translated monolingual and multilingual training data in two possible real-life scenarios: 1) **Scenario A:** we assume access to only English training data (less costly); and 2) **Scenario B:** we assume access to manually written training queries in all the target languages (very expensive). During inference/testing, we expect all queries to be in the corresponding target languages.
**Scenario A:**
**Setup 1) Backtranslated Test (S1): [Translate-Test]** Here we develop our system by training the models on the English queries, and evaluate the intent detection and entity extraction systems in different languages by automatically backtranslating the test queries into English (similar to Gupta et al. (2021)).

**Setup 2) Zero-Shot Cross-Lingual Test (S2):** Cross-lingual transfer learning is a useful methodology for tasks involving scarce data Zhou et al. (2016); Karamanolakis et al. (2020). In this setup, the models make use of zero-shot cross-lingual capabilities from training on the English data (scraped from WebMD and 1mg) and are used for inference on test queries in Indic languages.

**Setup 3) Bridge Language Backtranslation (S3):** Here a relatively low-resource language is first translated to an intermediate language and then finally to English. The motivation behind this setup lies in the fact that, even though these Indic languages have different scripts, there are linguistic and morphological similarities among them which may improve the translation to English if they are used as intermediate languages. In this paper, we consider 'Hindi' as the bridge language. This notion of "bridge" languages has been explored previously in the context of Machine Translation Paul et al. (2013) and zero/few-shot transfer in MMLMs Lauscher et al. (2020).
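As an illustration, the three Scenario A pipelines can be summarized by the minimal Python sketch below. The `translate` helper is a hypothetical stand-in for the MT system (the paper uses the Bing Translator API), and `english_classifier` / `multilingual_classifier` denote fine-tuned models of the kind described in Section 5; none of these names come from the paper's released code.

```python
def translate(text, src, tgt):
    """Hypothetical wrapper around an MT service (e.g., the Bing Translator
    API used in this paper): returns `text` translated from `src` to `tgt`."""
    raise NotImplementedError

def setup1_backtranslated_test(query, lang, english_classifier):
    # S1: train on English; translate the target-language test query
    # to English before classifying it.
    return english_classifier(translate(query, src=lang, tgt="en"))

def setup2_zero_shot(query, multilingual_classifier):
    # S2: a multilingual LM fine-tuned on English queries is applied
    # directly to the target-language query (zero-shot transfer).
    return multilingual_classifier(query)

def setup3_bridge_backtranslation(query, lang, english_classifier, bridge="hi"):
    # S3: translate via an intermediate bridge language (Hindi here),
    # then to English, before classifying.
    intermediate = translate(query, src=lang, tgt=bridge)
    return english_classifier(translate(intermediate, src=bridge, tgt="en"))
```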
**Scenario B:**
**Setup 4) Train and Test on Indic Data (S4):** In this setup, we use the training dataset in Indic languages to train our NLU models in different target languages. Here, we use the IHQID-WebMD and IHQID-1mg Indic data (non-English) to evaluate the NLU performance of the developed models. Jennifer Bot (Li et al., 2020) uses a similar setup to extend their English bot to Spanish.

**Setup 5) Full Backtranslation (S5):** In this setup, both train and test data are backtranslated to English. This is useful for countries with poor technical infrastructure for low-resource languages, since an automated approach can translate low-resource medical queries into a resource-rich language for testing.
In all backtranslation experiments, we use the Bing Translation API4.
Footnote 4: [https://www.microsoft.com/en-us/translator/business/translator-api/](https://www.microsoft.com/en-us/translator/business/translator-api/)
## 5 Experiments and Results
**Experimental Setup:** Our experiments are conducted on two Tesla P100 GPUs with 16 GB RAM, 6 Gbps clock cycle and GDDR5 memory. All methods of entity extraction and intent detection took less than 30 GPU-minutes for training. We perform a hyperparameter search, report the results of the settings which achieve the best performance, and then fix the same settings for all the models. The batch size is kept at 16, the number of epochs is 10, the optimization algorithm is AdamW, and the learning rate is 1e-5 with cross-entropy as the loss function.
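For concreteness, the fine-tuning recipe above (batch size 16, 10 epochs, AdamW, learning rate 1e-5, cross-entropy loss) corresponds to a standard Hugging Face training loop such as the sketch below; the toy `train_texts`/`train_labels` lists are placeholders for the IHQID training splits, not actual dataset content.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder data: query strings with integer intent labels in {0, 1, 2, 3}.
train_texts = ["What is the dosage of paracetamol for fever?"]
train_labels = [1]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for epoch in range(10):                        # 10 epochs
    for i in range(0, len(train_texts), 16):   # batch size 16
        batch = tokenizer(train_texts[i:i + 16], padding=True,
                          truncation=True, return_tensors="pt")
        labels = torch.tensor(train_labels[i:i + 16])
        loss = model(**batch, labels=labels).loss  # cross-entropy loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```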
### Intent Detection
**Task Description:** It can be defined as a multi-class classification task of correctly assigning a medical query with an intent label from a fixed set of intents (_drug, disease, treatment_ and _other_).
**Classification Models:** Since in Setups 1, 3 and 5, we take both the training and test set in English, we use state-of-the-art LMs pre-trained on English corpora (as shown in (i)) for our classification experiments. Whereas in Setup 2 and 4, we make use of multilingual LMs (as shown in (ii)) which have been widely used for various benchmark tasks in Indian languages. Following are the baselines:
**(i) Pre-trained English Models:** For setups 1, 3 and 5, we fine-tune the last layer of RoBERTa (Liu et al., 2019) and Bio_ClinicalBERT (Alsentzer et al., 2019) models on the English queries for intent detection by adding a classification layer that takes [CLS] token as input. The latter is a state-of-the-art domain-specific transformer based language model pre-trained on MIMIC III notes5, which is a collection of electronic health records and discharge notes.
Footnote 5: [https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT)
**(ii) Pre-trained Multilingual Models:** Two pre-trained multilingual LMs are used, mBERT _(bert-base-multilingual-uncased)_ (Pires et al., 2019) and XLM-RoBERTa _(xlm-roberta-base)_ (Conneau et al., 2020), both of which support all the Indic languages in our datasets along with English. In Setup 2, we perform zero-shot classification using these models: the zero-shot setting involves fine-tuning the model using English data and testing on Indic languages. In Setup 4, we first train these models using the entire train sets in the target languages, separately for WebMD and 1mg, and check the performance on the test sets.
### Entity Recognition
**Task Description:** This task is analogous to performing a Named Entity Recognition (NER) for three categories, namely, _drugs, diseases and treatments_ on the query texts. We follow the standard BIO-tagging system while annotating the entities word-by-word. The train and test files for each configuration and language respectively are constructed from our WebMD and 1mg datasets.
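As a hypothetical illustration of the BIO scheme (this example query is not from the datasets), a query mentioning a treatment and a two-word disease would be tagged token-by-token as:

```
Query : Does  chemotherapy  help  treat  blood      cancer     ?
Tags  : O     B-Treatment   O     O      B-Disease  I-Disease  O
```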
**Extraction Frameworks:** For entity recognition, we follow the same strategies of evaluating the predictive performance of the LMs as described in Section 4. The same models (as described in section 5.1) are also used for entity recognition experiments.
### Evaluation
For all our experiments on intent detection and entity recognition, we calculate the precision and recall, and report the F1-score.
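Concretely, per-class precision \(P_{c}\) and recall \(R_{c}\) are combined into a per-class F1-score, and the reported macro score averages these over the set of classes \(C\) (intents or entity types):

\[\mathrm{F1}_{c}=\frac{2\,P_{c}R_{c}}{P_{c}+R_{c}},\qquad\text{Macro-F1}=\frac{1}{|C|}\sum_{c\in C}\mathrm{F1}_{c}.\]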
### Results and Analysis
**Intent Detection:** Table 2 shows the results of intent detection of five experimental strategies on the IHQID-WebMD and IHQID-1mg datasets in terms of Macro F1-score (in percentage).
**Finding 1:** We observe that in general, Backtranslated Test (Setup 1) performs better than Zero-Shot Cross-Lingual Test (Setup 2). Moreover, it is interesting to notice that even though the performance of these models for most of the target languages in Setup 1 are comparable with that of English in
WebMD (an average drop of 3% across all languages compared to English), there is a significant drop (6% on average) in the F1 scores for the Setup 1 results on the 1mg dataset. This holds true for both the RoBERTa and bcBERT experiments. This indicates that the state-of-the-art English models, which perform decently after backtranslation of the medical queries into English, whether pre-trained on generic or medical-domain corpora, lag behind when the vocabularies of the medical entities are in the Indian context. This definitely calls for immediate attention to developing LMs pre-trained on India-specific medical datasets.
**Finding 2:** Another interesting observation is that the use of Bridge Language Backtranslation (Setup 3) in Table 2 helps to boost the performance for most of the languages on the 1mg dataset in comparison to Setup 1. The observation does not hold true for intent recognition on the WebMD dataset. This might be attributed to the fact that using a bridge Indian language as an intermediate helps preserve the domain-specific sense of the queries, instead of directly converting the queries from the target language to English. This seems like a reasonable alternative to develop useful intent recognition models for healthcare in Indian languages.
**Finding 3:** For intent detection, few-shot experiments (Setup 4) outperform zero-shot cross-lingual transfer (Setup 2) for both the mBERT and XLM-R models. This observation holds true for both the WebMD and 1mg datasets. However, Setup 4 is much more cost-intensive than Setup 2.
**Finding 4:** We report the average (Avg) F1-score across all languages. The best performing model is RoBERTa (Setup 1 for English and Setup 5 for non-English) for both WebMD (74.94%) and 1mg (70.33%). RoBERTa is used for further evaluations.
**Entity Extraction:** Table 3 displays the results of entity recognition task under five different strategies on IHQID-WebMD and IHQID-1mg datasets.
**Finding 1:** In the backtranslation test performed in Setup 1, we observe that for the WebMD dataset the drop in performance is far less significant (English performance is 0.33% above the average F1-score for RoBERTa and 3.58% above it for bcBERT) than the drop observed for 1mg (English performance is 9.66% above the average F1-score for RoBERTa and 10.49% above it for bcBERT). This implies that the loss of information is quite high for entities in the Indian context during backtranslation.
**Finding 2:** Unlike our findings on Setup 3 in intent recognition, we observe that backtranslation using a bridge language seems to induce more loss of
Table 2: Macro-F1 scores for intent classification on the WebMD (WMD) and 1mg datasets for the five setups (three setups for Train on English (Scenario A) and two setups for Train on Indic Data (Scenario B)). bcBERT indicates BioClinicalBERT, mBERT indicates Multilingual BERT. Underline denotes the best across the five settings.
Table 3: Macro-F1 scores for entity extraction on the WebMD and 1mg datasets for the five setups.
information on the entities compared to Setup 1. This observation holds true for both models across the two datasets.
**Finding 3:** Similar to intent recognition, we observe that completely backtranslating both training and test data to English performs the best among S1, S3 and S5. This holds true for both the datasets and both the models. However, this operation is indeed expensive in terms of data curation cost, since it requires original data in the target languages for both training and testing.
**Finding 4:** The abysmal performances of the multilingual models as shown in Table 3, for both S2 and S4 indicate that these approaches are not so useful in our case.
**Finding 5:** We report the average (Avg) F1-score across all languages. BioClinicalBERT performs the best (Setup 1 for English and Setup 5 for non-English (Avg)) for both WebMD (63.14%) and 1mg (68.69%). It is used for further evaluations.
### Ablation Study
**Experiments with Varying Training Size:** We experiment with varying training sizes on both the intent detection and entity extraction tasks using the best performing models, by taking 10%, 30%, 50%, 70% and 100% of the training set. We then show the F1-scores (Y-axis) for all the languages with different training sizes (X-axis) in Fig. 2. Figs. 2(a) and 2(b) show that the performance of the intent detection models does not vary much with increasing training data. However, Figs. 2(c) and 2(d) clearly show that the entity extraction F1-scores increase significantly with the amount of training data. Thus, we can conclude that the intent detection model does not require a large amount of data to generalise, as opposed to the requirements of the entity extraction model.
**Category-wise intent detection and entity extraction for the best model:** We evaluate the F1-scores for different intent classes for the RoBERTa model (Setup 1 for English and Setup 5 for non-English) trained on WebMD and 1mg (see Section 4 for setup descriptions). Similarly, with the help of BioClinicalBERT (Setup 1 for English and Setup 5 for non-English), we find the individual entity-class-wise F1-scores. The results in Table 4 show that the model is able to detect the 'disease', 'drug' and 'treatment' intent classes with high F1-scores, but the performance on the 'Other' class is poor, thus bringing the macro-averaged F1-score down considerably. This may be due to the fact that the system fails to detect the open-ended query types present in the 'Other' class. This is supported by the intent-class-wise entity distribution, which shows an overwhelming dominance of 'drug', 'disease' and 'treatment' entities in their corresponding intent categories ('drug', 'disease' and 'treatment plan' intents, respectively), whereas the 'other' intent class, of which there are comparatively few instances anyway, has no such dominant entity class associated with it. In the entity extraction task, the best performing model is able to extract all three entity categories with similar F1-score performance.
**Real World Hospital Data Evaluation:** We use the real world healthcare query dataset (100 queries per language) to test the usability of our models in practical Indian hospital scenarios. We run the best performing models trained on WebMD and 1mg
Figure 3: Macro Average F1 Score for Intent detection and Entity Extraction across all different languages in Real World Hospital Data
Figure 2: Intent Detection and Entity Extraction F1-score (Y-axis) for Different Percentage of Training Data (X-axis) for WebMD and 1mg
data for intent detection (RoBERTa in Setup 1 for English and Setup 5 for non-English) and entity extraction (BioClinicalBERT in Setup 1 for English and Setup 5 for non-English) and report the average of the two models (trained on IHQID-WebMD and IHQID-1mg) for each language. Fig. 3 shows the average F1-score for each language, which is consistent with the earlier results shown in Tables 2 and 3. This shows that the best performing proposed setup performs satisfactorily on real-world data in Indic languages.
### Demonstration
To make the proposed methods accessible and usable by the community, we create an online interface, which can be found in our GitHub repository6. With the help of this website, one can post a health query in any of the supported languages and obtain predictions using our best models.
Footnote 6: [https://github.com/indichealth/indic-health-demo](https://github.com/indichealth/indic-health-demo)
## 6 Discussion and Error Analysis
We categorize the issues in misclassification and identify two broad themes of reasons. The primary reason is model prediction error. Figure 4 shows the model prediction errors for various intents in different languages. For example, 'How common is syphilis' belongs to the 'disease' intent category, but the model wrongly predicts it as the 'other' category. Another reason is misclassification due to incorrect translation of the medical entities: for instance, the disease _'urticaria'_ is transformed into _'ambat'_ during backtranslation, as shown in Figure 5, and is consequently not detected as an entity. Thus, backtranslation errors lead to intent misclassification and entity extraction errors. We speculate that such absurd behaviour occurs when the context of the query and the languages are semantically different. Secondly, there are also certain issues in fluency and grammatical meaning after backtranslation. For instance, _'over the counter drug'_ gets changed to _'over the opposite drug'_. Entity recognition errors thus also occur along with intent misclassification.
## 7 Conclusion
We focus on developing novel Indian HealthCare Query Datasets and propose frameworks to detect intents and extract entities from queries in different Indian languages. Through extensive experiments on our proposed datasets, we recommend that the community use backtranslation of test queries to English as a reasonable choice when only English training data is available. The same strategy can be applied to both train and test queries if we have the budget to collect data in the target languages. Backtranslation of queries using an intermediate bridge language also proves to be a useful strategy in some cases.
## Acknowledgements
The project was supported in part by the grant given by I-Hub Foundation for Cobotics, IIT Delhi for the project, "Voice based Natural Interaction for Goal Oriented Tasks in Healthcare".
\begin{table}
\begin{tabular}{|l|cccc|cccc|ccc|ccc|} \hline
\multirow{3}{*}{Lang} & \multicolumn{8}{c|}{Intent} & \multicolumn{6}{c|}{Entity} \\ \cline{2-15}
 & \multicolumn{4}{c|}{WebMD} & \multicolumn{4}{c|}{1mg} & \multicolumn{3}{c|}{WebMD} & \multicolumn{3}{c|}{1mg} \\ \cline{2-15}
 & Disease & Drug & Treatment & Other & Disease & Drug & Treatment & Other & Disease & Drug & Treatment & Disease & Drug & Treatment \\ \hline
En & 75.86 & 81.42 & 74.16 & 74.07 & 80.00 & 94.64 & 85.00 & 35.29 & 63.16 & 72.13 & 61.39 & 66.67 & 88.00 & 47.06 \\
Hi & 73.10 & 80.00 & 66.67 & 69.50 & 72.97 & 78.57 & 67.47 & 69.06 & 64.37 & 70.41 & 58.72 & 67.27 & 85.04 & 55.56 \\
Bn & 80.79 & 80.39 & 71.91 & 75.71 & 80.77 & 94.74 & 87.18 & 52.63 & 65.19 & 69.35 & 53.23 & 73.50 & 68.48 & 47.72 \\
Ta & 77.63 & 73.27 & 60.00 & 73.38 & 80.77 & 94.74 & 80.95 & 25.00 & 60.05 & 69.07 & 49.23 & 76.11 & 72.87 & 50.00 \\
Te & 72.85 & 78.10 & 70.45 & 72.46 & 80.77 & 94.55 & 77.27 & 33.33 & 62.09 & 65.00 & 60.71 & 75.63 & 65.37 & 66.67 \\
Gu & 75.64 & 78.50 & 69.77 & 72.18 & 83.02 & 93.81 & 80.00 & 33.33 & 53.41 & 66.32 & 58.82 & 72.41 & 69.44 & 52.86 \\
Mr & 76.82 & 78.85 & 75.29 & 71.83 & 79.25 & 94.64 & 83.72 & 25.00 & 59.48 & 59.26 & 56.45 & 60.78 & 59.74 & 49.58 \\ \hline
\end{tabular}
\end{table}
Table 4: Macro-F1 scores for intent identification and entity extraction on the WebMD (WMD) and 1mg datasets. For each language, we portray the results of the best model obtained for the corresponding dataset.
Figure 4: Error in Prediction
Figure 5: Error in Back-Translation
### Limitations
Our dataset needs to be scaled up in terms of size and intent labels, which we aim to do as part of future work. Another constraint is that we do not consider cases where queries are multi-labelled (e.g., both drug and disease); we shall explore this in future work.
## Ethical Concerns
We propose to release the dataset, which neither reveals any personally sensitive information of the patients nor contains any toxic statements. Besides, we have paid token money (the exact remuneration will be revealed once accepted to the conference) to the domain-expert annotators who helped us in manually tagging the medical queries.
|
2308.02953 | On equivariant fibrations of $G$-CW-complexes | It is proved that if $G$ is a compact Lie group, then an equivariant Serre
fibration of $G$-CW-complexes is an equivariant Hurewicz fibration in the class
of compactly generated $G$-spaces. In the nonequivariant setting, this result
is due to Steinberger, West and Cauty. The main theorem is proved using the
following key result: a $G$-CW-complex can be embedded as an equivariant
retract in a simplicial $G$-complex. It is also proved that an equivariant map
$p: E \to B$ of $G$-CW-complexes is a Hurewicz $G$-fibration if and only if the
$H$-fixed point map $p^H : E^H \to B^H$ is a Hurewicz fibration for any closed
subgroup $H$ of $G$. This gives a solution to the problem of James and Segal in
the case of $G$-CW-complexes. | Pavel S. Gevorgyan, Rolando Jimenez | 2023-08-05T21:19:42Z | http://arxiv.org/abs/2308.02953v1 | # On equivariant fibrations of \(\mathbf{G}\)-CW-complexes
###### Abstract.
It is proved that if \(G\) is a compact Lie group, then an equivariant Serre fibration of \(G\)-CW-complexes is an equivariant Hurewicz fibration in the class of compactly generated \(G\)-spaces. In the nonequivariant setting, this result is due to Steinberger, West and Cauty. The main theorem is proved using the following key result: a \(G\)-CW-complex can be embedded as an equivariant retract in a simplicial \(G\)-complex. It is also proved that an equivariant map \(p:E\to B\) of \(G\)-CW-complexes is a Hurewicz \(G\)-fibration if and only if the \(H\)-fixed point map \(p^{H}:E^{H}\to B^{H}\) is a Hurewicz fibration for any closed subgroup \(H\) of \(G\). This gives a solution to the problem of James and Segal in the case of \(G\)-CW-complexes.
Key words and phrases:\(G\)-CW-complex, simplicial \(G\)-complex, equivariant fibration, \(H\)-fixed points 2020 Mathematics Subject Classification: 55R91, 57S05 \({}^{1}\)The research of R. Jimenez was partially supported by the Programa de Apoyos para la Superacion del Personal Academico de la Direccion General de Asuntos del Personal Academico de la Universidad Nacional Autonoma de Mexico, the Secretaria de Educacion Publica and the Consejo Nacional en Ciencia y Tecnologia (grant no. 284621).
## 2. Embeddings of \(G\)-Cw-complexes in simplicial \(G\)-complexes
Simplicial \(G\)-complexes and \(G\)-CW-complexes are important objects of equivariant algebraic topology. These spaces are constructed using simpler objects: equivariant simplices and equivariant cells. We recall the construction of these objects following the work [6] of Illman.
Let \(H_{0}\), \(H_{1}\),..., \(H_{n}\) be closed subgroups of \(G\) such that \(H_{0}\supset H_{1}\supset\ldots\supset H_{n}\). Consider the \(G\)-space \(\Delta_{n}\times G\) with the action \(g(x,g^{\prime})=(x,gg^{\prime})\), where \(\Delta_{n}\) is the standard simplex in an \(n\)-dimensional vector space. Define an equivalence relation \(\sim\) on \(\Delta_{n}\times G\) as follows:
\[(x,g)\sim(x,g^{\prime})\iff gH_{m}=g^{\prime}H_{m},\quad x\in\Delta_{m} \backslash\Delta_{m-1},0\leqslant m\leqslant n,\]
where the embedding \(\Delta_{m}\subset\Delta_{n}\), \(0\leqslant m\leqslant n\), is given by
\[(x_{0},\ldots,x_{m})\to(x_{0},\ldots,x_{m},0,\ldots,0).\]
The quotient
\[\Delta_{n}(G;H_{0},\ldots,H_{n})=(\Delta_{n}\times G)/\sim\]
is a \(G\)-space with the action \(g[x,g^{\prime}]=[x,gg^{\prime}]\), where \([x,g^{\prime}]\) is the equivalence class of \((x,g^{\prime})\in\Delta_{n}\times G\).
The compact \(G\)-space \(\Delta_{n}(G;H_{0},\ldots,H_{n})\) is called a _standard \(n\)-dimensional \(G\)-simplex_ of type \((H_{0},H_{1},\ldots,H_{n})\). The orbit space of the \(G\)-simplex \(\Delta_{n}(G;H_{0},\ldots,H_{n})\) is \(\Delta_{n}\), with the orbit projection \(\pi:\Delta_{n}(G;H_{0},\ldots,H_{n})\to\Delta_{n}\) given by \(\pi([x,g])=x\).
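To unwind the definition in the lowest dimensions: for \(n=0\) the relation identifies \([x,g]\) and \([x,g^{\prime}]\) exactly when \(gH_{0}=g^{\prime}H_{0}\), so that

\[\Delta_{0}(G;H_{0})\cong G/H_{0},\]

a single orbit. For \(n=1\), interior points of the edge carry the finer quotient \(G/H_{1}\), while the vertex coming from \(\Delta_{0}\) carries \(G/H_{0}\); thus \(\Delta_{1}(G;H_{0},H_{1})\) can be identified with the equivariant mapping cylinder of the projection \(G/H_{1}\to G/H_{0}\).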
A \(G\)-space \(X\) is said to be _equivariantly triangulable_ if there exists a triangulation \(t:K\to X/G\) of the orbit space \(X/G\), such that for any \(n\)-dimensional simplex \(s\) of \(K\) there exist closed subgroups \(H_{0}\supset H_{1}\supset\ldots\supset H_{n}\) of \(G\) and an equivariant homeomorphism
\[\alpha:\Delta_{n}(G;H_{0},\ldots,H_{n})\to p^{-1}(t(s))\]
inducing a linear homeomorphism \(\alpha/G:\Delta_{n}\to t(s)\) of the orbit space. Here \(p:X\to X/G\) is the orbit projection. \(G\)-subsets \(p^{-1}(t(s))\) are called _equivariant simplices_ of the equivariant triangulation of \(X\). Since the simplicial complex \(K\) is endowed with the weak topology, the topology of the simplicial \(G\)-complex \(X\) will be weak with respect to the family of all equivariant simplices.
The following theorem on equivariant triangulation of \(G\)-spaces was proved by Illman ([6], Theorem 5.5).
**Theorem 1** (Illman [6]).: _Let \(X\) be a \(G\)-space, and let \(t:K\to X/G\) be a triangulation of the quotient \(X/G\) such that, for any open simplex \(\mathring{s}\), the set \(t(\mathring{s})\subset X/G\) has constant orbit type. Then \(X\) is equivariantly triangulable with triangulation \(t:K^{\prime}\to X/G\), where \(K^{\prime}\) is the barycentric subdivision of \(K\)._
As in the classical case, invariant open subsets of a simplicial \(G\)-complex admit an equivariant simplicial structure.
**Theorem 2**.: _Invariant open subsets of a simplicial \(G\)-complex are equivariantly triangulable._
Proof.: Let \(X\) be a simplicial \(G\)-complex with an invariant open subset \(U\subset X\). The orbit quotient \(X/G\) is a simplicial complex. Since the projection \(p:X\to X/G\) is an open map, \(p(U)=U/G\) is an open subset of \(X/G\). Note that there exists a triangulation
of \(p(U)=U/G\) such that the orbit types of all open simplices in this triangulation are constant. Then Theorem 1 implies that there exists an equivariant triangulation of the \(G\)-space \(U\). The theorem is proved.
Like CW-complexes, \(G\)-CW-complexes are constructed inductively, by attaching at each step equivariant cells of a given dimension to the result of the previous step. Thus, a \(G\)-space \(X\) is called a \(G\)_-CW-complex_ if it is represented as a union of \(G\)-spaces \(X^{n}\), \(n=0,1,2,\dots\), and the following conditions are satisfied:
1) \(X^{0}\) is a disjoint union of orbits \(G/H\), that is, a disjoint union of equivariant \(0\)-cells;
2) \(X^{n+1}\) is obtained from \(X^{n}\) by attaching equivariant \((n+1)\)-cells \(G/H\times D^{n+1}\) along equivariant maps \(G/H\times S^{n}\to X^{n}\);
3) the topology of \(X\) is consistent with the family \(\{X^{n};\ n\geqslant 0\}\).
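As a standard illustration of this definition (an example, not taken from the references above): let \(G=\mathbb{Z}/2\) act on the circle \(X=S^{1}\) by a reflection. Then \(X\) is a \(G\)-CW-complex with

\[X^{0}=G/G\sqcup G/G,\qquad X^{1}=X^{0}\cup_{\varphi}\bigl(G/\{e\}\times D^{1}\bigr),\]

where the two \(0\)-cells are the two fixed points of the reflection, and the attaching map \(\varphi:G/\{e\}\times S^{0}\to X^{0}\) sends \((g,-1)\) and \((g,+1)\) to the first and second fixed point, respectively; the two copies of \(D^{1}\) become the two arcs interchanged by the reflection.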
**Theorem 3**.: _A \(G\)-CW-complex is an equivariant retract of a simplicial \(G\)-complex._
Proof.: Let \(X\) be a \(G\)-CW-complex. The orbit space \(X/G\) is a CW-complex. By a result of Cauty ([7], Corollary 2) there exists a closed embedding \(i:X/G\to L\) in a simplicial complex \(L\) such that \(i(X/G)\) is a neighbourhood retract of \(L\) (see [7], Theorems 7 and 8). Since open subsets of a simplicial complex are triangulable, we can assume that \(i(X/G)\) is a retract of a simplicial complex \(K\). Let \(r:K\to i(X/G)\) be the corresponding retraction. Consider a triangulation of \(K\) such that, for any open simplex \(\mathring{s}\) of it, the image \(r(\mathring{s})\) lies in some cell of the CW-complex \(i(X/G)\).
Now consider the subspace \(r^{*}(X)=\{(k,x)\in K\times X;\ r(k)=p(x)\}\) of the product \(K\times X\), where \(p:X\to X/G\) is the natural projection. We define a \(G\)-action on \(r^{*}(X)\) by \(g(k,x)=(k,gx)\). This turns \(r^{*}(X)\) into a \(G\)-space. Note that the orbit space \(r^{*}(X)/G\) is the simplicial complex \(K\). Let \(\pi:r^{*}(X)\to K\) be the natural projection.
We define a map \(\tilde{i}:X\to r^{*}(X)\) by \(\tilde{i}(x)=(i(p(x)),x)\). This is an equivariant embedding of the \(G\)-space \(X\) in \(r^{*}(X)\). Since \(i(X/G)\) is closed in \(K\) and \(\tilde{i}(X)=\pi^{-1}(i(X/G))\), the subset \(\tilde{i}(X)\) is closed in \(r^{*}(X)\).
Since all points in each cell of \(i(X/G)\) have the same orbit type, the orbit type is constant on all open simplices of \(K\). Therefore, by Theorem 1 there exists an equivariant triangulation of the \(G\)-space \(r^{*}(X)\). The theorem is proved.
In the case of trivial \(G\)-action, Theorem 3 implies the following.
**Theorem 4**.: _Any CW-complex is a retract of a simplicial complex._
This statement follows easily from the results of Cauty [7].
Open invariant subsets of a \(G\)-CW-complex are not \(G\)-CW-complexes in general (see [2], Examples 1 and 2). However, they are equivariant retracts of \(G\)-CW-complexes. This follows from a stronger result presented next.
**Theorem 5**.: _Any invariant open subset of a \(G\)-CW-complex is an equivariant retract of a simplicial \(G\)-complex._
Proof.: Let \(U\) be an invariant open subset of a \(G\)-CW-complex \(X\). By Theorem 3 there exists a simplicial \(G\)-complex \(K\) and a closed equivariant embedding \(i:X\to K\) such that \(i(X)\) is an equivariant retract of \(K\). Let \(r:K\to i(X)\) be an equivariant retraction. Since \(i(U)\subset i(X)\subset K\) is an invariant open subset of \(i(X)\), the subset \(r^{-1}(i(U))\) is invariant and open in \(K\). Furthermore, \(i(U)\) is an equivariant retract of \(r^{-1}(i(U))\), which is a simplicial \(G\)-complex by Theorem 2. The theorem is proved.
## 3. Equivariant fibrations
Let \(E\) and \(B\) be \(G\)-spaces. An equivariant map \(p:E\to B\) is said to have the _equivariant covering homotopy property_ with respect to a \(G\)-space \(X\) if for arbitrary equivariant maps \(\widetilde{f}:X\to E\) and \(F:X\times I\to B\) such that \(F(x,0)=p\widetilde{f}(x)\), \(x\in X\), there exists an equivariant map \(\widetilde{F}:X\times I\to E\), satisfying \(\widetilde{F}(x,0)=\widetilde{f}(x)\) for any \(x\in X\) and \(p\circ\widetilde{F}=F\).
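This definition is conveniently pictured by the usual lifting diagram (identifying \(X\times\{0\}\) with \(X\)), in which all maps are equivariant and \(\widetilde{F}\) makes both triangles commute:

\[\begin{array}{ccc}X\times\{0\}&\xrightarrow{\ \widetilde{f}\ }&E\\ \downarrow&\overset{\widetilde{F}}{\nearrow}&\downarrow{\scriptstyle p}\\ X\times I&\xrightarrow{\ F\ }&B\end{array}\]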
An equivariant map \(p:E\to B\) is called an _equivariant Hurewicz fibration_ if \(p\) has the equivariant covering homotopy property with respect to any \(G\)-space \(X\). If an equivariant map \(p:E\to B\) has the equivariant covering homotopy property with respect to \(G\)-CW-complexes, then it is called an _equivariant Serre fibration_.
**Lemma 1**.: _Let \(p:E\to B\) be an equivariant Serre fibration, and let \(X\) be an equivariant retract of a simplicial \(G\)-complex \(K\). Then \(p:E\to B\) has the equivariant covering homotopy property with respect to the \(G\)-space \(X\)._
Proof.: Let \(f:X\to E\) be an equivariant map, and let \(F:X\times I\to B\) be an equivariant homotopy of the map \(p\circ f\): \(F_{0}=p\circ f\). Consider an equivariant retraction \(r:K\to X\) and an equivariant homotopy \(H:K\times I\to B\) given by \(H=F\circ(r\times\operatorname{id})\). Clearly, \(H_{0}=p\circ(f\circ r)\). Since \(p:E\to B\) is an equivariant Serre fibration, there exists a covering homotopy \(\widetilde{H}:K\times I\to E\) such that \(\widetilde{H}_{0}=f\circ r\) and \(p\circ\widetilde{H}=H\). Now it is easy to see that \(\widetilde{F}=\widetilde{H}|_{X\times I}:X\times I\to E\) is the required equivariant covering homotopy, that is, \(\widetilde{F}_{0}=f\) and \(p\circ\widetilde{F}=F\). The lemma is proved.
**Definition 1**.: A \(G\)-space \(X\) is said to be _equivariantly uniformly locally contractible_ if there exists an invariant neighbourhood \(V\) of the diagonal \(\Delta\subset X\times X\) and an equivariant map \(\lambda:V\times I\to X\) such that
(i) \(\lambda(x,y,0)=x\) and \(\lambda(x,y,1)=y\) for all \((x,y)\in V\);
(ii) \(\lambda(x,x,t)=x\) for all \(x\in X\) and \(t\in I\).
It is easy to see that the class of equivariantly uniformly locally contractible spaces contains all \(G\)-ANR-spaces. Therefore, \(G\)-CW-complexes are equivariantly uniformly locally contractible.
The previous results allow us to prove the following theorem.
**Theorem 6**.: _Let \(p:E\to B\) be an equivariant Serre fibration, where \(E\) and \(B\) are \(G\)-CW-complexes. Then \(p:E\to B\) is an equivariant Hurewicz fibration in the class of all compactly generated spaces._
Proof.: It is known that when \(B\) is a paracompact \(G\)-space, an equivariant locally trivial bundle \(p:E\to B\) is an equivariant Hurewicz fibration (see [8], Example 5). Therefore, it is enough to prove that for an arbitrary point \(b\in B\) there exists an invariant open neighbourhood \(U\) of \(b\) and an equivariant map \(\omega:U\times p^{-1}(U)\to E\) such that
(1) \(p\circ\omega(b,e)=b\) for all \((b,e)\in U\times p^{-1}(U)\),
(2) \(\omega(p(e),e)=e\) for all \(e\in p^{-1}(U)\).
Since the \(G\)-CW-complex B is equivariantly uniformly locally contractible (see Definition 1), there exists an invariant open neighbourhood \(U\) of \(b\in B\) and an equivariant map
\[\lambda:U\times U\times I\to B,\]
such that
\[\lambda(x,y,0)=x,\quad\lambda(x,y,1)=y,\quad\lambda(x,x,t)=x\]
for all \(x,y\in U\) and \(t\in I\).
Now consider the equivariant map
\[F:U\times p^{-1}(U)\times I\to B,\]
given by
\[F(b,e,t)=\lambda(p(e),b,t)\]
for all \((b,e,t)\in U\times p^{-1}(U)\times I\). Since \(F(b,e,0)=\lambda(p(e),b,0)=p(e)=p(pr_{2}(b,e,0))\), the map \(F\) is an equivariant homotopy of the map \(p\circ pr_{2}\).
Since \(U\) is an equivariant retract of a simplicial \(G\)-complex (see Theorem 5) and \(p:E\to B\) is an equivariant Serre fibration, Lemma 1 implies that there is an equivariant covering homotopy
\[\tilde{F}:U\times p^{-1}(U)\times I\to E,\]
satisfying \(\tilde{F}(b,e,0)=e\) and \(p\circ\tilde{F}=F\). Now we define the required equivariant map \(\omega:U\times p^{-1}(U)\to E\) by the formula
\[\omega(b,e)=\tilde{F}(b,e,1).\]
Conditions (1) and (2) are verified easily. The theorem is proved.
## 4. H-fixed point sets and G-fibrations
Let \(p:E\to B\) be a Hurewicz \(G\)-fibration. Given a closed subgroup \(H\) of \(G\), the induced \(H\)-fixed point map \(p^{H}:E^{H}\to B^{H}\) is a Hurewicz fibration in the usual sense. A natural question arises: is this necessary condition also sufficient? This question was posed by James and Segal in [4].
Why this question is natural is explained, in particular, by the following well-known result (for example, see [9], SS 15.3).
**Theorem 7**.: _An equivariant map \(p:E\to B\) is a Serre \(G\)-fibration if and only if the map \(p^{H}:E^{H}\to B^{H}\) is a Serre fibration for any closed subgroup \(H\) of \(G\)._
In the case of a finite group \(G\), Theorem 7 was proved by Bredon in [5], Ch. III, 4.1.
**Theorem 8**.: _Let \(p:E\to B\) be an equivariant map of \(G\)-CW-complexes. If the map \(p^{H}:E^{H}\to B^{H}\) is a Serre fibration for any closed subgroup \(H\) of \(G\), then \(p:E\to B\) is a Hurewicz \(G\)-fibration in the class of compactly generated \(G\)-spaces._
Proof.: The fact that \(p^{H}:E^{H}\to B^{H}\) is a Serre fibration for any closed subgroup \(H\) implies that the equivariant map \(p:E\to B\) is a Serre \(G\)-fibration (see Theorem 7). Since both \(E\) and \(B\) are \(G\)-CW-complexes, it follows from Theorem 6 that the equivariant map \(p:E\to B\) is a Hurewicz \(G\)-fibration in the class of compactly generated \(G\)-spaces. The theorem is proved.
From the latter theorem we obtain the following positive answer to the problem of James and Segal in the class of compactly generated \(G\)-spaces and under the condition that both \(E\) and \(B\) are \(G\)-CW-complexes:
**Theorem 9**.: _An equivariant map \(p:E\to B\) of \(G\)-CW-complexes is a Hurewicz \(G\)-fibration in the class of compactly generated \(G\)-spaces if and only if the map \(p^{H}:E^{H}\to B^{H}\) is a Hurewicz fibration for any closed subgroup \(H\) of \(G\)._
The question remains open as to whether Theorem 9 holds in the case when \(E\) and \(B\) are arbitrary \(G\)-spaces.
|
2307.03210 | Sparse Graphical Linear Dynamical Systems | Time-series datasets are central in machine learning with applications in
numerous fields of science and engineering, such as biomedicine, Earth
observation, and network analysis. Extensive research exists on state-space
models (SSMs), which are powerful mathematical tools that allow for
probabilistic and interpretable learning on time series. Learning the model
parameters in SSMs is arguably one of the most complicated tasks, and the
inclusion of prior knowledge is known to both ease the interpretation but also
to complicate the inferential tasks. Very recent works have attempted to
incorporate a graphical perspective on some of those model parameters, but they
present notable limitations that this work addresses. More generally, existing
graphical modeling tools are designed to incorporate either static information,
focusing on statistical dependencies among independent random variables (e.g.,
graphical Lasso approach), or dynamic information, emphasizing causal
relationships among time series samples (e.g., graphical Granger approaches).
However, there are no joint approaches combining static and dynamic graphical
modeling within the context of SSMs. This work proposes a novel approach to
fill this gap by introducing a joint graphical modeling framework that bridges
the graphical Lasso model and a causal-based graphical approach for the
linear-Gaussian SSM. We present DGLASSO (Dynamic Graphical Lasso), a new
inference method within this framework that implements an efficient block
alternating majorization-minimization algorithm. The algorithm's convergence is
established by departing from modern tools from nonlinear analysis.
Experimental validation on various synthetic data showcases the effectiveness
of the proposed model and inference algorithm. | Emilie Chouzenoux, Victor Elvira | 2023-07-06T14:10:02Z | http://arxiv.org/abs/2307.03210v2 | # Sparse Graphical Linear Dynamical Systems
###### Abstract
Time-series datasets are central in numerous fields of science and engineering, such as biomedicine, Earth observation, and network analysis. Extensive research exists on state-space models (SSMs), which are powerful mathematical tools that allow for probabilistic and interpretable learning on time series. Estimating the model parameters in SSMs is arguably one of the most complicated tasks, and the inclusion of prior knowledge is known to both ease the interpretation but also to complicate the inferential tasks. Very recent works have attempted to incorporate a graphical perspective on some of those model parameters, but they present notable limitations that this work addresses. More generally, existing graphical modeling tools are designed to incorporate either static information, focusing on statistical dependencies among independent random variables (e.g., graphical Lasso approach), or dynamic information, emphasizing causal relationships among time series samples (e.g., graphical Granger approaches). However, there are no joint approaches combining static and dynamic graphical modeling within the context of SSMs. This work proposes a novel approach to fill this gap by introducing a joint graphical modeling framework that bridges the static graphical Lasso model and a causal-based graphical approach for the linear-Gaussian SSM. We present DGLASSO (Dynamic Graphical Lasso), a new inference method within this framework that implements an efficient block alternating majorization-minimization algorithm. The algorithm's convergence is established by departing from modern tools from nonlinear analysis. Experimental validation on synthetic and real weather variability data showcases the effectiveness of the proposed model and inference algorithm. This work will significantly contribute to the understanding and utilization of time-series data in diverse scientific and engineering applications where incorporating a graphical approach is essential to perform the inference.
time series modeling. SSMs characterize complex systems through discrete-time models composed of a hidden (or latent) state that evolves in a Markovian manner. SSMs are composed of a state model, which can realistically mimic the complicated dynamics of the system, and the observation model, which links the temporal observations to the hidden state at the same time step. In SSMs, the Bayesian filtering task consists in computing the posterior probability density function (pdf) of the hidden state at a given time step, given all observations from the time series available up to that time. However, in most SSMs the posterior pdf is intractable and must be approximated, generally through particle filters (Djuric et al., 2003; Doucet et al., 2009; Naesseth et al., 2019). One relevant exception is the linear-Gaussian state-space model (LG-SSM), which allows for exact inference, when the model parameters are known, through the Kalman filter and the Rauch-Tung-Striebel (RTS) smoother (Sarkka, 2013, Chapter 8). The LG-SSM is arguably the most popular model and is still the subject of intense research (Sarkka, 2013, Chapter 4). The literature on parameter estimation for LG-SSMs considers both probabilistic and point-wise approaches. The former methods can be applied to a wider class of SSMs (i.e., beyond LG-SSMs) and include for instance particle Markov chain Monte Carlo (Andrieu et al., 2010), sequential Monte Carlo squared (Chopin et al., 2013), and nested particle filters (Crisan and Miguez, 2018; Perez-Vieites and Miguez, 2021) (see (Kantas et al., 2015) for a review).
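As a point of reference for the LG-SSM discussed above, one predict/update step of the Kalman filter can be sketched as follows: a minimal NumPy implementation of the standard equations, with `A`, `Q`, `H`, `R` denoting the transition matrix, state noise covariance, observation matrix and observation noise covariance of the canonical parametrization.

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict/update step of the Kalman filter for the LG-SSM
    x_k = A x_{k-1} + q_k, q_k ~ N(0, Q); y_k = H x_k + r_k, r_k ~ N(0, R).
    (m, P) are the filtering mean/covariance at time k-1, y the new datum."""
    # Prediction (time update)
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Correction (measurement update)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```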
One of the main challenges in SSMs is the estimation of the model parameters. In its canonical version, the LG-SSM is composed of four parameters. The state model is parametrized by the so-called transition matrix, which linearly links consecutive states, and by the covariance matrix of the additive Gaussian state noise. The observation model is parametrized by the observation matrix, which connects (also linearly) the state with the observation at the same time step, and by the covariance matrix of the additive Gaussian observation noise. Point-wise approaches are generally based on maximum likelihood estimation of the model parameters in LG-SSMs (Sarkka, 2013, Chap. 12). A first family of methods implements recursive schemes to compute first and second derivatives of the likelihood with respect to the unknown parameters (Segal and Weinstein, 1988, 1989; Nagakura, 2021; Gupta and Mehra, 1974) (see the discussion in (Cappe et al., 2005, Sec. 10.2.4)). Iterative optimizers such as quasi-Newton (Olsson et al., 2007) or Newton-Raphson (Gupta and Mehra, 1974) are then employed to obtain a maximum likelihood estimate. However, due to the intricate form of the likelihood function, these optimization solvers must be used in a black-box setting, which prevents the implementation of efficient linesearch procedures and considerably limits the convergence guarantees. The second family of point-wise methods is based on expectation-maximization (EM) algorithms (Shumway and Stoffer, 1982)(Cappe et al., 2005, Sec. 10.4)(Sarkka, 2013, Sec. 12.2.3). The EM algorithm for parameter estimation in LG-SSMs exhibits good convergence stability (Dempster et al., 1977; Wu, 1983) and simplicity of implementation, which might explain its wide use in applied fields such as finance, electrical engineering, and radar (Sharma et al., 2020, 2021; Frenkel and Feder, 1999). The benefits and drawbacks of these algorithms are discussed in (Shumway and Stoffer, 1982, Sec. 1).
Another important family of approaches for time series analysis relies on graphical modeling. Static models for representing correlation properties within multivariate time series (treating the series elements as independent samples from a random variable) have been explored for decades (Maathuis et al., 2019). Let us mention the famous Graphical Lasso algorithm (Friedman et al., 2008), and its multiple variants (Chandrasekaran et al., 2012; Benfenati et al., 2020; Belilovsky et al., 2017; Bach and Jordan, 2004; Ying et al., 2020), for the inference of sparse Gaussian graphical models. Modeling (non-independent) dynamical behaviors through (directed) graph representations, and in particular causality, has also been explored (Eichler, 2012; Ioannidis et al., 2019; Giannakis et al., 2018).
There is actually a tight link between state-space modeling and dynamical graphical modeling, as emphasized in (Barber and Cemgil, 2010). For instance, in (Sagi et al., 2023) the extended Kalman filter (EKF) is enhanced by taking a graphical perspective. (Ioannidis et al., 2018) takes the approach of jointly estimating latent processes and the topology of a related graph. (Alippi and Zambon, 2023) also adopts a graphical perspective on linear-Gaussian models (or linearized versions of them through the EKF) in order to learn model parameters through deep neural networks. This approach bears links with differentiable particle filters (DPFs) (Corenflos et al., 2021; Chen et al., 2021). In particular, the causality (in the Granger sense) within the hidden state of the state-space model is directly related to the transition matrix of the model. As emphasized in (Elvira and Chouzenoux, 2022), this matrix has an interpretation as the adjacency matrix of a directed graph.
For the above reason, the approaches for graphical model inference and for model parameter estimation are sometimes very similar, algorithmically speaking, the difference lying mostly in the interpretation of the parameters and their representation. The graphical modeling brings new insights such as the choice of specific priors (typically, sparsity) and algorithms (e.g., binary edge selection tools). In most works on graphical modeling, sparsity plays a key role (Brendan and Tenenbaum, 2010). Indeed, the interpretation of a graphical model requires an edge selection procedure, which can be enforced by imposing a sparsity prior at the graph inference stage. A typical choice is the \(\ell_{1}\) (i.e., Lasso) penalty (Friedman et al., 2008; Meinshausen and Buhlmann, 2006; Chouzenoux and Elvira, 2020), which has the advantage of being convex\(^{1}\). Other types of priors have also been explored in (Ying et al., 2020; Benfenati et al., 2020; Chandrasekaran et al., 2012; Chouzenoux and Elvira, 2023; Elvira and Chouzenoux, 2022; Kumar et al., 2020; Hippert-Ferrer et al., 2022), with the aim of imposing specific graph structures (e.g., low rank, bounded spectrum, block sparsity). For static graph modeling, proximal algorithms (Combettes and Pesquet, 2011) (also including augmented Lagrangian primal-dual methods (Komodakis and Pesquet, 2015)) are typically the method of choice for accounting for the various classes of priors of interest in graphical inference. Discrete optimization techniques for edge selection have also been explored (see (Benfenati et al., 2018) and references therein). Dynamic graph inference has also been performed using proximal tools (Ioannidis et al., 2019). In the particular case of state-space models, as explained earlier, several methods, for instance based on Newton or EM, are available for parameter inference. However, a common drawback of most available methods is that none of them allows the direct introduction of a graph-aware prior. Newton-based techniques are, by nature, limited to differentiable priors, which prevents the use of Lasso and low-rank penalties. EM methods can be modified to incorporate a prior term in the surrogate majorant function, but this usually leads to an M-step without closed form, which prevents a directly implementable algorithmic solution. In our recent works (Chouzenoux and Elvira, 2023; Elvira and Chouzenoux, 2022), we introduced more
sophisticated EM-based schemes to cope with generic convex regularizations. The goal was to estimate the transition matrix within an LG-SSM, adopting a Granger-based graphical interpretation of such matrix. A fully Bayesian approach to estimate the transition matrix was taken in (Cox and Elvira, 2022, 2023) where the space of sparse matrices was explored via reversible jump Markov chain Monte Carlo (Green and Hastie, 2009).
Despite this prolific literature, there exists a gap that we propose to fill in this work. Graphical modeling tools are either dedicated to representing static information related to statistical dependence between independent realizations of random variables (e.g., the graphical Lasso approach), or to representing dynamic information related to causal behavior between time series samples. In the latter case, time series are either processed directly (Ioannidis et al., 2019) or through a state-space model including a hidden state (Elvira and Chouzenoux, 2022). However, we are not aware of any joint approach that includes both static and dynamic graphical modeling (hence, two graphs with two distinct purposes) in the context of state-space models. Our contribution lies within this goal, as we describe in the next section.
### Summary of our main contributions
In this work, we introduce a joint graphical modeling for representing static and dynamic behaviors in the hidden state of a linear-Gaussian state-space model, bridging the gap between the static graphical Lasso model from (Friedman et al., 2008) and the dynamic model from (Elvira and Chouzenoux, 2022). We then present a novel Bayesian inference method, called DGLASSO (Dynamic Graphical Lasso), to estimate both graphs, under a sparsity prior, jointly with the construction of the filtering/smoothing distributions of the time series. The method relies on an original optimization algorithm, implementing a block alternating proximal algorithm with efficient majorization-minimization inner steps. We establish the convergence of our algorithm using recent tools from nonlinear analysis. We then perform an extensive experimental validation of the proposed model and inference algorithm by means of experiments on synthetic data and on real data from weather variability analysis. For reproducibility purposes, the code for the DGLASSO algorithm is available at [https://pages.saclay.inria.fr/emilie.chouzenoux/log/DGLASSO_toolbox.zip](https://pages.saclay.inria.fr/emilie.chouzenoux/log/DGLASSO_toolbox.zip).
## 2 Problem setup
### Notation
Bold symbols are used for matrices and vectors. We denote by \(\|\mathbf{x}\|_{2}=\sqrt{\mathbf{x}^{\top}\mathbf{x}}\) the Euclidean norm of \(\mathbf{x}\in\mathbb{R}^{N}\), where \(\top\) denotes the transpose operation and \(\mathbb{R}^{N}\) is the \(N\)-dimensional Euclidean space. We also introduce \(\|\mathbf{X}\|_{F}\), \(\|\mathbf{X}\|_{2}\) and \(\operatorname{tr}(\mathbf{X})\), the Frobenius norm, the spectral norm (i.e., largest singular value), and the trace, respectively, of elements \(\mathbf{X}=(X(n,\ell))_{1\leq n\leq N,1\leq\ell\leq M}\in\mathbb{R}^{N\times M}\). \(\mathbf{Id}_{N}\) is the identity matrix of \(\mathbb{R}^{N}\). \(\mathcal{S}_{N}\) denotes the set of symmetric matrices of \(\mathbb{R}^{N\times N}\). Both \(\mathbb{R}^{N\times N}\) and \(\mathcal{S}_{N}\) are Hilbert spaces, endowed with the Frobenius norm \(\|\cdot\|_{F}\) and the trace scalar product \(\langle\mathbf{X},\mathbf{Y}\rangle=\operatorname{tr}(\mathbf{X}\mathbf{Y})\). \(\mathcal{S}_{N}^{+}\) (resp. \(\mathcal{S}_{N}^{++}\)) is the set of \(N\times N\) symmetric positive semidefinite (resp. definite) matrices of \(\mathbb{R}^{N}\). Given a sequence of elements \(\{\mathbf{x}_{k}\}_{k=1}^{K}\) of length \(K\geq 1\) and size \(N\), we denote each element as \(\mathbf{x}_{k}=(\mathbf{x}_{k}(n))_{1\leq n\leq N}\) and we use the notation \(\mathbf{x}_{k_{1}:k_{2}}\) to refer to the subsequence \(\{\mathbf{x}_{k}\}_{k=k_{1}}^{k_{2}}\), for \(1\leq k_{1}<k_{2}\leq K\). For convex analysis concepts, we rely on the notation in the reference textbook by Bauschke and Combettes (2017). We denote \(\Gamma_{0}(\mathcal{H})\) the set of proper, lower semicontinuous convex functions from a Hilbert space \(\mathcal{H}\) to \((-\infty,+\infty]\) (Bauschke and Combettes, 2017, Chap. 9), and \(\partial f\) the subdifferential of \(f\in\Gamma_{0}(\mathcal{H})\) (Bauschke and Combettes, 2017, Chap. 16). With a slight abuse of notation, we make use of the extended form of the minus logarithm determinant function, defined as
\[(\forall\mathbf{P}\in\mathcal{S}_{N})\quad-\log\det(\mathbf{P})=\begin{cases} -\log|\mathbf{P}|,&\text{if }\mathbf{P}\in\mathcal{S}_{N}^{++},\\ +\infty,&\text{otherwise},\end{cases} \tag{1}\]
with \(|\mathbf{P}|\) the product of the eigenvalues of \(\mathbf{P}\). According to (Bauschke and Combettes, 2017), the function in Eq. (1) belongs to \(\Gamma_{0}(\mathcal{S}_{N})\).
#### 2.1.1 Graphs
We introduce here our notation for describing graphs. Most of our notation is inherited from (Buhlmann and Van De Geer, 2011).
Let us define a graph \(\mathcal{G}\) made of a set of \(N\) vertices \(\mathcal{V}=\{v^{(n)}\operatorname{s.t.}n\in\{1,\ldots,N\}\}\) and a set of edges \(\mathcal{E}=\{e^{(n,\ell)}\operatorname{s.t.}(n,\ell)\in\mathbb{E}\}\). The latter gathers ordered pairs of vertices, so that \(\mathbb{E}\subset\{1,\ldots,N\}^{2}\). Undirected graphs are made of undirected edges, that is, \((n,\ell)\in\mathbb{E}\) if and only if \((\ell,n)\in\mathbb{E}\), for every \((n,\ell)\in\{1,\ldots,N\}^{2}\). In contrast, directed graphs consist of directed edges, where we say that an edge \(e^{(n,\ell)}\in\mathcal{E}\) is directed (from \(n\) to \(\ell\)) if \((\ell,n)\notin\mathbb{E}\). We can also distinguish reflexive graphs, where self-loops are allowed (i.e., one can have \((n,n)\in\mathbb{E}\)), from nonreflexive graphs. Given these definitions, one can simply bound the cardinality of \(\mathbb{E}\) for each category. For instance, an undirected nonreflexive graph has \(\operatorname{Card}(\mathbb{E})\leq N(N-1)/2\), while a directed nonreflexive graph has \(\operatorname{Card}(\mathbb{E})\leq N(N-1)\) and a directed reflexive graph has \(\operatorname{Card}(\mathbb{E})\leq N^{2}\). Such graph definitions are binary, as they only describe the presence/absence of edges between vertex pairs. In this work, we require the notion of a weighted graph, where the edges \((e^{(n,\ell)})_{(n,\ell)\in\mathbb{E}}\) are associated with real-valued weights \((\omega^{(n,\ell)})_{(n,\ell)\in\mathbb{E}}\). The edge positions and weight values of a graph are summarized in a matrix \(\mathbf{M}\in\mathbb{R}^{N\times N}\), where, for every \((n,\ell)\in\mathbb{E}\), \(M(n,\ell)=\omega^{(n,\ell)}\) and, for every \((n,\ell)\in\{1,\ldots,N\}^{2}\setminus\mathbb{E}\), \(M(n,\ell)=0\). An undirected (resp. directed) graph is thus associated with a symmetric (resp. non-symmetric) matrix \(\mathbf{M}\). A reflexive (resp. nonreflexive) graph has non-zero (resp. zero) entries on the diagonal of \(\mathbf{M}\). The number of edges is simply obtained as the number of non-zero entries in \(\mathbf{M}\). We finally define the so-called _binary support_ of the (possibly sparse) matrix \(\mathbf{M}\). For every \(\mathbf{M}\in\mathbb{R}^{N\times N}\), \(\operatorname{supp}(\mathbf{M})=\mathbf{S}\in\{0,1\}^{N\times N}\), with, for every \((n,\ell)\in\{1,\ldots,N\}^{2}\), \(S(n,\ell)=0\) if and only if \(M(n,\ell)=0\) (i.e., \((n,\ell)\notin\mathbb{E}\)).
### Dynamical modeling
We consider the LG-SSM described, for \(k=1,\ldots,K\), as:
\[\mathbf{x}_{k} =\mathbf{A}\mathbf{x}_{k-1}+\mathbf{q}_{k}, \tag{2}\] \[\mathbf{y}_{k} =\mathbf{H}_{k}\mathbf{x}_{k}+\mathbf{r}_{k}, \tag{3}\]
where,
* \(\{\mathbf{x}_{k}\}_{k=1}^{K}\in\mathbb{R}^{N_{x}}\) and \(\{\mathbf{y}_{k}\}_{k=1}^{K}\in\mathbb{R}^{N_{y}}\), are the hidden state and the observations at each time \(k\) respectively,
* \(\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}\) is the transition matrix that we aim at estimating,
* \(\{\mathbf{H}_{k}\}_{k=1}^{K}\in\mathbb{R}^{N_{y}\times N_{x}}\) are the observation model matrices, possibly varying with \(k\), which are assumed to be known,
* \(\{\mathbf{q}_{k}\}_{k=1}^{K}\sim\mathcal{N}(0,\mathbf{Q})\) is the i.i.d. state noise process, assumed to follow a zero-mean Gaussian model with covariance matrix \(\mathbf{Q}\in\mathcal{S}_{N_{x}}^{++}\) that we also aim at estimating,
* \(\{\mathbf{r}_{k}\}_{k=1}^{K}\sim\mathcal{N}(0,\mathbf{R}_{k})\) is the i.i.d. observation noise process, again zero-mean Gaussian with known covariance matrices \(\mathbf{R}_{k}\in\mathcal{S}_{N_{y}}^{++}\).
Throughout the paper, we denote \(\mathbf{P}=\mathbf{Q}^{-1}\) the precision matrix of the state noise. We furthermore assume an initial state distributed such that \(\mathbf{x}_{0}\sim\mathcal{N}(\mathbf{x}_{0};\boldsymbol{\mu}_{0},\boldsymbol {\Sigma}_{0})\) with known \(\boldsymbol{\mu}_{0}\in\mathbb{R}^{N_{x}}\) and \(\boldsymbol{\Sigma}_{0}\in\mathcal{S}_{N_{x}}^{++}\). The state noises and the observation noises are mutually independent and also independent of the initial state \(\mathbf{x}_{0}\).
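To fix ideas, the generative model (2)-(3) can be simulated in a few lines. The sketch below is purely illustrative and not part of the DGLASSO toolbox; it assumes constant \(\mathbf{H}\) and \(\mathbf{R}\) for brevity, and the function name is ours.

```python
import numpy as np

def simulate_lgssm(A, Q, H, R, mu0, Sigma0, K, seed=None):
    """Draw one trajectory (x_0:K, y_1:K) from the LG-SSM (2)-(3),
    assuming a constant observation matrix H and noise covariance R."""
    rng = np.random.default_rng(seed)
    Nx, Ny = A.shape[0], H.shape[0]
    x = np.empty((K + 1, Nx))
    y = np.empty((K, Ny))
    x[0] = rng.multivariate_normal(mu0, Sigma0)  # initial state x_0
    for k in range(1, K + 1):
        x[k] = A @ x[k - 1] + rng.multivariate_normal(np.zeros(Nx), Q)   # Eq. (2)
        y[k - 1] = H @ x[k] + rng.multivariate_normal(np.zeros(Ny), R)   # Eq. (3)
    return x, y
```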
In the next subsection, we provide some reminders about filtering/smoothing procedures for time series described by an LG-SSM.
### Filtering and smoothing algorithms in linear dynamical systems
Both filtering and smoothing algorithms consist in computing a posterior probability density function (pdf) of the hidden state \(\{\mathbf{x}_{k}\}_{k=1}^{K}\). For every \(k\in\{1,\ldots,K\}\), the filtering distribution is \(p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\), where we denote as \(\mathbf{y}_{1:k}=\{\mathbf{y}_{j}\}_{j=1}^{k}\) the set of observations available up to the time step \(k\), i.e., no future observations can be used to estimate \(\mathbf{x}_{k}\). The filtering problem is suitable for online processing of the observations. The smoothing distribution is \(p(\mathbf{x}_{k}|\mathbf{y}_{1:K})\), where \(K\) is the final time step for which an observation is available; hence, for \(k\in\{1,\ldots,K-1\}\), future observations are also exploited (note that it is also possible to condition on a subset of future observations, e.g., \(p(\mathbf{x}_{k}|\mathbf{y}_{1:k+\tau})\) with \(\tau\in\mathbb{N}\)).
The filtering and smoothing distributions are in general intractable for most SSMs of interest. The LG-SSM described in (2)-(3) is one of the few exceptions admitting closed-form solutions: when the model parameters \(\mathbf{A}\), \(\mathbf{Q}\), \(\{\mathbf{H}_{k}\}_{k=1}^{K}\), and \(\{\mathbf{R}_{k}\}_{k=1}^{K}\) are known, it is possible to obtain the filtering and smoothing distributions, for \(k=1,\ldots,K\), in an efficient sequential manner. In particular, the Kalman filter (KF) (Kalman, 1960) obtains recursively (in a forward manner) the sequence of filtering distributions. Its smoothing counterpart, the Rauch-Tung-Striebel (RTS) smoother (Briers et al., 2010), runs backwards to obtain the sequence of smoothing distributions. We note that both algorithms require the model parameters to be known, which in the case of the LG-SSM presented in the previous section are \(\mathbf{A}\), \(\mathbf{Q}=\mathbf{P}^{-1}\), \(\{\mathbf{H}_{k}\}_{k=1}^{K}\), and \(\{\mathbf{R}_{k}\}_{k=1}^{K}\). Algorithm 1 describes the KF, which at each time step \(k\in\{1,\ldots,K\}\) performs (a) the prediction/propagation step, where the mean \(\boldsymbol{\mu}_{k|k-1}\) and covariance \(\boldsymbol{\Sigma}_{k|k-1}\) of the (state) predictive distribution are obtained; and (b) the update step, where the mean \(\boldsymbol{\mu}_{k}\) and covariance \(\boldsymbol{\Sigma}_{k}\) of the filtering distribution are obtained. Algorithm 2 describes the RTS smoother. In this case, the iteration starts at \(k=K\) and runs backwards. It can be interpreted as a refinement of the means and covariance matrices of the filtering distributions, provided by the KF, updating them with information present in future observations. Note, however, that the observations are not re-used in the RTS algorithm; all the required information from the observations is absorbed by the filtering distributions, which are used to produce the smoothing distributions.
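For concreteness, here is a minimal NumPy sketch of both recursions. It assumes constant \(\mathbf{H}\) and \(\mathbf{R}\) for brevity (the paper allows them to vary with \(k\)), the function names are ours, and the forward pass also collects the innovation terms \((\mathbf{z}_{k},\mathbf{S}_{k})\) that are reused later for likelihood evaluation.

```python
import numpy as np

def kalman_filter(y, A, Q, H, R, mu0, Sigma0):
    """Forward pass (Algorithm 1): filtering moments, predictive moments,
    and innovations (z_k, S_k), for constant H and R."""
    K, Nx = len(y), A.shape[0]
    mu = np.empty((K, Nx)); Sigma = np.empty((K, Nx, Nx))
    mu_p = np.empty((K, Nx)); Sigma_p = np.empty((K, Nx, Nx))
    z, S = [], []
    m, P = np.asarray(mu0, float), np.asarray(Sigma0, float)
    for k in range(K):
        mp = A @ m                           # (a) prediction step
        Pp = A @ P @ A.T + Q
        zk = y[k] - H @ mp                   # innovation
        Sk = H @ Pp @ H.T + R                # innovation covariance
        G = Pp @ H.T @ np.linalg.inv(Sk)     # Kalman gain
        m = mp + G @ zk                      # (b) update step
        P = Pp - G @ Sk @ G.T
        mu[k], Sigma[k], mu_p[k], Sigma_p[k] = m, P, mp, Pp
        z.append(zk); S.append(Sk)
    return mu, Sigma, mu_p, Sigma_p, z, S

def rts_smoother(A, mu, Sigma, mu_p, Sigma_p):
    """Backward pass (Algorithm 2): smoothing moments and smoother gains."""
    K, Nx = mu.shape
    mu_s, Sigma_s = mu.copy(), Sigma.copy()
    G = np.empty((K - 1, Nx, Nx))
    for k in range(K - 2, -1, -1):
        G[k] = Sigma[k] @ A.T @ np.linalg.inv(Sigma_p[k + 1])  # smoother gain
        mu_s[k] = mu[k] + G[k] @ (mu_s[k + 1] - mu_p[k + 1])
        Sigma_s[k] = Sigma[k] + G[k] @ (Sigma_s[k + 1] - Sigma_p[k + 1]) @ G[k].T
    return mu_s, Sigma_s, G
```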
### Problem statement
Algorithms 1 and 2 are simple and efficient. However, they require the knowledge of the model parameters. In this paper, we assume \(\{\mathbf{H}_{k}\}_{k=1}^{K}\) and \(\{\mathbf{R}_{k}\}_{k=1}^{K}\) to be known, and we address the problem of obtaining the filtering and smoothing distributions when the matrices \(\mathbf{A}\) and \(\mathbf{P}\) are unknown and must be estimated jointly with the filtering/smoothing step. To do so, we introduce a double graphical modeling of the state equation, where the matrices \(\mathbf{A}\) and \(\mathbf{P}\) represent the weights of two graphs with specific and complementary statistical interpretations. We then propose an efficient and convergent inference approach to estimate both graphs under sparse priors given the observed sequence \(\{\mathbf{y}_{k}\}_{k=1}^{K}\), while also obtaining the filtering and smoothing distributions at every time step.
## 3 Proposed model and inference method
We now introduce our graphical modeling perspective of the LG-SSM. This novel view gives a graph-based interpretation of the transition matrix \(\mathbf{A}\) and precision matrix \(\mathbf{P}\). Then, we present the optimization methodology in order to estimate such matrices by exploiting the graphical perspective.
### Sparse dynamical graphical model
Our novel approach interprets the transition matrix \(\mathbf{A}\) and the precision matrix \(\mathbf{P}\) as a (weighted) directed and undirected graph, respectively. We now particularize this perspective for each matrix, elaborating on the interpretability of this novel view, the complementarity of both graphs, and their benefits during the inference process.
#### 3.1.1 State transition matrix
Matrix \(\mathbf{A}\) governs the hidden process in (2) and can be seen as the matrix parameter of an order-one vector autoregressive (VAR) unobserved process. For every \(n\in\{1,\ldots,N_{x}\}\) and \(\ell\in\{1,\ldots,N_{x}\}\), the entry \(A(n,\ell)\) encodes how the \(n\)-th time series \(\{\mathbf{x}_{k}(n)\}_{k=1}^{K}\) is affected by the \(\ell\)-th time series \(\{\mathbf{x}_{k}(\ell)\}_{k=1}^{K}\) in consecutive time steps. More precisely, we can express the update of the \(n\)-th dimension of the latent state in the generative model as
\[\mathbf{x}_{k}(n)=\sum_{\ell=1}^{N_{x}}A(n,\ell)\mathbf{x}_{k-1}(\ell)+ \mathbf{q}_{k}(n). \tag{17}\]
Thus, if \(A(n,\ell)=0\), the \(\ell\)-th time series of the latent state does not provide any information to predict the \(n\)-th time series one time step ahead, conditioned on observing the information in all time series for which \(A(n,m)\neq 0\), \(m\in\{1,\ldots,N_{x}\}\setminus\ell\). We express this conditional independence by denoting \(\mathbf{x}(n)\perp\!\!\!\perp\mathbf{x}(\ell)|\mathcal{X}_{n}\). Here, for every \(n\in\{1,\ldots,N_{x}\}\), we denote the set \(\mathcal{X}_{n}=\{\mathbf{x}(m)|A(n,m)\neq 0,m\in\{1,\ldots,N_{x}\}\}\), i.e., the prediction of \(\mathbf{x}(n)\) from the past of the time series in \(\mathcal{X}_{n}\) cannot be improved by observing the other time series. Note that, in general, one can have \(A(n,\ell)\neq A(\ell,n)\) and \(A(n,n)\neq 0\). Our interpretation is clearly connected to Granger causality (Granger, 1969), and more particularly to conditional Granger causality (Luengo et al., 2019). In particular, if \(A(\ell,n)\neq 0\) for some \((n,\ell)\in\{1,\ldots,N_{x}\}^{2}\), then \(\mathbf{x}_{k-1}(n)\) causes (in the conditional Granger sense) \(\mathbf{x}_{k}(\ell)\), for every \(k\in\{1,\ldots,K\}\). The conditional Granger-causal directional relationships within the entries of the multivariate latent state time series \(\mathbf{x}\) can hence be represented as a graphical model made of a reflexive directed graph with \(N_{x}\) vertices, whose weights are those of the transposed matrix \(\mathbf{A}^{\top}\). The perspective of matrix \(\mathbf{A}\) interpreted as a (weighted) directed graph bears some links with the work on graphical Granger causality by Shojaie and Michailidis (2010), although we here model the interactions in the latent space instead of directly on the observations. In our case the graphical modeling is simpler, since the work by Shojaie and Michailidis (2010) considers all possible interactions in the observation space across time (i.e., the interpreted graph size is \(K\cdot N_{y}\)). The price to pay for our modeling is the difficulty in inferring \(\mathbf{A}\), since it governs the latent process and hence is never observed. We propose an advanced methodology to address this inferential task in Section 3.
**Illustrative example.** In Figure 1, we display an illustrative example of the graphical model associated to the following state equations for \(N_{x}=5\), for every \(k\in\{1,\ldots,K\}\), with \(\mathbf{q}_{k}\in\mathbb{R}^{N_{x}}\) the latent state noise:
\[\begin{cases}\mathbf{x}_{k}(1)=&0.9\,\mathbf{x}_{k-1}(1)+0.7\mathbf{x}_{k-1}(2 )+\mathbf{q}_{k}(1),\\ \mathbf{x}_{k}(2)=&-0.3\,\mathbf{x}_{k-1}(3)+\mathbf{q}_{k}(2),\\ \mathbf{x}_{k}(3)=&0.8\,\mathbf{x}_{k-1}(5)+\mathbf{q}_{k}(3),\\ \mathbf{x}_{k}(4)=&-0.1\,\mathbf{x}_{k-1}(2)+\mathbf{q}_{k}(4),\\ \mathbf{x}_{k}(5)=&0.5\,\mathbf{x}_{k-1}(3)+\mathbf{q}_{k}(5).\end{cases} \tag{18}\]
We display the matrix \(\mathbf{A}\), the associated binary matrix \(\text{supp}(\mathbf{A})\) and the resulting reflexive directed graph under this interpretation.
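As a quick check of this interpretation, the directed edges of Figure 1 can be read off \(\operatorname{supp}(\mathbf{A})\): \(A(n,\ell)\neq 0\) yields an edge from \(x(\ell)\) to \(x(n)\) with weight \(A(n,\ell)\). A small sketch for the matrix of (18):

```python
import numpy as np

# Transition matrix A encoding the state equations (18)
A = np.array([[0.9,  0.7,  0.0, 0.0, 0.0],
              [0.0,  0.0, -0.3, 0.0, 0.0],
              [0.0,  0.0,  0.0, 0.0, 0.8],
              [0.0, -0.1,  0.0, 0.0, 0.0],
              [0.0,  0.0,  0.5, 0.0, 0.0]])

support = (A != 0).astype(int)   # binary support supp(A)
# A(n, l) != 0 means x(l) Granger-causes x(n): directed edge l -> n
for n, l in zip(*np.nonzero(A)):
    print(f"x({l + 1}) -> x({n + 1})  (weight {A[n, l]:+.1f})")
```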
#### 3.1.2 State noise precision matrix
Matrix \(\mathbf{Q}\) denotes the noise covariance in the state Eq. (2). Since the noise is assumed to be Gaussian, this matrix, and more precisely, the associated precision matrix \(\mathbf{P}=\mathbf{Q}^{-1}\), also has a direct interpretation in terms of graphical modeling, using the notion of Gaussian graphical model (GGM) (Buhlmann and Van De Geer, 2011, Section 13.4)(Uhler, 2017). Since we consider \(\mathbf{Q}\) constant during the whole time series, let us denote the multivariate state noise r.v. at any time step as \(\mathbf{q}\sim\mathcal{N}(0,\mathbf{Q})\). The GGM consists in a graphical modeling of the independence (or not) between the scalar random variables \(\mathbf{q}(1),\ldots,\mathbf{q}(N_{x})\). It is easy to prove that
\[\mathbf{q}(n)\perp\!\!\!\perp\mathbf{q}(\ell)|\{\mathbf{q}(j),j\in\{1,\ldots,N_{x}\}\backslash\{n,\ell\}\}\Longleftrightarrow P(n,\ell)=P(\ell,n)=0, \tag{19}\]
i.e., the dimensions \(n\) and \(\ell\) of \(\mathbf{q}\) are independent given all other dimensions if and only if the entry \(P(n,\ell)\) is zero (and obviously also \(P(\ell,n)\), since the precision matrix is symmetric). Note that it is possible to condition in the l.h.s. of (19) only on the dimensions \(\mathbf{q}(j)\) for which \(\mathbf{P}(n,j)\neq 0\), and the equivalence would still hold. The GGM relies on an undirected graph associated with a symmetric weight matrix equal to the precision matrix \(\mathbf{P}=\mathbf{Q}^{-1}\); namely, \((n,\ell)\notin\mathbb{E}\) if and only if \(P(n,\ell)=P(\ell,n)=0\).
This GGM construction is at the core of the famous GLASSO (Graphical Lasso) formulation (Friedman et al., 2008)(Maathuis et al., 2019, Section 9.7), whose goal is to build the maximum a posteriori estimator of \(\mathbf{P}\) given realizations of the random vector \(\mathbf{q}\) under a sparsity assumption on matrix \(\mathbf{P}\). The sparsity is here interpreted as a way to eliminate spurious edges in the graph associated to \(\mathbf{P}\).
**Illustrative example.** In Figure 2, we display an illustrative example of the GGM associated with a given precision matrix \(\mathbf{P}\) for \(N_{x}=5\). We show the associated binary support matrix \(\operatorname{supp}(\mathbf{P})\) and the resulting undirected graph under this interpretation. Although self-loops (i.e., non-zero diagonal elements) occur, we removed them from the graphical representation for ease of readability.
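To illustrate the GGM/GLASSO principle numerically, suppose for a moment that realizations of \(\mathbf{q}\) were directly observed (in our model they are latent, so this is only illustrative). Scikit-learn's `GraphicalLasso` can then recover a sparse estimate of \(\mathbf{P}\); the matrix, sample size, and penalty value below are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Hypothetical sparse precision matrix P (diagonally dominant, hence PD)
P = np.array([[2.0, 0.6, 0.0, 0.0],
              [0.6, 2.0, 0.5, 0.0],
              [0.0, 0.5, 2.0, 0.0],
              [0.0, 0.0, 0.0, 2.0]])
Q = np.linalg.inv(P)                                     # noise covariance
q = rng.multivariate_normal(np.zeros(4), Q, size=5000)   # i.i.d. draws of q

model = GraphicalLasso(alpha=0.05).fit(q)   # l1-penalized MLE of the precision
print(np.round(model.precision_, 2))        # sparse estimate of P
```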
#### 3.1.3 Proposed unifying view
Figure 1: Graphical model associated to (18). Matrix \(\mathbf{A}\) (a), its binary support (b) and the associated reflexive directed graph (c). The edges are defined as the non-zero entries of \(\mathbf{A}^{\top}\); their thickness is proportional to the absolute entries of \(\mathbf{A}^{\top}\).

We now summarize the graphical perspective on both \(\mathbf{A}\) and \(\mathbf{Q}\) and describe a unifying approach, where sparsity plays a key role. Matrix \(\mathbf{A}\) is interpreted as the weight matrix of a directed graph with \(N_{x}\) vertices. Sparsity (i.e., the absence of an edge in the graph) in \(\mathbf{A}\) is interpreted as pair-wise partial/conditional independence, given a subset of the remaining time series, for a one-step-ahead prediction of the hidden state. Matrix \(\mathbf{P}=\mathbf{Q}^{-1}\) is interpreted as the weight matrix of an undirected graph, related to a GGM describing the noise in the latent space. Sparsity in \(\mathbf{P}\) is interpreted as pair-wise partial/conditional independence of two dimensions of the additive state noise, given a subset of the remaining dimensions. Both graphs are reflexive, with \(N_{x}\) nodes and a maximum of \(N_{x}^{2}\) edges for \(\mathbf{A}\) (\(N_{x}(N_{x}-1)\) for \(\mathbf{P}\)) associated with \(N_{x}^{2}\) weights (i.e., the \(N_{x}^{2}\) entries of \(\mathbf{A}\) or \(\mathbf{P}\)).
Our perspective on the state process of the LG-SSM in (2) is that \(\mathbf{A}\) encodes the way information flows between the nodes (state dimensions) of the network (state vector) in consecutive time steps. Thus, its properties shape how the energy/information is transferred and dissipated (under the noise). In contrast, \(\mathbf{P}=\mathbf{Q}^{-1}\) encodes how information that is not in the system at time \(k-1\) enters the system at time \(k\). In that respect, the interpreted graph with weight matrix \(\mathbf{P}\) encodes the dependency of the new information across the nodes of the network.
We adopt the above perspective to estimate both \(\mathbf{A}\) and \(\mathbf{Q}\) by promoting properties in both graphs. Specifically, we introduce sparsity priors on the matrices, as the sparsity property is key to reaching interpretability and compactness of the whole model. In particular, it allows one to understand the inner structure of the latent space. Moreover, it can be helpful to speed up computations as the sparsity level increases, e.g., when running the Kalman filter and RTS smoother. Our proposed method DGLASSO (Dynamic Graphical Lasso) hence aims at providing the maximum a posteriori (MAP) estimator of \(\mathbf{A}\) and \(\mathbf{P}\) (i.e., the weight matrices related to the graphical modeling of the latent state correlation and causality) under Lasso sparsity regularization on both matrices, given the observed sequence \(\mathbf{y}_{1:K}\). A visual representation of the DGLASSO graphical model is given in Figure 3. The figure summarizes the relationships among the state entries of an LG-SSM using matrices \((\mathbf{A},\mathbf{P})\) from Figures 1 and 2.

Figure 3: Summary representation of the DGLASSO graphical model, for the example graphs presented in Figs. 1 and 2. Blue (oriented) edges represent Granger causality between state entries at consecutive time steps, encoded in matrix \(\mathbf{A}\) (Fig. 1). Magenta edges represent static (i.e., instantaneous) relationships between the state entries, at every time step, due to correlated state noise described by matrix \(\mathbf{P}\) (Fig. 2).
Figure 2: Matrix \(\mathbf{P}\) (a), its binary support (b), and the associated undirected graph (c) with edge thickness proportional to the absolute entries of \(\mathbf{P}\).

Related works: Our approach DGLASSO generalizes important existing sparse graphical inference frameworks. For instance, our model with \(\mathbf{A}=0\) (degenerate case) has no memory, and all the energy/information of the system is lost at each time step; the state dimensions then only incorporate exogenous energy/information through the additive noises. This degenerate case is the same model as GLASSO (Friedman et al., 2008) when \(\mathbf{R}_{k}\equiv 0\), and the same as the robust GLASSO model (Benfenati et al., 2020, Sec. 5.2) when \(\mathbf{R}_{k}\equiv\sigma_{R}^{2}\mathbf{Id}\). In contrast, if the state noise covariance matrix \(\mathbf{Q}\) is known, DGLASSO coincides with our recent GraphEM framework (Elvira and Chouzenoux, 2022). Probably the closest related work is (Ioannidis et al., 2019), which also introduces a joint graph modeling within an LG-SSM, capturing order-one causal relationships and instantaneous influence (i.e., order zero) through two sparse graphs. Their proposed inference method is an alternating optimization technique that infers the two graphs under a Lasso prior, jointly with the estimation of the hidden state. In contrast with DGLASSO, in (Ioannidis et al., 2019), (i) the state model follows a structural vector autoregressive model (SVAR) where instantaneous causality and noise are distinguished, while DGLASSO assumes an order-one VAR in the hidden state; (ii) the cost function does not result from a Bayesian modeling, and as such it is not related to a maximum a posteriori loss for the graph variables; and (iii) the state estimation is pointwise, defined as the solution of a handcrafted optimization problem, while DGLASSO preserves a full Bayesian interpretation and hence allows the complete characterization of the filtering/smoothing state distributions. In particular, the model in (Ioannidis et al., 2019) does not recover GLASSO as a particular case.
### Optimization problem
The considered MAP inference problem reads as an optimization problem that we formulate hereafter. More specifically, let us denote the posterior of the unknown parameters as \(p(\mathbf{A},\mathbf{P}|\mathbf{y}_{1:K})\), where the hidden states have been marginalized out. It is straightforward to show, using the Bayes rule and composition with the (strictly increasing) logarithmic function, that the maximum of \(p(\mathbf{A},\mathbf{P}|\mathbf{y}_{1:K})\propto p(\mathbf{A},\mathbf{P})p(\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})\), with \(p(\mathbf{A},\mathbf{P})\) some prior on the parameters \(\mathbf{A}\) and \(\mathbf{P}\), coincides with the minimum of the following loss function:
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{L}(\mathbf{A},\mathbf{P})\triangleq\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})+\mathcal{L}_{0}(\mathbf{A},\mathbf{P}), \tag{20}\]

with \(\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})\triangleq-\log p(\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})\) and \(\mathcal{L}_{0}(\mathbf{A},\mathbf{P})=-\log p(\mathbf{A},\mathbf{P})\). According to (Sarkka, 2013, Chap. 12),
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})=\sum_{k=1}^ {K}\tfrac{1}{2}\log\det(2\pi\mathbf{S}_{k})+\frac{1}{2}\mathbf{z}_{k}^{\top} \mathbf{S}_{k}^{-1}\mathbf{z}_{k}, \tag{21}\]
where \(\mathbf{z}_{k}=\mathbf{y}_{k}-\mathbf{H}_{k}\mathbf{A}\boldsymbol{\mu}_{k-1}\) and \(\mathbf{S}_{k}\) is the covariance matrix of the predictive distribution \(p(\mathbf{y}_{k}|\mathbf{y}_{1:k-1},\mathbf{A},\mathbf{P})=\mathcal{N}\left(\mathbf{y}_{k};\mathbf{H}_{k}\mathbf{A}\boldsymbol{\mu}_{k-1},\mathbf{S}_{k}\right)\), both being obtained by the KF in Alg. 1 run for a given \((\mathbf{A},\mathbf{P})\) (see (Sarkka, 2013, Section 4.3)).
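In practice, (21) can thus be evaluated by accumulating the innovation terms produced by one KF pass (e.g., as collected by the sketch in Section 2.3). A minimal illustration, with a function name of our choosing:

```python
import numpy as np

def neg_log_likelihood(z, S):
    """Evaluate L_{1:K}(A, P) as in (21) from the innovations z_k and their
    covariances S_k returned by a KF pass run with the candidate (A, P)."""
    loss = 0.0
    for zk, Sk in zip(z, S):
        _, logdet = np.linalg.slogdet(2 * np.pi * Sk)       # log det(2*pi*S_k)
        loss += 0.5 * logdet + 0.5 * zk @ np.linalg.solve(Sk, zk)
    return loss
```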
As already mentioned, the introduction of priors that induce sparsity is advantageous for several reasons. First, it generally reduces over-fitting, particularly when \(K\) is low compared to the number of parameters to be estimated. Second, it enhances interpretability: the zero elements of \(\mathbf{A}\) and \(\mathbf{P}\) have a clear interpretation in terms of conditional independence between the time series. Ideally, we would use the \(\ell_{0}\) norm of \(\mathbf{A}\) and \(\mathbf{P}\), i.e., penalize the number of non-zero entries of each matrix. However, this norm is known to have undesirable properties, such as being non-convex, non-continuous, and associated with an improper law \(p(\mathbf{A})\). We thus propose instead the regularization term
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{L}_{0}(\mathbf{A},\mathbf{P})=\lambda_{A} \|\mathbf{A}\|_{1}+\lambda_{P}\|\mathbf{P}\|_{1}. \tag{22}\]
The \(\ell_{1}\) norm in (22), defined as the sum of the absolute values of the matrix entries, is a proper convex function that leads to the so-called Lasso regularization (Bach et al., 2012). Note that this penalty, used in numerous works in signal processing and machine learning (Tibshirani, 1996; Chaux et al., 2007), including graph signal processing (Friedman et al., 2008; Benfenati et al., 2020), is associated with a joint Laplace prior distribution on \(\mathbf{A}\) and \(\mathbf{P}\). Such a joint distribution factorizes (i.e., the prior assumes independence of both parameters), the means are zero, and the scale parameters are proportional to, respectively, \(\lambda_{A}\) and \(\lambda_{P}\). The larger the regularization parameter \(\lambda_{A}\) (or \(\lambda_{P}\)), the higher the sparsity of \(\mathbf{A}\) (or \(\mathbf{P}\)), with the degenerate case of a null \(\mathbf{A}\) (and \(\mathbf{P}\)) as the regularization parameter grows.
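A building block used repeatedly when handling (22) is the proximity operator of the \(\ell_{1}\) norm (the proximity operator is formally defined in (25) below), which reduces to entrywise soft-thresholding. A minimal sketch:

```python
import numpy as np

def prox_l1(V, tau):
    """Proximity operator of tau * ||.||_1: entrywise soft-thresholding."""
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)
```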
### General minimization procedure
The expressions (21)-(22) provide an effective way to evaluate (20). However, due to the recursive form of (21), it is challenging to derive quantities such as the gradient of \(\mathcal{L}_{1:K}\) in a direct manner. Moreover, despite its simple expression, the regularization term (22) is non-differentiable. For both reasons, the minimization of (20) is a challenging task.
We propose a block alternating majorization-minimization (MM) technique to jointly infer the probabilistic filtering/smoothing distributions of the SSM, along with MAP estimates of \((\mathbf{A},\mathbf{P})\). Our method presents the advantage of sound convergence guarantees and the ability to incorporate sparsity priors on both \(\mathbf{A}\) and \(\mathbf{P}\). The general idea of MM is to replace a complicated optimization problem by a sequence of more tractable ones (Sun et al., 2016; Hunter and Lange, 2004). Surrogate approximations of the cost function are built iteratively, following a majorization principle. For any estimates \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\) (i.e., in the interior domain of definition of \(\mathcal{L}\)) of \((\mathbf{A},\mathbf{P})\), a majorizing approximation is constructed for the likelihood term. It is required to satisfy both
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{ A}},\widetilde{\mathbf{P}})\geq\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P}), \tag{23}\]
and also the so-called tangency condition
\[\mathcal{Q}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=\mathcal{L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}}). \tag{24}\]
The MM algorithm then alternates between a majorization step, building a function \(\mathcal{Q}+\mathcal{L}_{0}\) satisfying conditions (23) and (24), and a minimization step, minimizing this majorizing approximation. In our proposed approach, we adopt a block alternating implementation of the MM method, where only one variable (i.e., \(\mathbf{A}\) or \(\mathbf{P}\)) is updated at each iteration, following a cyclic rule (Jacobson and Fessler, 2007; Hong et al., 2016). This alternating strategy considerably simplifies the construction of the majorizing function as well as its minimization. We furthermore add so-called proximal terms to both updates. Let us recall that, for a function \(\phi\in\Gamma_{0}(\mathcal{H})\), with \(\mathcal{H}\) a Hilbert space endowed with the norm \(\|\cdot\|\), the proximity operator2 of \(\phi\) at \(\widetilde{\mathbf{V}}\in\mathcal{H}\) is defined as (Combettes and Pesquet, 2011)
Footnote 2: See also [http://proximity-operator.net/](http://proximity-operator.net/)
\[\mathrm{prox}_{\phi}(\widetilde{\mathbf{V}})=\underset{\mathbf{V}\in \mathcal{H}}{\mathrm{argmin}}\ \left(\phi(\mathbf{V})+\frac{1}{2}\|\mathbf{V}-\widetilde{\mathbf{V}}\|^{2} \right). \tag{25}\]
In our context, the considered Hilbert space is either \(\mathbb{R}^{N_{x}\times N_{x}}\) for the update of the transition matrix or \(\mathcal{S}_{N_{x}}\) for the update of the precision matrix, and the endowed norm is in both cases the Frobenius norm \(\|\cdot\|_{F}\). Introducing proximity terms thus amounts to adding to each majorizing function a quadratic distance to the previous iterate, weighted by a positive factor. This modification preserves the MM interpretation of the method while ensuring improved stability and convergence guarantees. As we show below, the iterates belong to the interior of the domain of the loss function by construction; namely, for every \(i\in\mathbb{N}\), \((\mathbf{A}^{(i)},\mathbf{P}^{(i)})\in\mathbb{R}^{N_{x}\times N_{x}}\times\mathcal{S}^{++}_{N_{x}}\), so the precision matrix remains invertible along the iterations and the algorithm is well defined. The resulting DGLASSO approach is summarized in Algorithm 3. DGLASSO ultimately aims at providing the MAP estimates for the matrix parameters \((\mathbf{A},\mathbf{P})\) of the considered LG-SSM, through the minimization of (20). The MAP estimate of the state noise covariance is straightforwardly obtained by inverting the state noise precision matrix provided as an output of DGLASSO. The state filtering/smoothing pdfs associated with these estimates are obtained by running KF/RTS loops with \((\mathbf{A},\mathbf{P})\) set to the DGLASSO outputs. The next sections are dedicated to (i) the construction of the majorizing function, (ii) the resolution of each inner step, and (iii) the convergence analysis.
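To make the structure of Algorithm 3 concrete, here is a schematic sketch of its outer loop. The callables `solve_A_step` and `solve_P_step` stand in for the inner solvers of (26) and (27) (Algorithms 4 and 5 in the paper) and are hypothetical placeholders, as are the initialization and the dictionary packing of the known quantities.

```python
import numpy as np

def dglasso_outer_loop(y, model, lam_A, lam_P, theta_A, theta_P,
                       solve_A_step, solve_P_step, eps=1e-4, max_iter=200):
    """Block alternating MM outer loop of Algorithm 3 (schematic sketch).
    `model` gathers the known quantities (H_k, R_k, mu_0, Sigma_0)."""
    Nx = model["Sigma0"].shape[0]
    A, P = np.zeros((Nx, Nx)), np.eye(Nx)   # arbitrary feasible initialization
    for _ in range(max_iter):
        # (1) KF/RTS pass at (A, P) to build the majorant, then update A via (26).
        A_new = solve_A_step(y, model, A, P, lam_A, theta_A)
        # (2) KF/RTS pass at (A_new, P) to rebuild the majorant, then update P via (27).
        P_new = solve_P_step(y, model, A_new, P, lam_P, theta_P)
        # Stopping rule of Algorithm 3 (with a small guard for a zero iterate).
        if (np.linalg.norm(A_new - A) <= eps * max(np.linalg.norm(A), 1e-12)
                and np.linalg.norm(P_new - P) <= eps * np.linalg.norm(P)):
            return A_new, P_new
        A, P = A_new, P_new
    return A, P
```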
### Building the majorizing function
In this section, we derive a theorem regarding the expression of the loss function \(\mathcal{L}_{1:K}\) and a valid majorant function for it.
**Theorem 1**: _The loss function can be expressed as_
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})=\frac{1}{2}\sum_{k=1}^{K}\left((\mathbf{x}_{k}-\mathbf{A}\mathbf{x}_{k-1})^{\top}\mathbf{P}\left(\mathbf{x}_{k}-\mathbf{A}\mathbf{x}_{k-1}\right)\right)\\ -\frac{K}{2}\log\det(2\pi\mathbf{P})+\log p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\mathbf{A},\mathbf{P})-\log p(\mathbf{x}_{0})-\sum_{k=1}^{K}\log p(\mathbf{y}_{k}|\mathbf{x}_{k}). \tag{28}\]
**Algorithm 3**: _DGLASSO algorithm_

_Inputs._ Prior parameters \(\boldsymbol{\mu}_{0}\) and \(\boldsymbol{\Sigma}_{0}\); model parameters \(\{\mathbf{H}_{k}\}_{k=1}^{K}\) and \(\{\mathbf{R}_{k}\}_{k=1}^{K}\); set of observations \(\{\mathbf{y}_{k}\}_{k=1}^{K}\); hyper-parameters \((\lambda_{A},\lambda_{P})>0\); stepsizes \((\theta_{A},\theta_{P})>0\); precisions \((\varepsilon,\xi)>0\).

_Initialization._ Set \((\mathbf{A}^{(0)},\mathbf{P}^{(0)})\in\mathbb{R}^{N_{x}\times N_{x}}\times\mathcal{S}_{N_{x}}^{++}\).

_Recursive step._ For \(i=0,1,\ldots\):

(1) // Update transition matrix. Build the surrogate function satisfying (23) and (24) at \(\widetilde{\mathbf{A}}=\mathbf{A}^{(i)}\) and \(\widetilde{\mathbf{P}}=\mathbf{P}^{(i)}\). Run Algorithm 4 with precision \(\xi\) to solve

\[\mathbf{A}^{(i+1)}=\operatorname*{prox}_{\theta_{A}\left(\mathcal{Q}(\cdot,\mathbf{P}^{(i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})+\lambda_{A}\|\cdot\|_{1}\right)}\left(\mathbf{A}^{(i)}\right)=\operatorname*{argmin}_{\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}}\ \mathcal{Q}(\mathbf{A},\mathbf{P}^{(i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})+\lambda_{A}\|\mathbf{A}\|_{1}+\frac{1}{2\theta_{A}}\|\mathbf{A}-\mathbf{A}^{(i)}\|_{F}^{2}. \tag{26}\]

(2) // Update noise precision matrix. Build the surrogate function satisfying (23) and (24) at \(\widetilde{\mathbf{A}}=\mathbf{A}^{(i+1)}\) and \(\widetilde{\mathbf{P}}=\mathbf{P}^{(i)}\). Run Algorithm 5 with precision \(\xi\) to solve

\[\mathbf{P}^{(i+1)}=\operatorname*{prox}_{\theta_{P}\left(\mathcal{Q}(\mathbf{A}^{(i+1)},\cdot;\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\lambda_{P}\|\cdot\|_{1}\right)}\left(\mathbf{P}^{(i)}\right)=\operatorname*{argmin}_{\mathbf{P}\in\mathcal{S}_{N_{x}}}\ \mathcal{Q}(\mathbf{A}^{(i+1)},\mathbf{P};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\lambda_{P}\|\mathbf{P}\|_{1}+\frac{1}{2\theta_{P}}\|\mathbf{P}-\mathbf{P}^{(i)}\|_{F}^{2}. \tag{27}\]

If \(\|\mathbf{A}^{(i+1)}-\mathbf{A}^{(i)}\|_{F}\leq\varepsilon\|\mathbf{A}^{(i)}\|_{F}\) and \(\|\mathbf{P}^{(i+1)}-\mathbf{P}^{(i)}\|_{F}\leq\varepsilon\|\mathbf{P}^{(i)}\|_{F}\), stop the recursion and return \((\mathbf{A}^{(i+1)},\mathbf{P}^{(i+1)})\).

_Output._ MAP estimates of the transition and state noise precision matrices.
_Moreover, consider the outputs of Algs. 1 and 2 for a given \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\). Denote_
\[\begin{cases}\mathbf{\Psi}&=\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{\Sigma}_{k}^{s} +\mathbf{\mu}_{k}^{s}(\mathbf{\mu}_{k}^{s})^{\top}\right),\\ \mathbf{\Delta}&=\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{\Sigma}_{k}^{s}\mathbf{G}_{k-1}^ {\top}+\mathbf{\mu}_{k}^{s}(\mathbf{\mu}_{k-1}^{s})^{\top}\right),\\ \mathbf{\Phi}&=\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{\Sigma}_{k-1}^{s}+\mathbf{\mu}_{ k-1}^{s}(\mathbf{\mu}_{k-1}^{s})^{\top}\right),\end{cases} \tag{29}\]
_where, for every \(k\in\{1,\ldots,K\}\), \(\mathbf{G}_{k}=\mathbf{\Sigma}_{k}\widetilde{\mathbf{A}}^{\top}(\widetilde{ \mathbf{A}}\mathbf{\Sigma}_{k}\widetilde{\mathbf{A}}^{\top}+\widetilde{\mathbf{ P}}^{-1})^{-1}\) (see Alg. 2). Then, conditions (23) and (24) hold with_
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{Q}(\mathbf{A},\mathbf{P}; \widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=\frac{K}{2}\operatorname{tr} \Big{(}\mathbf{P}(\mathbf{\Psi}-\mathbf{\Delta}\mathbf{A}^{\top}-\mathbf{A} \mathbf{\Delta}^{\top}+\mathbf{A}\mathbf{\Phi}\mathbf{A}^{\top})\Big{)}\\ -\frac{K}{2}\log\det(2\pi\mathbf{P}). \tag{30}\]
_As a consequence, for every \((\theta_{A},\theta_{P})>0\),_
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{L}(\mathbf{A},\mathbf{P})\leq \mathcal{Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{ P}})+\mathcal{L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})-\mathcal{Q}( \widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})\\ +\lambda_{A}\|\mathbf{A}\|_{1}+\lambda_{P}\|\mathbf{P}\|_{1}+ \frac{1}{2\theta_{A}}\|\mathbf{A}-\widetilde{\mathbf{A}}\|_{F}^{2}+\frac{1}{2 \theta_{P}}\|\mathbf{P}-\widetilde{\mathbf{P}}\|_{F}^{2}, \tag{31}\]
_with equality holding for \(\mathbf{A}=\widetilde{\mathbf{A}}\) and \(\mathbf{P}=\widetilde{\mathbf{P}}\)._
**Proof** See Appendix A.
Theorem 1 allows one to build, for any tangent point \((\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\), a majorizing approximation (31) of \(\mathcal{L}\). The DGLASSO method leverages this property by designing a block alternating MM scheme. At each iteration \(i\in\mathbb{N}\), two steps are conducted, namely the update of (a) the transition matrix and (b) the noise precision matrix. Each step follows an MM structure; that is, it first builds a majorizing approximation of \(\mathcal{L}\) at the current estimates, using Theorem 1, and then minimizes it with respect to the active variable (\(\mathbf{A}\) in step (a), or \(\mathbf{P}\) in step (b)). Processing the variables in two distinct steps allows us to build upon the desirable convex structure of (31) with respect to one of the variables, the other being fixed. The good performance of MM approaches combined with block alternating steps has been illustrated in (Hong et al., 2016; Chouzenoux et al., 2016; Hien et al., 2020). In particular, convergence guarantees are at reach, as we will show in Section 4.
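For illustration, the statistics (29) that define the majorant (30) can be assembled directly from the RTS outputs. In the sketch below, `mu_s` and `Sigma_s` are assumed to be indexed \(k=0,\ldots,K\) (i.e., including the smoothed initial state) and `G[k]` holds the smoother gain \(\mathbf{G}_{k}\) for \(k=0,\ldots,K-1\); aligning these index conventions with a given KF/RTS implementation is left to the reader.

```python
import numpy as np

def mm_statistics(mu_s, Sigma_s, G):
    """Compute (Psi, Delta, Phi) of Eq. (29) from RTS smoother outputs."""
    K = len(mu_s) - 1
    Psi = sum(Sigma_s[k] + np.outer(mu_s[k], mu_s[k])
              for k in range(1, K + 1)) / K
    Delta = sum(Sigma_s[k] @ G[k - 1].T + np.outer(mu_s[k], mu_s[k - 1])
                for k in range(1, K + 1)) / K
    Phi = sum(Sigma_s[k - 1] + np.outer(mu_s[k - 1], mu_s[k - 1])
              for k in range(1, K + 1)) / K
    return Psi, Delta, Phi
```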
**Remark 2**: _From Theorem 1, the construction of the majorizing function of \(\mathcal{L}\) at each tangent point \((\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\) requires running the KF Algorithm 1 and the RTS Algorithm 2. This feature was already present in previous EM-based approaches for LG-SSM parameter estimation (see for instance (Sarkka, 2013, Chap. 12) and (Elvira and Chouzenoux, 2022)). Due to its block alternating form, the proposed DGLASSO algorithm requires building a majorizing approximation twice per iteration, which means that the data must be processed by KF/RTS twice per iteration. In contrast, gradient-based methods relying on sensitivity equations (Gupta and Mehra, 1974) only require the KF recursions for building their updates, but not the RTS recursions, and are thus more computationally efficient per iteration. However, such approaches cannot easily encompass non-smooth sparsity priors. Moreover, if no prior is assumed (i.e., \(\lambda_{A}=\lambda_{P}=0\)), the aforementioned methods exhibit very slow convergence on this problem, as we will show in our numerical experiments. This might be because the gradient of the negative log-likelihood is extremely ill-conditioned in this context._
### Resolution of the inner problems
We now discuss the structure and resolution of the inner problems (26) and (27) arising in Algorithm 3.
Minimization with respect to A:Let \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\). By definition of the proximity operator (25) and the majorant expression in (30), Eq. (26) amounts to minimizing the following function
\[\operatorname{minimize}_{\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{ x}}}\left(\mathcal{C}_{1}(\mathbf{A})\triangleq\frac{\theta_{A}K}{2}\text{tr} \left(\widetilde{\mathbf{P}}(\mathbf{\Psi}-\mathbf{\Delta}\mathbf{A}^{\top}- \mathbf{A}\mathbf{\Delta}^{\top}+\mathbf{A}\mathbf{\Phi}\mathbf{A}^{\top}) \right)+\theta_{A}\lambda_{A}\|\mathbf{A}\|_{1}\right.\\ \left.+\frac{1}{2}\|\mathbf{A}-\widetilde{\mathbf{A}}\|_{F}^{2} \right). \tag{32}\]
Remarkably, the problem above is a special instance of a multivariate Lasso regression problem (Tibshirani, 1996), for which many efficient iterative solvers are available. The specificity here is that the problem is strongly convex thanks to the proximal term. We thus suggest the use of the Dykstra-like algorithm by Bauschke and Combettes (2008), whose iterations are recalled in Appendix B. This method presents the advantages of a fast convergence rate, ease of implementation, and the absence of parameter tuning.
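To make the structure of (32) explicit, here is a simple proximal-gradient (ISTA) sketch for this subproblem; it is an illustrative alternative solver, not the paper's Algorithm 4. It uses the gradient \(\theta_{A}K\,\widetilde{\mathbf{P}}(\mathbf{A}\boldsymbol{\Phi}-\boldsymbol{\Delta})+(\mathbf{A}-\widetilde{\mathbf{A}})\) of the smooth part of (32) (obtained by matrix calculus, using the symmetry of \(\widetilde{\mathbf{P}}\) and \(\boldsymbol{\Phi}\)), soft-thresholding for the \(\ell_{1}\) term, and a conservative Lipschitz-based stepsize.

```python
import numpy as np

def solve_A_subproblem(P_t, Delta, Phi, A_t, K, lam_A, theta_A, n_iter=500):
    """ISTA sketch for the strongly convex Lasso-type subproblem (32)."""
    # Lipschitz bound of the smooth part's gradient: theta_A*K*||P~||*||Phi|| + 1
    L = theta_A * K * np.linalg.norm(P_t, 2) * np.linalg.norm(Phi, 2) + 1.0
    gamma = 1.0 / L                      # valid constant stepsize
    A = A_t.copy()
    for _ in range(n_iter):
        grad = theta_A * K * P_t @ (A @ Phi - Delta) + (A - A_t)
        V = A - gamma * grad             # gradient step on the smooth part
        # Proximal step: soft-thresholding for the l1 penalty
        A = np.sign(V) * np.maximum(np.abs(V) - gamma * theta_A * lam_A, 0.0)
    return A
```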
Minimization with respect to P:Let \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\). The update of Eq. (27) solves a minimization problem with generic form
\[\operatorname{minimize}_{\mathbf{P}\in\mathcal{S}_{N_{x}}}\left(\mathcal{C}_{ 2}(\mathbf{P})\triangleq\frac{\theta_{P}K}{2}\text{tr}\left(\mathbf{P}\mathbf{ \Pi}\right)-\frac{\theta_{P}K}{2}\log\det\mathbf{P}+\theta_{P}\lambda_{P}\| \mathbf{P}\|_{1}+\frac{1}{2}\|\mathbf{P}-\widetilde{\mathbf{P}}\|_{F}^{2} \right), \tag{33}\]
where we denote
\[\mathbf{\Pi}=\mathbf{\Psi}-\mathbf{\Delta}\widetilde{\mathbf{A}}^{\top}- \widetilde{\mathbf{A}}\mathbf{\Delta}^{\top}+\widetilde{\mathbf{A}}\mathbf{ \Phi}\widetilde{\mathbf{A}}^{\top}. \tag{34}\]
Here we have used the definition of the proximity operator (25) and the majorant expression in (30) (ignoring the constant multiplicative term in the logarithm). Matrix \(\mathbf{\Pi}\in\mathcal{S}_{N_{x}}\) in (34) is constant with respect to the variable \(\mathbf{P}\). Remarkably, (33) reads as a regularized form of the famous GLASSO problem (Friedman et al., 2008), and actually becomes equivalent to it when \(\theta_{P}\to\infty\). Matrix \(\mathbf{\Pi}\) plays the role of the empirical covariance matrix in GLASSO, and \(\frac{2\lambda_{P}}{K}\) acts as the weight on the \(\ell_{1}\) term. The proximal term works as a Tikhonov-like regularizer, ensuring the strong convexity of the problem and thus the uniqueness of its minimizer. Moreover, by the definition of the log-determinant in Eq. (1), the solution of (33) belongs to \(\mathcal{S}_{N_{x}}^{++}\), i.e., the precision matrix is invertible and a covariance matrix can thus be deduced from it by inversion. Standard GLASSO solvers can easily be modified to solve (33). We present in Appendix B the complete derivations when applying the Dykstra-like algorithm from (Bauschke and Combettes, 2008), which presents the advantage of fast convergence and ease of implementation.
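A key ingredient when solving (33) by proximal splitting is the proximity operator of the \(-\log\det\) term, which admits a closed form via eigendecomposition: the optimality condition reduces, eigenvalue-wise, to the scalar quadratic \(p^{2}-dp-\tau=0\), whose positive root is \(p=(d+\sqrt{d^{2}+4\tau})/2\). The sketch below illustrates this building block (used inside solvers such as Algorithm 5); note that its output is always positive definite.

```python
import numpy as np

def prox_neg_logdet(V, tau):
    """Proximity operator of tau * (-log det) on symmetric matrices."""
    d, U = np.linalg.eigh((V + V.T) / 2)       # eigendecomposition of sym(V)
    p = (d + np.sqrt(d ** 2 + 4 * tau)) / 2    # positive roots, so output is PD
    return (U * p) @ U.T                       # U diag(p) U^T
```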
## 4 Convergence analysis
We now present our convergence proof for the proposed DGLASSO method described in Algorithm 3. Our analysis assumes that the inner steps (26) and (27) are solved exactly. The extension of the result to an inexact resolution of the subproblems is discussed at the end of the section.
### Descent property
**Lemma 1**: _Assuming exact resolution of (26) and (27), the sequence \(\left\{\mathbf{A}^{(i)},\mathbf{P}^{(i)}\right\}_{i\in\mathbb{N}}\) produced by DGLASSO algorithm satisfies_
\[(\forall i\in\mathbb{N})\quad\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i+1)}) \leq\mathcal{L}(\mathbf{A}^{(i)},\mathbf{P}^{(i)}). \tag{35}\]
**Proof** Let \(i\in\mathbb{N}\). First, let us apply Theorem 1 at \((\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=(\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\) (Inequality (a)), together with the definition of \(\mathbf{P}^{(i+1)}\) in (27) (Inequality (b)),
\[\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i+1)}) \stackrel{{(a)}}{{\leq}}\mathcal{Q}(\mathbf{A}^{(i+1 )},\mathbf{P}^{(i+1)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\mathcal{L}_{1:K}( \mathbf{A}^{(i+1)},\mathbf{P}^{(i)})-\mathcal{Q}(\mathbf{A}^{(i+1)},\mathbf{P }^{(i)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)}) \tag{36}\] \[+\lambda_{A}\|\mathbf{A}^{(i+1)}\|_{1}+\lambda_{P}\|\mathbf{P}^{ (i+1)}\|_{1}+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i+1)}-\mathbf{A}^{(i+1)}\|_{ F}^{2}+\frac{1}{2\theta_{P}}\|\mathbf{P}^{(i+1)}-\mathbf{P}^{(i)}\|_{F}^{2}\] \[\stackrel{{(b)}}{{\leq}}\mathcal{Q}(\mathbf{A}^{(i+1 )},\mathbf{P}^{(i)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\mathcal{L}_{1:K}( \mathbf{A}^{(i+1)},\mathbf{P}^{(i)})-\mathcal{Q}(\mathbf{A}^{(i+1)},\mathbf{P }^{(i)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\] \[+\lambda_{A}\|\mathbf{A}^{(i+1)}\|_{1}+\lambda_{P}\|\mathbf{P}^{ (i)}\|_{1}+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i+1)}-\mathbf{A}^{(i+1)}\|_{F} ^{2}+\frac{1}{2\theta_{P}}\|\mathbf{P}^{(i)}-\mathbf{P}^{(i)}\|_{F}^{2}. \tag{37}\]
Inequality (37) simplifies into
\[\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i+1)})\leq\mathcal{L}_{1:K}( \mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\lambda_{A}\|\mathbf{A}^{(i+1)}\|_{1}+ \lambda_{P}\|\mathbf{P}^{(i)}\|_{1}=\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P }^{(i)}). \tag{38}\]
Applying Theorem 1 now at \((\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=(\mathbf{A}^{(i)},\mathbf{P} ^{(i)})\) (Inequality (a)), and the definition of \(\mathbf{A}^{(i+1)}\) in (26) (Inequality (b)), leads to
\[\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i)}) \stackrel{{(a)}}{{\leq}}\mathcal{Q}(\mathbf{A}^{(i+ 1)},\mathbf{P}^{(i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})+\mathcal{L}_{1:K}( \mathbf{A}^{(i)},\mathbf{P}^{(i)})-\mathcal{Q}(\mathbf{A}^{(i)},\mathbf{P}^{ (i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)}) \tag{39}\] \[+\lambda_{A}\|\mathbf{A}^{(i+1)}\|_{1}+\lambda_{P}\|\mathbf{P}^{ (i)}\|_{1}+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i+1)}-\mathbf{A}^{(i)}\|_{F}^{2 }+\frac{1}{2\theta_{P}}\|\mathbf{P}^{(i)}-\mathbf{P}^{(i)}\|_{F}^{2}\] \[\stackrel{{(b)}}{{\leq}}\mathcal{Q}(\mathbf{A}^{(i)}, \mathbf{P}^{(i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})+\mathcal{L}_{1:K}(\mathbf{A }^{(i)},\mathbf{P}^{(i)})-\mathcal{Q}(\mathbf{A}^{(i)},\mathbf{P}^{(i)}; \mathbf{A}^{(i)},\mathbf{P}^{(i)})\] \[+\lambda_{A}\|\mathbf{A}^{(i)}\|_{1}+\lambda_{P}\|\mathbf{P}^{ (i)}\|_{1}+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i)}-\mathbf{A}^{(i)}\|_{F}^{2}+ \frac{1}{2\theta_{P}}\|\mathbf{P}^{(i)}-\mathbf{P}^{(i)}\|_{F}^{2}, \tag{40}\]
which simplifies into
\[\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\leq\mathcal{L}_{1:K}(\mathbf{ A}^{(i)},\mathbf{P}^{(i)})+\lambda_{A}\|\mathbf{A}^{(i)}\|_{1}+\lambda_{P}\| \mathbf{P}^{(i)}\|_{1}=\mathcal{L}(\mathbf{A}^{(i)},\mathbf{P}^{(i)}), \tag{41}\]
and concludes the proof. \(\blacksquare\)
If the cost function \(\mathcal{L}\) is lower bounded (e.g., if it is coercive), Lemma 1 implies the convergence of the sequence \(\left\{\mathcal{L}(\mathbf{A}^{(i)},\mathbf{P}^{(i)})\right\}_{i\in\mathbb{N}}\) to a finite value and, as such, the existence of cluster points of \(\{\mathbf{A}^{(i)},\mathbf{P}^{(i)}\}_{i\in\mathbb{N}}\). This is, however, a rather weak convergence result, and we propose hereafter a more thorough analysis relying on recent tools from nonlinear analysis (Attouch et al., 2010; Bolte et al., 2014), combined with the works (Tien et al., 2020; Hien et al., 2020) on the convergence of block alternating MM schemes.
### Convergence guarantees
**Theorem 3**: _Consider the sequence \(\{\mathbf{A}^{(i)},\mathbf{P}^{(i)}\}_{i\in\mathbb{N}}\) generated by DGLASSO, assuming exact resolution of both inner steps (26) and (27). If the sequence \(\{\mathbf{A}^{(i)},\mathbf{P}^{(i)}\}_{i\in\mathbb{N}}\) is bounded, then \(\{\mathbf{A}^{(i)},\mathbf{P}^{(i)}\}_{i\in\mathbb{N}}\) converges to a critical point of \(\mathcal{L}\)._
The convergence analysis relies on proving that the exact form of the DGLASSO algorithm is a special instance of the TITAN algorithm from (Tien et al., 2020) and, as such, inherits (Tien et al., 2020, Theorem 7) and (Tien et al., 2020, Theorem 8) under our assumptions.
Let us introduce the following notation.
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}}) f(\mathbf{A},\mathbf{P})=\frac{1}{2}\sum_{k=1}^{K}\left( \left(\mathbf{x}_{k}-\mathbf{A}\mathbf{x}_{k-1}\right)^{\top}\mathbf{P}\left( \mathbf{x}_{k}-\mathbf{A}\mathbf{x}_{k-1}\right)\right)\\ +\log p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\mathbf{A},\mathbf{P}) -\log p(\mathbf{x}_{0})-\sum_{k=1}^{K}\log p(\mathbf{y}_{k}|\mathbf{x}_{k}), \tag{42}\]
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}) g_{1}(\mathbf{A})=\lambda_{A}\|\mathbf{A}\|_{1}, \tag{43}\] \[(\forall\mathbf{P}\in\mathcal{S}_{N_{x}}) g_{2}(\mathbf{P})=-\frac{K}{2}\log\det(2\pi\mathbf{P})+\lambda_{P}\| \mathbf{P}\|_{1}, \tag{44}\]
so that
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{L}(\mathbf{A},\mathbf{P})=f(\mathbf{A}, \mathbf{P})+g_{1}(\mathbf{A})+g_{2}(\mathbf{P}), \tag{45}\]
with \(f\) a lower semi-continuous function, \(g_{1}\in\Gamma_{0}(\mathbb{R}^{N_{x}\times N_{x}})\), and \(g_{2}\in\Gamma_{0}(\mathcal{S}_{N_{x}})\). Moreover, let us denote
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{J}(\mathbf{A},\mathbf{P})=\frac{K}{2}\text{ tr}\left(\mathbf{P}(\boldsymbol{\Psi}-\mathbf{\Delta}\mathbf{A}^{\top}-\mathbf{A} \mathbf{\Delta}^{\top}+\mathbf{A}\boldsymbol{\Phi}\mathbf{A}^{\top})\right), \tag{46}\]
and, for every \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\),
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}) u_{1}(\mathbf{A};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})= \mathcal{J}(\mathbf{A},\widetilde{\mathbf{P}})+\frac{1}{2\theta_{A}}\| \mathbf{A}-\widetilde{\mathbf{A}}\|_{F}^{2}+f(\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})-\mathcal{J}(\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}}), \tag{47}\] \[(\forall\mathbf{P}\in\mathcal{S}_{N_{x}}) u_{2}(\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})= \mathcal{J}(\widetilde{\mathbf{A}},\mathbf{P})+\frac{1}{2\theta_{P}}\| \mathbf{P}-\widetilde{\mathbf{P}}\|_{F}^{2}+f(\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})-\mathcal{J}(\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}}). \tag{48}\]
By Theorem 1, the following majorization properties hold for every \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\),
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}) u_{1}(\mathbf{A};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\geq f (\mathbf{A},\widetilde{\mathbf{P}}), \tag{49}\] \[(\forall\mathbf{P}\in\mathcal{S}_{N_{x}}) u_{2}(\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\geq f (\widetilde{\mathbf{A}},\mathbf{P}). \tag{50}\]
and we have the tangency condition, for every \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\),
\[u_{1}(\widetilde{\mathbf{A}};\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}})=f(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}}), \tag{51}\] \[u_{2}(\widetilde{\mathbf{P}};\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}})=f(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}}). \tag{52}\]
Then, straightforward computations allow us to rewrite the iterates of Algorithm 3 as follows:
\[(\forall i\in\mathbb{N})\quad\begin{cases}\mathbf{A}^{(i+1)}=\underset{\mathbf{A} \in\mathbb{R}^{N_{x}\times N_{x}}}{\operatorname{argmin}}\ u_{1}(\mathbf{A}; \mathbf{A}^{(i)},\mathbf{P}^{(i)})+g_{1}(\mathbf{A}),\\ \mathbf{P}^{(i+1)}=\underset{\mathbf{P}\in\mathcal{S}_{N_{x}}}{\operatorname{ argmin}}\ u_{2}(\mathbf{P};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+g_{2}(\mathbf{P}), \end{cases} \tag{53}\]
which identifies with the iterative scheme TITAN from (Tien et al., 2020), in the case of two blocks and with the extrapolation step set to zero. The rest of the proof amounts to checking the fulfillment of the assumptions required for (Tien et al., 2020, Theorem 7) and (Tien et al., 2020, Theorem 8).
\(\bullet\) Let us denote, for every \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\),
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}) \tilde{u}_{1}(\mathbf{A};\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}})=u_{1}(\mathbf{A};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})+ g_{1}(\mathbf{A}), \tag{54}\] \[(\forall\mathbf{P}\in\mathcal{S}_{N_{x}}) \tilde{u}_{2}(\mathbf{P};\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}})=u_{2}(\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})+ g_{2}(\mathbf{P}). \tag{55}\]
Functions \(u_{1}\) and \(u_{2}\) are quadratic and strongly convex with respective strong convexity constants \(\theta_{A}^{-1}\) and \(\theta_{P}^{-1}\). Since both \(g_{1}\) and \(g_{2}\) are convex, functions \(\tilde{u}_{1}\) and \(\tilde{u}_{2}\) are also strongly convex, with respective strong convexity constants \(\theta_{A}^{-1}\) and \(\theta_{P}^{-1}\). Let \(i\in\mathbb{N}\). According to the optimality conditions of both equations in (53), there exist \(\mathbf{T}_{1}^{(i+1)}\in\partial\tilde{u}_{1}(\mathbf{A}^{(i+1)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})\subset\mathbb{R}^{N_{x}\times N_{x}}\) and \(\mathbf{T}_{2}^{(i+1)}\in\partial\tilde{u}_{2}(\mathbf{P}^{(i+1)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\subset\mathcal{S}_{N_{x}}\) such that
\[\begin{cases}\operatorname{tr}\left(\mathbf{T}_{1}^{(i+1)}(\mathbf{A}^{(i)}- \mathbf{A}^{(i+1)})\right)\geq 0,\\ \operatorname{tr}\left(\mathbf{T}_{2}^{(i+1)}(\mathbf{P}^{(i)}-\mathbf{P}^{(i+ 1)})\right)\geq 0.\end{cases} \tag{56}\]
Moreover, by strong convexity of both \(\tilde{u}_{1}\) and \(\tilde{u}_{2}\),
\[\begin{cases}\tilde{u}_{1}(\mathbf{A}^{(i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})\geq\tilde{u}_{1}(\mathbf{A}^{(i+1)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})+\operatorname{tr}\left(\mathbf{T}_{1}^{(i+1)}(\mathbf{A}^{(i)}-\mathbf{A}^{(i+1)})\right)+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i)}-\mathbf{A}^{(i+1)}\|_{F}^{2},\\ \tilde{u}_{2}(\mathbf{P}^{(i)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\geq\tilde{u}_{2}(\mathbf{P}^{(i+1)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\operatorname{tr}\left(\mathbf{T}_{2}^{(i+1)}(\mathbf{P}^{(i)}-\mathbf{P}^{(i+1)})\right)+\frac{1}{2\theta_{P}}\|\mathbf{P}^{(i)}-\mathbf{P}^{(i+1)}\|_{F}^{2}.\end{cases} \tag{57}\]
Hence, using (56),
\[\begin{cases}\tilde{u}_{1}(\mathbf{A}^{(i)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})\geq\tilde{u}_{1}(\mathbf{A}^{(i+1)};\mathbf{A}^{(i)},\mathbf{P}^{(i)})+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i)}-\mathbf{A}^{(i+1)}\|_{F}^{2},\\ \tilde{u}_{2}(\mathbf{P}^{(i)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\geq\tilde{u}_{2}(\mathbf{P}^{(i+1)};\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\frac{1}{2\theta_{P}}\|\mathbf{P}^{(i)}-\mathbf{P}^{(i+1)}\|_{F}^{2},\end{cases} \tag{58}\]
and, using (49)-(50)-(51)-(52),
\[\begin{cases}\mathcal{L}(\mathbf{A}^{(i)},\mathbf{P}^{(i)})\geq\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})+\frac{1}{2\theta_{A}}\|\mathbf{A}^{(i)}-\mathbf{A}^{(i+1)}\|_{F}^{2},\\ \mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i)})\geq\mathcal{L}(\mathbf{A}^{(i+1)},\mathbf{P}^{(i+1)})+\frac{1}{2\theta_{P}}\|\mathbf{P}^{(i)}-\mathbf{P}^{(i+1)}\|_{F}^{2}.\end{cases} \tag{59}\]
It means that the so-called NSDP (nearly sufficiently decreasing property) condition from (Tien et al., 2020) holds with, for every \(i\in\mathbb{N}\), \((\gamma_{1}^{(i)},\gamma_{2}^{(i)},\eta_{1}^{(i)},\eta_{2}^{(i)})\equiv(0,0,\theta_{A}^{-1},\theta_{P}^{-1})\). Remark that our proof of (59) constitutes an alternative proof of Lemma 1.
\(\bullet\) According to (Gupta and Mehra, 1974), function \(f\) is continuously differentiable. As \(g_{1}\in\Gamma_{0}(\mathbb{R}^{N_{x}\times N_{x}})\) and \(g_{2}\in\Gamma_{0}(\mathcal{S}_{N_{x}})\), we have for every \(\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\mathbf{P}\in\mathcal{S}_{N_{x}}\),
\[\begin{cases}\partial_{A}\left(f(\mathbf{A},\mathbf{P})+g_{1}(\mathbf{A}) \right)=\nabla_{A}f(\mathbf{A},\mathbf{P})+\partial g_{1}(\mathbf{A}),\\ \partial_{P}\left(f(\mathbf{A},\mathbf{P})+g_{2}(\mathbf{P})\right)=\nabla_{ P}f(\mathbf{A},\mathbf{P})+\partial g_{2}(\mathbf{P}),\end{cases} \tag{60}\]
so that (Tien et al., 2020, Assumption 3(i)) holds. Moreover, by construction of the majorizing function \(\mathcal{J}\), it is also continuously differentiable and we have the coincidence of the gradient at the majorization point, namely for every \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\),
\[\nabla_{A}f(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})= \nabla\tilde{u}_{1}(\widetilde{\mathbf{A}};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}}), \tag{61}\] \[\nabla_{P}f(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})= \nabla\tilde{u}_{2}(\widetilde{\mathbf{P}};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}}). \tag{62}\]
Thus, according to (Tien et al., 2020, Lem.2), (Tien et al., 2020, Assumption 2) is verified. Moreover, as the restriction of \(\mathcal{J}\) to both of its variables is quadratic, the partial gradients \(\nabla u_{1}\) and \(\nabla u_{2}\) are linear and thus Lipschitz continuous on any bounded subset of \(\mathbb{R}^{N_{x}\times N_{x}}\) and \(\mathcal{S}_{N_{x}}\), respectively, which yields the fulfillment of (Tien et al., 2020, Assumption 3).
\(\bullet\) Since NSDP and (Tien et al., 2020, Assumption 2) hold, the result (Tien et al., 2020, Theorem 6) shows that, if the sequence \(\{\mathbf{A}^{(i)},\mathbf{P}^{(i)}\}_{i\in\mathbb{N}}\) is bounded, then every limit point \((\mathbf{A}^{*},\mathbf{P}^{*})\) of it is a critical point of \(\mathcal{L}\). Moreover, (Tien et al., 2020, Assumption 3) is satisfied and (Tien et al., 2020, Cond. 4) is trivially met in our case. We can also show that the loss function \(\mathcal{L}\) satisfies the Kurdyka-Łojasiewicz property (Bolte et al., 2014) (using a proof similar to that of (Chouzenoux et al., 2019, Lemma 2)). Thus, (Tien et al., 2020, Theorem 7) holds, which concludes our proof.
### Discussion
The proof of Theorem 3 is grounded on the recent works by Tien et al. (2020) and Hien et al. (2020), generalizing the works by Bolte et al. (2014) and Chouzenoux et al. (2016), for establishing the convergence of block alternating MM schemes under the Kurdyka-Łojasiewicz (KL) inequality assumption (Łojasiewicz, 1963). The latter assumption is a powerful tool of nonlinear functional analysis that was popularized in the seminal paper by Attouch et al. (2010). In that paper, the authors show how to prove the convergence of the iterates generated by a large family of minimization schemes, under the sole requirement that the function to minimize satisfies the KL inequality. This requirement is very mild, as it holds for a large class of functions, not necessarily convex, as soon as they can be embedded within an o-minimal structure. Semi-algebraic and analytic functions are examples of such functions. In our analysis, for the sake of conciseness, we skipped the proof showing that \(\mathcal{L}\) satisfies KL, as it is identical to that of (Chouzenoux et al., 2019, Lem.2).
Theorem 3 assumes that both inner steps (26) and (27) are solved in an exact manner. Due to strong convexity, each problem has a unique solution, which ensures the well-posedness of the study. However, in both cases, the solution does not take a closed form (except when \(\lambda_{A}\) or \(\lambda_{P}\) equals zero, in which case we retrieve the MLEM updates from (Sarkka, 2013, Chap. 12)). Iterative inner solvers are thus necessary, and we proposed some efficient ones in Section 3.5. The extension of our convergence study to the case of an inexact resolution of (26) and (27) is not straightforward, to the best of our knowledge. Convergence of inexact proximal schemes for KL losses has been studied in various works, such as (Attouch et al., 2013; Cherni et al., 2020; Chouzenoux et al., 2014) to name a few. But we are not aware of any study covering the block alternating MM scheme considered here, and thus leave the convergence study of the inexact implementation of DGLASSO as future work. In practice, we impose a rather demanding stopping rule on the inner solvers of (26) and (27) and did not observe any numerical instabilities of the proposed algorithm.
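For illustration purposes, the block alternating structure analyzed above can be summarized in a few lines of code. The following minimal Python sketch is ours (it is not part of the released Matlab toolbox); `solve_A_step` and `solve_P_step` are hypothetical callables standing for the inner solvers of (26) and (27), e.g., Algorithms 4 and 5 run until their stopping rule is met:

```python
import numpy as np

def dglasso_outer(A0, P0, solve_A_step, solve_P_step, max_iter=100, tol=1e-4):
    """Block alternating MM loop of DGLASSO, cf. (26)-(27) and (53)."""
    A, P = A0.copy(), P0.copy()
    for _ in range(max_iter):
        A_new = solve_A_step(A, P)      # minimize u1(.; A, P) + g1
        P_new = solve_P_step(A_new, P)  # minimize u2(.; A_new, P) + g2
        # Stop when both blocks have stabilized (relative Frobenius change).
        done = (np.linalg.norm(A_new - A) <= tol * np.linalg.norm(A)
                and np.linalg.norm(P_new - P) <= tol * np.linalg.norm(P))
        A, P = A_new, P_new
        if done:
            break
    return A, P
```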
## 5 Experiments
We perform a thorough evaluation study on various controlled scenarios where the ground truth matrices denoted \((\mathbf{A}^{*},\mathbf{P}^{*})\) (as well as \(\mathbf{Q}^{*}=(\mathbf{P}^{*})^{-1}\)) are predefined, and the time series \(\{\mathbf{y}_{k},\mathbf{x}_{k}\}_{k=1}^{K}\) are built directly from the LG-SSM model (2)-(3) using such matrices. The goal is then to provide estimates \((\widehat{\mathbf{A}},\widehat{\mathbf{P}},\widehat{\mathbf{Q}})\) of \((\mathbf{A}^{*},\mathbf{P}^{*},\mathbf{Q}^{*})\), given the observation of \(\{\mathbf{y}_{k}\}_{k=1}^{K}\). Unless otherwise stated, all the compared methods moreover have access to a perfect knowledge of \((\mathbf{R}_{k},\mathbf{H}_{k},\boldsymbol{\mu}_{0},\mathbf{\Sigma}_{0})\), on top of accessing the time series \(\{\mathbf{y}_{k}\}_{k=1}^{K}\). The hidden state \(\{\mathbf{x}_{k}\}_{k=1}^{K}\) is, by definition, not assumed to be known, and is only used to compute quality metrics on test sets. We first work on synthetic data in Section 5.1, where the structure, sparsity level and conditioning of the sought matrices \((\mathbf{A}^{*},\mathbf{P}^{*})\) are controlled. This allows us to evaluate our method on multiple cases, to discuss its parameter tuning, and to compare it to benchmarks in terms of inference quality and complexity. We then address in Section 5.2 a set of realistic problems of graph inference arising in weather variability analysis, using four datasets built upon the Neurips data challenge (Runge et al., 2020). This second set of experiments aims at evaluating the ability of DGLASSO to model and estimate a large class of graph structures (here, 200 different graphs per dataset), in comparison with other state-of-the-art graph detection methods.
All codes are run on a Desktop Dell Latitude computer, with 11th Gen Intel(R) Core(TM) i7-1185G7 at 3.00GHz, equipped with 32Go RAM, using Matlab R2021a software. The code is publicly available at [https://pages.saclay.inria.fr/emilie.chouzenoux/log/DGLASSO_toolbox.zip](https://pages.saclay.inria.fr/emilie.chouzenoux/log/DGLASSO_toolbox.zip), for reproducibility purposes.
### Synthetic data
#### 5.1.1 Datasets
We set \(K=10^{3}\), \(\mathbf{R}_{k}=\sigma_{\mathbf{R}}^{2}\mathbf{Id}_{N_{y}}\) for every \(k\in\{1,\ldots,K\}\), \(\boldsymbol{\mu}_{0}\in\mathbb{R}^{N_{x}}\) a vector of ones, and \(\mathbf{\Sigma}_{0}=\sigma_{0}^{2}\mathbf{Id}_{N_{x}}\), with \((\sigma_{\mathbf{R}},\sigma_{0})=(10^{-1},10^{-4})\). Matrix \(\mathbf{H}_{k}\) is set to the identity matrix for every \(k\in\{1,\ldots,K\}\), so that \(N_{x}=N_{y}\). This setting models a one-to-one correspondence between states and observations. Identifiability issues are hence avoided.
We set \(N_{x}=N_{y}=9\), and we rely on block-diagonal matrices \((\mathbf{A}^{*},\mathbf{P}^{*})\), made of 3 blocks with dimensions \(3\times 3\). The diagonal blocks of \(\mathbf{A}^{*}\) are randomly set as matrices of auto-regressive processes of order one, AR(1), rescaled to have a spectral norm equal to 0.9 so as to ensure the stability of the process. The diagonal blocks of \(\mathbf{P}^{*}\) are randomly set following the procedure from (More and Toraldo, 1989), with a predefined condition number \(c\), with \(\log_{10}(c)\in\{0.1,0.2,0.5,1\}\), leading to datasets A, B, C and D, respectively.
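For concreteness, the generation of one such synthetic time series from the model (2)-(3) can be sketched as follows. This minimal Python snippet is ours and illustrates the settings above (\(\mathbf{H}_{k}=\mathbf{Id}\), \(\mathbf{R}_{k}=\sigma_{\mathbf{R}}^{2}\mathbf{Id}\)); the function name and interface are illustrative assumptions:

```python
import numpy as np

def simulate_lgssm(A, Q, K=1000, sigma_R=1e-1, sigma_0=1e-4, seed=0):
    """Draw (x_k, y_k), k = 1..K, from the LG-SSM (2)-(3) with H_k = Id."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    x = np.ones(N) + sigma_0 * rng.standard_normal(N)  # x_0 ~ N(mu_0, sigma_0^2 Id)
    xs, ys = [], []
    for _ in range(K):
        x = A @ x + rng.multivariate_normal(np.zeros(N), Q)  # state noise cov. Q
        y = x + sigma_R * rng.standard_normal(N)             # obs. noise R = sigma_R^2 Id
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```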
DGLASSO provides estimates \((\widehat{\mathbf{A}},\widehat{\mathbf{P}})\) as a direct output. The estimate \(\widehat{\mathbf{Q}}\) is simply deduced as \(\widehat{\mathbf{Q}}=(\widehat{\mathbf{P}})^{-1}\). Precision parameters in DGLASSO are set to \((\varepsilon,\xi)=(10^{-3},10^{-4})\). We initialize DGLASSO with \(\mathbf{P}^{(0)}=10^{-1}\mathbf{Id}_{N_{x}}\), and \(\mathbf{A}^{(0)}\) equal to a stable auto-regressive order-one matrix with entries \(A^{(0)}(n,m)=(10^{-1})^{|n-m|}\), rescaled to have a spectral norm equal to \(0.99\). The setting of the regularization parameters \((\lambda_{A},\lambda_{P})\) is discussed in the dedicated Section 5.1.4. Performance of the method for varying initializations is discussed in Appendix C.2.
#### 5.1.2 Comparisons to other methods
In our experiments, we also compare DGLASSO with other model inference techniques for LG-SSM.
We consider the EM method from (Shumway and Stoffer, 1982) (hereafter denoted MLEM) to compute \((\widehat{\mathbf{A}},\widehat{\mathbf{Q}})\) as maximum likelihood estimates (i.e., no regularization is employed) of the matrices \((\mathbf{A}^{*},\mathbf{Q}^{*})\), the estimate \(\widehat{\mathbf{P}}\) being defined here as the inverse of \(\widehat{\mathbf{Q}}\). A comparison with another state-of-the-art approach for maximum likelihood computation, relying on sensitivity function minimization from (Gupta and Mehra, 1974), is presented in Appendix C.1.
Second, we consider three model inference techniques that explicitly incorporate a sparse graphical prior knowledge on the sought matrices. Namely, we compare to GLASSO (Friedman et al., 2008), which considers a simple static and noiseless version of the LG-SSM (i.e., \(\widehat{\mathbf{A}}\) is empirically set to zero and the observation noise is neglected). Matrix \(\widehat{\mathbf{P}}\) is then obtained through a maximum a posteriori formulation under an \(\ell_{1}\) prior. The convex GLASSO loss is minimized using the proximal splitting method described in (Benfenati et al., 2020, Alg.1). Matrix \(\widehat{\mathbf{Q}}\) is deduced by inversion of the resulting \(\widehat{\mathbf{P}}\). We also compare with the robust GLASSO (rGLASSO) approach introduced in (Benfenati et al., 2020), which explicitly accounts for the expression of \(\mathbf{R}_{k}\) in the maximum a posteriori loss expression. For the sake of fair comparison, we use here again an \(\ell_{1}\) penalty to obtain \(\widehat{\mathbf{P}}\), although more sophisticated priors could be encompassed by rGLASSO. For both aforementioned methods, we rely on the Matlab toolbox provided by the authors3. Finally, we provide the results obtained with the GRAPHEM method we recently introduced in (Elvira and Chouzenoux, 2022). GRAPHEM provides a maximum a posteriori estimate \(\widehat{\mathbf{A}}\) under an \(\ell_{1}\) prior, while \(\widehat{\mathbf{Q}}\) is empirically set to \(\sigma^{2}_{\mathbf{Q}}\mathbf{Id}_{N_{y}}\), with a finetuned \(\sigma_{\mathbf{Q}}>0\). The Matlab toolbox provided by the authors4 is used to produce the results for this method.
Footnote 3: [http://www-syscom.univ-mlv.fr/~benfenat/Software.html](http://www-syscom.univ-mlv.fr/~benfenat/Software.html)
Footnote 4: [https://pages.saclay.inria.fr/emilie.chouzenoux/LogicielEN.html](https://pages.saclay.inria.fr/emilie.chouzenoux/LogicielEN.html)
All the comparative methods are programmed in the same language, they are initialized using a strategy similar to that of our DGLASSO method, and similar stopping criteria and hyper-parameter tuning approaches are employed, for fair comparison.
#### 5.1.3 Evaluation metrics
We first evaluate the results of our method, as well as the comparative benchmarks, through quality assessment metrics. Namely, we use the relative mean square error (RMSE) between
the ground truth matrices \((\mathbf{A}^{*},\mathbf{P}^{*},\mathbf{Q}^{*})\) and the estimated \((\widehat{\mathbf{A}},\widehat{\mathbf{P}},\widehat{\mathbf{Q}})\) (when available). For instance,
\[\text{RMSE}(\mathbf{A}^{*},\widehat{\mathbf{A}})=\frac{\|\mathbf{A}^{*}- \widehat{\mathbf{A}}\|_{\text{F}}^{2}}{\|\mathbf{A}^{*}\|_{\text{F}}^{2}}. \tag{63}\]
\(\text{RMSE}(\mathbf{P}^{*},\widehat{\mathbf{P}})\) and \(\text{RMSE}(\mathbf{Q}^{*},\widehat{\mathbf{Q}})\) are defined in a similar fashion. We also compute the area-under-curve (AUC) and F1 score comparing the non-zero entries (that is, the graph edges positions) of the sparse matrices \((\mathbf{A}^{*},\mathbf{P}^{*})\) and their estimates \((\widehat{\mathbf{A}},\widehat{\mathbf{P}})\). A threshold value of \(10^{-10}\) (in absolute value) is used for the detection hypothesis. We furthermore evaluate the ability of the estimated model parameters to actually describe and infer the time series (both observed and hidden states). To that end, we build test time series \((\mathbf{x}^{\text{test}},\mathbf{y}^{\text{test}})\), not seen by the algorithms, constructed by running the ground truth LG-SSM (i.e., with ground truth matrix parameters \((\mathbf{A}^{*},\mathbf{P}^{*},\mathbf{Q}^{*})\)). We then run the KF and RTS algorithms 1 and 2, respectively, using either the ground truth matrices \((\mathbf{A}^{*},\mathbf{P}^{*},\mathbf{Q}^{*})\) or their estimates \((\widehat{\mathbf{A}},\widehat{\mathbf{P}},\widehat{\mathbf{Q}})\), to build, for every \(k\in\{1,\ldots,K\}\), the filtering, predictive and smoothing distribution means \((\boldsymbol{\mu}_{k}^{*},\boldsymbol{\nu}_{k}^{*},\boldsymbol{\mu}_{k}^{s*})\) and \((\widehat{\boldsymbol{\mu}}_{k},\widehat{\boldsymbol{\nu}}_{k},\widehat{\boldsymbol{\mu}}_{k}^{s})\), respectively.
This allows us in particular to compute the cumulative normalized mean squared error (cNMSE) between the distribution means obtained using either the ground truth model matrices or the estimated ones. Namely, we calculate
\[\text{cNMSE}(\boldsymbol{\nu}^{*},\widehat{\boldsymbol{\nu}})=\frac{\sum_{k=1}^{K}\|\boldsymbol{\nu}_{k}^{*}-\widehat{\boldsymbol{\nu}}_{k}\|^{2}}{\sum_{k=1}^{K}\|\boldsymbol{\nu}_{k}^{*}\|^{2}}, \tag{64}\]
as well as \(\text{cNMSE}(\boldsymbol{\mu}^{*},\widehat{\boldsymbol{\mu}})\) and \(\text{cNMSE}(\boldsymbol{\mu}^{s*},\widehat{\boldsymbol{\mu}}^{s})\).
Finally, we evaluate the negative logarithm of the marginal likelihood \(\mathcal{L}_{1:K}(\widehat{\mathbf{A}},\widehat{\mathbf{P}})\) as defined in (21), on the test time series.
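For reference, the matrix and trajectory metrics used throughout this section are straightforward to implement; a minimal Python sketch (ours, with illustrative function names):

```python
import numpy as np

def rmse(M_true, M_hat):
    """Relative mean square error between two matrices, cf. (63)."""
    return (np.linalg.norm(M_true - M_hat, 'fro') ** 2
            / np.linalg.norm(M_true, 'fro') ** 2)

def cnmse(m_true, m_hat):
    """Cumulative normalized MSE, cf. (64); inputs are (K, N) arrays of means."""
    return np.sum((m_true - m_hat) ** 2) / np.sum(m_true ** 2)
```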
#### 5.1.4 Tuning the regularization parameters
Let us first discuss the setting of the DGLASSO hyper-parameters \((\lambda_{A},\lambda_{P})\), accounting for the sparsity priors on \(\mathbf{A}\) and \(\mathbf{P}\), respectively. To that aim, we ran DGLASSO for 100 values of the hyperparameters \((\lambda_{A},\lambda_{P})\), regularly spaced on a log-scale grid between 1 and \(10^{2}\), and repeated the experiment on 50 randomly generated time series, for dataset A. We display in Figure 4 the values, averaged over the random runs, of several quantitative metrics, as a function of the hyperparameters \((\lambda_{A},\lambda_{P})\). We also report in the caption the averaged RMSE scores obtained when running DGLASSO with \((\lambda_{A},\lambda_{P})=(0,0)\) (i.e., the MLEM result). As can be observed, both the F1 and RMSE scores for the estimation of the transition matrix \(\mathbf{A}\) are mostly governed by the value of \(\lambda_{A}\), while the quality scores for the state noise precision matrix \(\mathbf{P}\) are mostly influenced by \(\lambda_{P}\). Note that the RMSE scores on \(\mathbf{Q}\), not shown here, follow a similar evolution to the RMSE on \(\mathbf{P}\). Inspecting F1/RMSE alone appears not informative enough to set the parameters \((\lambda_{A},\lambda_{P})\), as each metric and each parameter seems to push towards a different goal. The maps of \(\text{cNMSE}(\boldsymbol{\nu}^{*},\widehat{\boldsymbol{\nu}})\) and of the marginal likelihood log-loss \(\mathcal{L}_{1:K}\) show very similar behavior. Note that the latter is a practically interesting metric, because it does not require the knowledge of the ground truth matrices. On this example, it however appears not discriminative enough. Typically, it stays almost constant for a (too) wide value range of \((\lambda_{A},\lambda_{P})\), which confirms again the ill-posedness of the minimization
problem. The maps for \(\text{cNMSE}(\boldsymbol{\mu}^{*},\widehat{\boldsymbol{\mu}})\) and \(\text{cNMSE}(\boldsymbol{\mu}^{s*},\widehat{\boldsymbol{\mu}}^{s})\) are very similar. Interestingly, the minimization of both these quantities, related to the state mean distributions, seems to provide a meaningful value range for the regularization parameters, narrowing down around values that achieve an optimal compromise between (i) good estimation of the sought matrices, (ii) good estimation of the sparse matrices support, and (iii) good predictive behavior for time series inference by KF/RTS techniques. The same conclusions were reached for the other three datasets. Note additionally that the DGLASSO results outperform, for a wide range of \((\lambda_{A},\lambda_{P})\), those obtained with MLEM, confirming the validity of the proposed sparsity prior. This will be examined more deeply in an upcoming section. In our forthcoming experiments, unless otherwise stated, we fixed \((\lambda_{A},\lambda_{P})\) through a rough grid search to minimize \(\text{cNMSE}(\boldsymbol{\mu}^{*},\widehat{\boldsymbol{\mu}})\) averaged over a few (typically 5) runs. On real datasets, we advocate the use of the marginal likelihood on test trials, as a control variate.
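The rough grid search mentioned above can be encapsulated as follows; this minimal Python sketch is ours, and `fit_and_score` is a hypothetical user-supplied callable that fits DGLASSO with the given \((\lambda_{A},\lambda_{P})\) on one generated series and returns \(\text{cNMSE}(\boldsymbol{\mu}^{*},\widehat{\boldsymbol{\mu}})\):

```python
import numpy as np

def grid_search(fit_and_score, n_grid=10, n_runs=5):
    """Log-scale grid search for (lambda_A, lambda_P) in [1, 10^2]."""
    grid = np.logspace(0, 2, n_grid)
    best_score, best_pair = np.inf, None
    for lam_A in grid:
        for lam_P in grid:
            # Average the criterion over a few random series generations.
            score = np.mean([fit_and_score(lam_A, lam_P, seed)
                             for seed in range(n_runs)])
            if score < best_score:
                best_score, best_pair = score, (lam_A, lam_P)
    return best_pair
```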
#### 5.1.5 Influence of the condition number
We report in Table 1 the results, averaged over 50 realizations of the time series generation. We do not provide the scores for the estimation of \(\mathbf{A}\) (resp. \((\mathbf{P},\mathbf{Q})\)) for GLASSO/rGLASSO (resp. GRAPHEM), as those methods are not designed for such a task. The general trend in the results is a slight decrease of the quality of estimation for all methods when \(c_{Q}\) increases (i.e., from dataset A to D). This is expected, as an ill-conditioned matrix \(\mathbf{Q}\) complicates the mathematical structure of the likelihood term.
Regarding the estimation of \(\mathbf{A}\), the GRAPHEM method presents the best results for the three considered metrics. DGLASSO is second best, while MLEM follows. As already stated, GLASSO/rGLASSO do not estimate \(\mathbf{A}\) (i.e., they assume this matrix to be zero, that is, the process is i.i.d. without any Markov structure). Regarding the estimation of \((\mathbf{P},\mathbf{Q})\), DGLASSO outperforms the benchmarks in terms of RMSE score. The second best, in terms of RMSE, is MLEM. GLASSO and rGLASSO get very bad RMSE scores, probably because of the model mismatch induced by assuming \(\mathbf{A}\) to be zero. The edge detection performance of DGLASSO is excellent, except in a few cases where rGLASSO gets slightly better results. MLEM gets poorer results than DGLASSO in all metrics. In particular, it exhibits a bad F1 score, as it does not include a sparsity prior, and thus fails to eliminate spurious edges.
Regarding the distribution mean calculation, it is remarkable that DGLASSO outperforms the benchmarks in all examples, which shows that the proposed formulation provides model parameters that are best suited for inference using KF/RTS at the test phase. The marginal likelihood log-loss is also minimized with the quantities provided by our method. This could appear counter-intuitive, as the method is not specifically designed to minimize this loss (while MLEM is). The advantage of DGLASSO is that it accounts for prior knowledge, making the inferred model more robust, and less prone to overfitting, which translates into better behavior on an independent test time series.
We display in Figure 5 box plots assessing the variability of the RMSE and F1 scores, for both the MLEM and DGLASSO methods, on 50 runs on dataset A and dataset D. The RMSE values are in most runs lower (i.e., better) for the proposed method. Both methods are quite stable with respect to the time series generation, as the plots show few outliers. For dataset D, corresponding to a more challenging case with an ill-conditioned matrix \(\mathbf{P}^{*}\), the results are slightly more spread, especially for the metrics related to the recovery quality of this matrix.
Figure 4: Evolution of RMSE, F1, cNMSE and loss scores on the estimation of \(\mathbf{A}\) (left) and \(\mathbf{P}\) (right) by DGLASSO, as a function of hyperparameters \((\lambda_{A},\lambda_{P})\), for dataset A (averaged on 10 runs). As a comparison, the averaged RMSE scores for \((\lambda_{A},\lambda_{P})=(0,0)\) (i.e., MLEM) on this example were \((0.077,0.106)\) for \((\mathbf{A},\mathbf{P})\), respectively.
Remark that the F1 scores of MLEM are constant, equal to \(0.5\). As already pointed out, this is expected, as MLEM is not designed to perform an edge detection task.
We refer the reader to Appendix C.3 for extra experiments on the synthetic datasets, assessing the performance of DGLASSO when the sparsity level of \(\mathbf{A}^{*}\) increases.
#### 5.1.6 Complexity analysis
We now examine the time complexity of the method, as well as of MLEM, when processing dataset A, for various values of the time series length \(K\), namely \(K\in\{100,200,500,1000,2000,5000\}\) (we recall that our previous experiments were all done with \(K=1000\)). We display in Figure 6 (left) the computing time in seconds for computing \((\widehat{\mathbf{A}},\widehat{\mathbf{P}})\), for DGLASSO and MLEM, averaged over 50 realizations. We also display, in Figure 6 (middle), for the same experiment, the RMSE between \(\mathbf{A}^{*}\) and \(\widehat{\mathbf{A}}\), and in Figure 6 (right) the metric \(\text{cNMSE}(\boldsymbol{\mu}^{*},\widehat{\boldsymbol{\mu}})\). One can notice that our method is slightly more demanding in terms of computational time, but it scales similarly to its non-regularized version MLEM. Moreover, as already observed in our previous experiments, DGLASSO outperforms MLEM on both metrics shown here, for all tested values of \(K\). As expected, the results are better for higher values of \(K\), at
Figure 5: Box plots for the RMSE and F1 scores when running MLEM (left) and DGLASSO (right) on 50 randomly generated LG-SSM time series, for dataset A (top) and dataset D (bottom). DGLASSO outperforms MLEM on all runs in terms of accuracy metrics (the higher, the better), while its RMSE scores (the lower, the better) are better in most runs. Dataset D is more challenging in terms of inference, thus yielding more spread results for both methods.
the price of an increased computational time. Interestingly, the regularization still yields improved results for very large \(K\).
### Weather data
#### 5.2.1 Experimental settings
We now evaluate our method on realistic graph datasets arising from causal discovery studies in the field of weather variability tracking. Specifically, we consider two sets of 200 sparse matrices \(\mathbf{A}^{*}\in\mathbb{R}^{N_{x}\times N_{x}}\), with \(N_{x}=5\) or 10 respectively, representing the ground truth causal graphs used to produce the WEATH datasets in the Neurips 2019 data challenge (Runge et al.,
\begin{table}
[Table 1 reports, for each method (DGLASSO, MLEM, GLASSO, rGLASSO, GRAPHEM) and each dataset A to D: the RMSE, AUC and F1 scores for the estimation of \(\mathbf{A}\) and of \(\mathbf{P}\); the RMSE for the estimation of \(\mathbf{Q}\); the cNMSE scores for the state and predictive distribution means; and the marginal likelihood log-loss on the test series.]
\end{table}
Table 1: Results for the four considered datasets A to D, with an increasing condition number of \(\mathbf{P}^{*}\) equal to \(\log_{10}(c)\in\{0.1,0.2,0.5,1\}\), respectively. We evaluate the methods in terms of estimation quality for \((\mathbf{A},\mathbf{P},\mathbf{Q})\), as well as inference quality for the state and predictive distributions.
2020)5. For each \(\mathbf{A}^{*}\), we create time series \((\mathbf{x}_{k},\mathbf{y}_{k})_{k=1}^{K}\) using (2)-(3), with \(K=10^{3}\), \(\mathbf{H}^{*}=\mathbf{Id}\) (i.e., \(N_{y}=N_{x}\)), and \((\sigma_{\mathbf{R}},\sigma_{0})=(10^{-1},10^{-4})\). We set \(\mathbf{Q}^{*}\) as a block diagonal matrix of \(J\) blocks with dimensions \((B_{j})_{1\leq j\leq J}\), with \(\sum_{j=1}^{J}B_{j}=N_{x}\). Here, we consider two settings for the conditioning of \(\mathbf{Q}^{*}\), namely one with a condition number close to one (i.e., \(\mathbf{Q}^{*}\) is close to the identity matrix) and another with a high condition number. The main characteristics of the datasets and their names are summarized in Table 2.
Footnote 5: [https://causeme.uv.es/static/datasets/TestWEATH/](https://causeme.uv.es/static/datasets/TestWEATH/)
We evaluate our results in terms of quality assessment metrics of the estimated \(\widehat{\mathbf{A}}\) when compared to its ground truth \(\mathbf{A}^{*}\), as this is the quantity of interest for these datasets. We compute \(\text{RMSE}(\widehat{\mathbf{A}},\mathbf{A}^{*})\), as well as the precision, recall, specificity, accuracy, and F1 score for detecting the non-zero entries of the transition matrix (that is, the graph edges positions). A threshold value of \(10^{-10}\) on the absolute entries is used for the detection hypothesis.
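These binary classification scores follow directly from the confusion matrix of the thresholded entries; a minimal Python sketch (ours), assuming both edge classes are present so that all denominators are non-zero:

```python
import numpy as np

def edge_detection_scores(A_true, A_hat, thresh=1e-10):
    """Precision, recall, specificity, accuracy and F1 for edge detection."""
    t = np.abs(A_true) > thresh  # ground truth edges
    h = np.abs(A_hat) > thresh   # detected edges
    tp, fp = np.sum(t & h), np.sum(~t & h)
    fn, tn = np.sum(t & ~h), np.sum(~t & ~h)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, accuracy, f1
```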
For comparison, we also provide the results obtained with the MLEM and GRAPHEM approaches, as in our previous experiments. The same stopping criterion as in our previous set of experiments is used. The hyperparameters are finetuned using the strategy described in the previous section. We used rough grids for the finetuning and keep the parameters fixed for all datasets, to avoid any overfitting behavior. In addition to these EM-based methods, we provide comparisons with two Granger-causality approaches for graphical modeling, namely pairwise Granger causality (PGC) and conditional Granger causality (CGC). Those approaches are based on sparse vector autoregressive (VAR) models. We allow the order of the VAR process to be up to \(p=11\), CGC is run with a maximum distance of 5 causal links, and in each experiment we display the best results (in F1 score) among the statistical tests performed with significance levels \(\alpha\in\{0.01,0.05,0.1\}\) (see more details in (Luengo et al., 2019)). As PGC and CGC do not provide a weighted graph estimation, no RMSE score is computed in those cases.
#### 5.2.2 Results
We summarize our results in Table 3. We also report the averaged inference time for all methods. Remarkably, the proposed method outperforms all benchmarks in all the considered metrics, related both to the inference of the graph weights (RMSE metric) and to the edge detection task (binary classification metrics). As expected, the result quality tends to slightly degrade when a more complicated structure is assumed for \(\mathbf{Q}^{*}\) (see, for instance, WeathN5a versus WeathN5b), and when the dataset size increases (see, for instance, WeathN5a versus WeathN10a). MLEM has very poor edge detection metrics, since it does not exploit any sparsity prior on the sought matrix. GRAPHEM behaves better but, on these datasets, it falls well behind DGLASSO on all metrics (which was not the case for the synthetic data). This shows that our method really makes the difference when dealing with complex graphs and correlated noise in the observations. In terms of computational time, the proposed method has a complexity similar to the two other EM-based approaches. The results of CGC and PGC are satisfactory in the first two examples, although the binary metrics are far from the performance of DGLASSO. PGC still gives meaningful results in the last two examples, but CGC has a low performance due to a high number of false negatives (i.e., many existing links are not discovered). We also note that while PGC and
CGC have significantly lower running times for the case with \(N_{y}=5\), the computational cost for \(N_{y}=10\) is closer to that of DGLASSO while the performance gap is larger. Thus, these examples show that, as the dimension grows, the computational cost of all methods becomes comparable while the proposed method clearly has a better performance. We also display some examples of inferred graphs in Figures 7 and 8. We can observe that DGLASSO is able to capture in a very accurate manner the structure of the graphs, despite the wide variability of the dataset. MLEM and GRAPHEM capture the main edges, but their graphs are perturbed by several spurious edges, which shows how important it is to adopt a joint graph modeling, with sparsity priors on each graph.
## 6 Conclusion

In particular, we propose a joint approach that considers a sparse undirected graph as a prior on the precision matrix of the hidden state noise and a sparse directed graph for the transition matrix that models the state dynamics. By bridging the gap between the static graphical Lasso model and the dynamic state-space model, we provide a novel comprehensive framework for interpretable inference with time-series data. The presented inference method, based on an efficient block alternating majorization-minimization algorithm, enables simultaneous estimation of both graphs and the construction of the filtering/smoothing
Figure 7: Graph inference results on 4 examples extracted from the dataset _WeathN5a_. From top to bottom: Original graph representation of \(\mathbf{A}^{*}\), and of its estimation \(\widehat{\mathbf{A}}\), using DGLASSO, MLEM, GRAPHEM, respectively.
distribution for the time series. The established convergence of our algorithm, relying on recent nonlinear analysis tools, enhances the reliability and practicality of our approach. Through extensive experimental validation on synthetic and real weather variability data, we have demonstrated the effectiveness and potential of our proposed model and inference algorithm. The results showcase its ability to uncover meaningful insights from time-series data, contributing not only to better forecasting performance but also to a better understanding of complex phenomena in various scientific and engineering domains.
Figure 8: Graph inference results on 4 examples extracted from the dataset _WeathN10a_. From top to bottom: Original graph representation of \(\mathbf{A}^{*}\), and of its estimation \(\widehat{\mathbf{A}}\), using DGLASSO, MLEM, GRAPHEM, respectively.
Future research can further explore and extend the presented framework to tackle even more challenging state-space models, and more complex graphical structures.
## Acknowledgments

E.C. acknowledges support from the European Research Council Starting Grant MAJORIS ERC-2019-STG-850925. The work of V. E. is supported by the _Agence Nationale de la Recherche_ of France under PISCES (ANR-17-CE40-0031-01), the Leverhulme Research Fellowship (RF-2021-593), and by ARL/ARO under grant W911NF-22-1-0235. The authors thank Gustau Camps-Valls and his team for providing the ground truth data for (Runge et al., 2020).
## Appendix A Proof of Theorem 1
For any state sequence \(\mathbf{x}_{0:K}\) with non-zero probability, the neg-log-likelihood \((\mathbf{A},\mathbf{P})\mapsto\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})=-\log p(\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})\) reads, according to Bayes' rule,
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{ P}\in\mathcal{S}_{N_{x}})\\ \mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})=-\log p(\mathbf{x}_{0:K },\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})+\log p(\mathbf{x}_{0:K}|\mathbf{y}_{ 1:K},\mathbf{A},\mathbf{P}). \tag{65}\]
According to Eqs. (2)-(3)
\[\log p(\mathbf{x}_{0:K},\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})=\log p( \mathbf{x}_{0})+\sum_{k=1}^{K}\log p(\mathbf{x}_{k}|\mathbf{x}_{k-1},\mathbf{ A},\mathbf{P})+\sum_{k=1}^{K}\log p(\mathbf{y}_{k}|\mathbf{x}_{k}). \tag{66}\]
Moreover, using again Eq. (2) and the statistical model of the state noise,
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}}) \tag{67}\] \[\sum_{k=1}^{K}\log p(\mathbf{x}_{k}|\mathbf{x}_{k-1},\mathbf{A}, \mathbf{P}) =-\frac{1}{2}\sum_{k=1}^{K}\left((\mathbf{x}_{k}-\mathbf{A}\mathbf{ x}_{k-1})^{\top}\mathbf{P}\left(\mathbf{x}_{k}-\mathbf{A}\mathbf{x}_{k-1} \right)-\log\det(2\pi\mathbf{P})\right),\] (68) \[=-\frac{1}{2}\sum_{k=1}^{K}\left((\mathbf{x}_{k}-\mathbf{A} \mathbf{x}_{k-1})^{\top}\mathbf{P}\left(\mathbf{x}_{k}-\mathbf{A}\mathbf{x}_{k -1}\right)\right)+\frac{K}{2}\log\det(2\pi\mathbf{P}), \tag{69}\]
which concludes the first part of the proof.
Let us now consider some \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N_{x}\times N_{x}}\) and \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\). We start by recalling some known results arising from the EM methodology (Dempster et al., 1977; Wu, 1983), that we specify here for our context for the sake of clarity. First, we notice that
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}})\\ \mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})=-\int\log p(\mathbf{y}_{ 1:K}|\mathbf{A},\mathbf{P})p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\widetilde{ \mathbf{A}},\widetilde{\mathbf{P}})\mathrm{d}\mathbf{x}_{0:K}, \tag{70}\]
since \(\log p(\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})\) is constant with respect to the integration variable, and the distribution \(p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\widetilde{\mathbf{A}},\widetilde{\mathbf{ P}})\) integrates to one. Then, according to (65) and (70), the expectation of the neg-log-likelihood multiplied by \(p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\widetilde{\mathbf{A}},\widetilde{\mathbf{ P}})\) over all possible values of the unknown state reads:
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})=\overbrace{-\int p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\log p(\mathbf{x}_{0:K},\mathbf{y}_{1:K}|\mathbf{A},\mathbf{P})\mathrm{d}\mathbf{x}_{0:K}}^{q(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})}\\ +\overbrace{\int p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\log p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\mathbf{A},\mathbf{P})\mathrm{d}\mathbf{x}_{0:K}}^{h(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})}. \tag{71}\]
In particular, for \((\mathbf{A},\mathbf{P})=(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\), \(\mathcal{L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=q(\widetilde{ \mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}} )+h(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})\) so that
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P} )-\mathcal{L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=q(\mathbf{A },\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})-q(\widetilde{ \mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}} )\\ +h(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}})-h(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{ \mathbf{A}},\widetilde{\mathbf{P}}). \tag{72}\]
Using Gibbs' inequality, \(h(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\leq h(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\), with equality at \((\mathbf{A},\mathbf{P})=(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\). Combining this with (72) yields
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})\leq q( \mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})+\mathcal{ L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})-q(\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}}), \tag{73}\]
where equality holds at \((\mathbf{A},\mathbf{P})=(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\). Notice that, for any function reading
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{ A}},\widetilde{\mathbf{P}})=q(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})+\mathrm{ct}_{/\mathbf{A},\mathbf{P}}, \tag{74}\]
we obviously still have
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\mathcal{L}_{1:K}(\mathbf{A},\mathbf{P})\leq\mathcal{ Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})+ \mathcal{L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})-\mathcal{Q}( \widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}}), \tag{75}\]
where equality here again holds at \((\mathbf{A},\mathbf{P})=(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\). Using (75), (20), (22) and noticing that, for every \((\theta_{A},\theta_{P})>0\),
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in \mathcal{S}_{N_{x}})\quad\frac{1}{2\theta_{A}}\|\mathbf{A}-\widetilde{\mathbf{A }}\|_{F}^{2}\geq 0,\quad\frac{1}{2\theta_{P}}\|\mathbf{P}-\widetilde{\mathbf{P}}\|_ {F}^{2}\geq 0, \tag{76}\]
with equality holding at \((\mathbf{A},\mathbf{P})=(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\), we deduce the desired majorizing property
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{L}(\mathbf{A},\mathbf{P})\leq \mathcal{Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}} )+\mathcal{L}_{1:K}(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})-\mathcal{Q} (\widetilde{\mathbf{A}},\widetilde{\mathbf{P}};\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})\\ +\lambda_{A}\|\mathbf{A}\|_{1}+\lambda_{P}\|\mathbf{P}\|_{1}+ \frac{1}{2\theta_{A}}\|\mathbf{A}-\widetilde{\mathbf{A}}\|_{F}^{2}+\frac{1}{2 \theta_{P}}\|\mathbf{P}-\widetilde{\mathbf{P}}\|_{F}^{2}, \tag{77}\]
with equality holding at \((\mathbf{A},\mathbf{P})=(\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\). The remainder of the proof amounts to deriving the explicit expression of \((\mathbf{A},\mathbf{P})\to\mathcal{Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\) satisfying (74) with function \(q\) defined as in (71):
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall \mathbf{P}\in\mathcal{S}_{N_{x}})\\ q(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{ \mathbf{P}})=-\int p(\mathbf{x}_{0:K}|\mathbf{y}_{1:K},\widetilde{\mathbf{A}}, \widetilde{\mathbf{P}})\log p(\mathbf{x}_{0:K},\mathbf{y}_{1:K}|\mathbf{A}, \mathbf{P})\mathrm{d}\mathbf{x}_{0:K}. \tag{78}\]
Following (Sarkka, 2013, Theorem 12.4) (see also an alternative proof in (Elvira and Chouzenoux, 2022, Sec.III-B)), (71)-(74) hold for
\[(\forall\mathbf{A}\in\mathbb{R}^{N_{x}\times N_{x}})(\forall\mathbf{P}\in\mathcal{S}_{N_{x}})\quad\mathcal{Q}(\mathbf{A},\mathbf{P};\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})=\frac{K}{2}\mathrm{tr}\left(\mathbf{P}(\boldsymbol{\Psi}-\boldsymbol{\Delta}\mathbf{A}^{\top}-\mathbf{A}\boldsymbol{\Delta}^{\top}+\mathbf{A}\boldsymbol{\Phi}\mathbf{A}^{\top})\right)-\frac{K}{2}\log\det(2\pi\mathbf{P}), \tag{79}\]
and
\[\boldsymbol{\Psi} =\frac{1}{K}\sum_{k=1}^{K}\left(\boldsymbol{\Sigma}_{k}^{\mathrm{s}} +\boldsymbol{\mu}_{k}^{\mathrm{s}}(\boldsymbol{\mu}_{k}^{\mathrm{s}})^{\top} \right), \tag{80}\] \[\boldsymbol{\Delta} =\frac{1}{K}\sum_{k=1}^{K}\left(\boldsymbol{\Sigma}_{k}^{ \mathrm{s}}\mathbf{G}_{k-1}^{\top}+\boldsymbol{\mu}_{k}^{\mathrm{s}}( \boldsymbol{\mu}_{k-1}^{\mathrm{s}})^{\top}\right),\] (81) \[\boldsymbol{\Phi} =\frac{1}{K}\sum_{k=1}^{K}\left(\boldsymbol{\Sigma}_{k-1}^{ \mathrm{s}}+\boldsymbol{\mu}_{k-1}^{\mathrm{s}}(\boldsymbol{\mu}_{k-1}^{ \mathrm{s}})^{\top}\right). \tag{82}\]
Hereabove, \((\boldsymbol{\mu}_{k}^{\mathrm{s}},\boldsymbol{\Sigma}_{k}^{\mathrm{s}})_{0\leq k\leq K}\) denote the means and covariances of the smoothing distribution obtained when running Algs. 1-2 using \((\widetilde{\mathbf{A}},\widetilde{\mathbf{P}})\). Moreover, the matrix
\[\mathbf{G}_{k}=\boldsymbol{\Sigma}_{k}\widetilde{\mathbf{A}}^{\top}\left(\widetilde{\mathbf{A}}\boldsymbol{\Sigma}_{k}\widetilde{\mathbf{A}}^{\top}+\widetilde{\mathbf{P}}^{-1}\right)^{-1} \tag{83}\]
is defined in Alg. 2. This concludes the proof.
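For completeness, the statistics (80)-(82) are easily assembled from the outputs of the RTS smoother; a minimal Python sketch (ours), where the array layout and indexing conventions are assumptions made for illustration:

```python
import numpy as np

def smoother_statistics(mu_s, Sigma_s, G):
    """Compute (Psi, Delta, Phi) of (80)-(82) from RTS smoother outputs.

    mu_s    : (K+1, N) smoothing means, indices 0..K
    Sigma_s : (K+1, N, N) smoothing covariances
    G       : (K, N, N) smoother gains G_0, ..., G_{K-1} from Alg. 2
    """
    K = mu_s.shape[0] - 1
    Psi = sum(Sigma_s[k] + np.outer(mu_s[k], mu_s[k])
              for k in range(1, K + 1)) / K
    Delta = sum(Sigma_s[k] @ G[k - 1].T + np.outer(mu_s[k], mu_s[k - 1])
                for k in range(1, K + 1)) / K
    Phi = sum(Sigma_s[k - 1] + np.outer(mu_s[k - 1], mu_s[k - 1])
              for k in range(1, K + 1)) / K
    return Psi, Delta, Phi
```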
## Appendix B Proximal algorithms to solve the inner steps
We present Algorithms 4 and 5, which are proximal splitting algorithms to solve, respectively, the inner problems (32) and (33). Specifically, both algorithms are special instances of the Dykstra-like splitting algorithm from (Bauschke and Combettes, 2008) (see also (Combettes and Pesquet, 2011, Sec.5)), for the minimization of the sum of two convex but non-differentiable functions. The sequence \((\mathbf{A}_{n})_{n\in\mathbb{N}}\) (resp. \((\mathbf{P}_{n})_{n\in\mathbb{N}}\)) is guaranteed to converge to the solution of problem (32) (resp. (33)). The proximity steps involved in Algorithms 4 and 5 have closed-form expressions that can be found, for instance, in the reference book (Bauschke and Combettes, 2017). We make them explicit hereafter for the sake of completeness.
Proximity of \(\ell_{1}\). Let \(\gamma>0\) and \(\widetilde{\mathbf{V}}\in\mathbb{R}^{N_{x}\times N_{x}}\). Then,
\[\mathrm{prox}_{\gamma\ell_{1}}(\widetilde{\mathbf{V}}) \tag{84}\] \[=\left(\mathrm{sign}(\widetilde{V}(n,\ell))\max(0,|\widetilde{V}(n,\ell)|-\gamma)\right)_{1\leq n,\ell\leq N_{x}}, \tag{85}\]
which amounts to applying the soft thresholding operator with weight \(\gamma\) on every entry of the matrix input \(\widetilde{\mathbf{V}}\).
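In code, this proximity operator is a one-liner; a minimal Python sketch (ours):

```python
import numpy as np

def prox_l1(V, gamma):
    """Entrywise soft thresholding: prox of gamma * ||.||_1, cf. (84)-(85)."""
    return np.sign(V) * np.maximum(np.abs(V) - gamma, 0.0)
```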
Proximity of quadratic term. Let \(\gamma>0\) and \(\widetilde{\mathbf{W}}\in\mathbb{R}^{N_{x}\times N_{x}}\). Then, by definition,
\[\widehat{\mathbf{Z}} =\mathrm{prox}_{\mathbf{W}\rightarrow\gamma\mathrm{tr}\left(-\widetilde{\mathbf{P}}\boldsymbol{\Delta}\mathbf{W}^{\top}-\widetilde{\mathbf{P}}\mathbf{W}\boldsymbol{\Delta}^{\top}+\widetilde{\mathbf{P}}\mathbf{W}\boldsymbol{\Phi}\mathbf{W}^{\top}\right)}\left(\widetilde{\mathbf{W}}\right) \tag{86}\] \[=\underset{\mathbf{W}\in\mathbb{R}^{N_{x}\times N_{x}}}{\operatorname{argmin}}\ \gamma\mathrm{tr}\left(-\widetilde{\mathbf{P}}\boldsymbol{\Delta}\mathbf{W}^{\top}-\widetilde{\mathbf{P}}\mathbf{W}\boldsymbol{\Delta}^{\top}+\widetilde{\mathbf{P}}\mathbf{W}\boldsymbol{\Phi}\mathbf{W}^{\top}\right)+\frac{1}{2}\|\mathbf{W}-\widetilde{\mathbf{W}}\|_{\mathrm{F}}^{2}. \tag{87}\]
The optimality condition for (87) gives
\[-\gamma\widetilde{\mathbf{P}}\boldsymbol{\Delta}-\gamma\boldsymbol{\Delta}^{\top}\widetilde{\mathbf{P}}+\gamma\widetilde{\mathbf{P}}\widehat{\mathbf{Z}}\boldsymbol{\Phi}+\gamma\widetilde{\mathbf{P}}^{\top}\widehat{\mathbf{Z}}\boldsymbol{\Phi}^{\top}+\widehat{\mathbf{Z}}-\widetilde{\mathbf{W}}=\mathbf{0}. \tag{88}\]
Since \(\widetilde{\mathbf{P}}\in\mathcal{S}_{N_{x}}^{++}\), and \(\boldsymbol{\Phi}\in\mathcal{S}_{N_{x}}\) (by construction), we have equivalently,
\[-\gamma\boldsymbol{\Delta}-\gamma\widetilde{\mathbf{P}}^{-1}\boldsymbol{\Delta}^{\top}\widetilde{\mathbf{P}}+2\gamma\widehat{\mathbf{Z}}\boldsymbol{\Phi}+\widetilde{\mathbf{P}}^{-1}\widehat{\mathbf{Z}}-\widetilde{\mathbf{P}}^{-1}\widetilde{\mathbf{W}}=\mathbf{0}. \tag{89}\]
Thus,
\[\widehat{\mathbf{Z}}=\text{lyapunov}\left(\widetilde{\mathbf{P}}^{-1},2\gamma\boldsymbol{\Phi},\gamma(\boldsymbol{\Delta}+\widetilde{\mathbf{P}}^{-1}\boldsymbol{\Delta}^{\top}\widetilde{\mathbf{P}})+\widetilde{\mathbf{P}}^{-1}\widetilde{\mathbf{W}}\right), \tag{90}\]
where \(\mathsf{A}=\text{lyapunov}(\mathsf{X},\mathsf{Y},\mathsf{Z})\) provides the solution to the Lyapunov equation \(\mathsf{XA}+\mathsf{AY}=\mathsf{Z}\).
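In practice, (90) can be evaluated with a standard Sylvester/Lyapunov solver; a minimal Python sketch (ours), relying on `scipy.linalg.solve_sylvester`, which returns the solution \(\mathsf{A}\) of \(\mathsf{XA}+\mathsf{AY}=\mathsf{Z}\). Here, \(\widetilde{\mathbf{P}}\), \(\boldsymbol{\Delta}\) and \(\boldsymbol{\Phi}\) are the quantities appearing in (86)-(90):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def prox_quadratic(W_tilde, P_tilde, Delta, Phi, gamma):
    """Proximity operator of the quadratic trace term, following (90)."""
    P_inv = np.linalg.inv(P_tilde)
    rhs = gamma * (Delta + P_inv @ Delta.T @ P_tilde) + P_inv @ W_tilde
    # Solves P_inv @ Z + Z @ (2*gamma*Phi) = rhs.
    return solve_sylvester(P_inv, 2.0 * gamma * Phi, rhs)
```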
Proximity of log-determinant term. Let \(\gamma>0\) and \(\widetilde{\mathbf{W}}\in\mathcal{S}_{N_{x}}^{++}\). Then, by definition,
\[\widehat{\mathbf{Z}} =\text{prox}_{\mathbf{W}\to\gamma(-\log\det(\mathbf{W})+\text{tr}(\mathbf{W}\boldsymbol{\Pi}))}\left(\widetilde{\mathbf{W}}\right) \tag{91}\] \[=\operatorname*{argmin}_{\mathbf{W}\in\mathcal{S}_{N_{x}}}\ -\gamma\log\det(\mathbf{W})+\gamma\text{tr}(\mathbf{W}\boldsymbol{\Pi})+\frac{1}{2}\|\mathbf{W}-\widetilde{\mathbf{W}}\|_{F}^{2}. \tag{92}\]
Using (Bauschke and Combettes, 2017, Chap. 24) (see also (Benfenati et al., 2020)),
\[\widehat{\mathbf{Z}}=\mathbf{U}\text{Diag}\left(\left(\frac{1}{2}(\omega(n)+ \sqrt{\omega(n)^{2}+4\gamma})\right)_{1\leq n\leq N_{x}}\right)\mathbf{U}^{\top} \tag{93}\]
where \(\boldsymbol{\omega}=(\omega(n))_{1\leq n\leq N_{x}}\) gathers the eigenvalues of \(\widetilde{\mathbf{W}}-\gamma\boldsymbol{\Pi}\in\mathcal{S}_{N_{x}}\) and \(\mathbf{U}\in\mathbb{R}^{N_{x}\times N_{x}}\) is an orthogonal matrix such that
\[\widetilde{\mathbf{W}}-\gamma\boldsymbol{\Pi}=\mathbf{U}\text{ Diag}(\boldsymbol{\omega})\mathbf{U}^{\top}. \tag{94}\]
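The log-determinant proximity operator thus reduces to a single eigendecomposition; a minimal Python sketch (ours):

```python
import numpy as np

def prox_logdet(W_tilde, Pi, gamma):
    """Prox of gamma * (-log det(W) + tr(W @ Pi)), cf. (93)-(94)."""
    omega, U = np.linalg.eigh(W_tilde - gamma * Pi)        # decomposition (94)
    z = 0.5 * (omega + np.sqrt(omega ** 2 + 4.0 * gamma))  # eigenvalue map (93)
    return (U * z) @ U.T                                   # U Diag(z) U^T
```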
## Appendix C Additional experiments
We present here additional experimental results completing Section 5.1.
### Maximum likelihood calculation
We compare the MLEM approach from (Shumway and Stoffer, 1982) with the sensitivity equation approach from (Segal and Weinstein, 1989) for computing maximum likelihood estimates of the sought matrices. For the latter, we use the gradient expressions from (Nagakura, 2021) and perform the minimization using a quasi-Newton routine from the Optimization Toolbox of Matlab; we denote this approach MLQN. We ran both the MLEM and MLQN methods on the four synthetic datasets A to D. Figure 9 displays the evolution of the loss function along iterations for both the MLEM and MLQN algorithms. One can notice that both reach a similar value at convergence. Moreover, MLEM requires very few iterations (typically, less than 10) to stabilize, while MLQN displays a slower convergence profile despite the quasi-Newton (i.e., second-order) acceleration strategy. This might be due to the ill-conditioning of the minimization problem. In terms of time complexity, both methods are comparable, requiring a full pass on the time series, through a Kalman update, at each iteration. Given these results, we decided to keep only MLEM as a comparative benchmark for our experiments.
Figure 9: Evolution of \(\mathcal{L}_{1:K}(\mathbf{A}^{(i)},\mathbf{Q}^{(i)})\) along iterations using either MLEM or MLQN approach.
### Robustness to initialization
The DGLASSO algorithm amounts to minimizing a non-convex loss function. As such, its results might be sensitive to the initialization of the algorithm. To evaluate this aspect, we consider the computation of \((\widehat{\mathbf{A}},\widehat{\mathbf{P}})\) given the observation of a single time series generated by the ground-truth LG-SSM, using 50 different initializations of the DGLASSO algorithm. To do so, we use the same initialization strategy as discussed above, now with \((a,p)\) randomly selected as \(p\sim\mathcal{U}([0,1])\) and \(a\sim\mathcal{U}([0,1])\). Figure 10 displays the box plots for the RMSE and F1 scores obtained for dataset A. One can notice that the box plots are very concentrated, showing a good robustness of the method to its initialization, with the widest spread observed for the F1 score on \(\mathbf{A}\). Similar behavior was observed for the other three datasets.
### Influence of sparsity level
We evaluate here the performance of DGLASSO, as well as the benchmarks, when varying the sparsity level of the ground-truth matrices. To do so, we make slight changes to the dataset to vary the sparsity pattern of the matrices (i.e., the edge structure of the graphs). First, we modify the ground-truth matrix \(\mathbf{A}^{*}\) by keeping \(s_{A}\in\{27,15,10,5\}\) of its 27 block-diagonal entries non-zero and setting the others to zero. We then rescale the matrix so that its spectral norm equals 0.99. Matrix \(\mathbf{Q}^{*}\) is taken from dataset A. The results are reported in Table 4.
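A sketch of this construction (the rule for choosing which block-diagonal entries to keep is not specified in the text; random selection is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
support = np.kron(np.eye(3), np.ones((3, 3))).astype(bool)  # the 27 block-diagonal positions
A = rng.standard_normal((9, 9)) * support

def sparsify_and_rescale(A, s_A, target_norm=0.99):
    """Keep s_A block-diagonal entries of A, zero the rest, and rescale so the
    spectral norm of the result equals target_norm."""
    rows, cols = np.nonzero(support)
    keep = rng.choice(len(rows), size=s_A, replace=False)
    A_s = np.zeros_like(A)
    A_s[rows[keep], cols[keep]] = A[rows[keep], cols[keep]]
    return A_s * (target_norm / np.linalg.norm(A_s, 2))

A_variants = {s_A: sparsify_and_rescale(A, s_A) for s_A in (27, 15, 10, 5)}
```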
As we can observe, the performance of MLEM, in terms of RMSE and F1 score, drops dramatically when the sparsity level of \(\mathbf{A}\) increases. This is expected, as this approach does not promote any sparsity prior on matrix \(\mathbf{A}\). The best AUC is obtained by either MLEM, DGLASSO, or rGLASSO (for \(\mathbf{P}\)), depending on the test case. The GLASSO/rGLASSO metrics slightly improve when \(\mathbf{A}^{*}\) gets sparser, which is expected, as their assumption of a zero transition matrix becomes more realistic. Here again, DGLASSO outperforms the benchmarks in most cases.
Figure 10: Box plots for the RMSE and F1 scores for retrieving \((\mathbf{A},\mathbf{P},\mathbf{Q})\) matrices, when running DGLASSO on one single LG-SSM time series using dataset A, and 50 random initializations \((\mathbf{A}^{(0)},\mathbf{P}^{(0)})\). Noticeably, low variability is observed for all metrics. |
2310.03807 | Detecting Fast Neutrino Flavor Conversions with Machine Learning | Neutrinos in dense environments like core-collapse supernovae (CCSNe) and
neutron star mergers (NSMs) can undergo fast flavor conversions (FFCs) once the
angular distribution of neutrino lepton number crosses zero along a certain
direction. Recent advancements have demonstrated the effectiveness of machine
learning (ML) in detecting these crossings. In this study, we enhance prior
research in two significant ways. Firstly, we utilize realistic data from CCSN
simulations, where neutrino transport is solved using the full Boltzmann
equation. We evaluate the ML methods' adaptability in a real-world context,
enhancing their robustness. In particular, we demonstrate that when working
with artificial data, simpler models outperform their more complex
counterparts, a noteworthy illustration of the bias-variance tradeoff in the
context of ML. We also explore methods to improve artificial datasets for ML
training. In addition, we extend our ML techniques to detect the crossings in
the heavy-leptonic channels, accommodating scenarios where $\nu_x$ and
$\bar\nu_x$ may differ. Our research highlights the extensive versatility and
effectiveness of ML techniques, presenting an unparalleled opportunity to
evaluate the occurrence of FFCs in CCSN and NSM simulations. | Sajad Abbar, Hiroki Nagakura | 2023-10-05T18:00:14Z | http://arxiv.org/abs/2310.03807v2 | # Detecting Fast Neutrino Flavor Conversions with Machine Learning
###### Abstract
Neutrinos in dense environments like core-collapse supernovae (CCSNe) and neutron star mergers (NSMs) can undergo fast flavor conversions (FFCs) once the angular distribution of neutrino lepton number crosses zero along a certain direction. Recent advancements have demonstrated the effectiveness of machine learning (ML) in detecting these crossings. In this study, we enhance prior research in two significant ways. Firstly, we utilize realistic data from CCSN simulations, where neutrino transport is solved using the full Boltzmann equation. We evaluate the ML methods' adaptability in a real-world context, enhancing their robustness. In particular, we demonstrate that when working with artificial data, simpler models outperform their more complex counterparts, a noteworthy illustration of the bias-variance tradeoff in the context of ML. We also explore methods to improve artificial datasets for ML training. In addition, we extend our ML techniques to detect the crossings in the heavy-leptonic channels, accommodating scenarios where \(\nu_{x}\) and \(\bar{\nu}_{x}\) may differ. Our research highlights the extensive versatility and effectiveness of ML techniques, presenting an unparalleled opportunity to evaluate the occurrence of FFCs in CCSN and NSM simulations.
+
Footnote †: preprint: MPP-2023-237
## I Introduction
Core-collapse supernovae (CCSNe) and neutron star mergers (NSMs) are cataclysmic stellar events that represent the dramatic culmination of massive stars' life cycles and the collision and coalescence of incredibly dense remnants, respectively. These events not only mark the end of massive stars and dense objects, but also unveil some of the most energetic and enigmatic phenomena in the universe. At the heart of these cosmic fireworks, one of the most fascinating processes at play is the emission of neutrinos, which are released in vast quantities during CCSNe and NSMs.
As they journey through the extraordinarily dense and extreme conditions within these events, neutrinos undergo an intriguing phenomenon known as collective neutrino oscillations. This fascinating behavior arises from their interactions with the dense background neutrino gas, where coherent forward scatterings play a pivotal role. This phenomenon occurs in a nonlinear and collective manner, creating a rich tapestry of flavor transformations [1; 2; 3; 4; 5; 6; 7] (for a recent review see Ref. [8]).
Of particular interest are the so-called _fast_ flavor conversions (FFCs), which occur on scales characterized by \(\sim G_{\rm F}^{-1}n_{\nu}^{-1}\) (see, e.g., Refs. [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]). Here, \(G_{\rm F}\) represents the Fermi coupling constant, and \(n_{\nu}\) denotes the neutrino number density. These FFCs can take place on timescales much shorter than what would be expected in the vacuum.
FFCs occur _iff_ the angular distribution of the neutrino lepton number, defined as,
\[\begin{split} G({\bf v})=\sqrt{2}G_{\rm F}\int_{0}^{\infty}\frac {E_{\nu}^{2}{\rm d}E_{\nu}}{(2\pi)^{3}}&[\big{(}f_{\nu_{x}}({ \bf p})-f_{\nu_{x}}({\bf p})\big{)}\\ &-\big{(}f_{\bar{\nu}_{x}}({\bf p})-f_{\bar{\nu}_{x}}({\bf p}) \big{)}],\end{split} \tag{1}\]
crosses zero at some \({\bf v}={\bf v}(\mu,\phi_{\nu})\), with \(\mu=\cos\theta_{\nu}\)[23]. Here, \(E_{\nu}\), \(\theta_{\nu}\), and \(\phi_{\nu}\) are the neutrino energy, the zenith, and azimuthal angles of the neutrino velocity, respectively, and \(f_{\nu}\)'s are the neutrino occupation numbers. When \(\nu_{x}\) and \(\bar{\nu}_{x}\) have similar angular distributions, a scenario commonly observed in state-of-the-art CCSN simulations, this expression transforms into the conventional \(\nu\)ELN (neutrino electron lepton number).
Exploring \(\nu\)ELN crossings necessitates access to the complete angular distributions of neutrinos. However, obtaining such detailed angular information poses a significant challenge in most cutting-edge CCSN and NSM simulations due to the prohibitive computational demands involved.
As a practical alternative, many simulations simplify neutrino transport by relying on a limited set of angular distribution moments. In our specific investigation, we focus on radial moments, defined as,
\[I_{n}=\int_{-1}^{1}{\rm d}\mu\ \mu^{n}\ \int_{0}^{\infty}\int_{0}^{2\pi}\frac {E_{\nu}^{2}{\rm d}E_{\nu}{\rm d}\phi_{\nu}}{(2\pi)^{3}}\ f_{\nu}({\bf p}). \tag{2}\]
These moments capture the key aspects of the neutrino angular distribution, at the same time allowing for its more computationally manageable treatment.
Despite the inherent loss of a significant amount of information when considering only a select few neutrino angular moments, ingenious methods can still be devised to harness this limited information for assessing FFCs in CCSN and NSM simulations. In the initial stages of research in this field, the primary focus was on analytical or semi-analytical techniques [51; 52; 53; 54; 18; 24]. While these methods have demonstrated their ability to capture ELN crossings and have found relative success in the literature, they are constrained in their performance. This limitation arises from either their sluggish computational speed or their inefficiency in identifying ELN crossings, which impacts their ability to efficiently detect FFCs in real-time simulations. Specifically, the most accurate techniques tend to be noticeably sluggish, and their development can be relatively intricate when starting from scratch.
Recent research has demonstrated the remarkable effectiveness of machine learning (ML) techniques in identifying FFCs in CCSN and NSM simulations [56]. While ML methods are data-intensive and require an initial training phase, it is important to note that once trained, they exhibit exceptional speed and efficiency. This presents a promising avenue for real-time detection of fast flavor instabilities (FFIs) within the context of CCSN and NSM simulations. Moreover, integrating pre-trained ML models is a straightforward procedure, significantly reducing the requirement for extensive coding work, even when analyzing the occurrence of FFCs in a post-processing phase. In fact, ML techniques offer the fastest and most precise approach to detect FFCs, and their performance can be further enhanced as one encounters increasingly complex environments.
In this paper, we advance the prior study in two pivotal directions. Firstly, in earlier work, ML models were trained using artificial data generated from specific parametric angular distributions. While these models showed promise, they were only partially validated against a limited amount of realistic data from NSM remnant simulations. It is essential for ML techniques to be trained and tested on data that closely resembles real-world simulations. In our study, we take a significant step forward by utilizing authentic data from a CCSN simulation, where neutrino transport was modeled using the full Boltzmann equation. This allows us to assess the adaptability of ML methods in a real-world context and examine their limitations as well as their optimal performance range. Furthermore, we acknowledge that artificial data are more readily available and can offer broader distributions that are expected in CCSN and NSM environments. Consequently, we also investigate methods to enhance artificial datasets for ML training purposes.
In addition, in our previous study, we assumed that the distributions of \(\nu_{x}\) and \(\bar{\nu}_{x}\) were identical. From a ML perspective, this simplification facilitated the development of our ML module by requiring a smaller number of features and allowing a more efficient classification. In this work, we develop ML techniques to detect crossings in the neutrino heavy-leptonic channel distribution (\(\nu\)XLN), addressing scenarios where \(\nu_{x}\) and \(\bar{\nu}_{x}\) may exhibit differences, a _previously unexplored_ area in the literature. While our results may be slightly less accurate than those in the previous scenario, ML methods still prove remarkably effective in identifying the occurrence of FFCs in this scenario.
In the upcoming section, we delve into our CCSN model, the source of our data. We then assess the performance of ML methods in detecting FFCs in our CCSN model, specifically focusing on the detection of \(\nu\)ELN crossings. Finally, before presenting our conclusions, we analyze the performance of ML methods in detecting crossings in the heavy-leptonic channel.
## II CCSN model
Here, we construct a ML technique to detect \(\nu\)ELN and \(\nu\)XLN angular crossings based on an axisymmetric CCSN model with full Boltzmann neutrino transport [57]. Before going into the details of our ML approach, we briefly describe the CCSN model that provides the neutrino dataset used for ML training and testing.
The numerical simulation was carried out with a Boltzmann-neutrino-radiation hydrodynamic code [58] with some special treatments to handle proper motions of the proto-neutron star (PNS) [59, 60]. In this model, a multi-nuclear variational-method equation-of-state table [61] was used, and the nuclear abundance in the table was also used to compute neutrino-matter interactions for a consistent treatment of the EOS and weak rates [62]. We used an 11.2 solar-mass progenitor model from Ref. [63].
One of the noticeable features in the CCSN model is that large-scale asymmetric neutrino emission emerges \(>150\) ms after bounce, corresponding to the timing of asymmetric shock expansion and the onset of the PNS proper motion (see Fig. 1 in Ref. [57]). We note that the asymmetric emission is clearly anti-correlated between \(\nu_{e}\) and \(\bar{\nu}_{e}\), which is attributed to the distribution of electron fraction (\(Y_{e}\)) in the vicinity of the PNS; \(\bar{\nu}_{e}\) tends to be more abundant in low-\(Y_{e}\) environments. As shown in Ref. [64], the increase of \(\bar{\nu}_{e}\) reduces the disparity between the \(\nu_{e}\) and \(\bar{\nu}_{e}\) angular distributions, enhancing the possibility of \(\nu\)ELN crossings. In regions with higher \(Y_{e}\), on the other hand, \(\nu_{e}\) becomes much more abundant than \(\bar{\nu}_{e}\), indicating that FFI is unlikely to occur.
In the CCSN model, \(\nu\)ELN angular crossings are observed rather stably at \(>200\) ms (see Fig. 2 in Ref. [64]); hence, we employ three different time snapshots for our ML training (200, 250, and 300 ms after bounce), in which there are spatial regions both with and without \(\nu\)ELN angular crossings. It should be mentioned that \(\nu\)ELN crossings are also observed in the PNS convective layer and in pre-shock regions, consistent with previous studies [65, 66, 67]. For a more detailed discussion of the neutrino angular distributions relevant to \(\nu\)ELN crossings, we refer readers to Ref. [64].
## III ML algorithms
ML, at the crossroads of computer science and artificial intelligence, is transforming how computers learn and make decisions from data. By unraveling intricate patterns, ML drives progress across diverse domains, from image and speech recognition to healthcare, finance, and autonomous vehicles.
Recent advancements have demonstrated the effective utilization of ML algorithms for detecting \(\nu\)ELN crossings in CCSN and NSM simulations [56]. In this context, we commence with a brief overview of the data preparation and ML techniques employed in Ref. [56]. Subsequently, we provide a comprehensive discussion of our
research findings.
To effectively train and evaluate our ML algorithms, it is imperative to possess a substantial dataset comprising labeled values for \(I_{0}\) and \(I_{1}\) associated with \(\nu_{e}\) and \(\bar{\nu}_{e}\). These labels are instrumental in discerning the presence or absence of \(\nu\)ELN crossing. It's worth highlighting that our current emphasis is primarily on the first two moments. These moments are of particular interest as they are the ones typically tracked directly in the simulation processes.
In order to train our ML algorithms, we partially employ two parametric neutrino angular distributions which have been widely used in the literature [37, 54, 68], namely, the maximum entropy distribution defined as,
\[f_{\nu}^{\rm max-ent}(\mu)=\exp[\eta+a\mu], \tag{3}\]
and the Gaussian distribution,
\[f_{\nu}^{\rm Gauss}(\mu)=A\exp[-\frac{(1-\mu)^{2}}{a}], \tag{4}\]
with,
\[f_{\nu}(\mu)=\int_{0}^{\infty}\int_{0}^{2\pi}\frac{E_{\nu}^{2}{\rm d}E_{\nu}{ \rm d}\phi_{\nu}}{(2\pi)^{3}}f_{\nu}(\mathbf{p}). \tag{5}\]
Here the parameters \(a\), \(\eta\), and \(A\) determine the overall neutrino number density and the shape of the neutrino distributions.
In addition, in order to improve the efficiency of our ML algorithms, we perform feature engineering: instead of \(I_{0}\) and \(I_{1}\) of \(\nu_{e}\) and \(\bar{\nu}_{e}\), which are the provided information, we use,
\[\alpha=I_{0}^{\bar{\nu}_{e}}/I_{0}^{\nu_{e}},\ F_{\nu_{e}}=I_{1}^{\nu_{e}}/I_{ 0}^{\nu_{e}},\ {\rm and}\ F_{\bar{\nu}_{e}}=I_{1}^{\bar{\nu}_{e}}/I_{0}^{\bar{\nu}_{e}}, \tag{6}\]
as the relevant features to be considered in the ML algorithms. This is justified by bearing in mind that an overall normalisation factor does not affect the occurrence of \(\nu\)ELN crossings.
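As a rough illustration of this preparation (a sketch with parameter values of our own choosing, not the authors' pipeline), one can draw maximum-entropy distributions, compute the moments, and derive the features together with a crossing label:

```python
import numpy as np

mu = np.linspace(-1.0, 1.0, 2001)

def max_entropy(eta, a):
    return np.exp(eta + a * mu)                            # eq. (3)

def moment(f, n):
    y = mu**n * f                                          # I_n of eq. (2), up to constant prefactors
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(mu))    # trapezoidal integration over mu

f_nue, f_anue = max_entropy(0.0, 1.0), max_entropy(-0.2, 2.0)  # placeholder parameters

alpha  = moment(f_anue, 0) / moment(f_nue, 0)
F_nue  = moment(f_nue, 1) / moment(f_nue, 0)
F_anue = moment(f_anue, 1) / moment(f_anue, 0)

G = f_nue - f_anue                       # angle-dependent nuELN (nu_x and anti-nu_x cancel here)
label = bool(G.min() < 0.0 < G.max())    # True iff the nuELN changes sign, i.e. a crossing exists
features = np.array([alpha, F_nue, F_anue])
```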
To facilitate the training and evaluation of ML algorithms, it is essential to partition the dataset into three distinct sets: i) The training set: This subset is employed to train the ML algorithm, allowing it to learn patterns and relationships within the data, ii) The development set: This set serves as a tool for fine-tuning the algorithm's hyperparameters, ensuring optimal performance, and iii) The test set: This portion is dedicated to assessing the ML method's performance on previously unseen data, providing a reliable measure of its effectiveness.
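A minimal three-way split in scikit-learn (the 60/20/20 fractions and the placeholder data are our choices for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.default_rng(0).uniform(size=(1000, 3))   # placeholder feature rows (alpha, F_nue, F_anue)
y = (X[:, 1] > X[:, 2]).astype(int)                    # placeholder crossing labels

X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)
# -> 60% training / 20% development / 20% test
```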
To comprehensively assess the effectiveness of our ML algorithm, we go beyond mere accuracy, which can be a somewhat simplistic measure. Instead, we also consider more detailed evaluations using precision and recall metrics, defined as,
\[\begin{split}&\text{accuracy}=\frac{T_{p}+T_{n}}{T_{p}+T_{n}+F_{p}+F_{n }}\\ &\text{precision}=\frac{T_{p}}{T_{p}+F_{p}}\\ &\text{recall}=\frac{T_{p}}{T_{p}+F_{n}}\\ & F_{1}=2\times\frac{\text{precision}\times\text{recall}}{\text{ precision}+\text{recall}},\end{split} \tag{7}\]
with \(T(F)_{p(n)}\) denoting True (False) positive (negative) classifications. A discerning reader will notice that the precision/recall metric informs us about the reliability/detectability of classifications, while \(F_{1}\) is their harmonic mean. In this study, we opt for accuracy as the suitable metric because we aim for equal sensitivity to the presence or absence of \(\nu\)ELN crossings.
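These scores are available off the shelf, e.g. in scikit-learn; a toy sketch with placeholder labels:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = crossing, 0 = no crossing (toy labels)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
for name, score in [("accuracy", accuracy_score), ("precision", precision_score),
                    ("recall", recall_score), ("F1", f1_score)]:
    print(name, score(y_true, y_pred))
```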
In this study we consider the following ML algorithms: i) Logistic Regression (LR): a statistical classification algorithm that models the probability of a binary outcome. LR turns out to be one of the most promising ML algorithms for detecting \(\nu\)ELN crossings [56], ii) k-Nearest Neighbors (KNN): an intuitive learning algorithm that classifies data points based on the majority class of their k-nearest neighbors in the feature space, iii) Support Vector Machine (SVM): a powerful algorithm that separates data into classes by finding the hyperplane that maximizes the margin between them in a high-dimensional space, and iv) Decision Tree (DT): a tree-like model used for both classification and regression tasks, where the data is split into subsets based on feature conditions, ultimately leading to a decision or prediction.
There are two final aspects of the LR and SVM algorithms that require a bit of clarification. While LR incorporates the nonlinear logistic function, it fundamentally operates as a linear classifier. Consequently, it cannot be directly applied to the detection of \(\nu\)ELN crossings, which inherently represents a non-linear problem [56]. To overcome this limitation, it becomes necessary to undertake a preprocessing step involving non-linear transformations and the creation of new features based on the original three features involved in the problem. The degree of the polynomial transformation, being a hyper-parameter of this algorithm, plays a crucial role in this process.
Regarding the SVM algorithm, we note that we employ the radial basis function (RBF) kernel, defined as \(\mathcal{K}(x,x^{\prime})=\exp(-\gamma||x-x^{\prime}||^{2})\). Here \(\gamma\) is a hyper-parameter, which is set to \(\gamma=100\) [56].
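To make the setup concrete, here is a sketch of the two classifiers in scikit-learn (\(\gamma=100\) follows the text; the degree-2 polynomial map reflects the recommendation derived later, and the data are random placeholders rather than the actual moment features):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))            # placeholder (alpha, F_nue, F_anue) rows
y = rng.integers(0, 2, size=500)          # placeholder crossing labels

# LR is linear, so the non-linear polynomial feature map is applied explicitly.
lr = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                   StandardScaler(),
                   LogisticRegression(max_iter=2000))
svm = SVC(kernel="rbf", gamma=100.0)      # RBF kernel with gamma = 100

lr.fit(X, y)
svm.fit(X, y)
```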
In the next part, we present our results. In addition, to promote transparency and collaboration, we have made our ML methodologies available on GitHub.
### ML-based detection of \(\nu\)ELN crossings using the SN data
We initially assess the performance of the pre-trained ML models of Ref. [56] (trained on artificial data) on the realistic dataset obtained from the simulation. The metric scores of these pre-trained ML algorithms are presented in Table 1. It becomes evident that the performance of ML models initially trained on artificial data degrades when applied to realistic data. Notably, the accuracies of LR and DT are of paramount concern. Specifically, the LR model (here with a polynomial degree of 9) exhibits a significant deviation from its accuracy on artificial data (as indicated in Table 1 of Ref. [56]). This indicates poor generalization of the old LR algorithm with a polynomial degree \(n=9\), a known issue referred to as high variance in the context of ML.
Another enlightening aspect centers on the SVM model's exceptional performance, consistently achieving high accuracy even when confronted with previously unseen data. This serves as yet another compelling example of the effectiveness of maximizing margins to separate classes, underscoring its conceptual soundness and its ability to enhance the model's overall generalizability in ML.
However, as depicted in Fig. 1, the former LR model (red curve), trained on synthetic data, attains its peak performance around \(n\simeq 2\), achieving an accuracy of approximately \(85\%\). In simpler terms, the more straightforward models outperform their more complex counterparts when it comes to \(\nu\)ELN detection on previously unseen data. This serves as a prime illustration of the bias-variance tradeoff within the realm of ML. As we will see later on, it seems to be a general observation that, when considering LR, opting for nonlinear transformations with lower polynomial degrees tends to yield better results.
While achieving an accuracy of \(\sim 85\%\) is commendable, the performance of models developed using artificial datasets can still be enhanced. The initial ML models were trained on data with \(\alpha\) values ranging from 0.03 to 2.5, alongside random selections of \(F_{\nu_{e}}\) and \(F_{\bar{\nu}_{e}}\) in the (0, 1) range. However, this approach, while suitable as an initial step, does not align with realistic conditions. In realistic simulations of CCSNe and NSMs, it is anticipated that \(F_{\nu_{e}}\lesssim F_{\bar{\nu}_{e}}\).
To address this issue, we enhance our ML model by training it with artificial data while considering \(F_{\nu_{e}}\) within the range of \((0.6F_{\bar{\nu}_{e}},F_{\bar{\nu}_{e}})\). The performance of such a ML model on realistic data is illustrated in Fig. 1 (red-dotted curve), revealing two significant insights. Firstly, high variance is observed at large polynomial degrees, indicating poor generalizability of the ML performance at those polynomial degrees. Secondly, the ML model, trained on this improved artificial dataset, achieves notably higher accuracy compared to the previous version. This reaffirms the significance of a well-representative training dataset for accurate testing.
In the final step, we enhance our ML model by incorporating real-world data into the training set.
Figure 1: The accuracy of the LR algorithms evaluated on the realistic dataset for models trained on various training sets, with a focus on the impact of polynomial degree of the nonlinear transformations. It is noteworthy that LR models trained on the improved artificial training sets can achieve comparable accuracies to those trained on realistic data at lower polynomial degrees, implying simpler model structures. The black dashed line represents the performance of the old LR model trained on artificial data when tested on its dedicated test set.
| Algorithm (accuracy) | Class | Precision | Recall | \(F_{1}\)-score |
| --- | --- | --- | --- | --- |
| **LR (n = 9)** (68%) | no crossing | 72% | 82% | 77% |
| | crossing | 59% | 43% | 50% |
| **KNN (n = 3)** (77%) | no crossing | 77% | 89% | 83% |
| | crossing | 75% | 55% | 63% |
| **SVM** (87%) | no crossing | 98% | 81% | 89% |
| | crossing | 75% | 98% | 85% |
| **Decision tree** (71%) | no crossing | 74% | 84% | 79% |
| | crossing | 63% | 48% | 55% |

Table 1: A summary of the metric scores of the previously trained ML algorithms (using artificial data) tested on the realistic dataset. This is to be compared with Table I in Ref. [56]. Alongside each algorithm, one can find its corresponding accuracy score.
It is important to highlight that we do not exclusively rely on real data for training. This approach allows us to maintain variability in the angular distributions of neutrinos. The performance of this ML model is depicted by the blue curve in Fig. 1. Notably, it exhibits exceptionally high accuracy across all polynomial degrees. However, drawing from our previous experiences, we favor selecting \(n=2\) due to its strong potential for effective generalization to unseen data. Beyond its generalization capabilities, opting for \(n=2\) also brings the advantage of reduced computational intensity when implementing the LR model in CCSN and NSM simulations on the fly. In addition, in Fig. 2, we present the comprehensive set of metric scores obtained from our improved LR model. It is worth highlighting that all these scores attain satisfactory values when the polynomial degree is set to 2.
In Table 2, we present the performance results of our enhanced ML models, which were trained using a combination of both realistic and artificial datasets. The outcomes demonstrate that our ML models achieve exceptionally high performance metrics. Notably, these scores surpass those obtained by the ML model trained exclusively on artificial data, as shown in Table 1 of Ref. [56]. This improvement can be attributed to the fact that the artificial data includes noisy labels, as discussed in Ref. [56], which was identified as a primary source of inaccuracies once the ML model is tested on artificial data.
### Detection of \(\nu\)ELN-XLN crossings
So far, our discussion has focused exclusively on detecting \(\nu\)ELN crossings. However, in a broader context, it's important to acknowledge that the angular distributions of \(\nu_{x}\) and \(\bar{\nu}_{x}\) can be different in CCSN and NSM environments. This difference becomes particularly pronounced when we account for the potential creation of muons at the core of these extreme astrophysical objects [69, 70, 71]. Consequently, to accurately identify the occurrence of FFCs under the most realistic conditions, we must shift our attention towards detecting \(\nu\)ELN-XLN crossings rather than confining ourselves to the \(\nu\)ELN ones.
The distinction between the detection of \(\nu\)ELN-XLN crossings and \(\nu\)ELN crossings presents several key differences. One of the most significant distinctions lies in the increased complexity of required information. In the context of our ML methods, this translates to an expansion in the number of essential features. Specifically, we now necessitate seven features instead of the previous three, namely \(\alpha_{\bar{\nu}_{e}}\), \(\alpha_{\nu_{x}}\), \(\alpha_{\bar{\nu}_{x}}\), \(F_{\nu_{e}}\), \(F_{\bar{\nu}_{e}}\), \(F_{\nu_{x}}\), and \(F_{\bar{\nu}_{x}}\), with \(\alpha_{\nu_{\beta}}=n_{\nu_{\beta}}/n_{\nu_{e}}\).
The increase in the number of features and the greater demand for information significantly contribute to an elevated classification error in this context. Notably, when \(\nu_{x}\) and \(\bar{\nu}_{x}\) exhibit disparities, the \(\nu\)ELN-XLN profile can exhibit more intricate characteristics; for instance, it is conceivable that even multiple crossings occur.
In order to train our ML models, we use artificial distributions for neutrinos, since labeled data regarding the existence of \(\nu\)ELN-XLN crossings are not available.
| Algorithm (accuracy) | Class | Precision | Recall | \(F_{1}\)-score |
| --- | --- | --- | --- | --- |
| **LR (n = 2)** (94%) | no crossing | 96% | 95% | 95% |
| | crossing | 91% | 93% | 92% |
| **KNN (n = 3)** (98%) | no crossing | 98% | 99% | 99% |
| | crossing | 98% | 97% | 98% |
| **SVM** (97%) | no crossing | 98% | 98% | 98% |
| | crossing | 96% | 97% | 97% |
| **Decision tree** (99%) | no crossing | 99% | 99% | 99% |
| | crossing | 98% | 98% | 98% |

Table 2: A summary of the metric scores of ML algorithms trained on the combination of the realistic and artificial datasets, and then tested with the realistic data. Alongside each algorithm, one can find its corresponding accuracy score.
Figure 2: The metric scores of the LR algorithm trained on a combination of the artificial and realistic datasets, as a function of the polynomial degree of the nonlinear transformations. It’s worth noting that the precision and recall scores typically exhibit opposing trends, a phenomenon commonly referred to as the precision-recall tradeoff within the field of ML.
In order to prepare our data, we take \(\alpha_{\nu_{x}}\) and \(\alpha_{\bar{\nu}_{x}}\) to be in the ranges (0, 2.5) and (0, 3), respectively. We also allow a maximum difference of 40% between the \(\nu_{x}\) and \(\bar{\nu}_{x}\) quantities. This is consistent with the observation that the difference between \(\nu_{x}\) and \(\bar{\nu}_{x}\) should be subdominant in realistic simulations [69]. In addition, keeping in mind the earlier lesson that the training data should be sufficiently representative of the realistic data, we also respect the hierarchy \(F_{\nu_{e}}\lesssim F_{\bar{\nu}_{e}}\lesssim F_{\nu_{x}(\bar{\nu}_{x})}\).
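An illustrative rejection sampler for such a training set (our own sketch; assigning the four flux factors by sorting is a simplification that trivially enforces the hierarchy):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_xln_features():
    """One 7-feature sample obeying the ranges and constraints quoted above."""
    while True:
        a_anue = rng.uniform(0.03, 2.5)          # alpha_{anti-nu_e}; range reused from the nuELN data
        a_nux, a_anux = rng.uniform(0.0, 2.5), rng.uniform(0.0, 3.0)
        if abs(a_nux - a_anux) > 0.4 * max(a_nux, a_anux, 1e-12):
            continue                             # enforce <= 40% nu_x / anti-nu_x disparity
        F_nue, F_anue, F_nux, F_anux = np.sort(rng.uniform(0.0, 1.0, size=4))
        return np.array([a_anue, a_nux, a_anux, F_nue, F_anue, F_nux, F_anux])

X = np.stack([sample_xln_features() for _ in range(1000)])
```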
The performance of our ML models in detecting \(\nu\)ELN-XLN crossings is presented in Table 3. Notably, the overall performance lags behind that of \(\nu\)ELN crossing detection. This disparity can be attributed to the presence of intricate patterns governing the crossings and an increase in label noise.
## IV Discussion and Outlook
Recent advancements have showcased the remarkable capabilities of ML in identifying the \(\nu\)ELN crossings in the CCSN and NSM simulations [56]. In this study, we have propelled prior research in two pivotal and distinctive directions. Firstly, we have subjected ML models to the rigorous test of real-world data acquired from CCSN simulations, where the intricate problem of neutrino transport is addressed through the comprehensive Boltzmann equation. Secondly, we have expanded our ML techniques to encompass the detection of \(\nu\)ELN-XLN crossings, accommodating situations where there may exist distinctions between \(\nu_{x}\) and \(\bar{\nu}_{x}\).
Using realistic CCSN data, we have demonstrated that the simpler models consistently outperform their more complex counterparts in the context of \(\nu\)ELN detection when applied to previously unseen data. This provides a clear and compelling example of the bias-variance trade-off within the domain of ML. Specifically, it was observed that the LR model performs most effectively when a polynomial transformation of degree \(n=2\) is applied, as opposed to the previously suggested degree of \(n=9\) which was based on artificial data. This underscores the importance of considering the complexity of models and their suitability for real-world data, highlighting that sometimes, a simpler approach can yield superior results.
We demonstrate a significant enhancement in model performance when utilizing artificial datasets by aligning the parameter space of the synthetic data with the realism expected in CCSN and NSM simulations. Specifically, we adhere to the hierarchy \(F_{\nu_{e}}\lesssim F_{\bar{\nu}_{e}}\lesssim F_{\nu_{x}(\bar{\nu}_{x})}\), which mirrors the conditions anticipated in these astrophysical events. This deliberate consideration in the preparation of artificial data results in model performance that rivals that of ML models trained on realistic data, at least within a certain parameter range.
We have further fortified our ML models by integrating real-world data into the training set, and this enhancement has yielded remarkably high accuracy in our ML models. Based on our meticulous observations, we are inclined to assert that the LR model with a polynomial degree of \(n=2\) stands out as the optimal choice for detecting FFCs in CCSN and NSM simulations. This choice not only exhibits exceptional generalization capabilities for unseen data but also offers the distinct advantage of reducing computational overhead when deploying the LR model in real-time CCSN and NSM simulations.
We have also developed ML models to identify neutrino flavor crossings in the \(\nu\)ELN-XLN distributions. This is particularly relevant because the angular distributions of \(\nu_{x}\) and \(\bar{\nu}_{x}\) can exhibit variations in CCSN and NSM environments. Unlike the simpler task of detecting \(\nu\)ELN crossings, detecting \(\nu\)ELN-XLN crossings introduces a higher level of complexity due to the need for more information. In the context of our ML methods, this complexity manifests as an increase in the number of essential features. This augmented feature set substantially contributes to an elevated classification error in this specific context.
In summary, our study significantly expands upon prior research, allowing for more confident utilization of ML methods in detecting FFCs. However, there remain crucial avenues for exploration. Specifically, our current analysis focuses on crossings occurring in the zenith angle (\(\mu\)), when the angular distribution is integrated over \(\phi_{\nu}\). Nevertheless, as demonstrated in previous references, a substantial fraction of \(\nu\)ELN crossings may be exclusively in \(\phi_{\nu}\), exhibiting non-axisymmetric behavior [64]. Thus, it is imperative to develop ML techniques capable of capturing these non-axisymmetric crossings. Closely related to this issue is the exploration of FFCs in rotating CCSN models [30], as the prominence of non-axisymmetric features in such models could influence the performance of ML algorithms.
| Algorithm (accuracy) | Class | Precision | Recall | \(F_{1}\)-score |
| --- | --- | --- | --- | --- |
| **LR (n = 2)** (88%) | no crossing | 87% | 89% | 88% |
| | crossing | 88% | 86% | 87% |
| **KNN (n = 3)** (88%) | no crossing | 89% | 88% | 89% |
| | crossing | 88% | 89% | 88% |
| **SVM** (88%) | no crossing | 92% | 84% | 88% |
| | crossing | 85% | 93% | 89% |
| **Decision tree** (87%) | no crossing | 87% | 87% | 87% |

Table 3: A summary of the metric scores of the ML algorithms for \(\nu\)ELN-XLN crossing detection. Alongside each algorithm, one can find its corresponding accuracy score.
Given the proven and remarkable versatility and effectiveness of ML in this context, implementing these measures will further improve the detection of FFCs in CCSN and NSM simulations.
###### Acknowledgements.
We would like to thank Georg Raffelt for useful discussions. We would also like to express our sincere gratitude to the Institute of Physics of Academia Sinica for their warm hospitality and support during the _Focus Workshop on Collective Oscillations and Chiral Transport of Neutrinos_, where the inception of this project took place. Their gracious hosting and collaborative environment played a pivotal role in shaping the foundation of our work. S.A. was supported by the German Research Foundation (DFG) through the Collaborative Research Centre "Neutrinos and Dark Matter in Astro- and Particle Physics (NDM)," Grant SFB-1258, and under Germany's Excellence Strategy through the Cluster of Excellence ORIGINS EXC-2094-390783311. H. N. was supported by Grant-in-aid for Scientific Research (23K03468). The numerical simulations for CCSNe were carried out using "K", "Fugaku", and the high-performance computing resources of "Flow" at Nagoya University ICTS through the HPCI System Research Project (Project ID: 220173, 220047, 220223, 230033, 230204, 230270). We would also like to acknowledge the use of the following software: Scikit-learn [72], Matplotlib [73], Numpy [74], SciPy [75], and IPython [76].
|
2304.01157 | Promoting Bright Patterns | User experience designers are facing increasing scrutiny and criticism for
creating harmful technologies, leading to a pushback against unethical design
practices. While clear-cut harmful practices such as dark patterns have
received attention, trends towards automation, personalization, and
recommendation present more ambiguous ethical challenges. To address potential
harm in these "gray" instances, we propose the concept of "bright patterns" -
persuasive design solutions that prioritize user goals and well-being over
their desires and business objectives. The ambition of this paper is threefold:
to define the term "bright patterns", to provide examples of such patterns, and
to advocate for the adoption of bright patterns through policymaking. | Hauke Sandhaus | 2023-04-03T17:24:35Z | http://arxiv.org/abs/2304.01157v1 | # Promoting Bright Patterns
###### Abstract.
User experience designers are facing increasing scrutiny and criticism for creating harmful technologies, leading to a pushback against unethical design practices. While clear-cut harmful practices such as dark patterns have received attention, trends towards automation, personalization, and recommendation present more ambiguous ethical challenges. To address potential harm in these "gray" instances, we propose the concept of "bright patterns" - persuasive design solutions that prioritize user goals and well-being over their desires and business objectives. The ambition of this paper is threefold: to define the term "bright patterns", to provide examples of such patterns, and to advocate for the adoption of bright patterns through policymaking.
+
+ Footnote †: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
_CHI '23 Workshop, April 23, 2023, Hamburg, Germany_
## 1. Introduction
As the field of user experience (UX) design has matured, so too has the recognition of the ethical implications of our work. In recent years, designers have come under fire for creating technologies harmful to users, leading to increased scrutiny and criticism from both the design community and lawmakers. While some design practices - such as the use of "dark patterns" to deceive or manipulate users - are clearly unethical, many other practices are more difficult to judge normatively. The rise of artificial intelligence, automation, personalization, and recommendation systems has presented new ethical challenges that require more nuanced solutions. In response, we propose the concept of 'bright patterns' - aspirational design solutions that prioritize ethical considerations over established design conventions, such as those involving reflective design. The ambition of this paper is threefold: to define the term "bright patterns", to provide examples of such patterns, and to advocate for the adoption of bright patterns through policymaking. By doing so, we hope to foster a culture of responsible design and encourage designers to take a more active role in shaping the policies that govern our work.
Figure 1. Examples of bright pattern categories: 1) Friction, 2) Transparency, 3) Explainability.
## 2. Related Work
Dark patterns are recognized as unethical and increasingly face legal scrutiny; properly conducted user experience design does not lead to designs that overtly deceive users in favor of business interests, but it can result in designs that favor marketable user desires and needs over their long-term goals and well-being.
Monteiro (Monteiro, 2017) argues that designers have a responsibility to ensure the ethical well-being of end-users and to act as the last line of defense against unethical practices. However, the ways in which UX designers engage with ethical issues remain under-researched (Mayer et al., 2018). The term 'dark patterns' originates from software and design patterns. It has various definitions (Han et al., 2017). **We define it as deceptive user interface design functionality that prioritizes business' objectives over users' well-being and goals.** This body of work has led to substantial progress in critical scholarship (Han et al., 2017; Han et al., 2017; Han et al., 2018) and legal push-back against manipulative, unethical UX design practices (Han et al., 2017; Han et al., 2017; Han et al., 2018; Han et al., 2018).
A focus solely on 'dark patterns' and the negative impacts of unethical design practices may be limiting. Gray and Chivukula (Gray and Chivukula, 2017) present a case study of three UX practitioners and illustrate how designers activate personal and organizational values to navigate complex ethical design considerations. While some philosophers have argued for the value neutrality of technology, it is more commonly accepted that technology is value-laden (Gray and Chivukula, 2017). Enabling technologies such as artificial intelligence, personalization, recommender systems, and automation do present threats to user well-being, such as addiction, reliance, privacy loss, inequality, and bias. These threats are hard to judge normatively out of context (Han et al., 2017; Han et al., 2017; Han et al., 2017); the morality of technology depends on how the technology is crafted and put to use.
Designers deal with highly contextual problems regularly. Work on so-called wicked problems is nonlinear, with no definitive formulation and no _right_ solution; the problem-solving process ends when one runs out of resources (Han et al., 2017). According to Sweeting (Sweeting, 2018), ethical design work is hidden and only implicit; working on wicked problems is in some ways like working on ethical dilemmas. In these cases, the harms and benefits of technology cannot be fully resolved; _designers have no way to be right, but also no right to be wrong._ UX designers recognize that many designs cannot easily be classified as _bright_ or _dark_ (Brock et al., 2017). Designers often find themselves facing complex ethical dilemmas that lack clear solutions, leaving them in a gray area (Han et al., 2017).
In western User Interface (UI) design communities, design conventions are well established. _Good design_ that promotes efficient, usable interaction is documented in plenty of design principles, guidelines, and patterns (Han et al., 2017; Han et al., 2017; Han et al., 2017; Han et al., 2018). Critical and reflective design approaches call us to question these assumptions. Critical design shall foreground ethics of design practice and help to reveal hidden agendas and values by provocation (Brock et al., 2017). Reflective design, build on critical design approaches, uses reflection to uncover and challenge assumptions and biases in design practice, to "bring unconscious aspects of experience to conscious awareness, thereby making them available for conscious choice" (Han et al., 2017). Critical reflection shall be applied by both users and designers. Despite its popularity in Human-Computer-Interaction (HCI), critical design is not well adopted in the practicing communities (Han et al., 2017). Applying reflective design does frequently require breaking well established conventions of _good design_, such as usability (Han et al., 2017).
Digital companies prioritize using measurable results, such as key results and performance metrics. Usability and user experience design metrics are used for assessment of design quality and systematically applied in UX teams (Han et al., 2017; Han et al., 2017; Han et al., 2018; Han et al., 2018). UX designers should ensure that human-centered technology is built, typically through user need-based design (Han et al., 2017). The responsibility for ethical design outcomes, though, cannot rest solely on individual designers, as organizational goals often do not align with ethical goals (Han et al., 2017; Han et al., 2017; Han et al., 2018) and designers may act immorally due to self-deception (Han et al., 2017). This misalignment can lead to unintentional or even unwilling ethical violations in the design of digital technology.
While dark patterns are well understood, the literature shows ambiguity around what constitutes good and what ethical design patterns. To our knowledge, no one has systemically considered what the antonym to dark
pattern design is. The concept of 'bright patterns' can provide a positive framework, for ethical design. It is not yet established or defined, and there also has not been an analysis of existing examples in the field. One journal article uses the term bright patterns to describe privacy-friendly nudges (Kumar et al., 2018), one blog post to describe generally good design practices (Kumar et al., 2018), and one master's thesis to describe UI that persuades consumers to make decisions that benefit both themselves and the company (Kumar et al., 2019). Further research is needed to fully develop the concept.
## 3. Defining Bright Patterns
While existing literature has made significant strides in identifying and combatting unethical design practices, UX design can lead to ethically ambiguous design; the concept of 'bright patterns' can provide a proactive and positive frame for ethical design that keeps business and users themselves from inflicting harm.
**Bright patterns shall refer to persuasive user interface design functionality that prioritizes users' well-being and goals over their desires and business' objectives.**
While the term dark patterns demands an antonym to exist, it is not obvious what it would be; the few examples mentioning 'bright patterns' use it incoherently. To remind ourselves, dark pattern is a term used in user experience (UX) design to describe user interfaces that are intentionally designed to benefit businesses and deceive users into taking actions they would not otherwise take. Design absent of dark patterns, i.e., functional interfaces that do not deceive, would be labeled simply good or effective design. Good design describes functional, aesthetically pleasing products, services, or experiences that meet the needs of their intended users. Poor design, on the other hand, i.e., design that does not work as intended, is sometimes referred to as 'anti-pattern' (Kumar et al., 2018; Kumar et al., 2019; Kumar et al., 2019). Additional confusion may arise as dark patterns are sometimes used to colloquially refer to any type of unethical user interface (Kumar et al., 2019), and not just those that deceive and favor business needs over users'. Further, UX designer today is often used ambiguously as a synonym for user interface designer, for example in job listings.
If user interface designers would correctly use user experience design methods in their work, i.e., user-centered, user-needs-based design, such dark patterns should not come to exist in the first place. Needs-based UX design can still lead to interface design with questionable ethics, when commodifiable needs are prioritized over those that lead to long term flourishing, and well-being of a user. Humans are complex, with diverse desires, needs, and goals that often times are conflicting within themselves. We see in particular trends towards automation, personalization, and recommendation exemplary for such potential conflict.
Bright patterns, as an antonym to dark patterns, cannot just describe good design practice or the absence of dark patterns. We define bright patterns as user interface elements that promote user behavior aligned with their genuine goals, rather than their immediate desires or businesses' objectives. Notably, these patterns are intentionally designed by businesses to resist short-term gains that would come at the expense of user long-term satisfaction, despite the conflict of interest between the two parties.
### Examples
Based on our definition of bright patterns, we have collected examples of bright pattern categories for the themes of friction, transparency, and explainability. We have set up the website brightpatterns.org for collection and maintenance of bright pattern examples in the wild. This collection is not exhaustive, but demonstrates the characteristics of these patterns and highlights that some businesses are already using them, albeit without explicit acknowledgement.
* Slow down: This pattern introduces friction in interaction by adding extra steps or barriers to prevent users from engaging in harmful or addictive behaviors. For example, a social media app may require users to confirm their intention before posting a potentially toxic comment, or an app may intentionally take longer to open. (Kumar et al., 2019; Kumar et al., 2019)
* Escape hatch: Making it easy for users to leave a situation or cancel a subscription. For example, providing a clear and accessible option to cancel a membership or delete a profile. [4; 51]
* Simple Consent: This pattern involves obtaining clear and unambiguous consent from the user before collecting, using, or sharing their personal data. This can involve providing clear explanations of how the data will be used and providing easy-to-use options for opting in or out. [6; 21; 38]
* Honest defaults: This pattern involves setting default options that are in the best interest of the user, rather than the business. For example, a default option to unsubscribe from marketing emails may be provided, rather than requiring users to opt-out. [75]
* Nutrition labels: The nutrition label is a design element that provides users with a clear and transparent overview of the content or impact of a particular action, decision, or data. These are becoming common in AI dataset documentation. [10; 19]
* Persona profiling: The app or website transparently informs users of the categories or groups they are being placed into based on their data, allowing them to better understand how they are being profiled and how their experience may differ from other users. [43]
* Healthy alternative/5-a-day: The platform suggests higher quality content that is still of interest to the user, but may discourage mindless usage. This pattern respects the user's well-being and attention, and does not try to exploit their curiosity or boredom into consuming more content than they need or want. For example, a social media app may suggest that users watch educational videos or read informative articles instead of scrolling through endless feeds of memes or gossip. [39]
* Usage limits: Often implemented in external well-being and child control apps, these are patterns within apps that limit usage time to healthy levels. [11]
* Transparent recommender: This pattern involves revealing the logic or criteria behind the recommendations or suggestions that are provided to the user. For example, a streaming service may explain why a certain show or movie is recommended based on the user's preferences, ratings, or viewing history, or an e-commerce site may disclose how sponsored products are ranked or selected. [54; 62; 69]
* Dumb it down: This pattern involves providing users with clear and understandable explanations of how their data is processed or used by an AI system. These explanations can be visual, personalized, and even counterfactual. For example, a credit-scoring system may provide users with a personalized explanation of how it determines their creditworthiness based on their data. [16; 57]
* Data traces: This pattern involves showing users the traces or records of their data that are stored, shared, or accessed by an app or website. For example, a messaging app may show users when their messages are read, forwarded, or deleted by others, or a search engine may show users their search history and explain how it affects their results. [20; 55]
* Cost transparency: This pattern involves showing users the detailed breakdown of the costs involved in producing and selling a product or service. This pattern respects the user's curiosity and trust, and does not try to hide or inflate the margins or profits of the vendor. For example, showing users how much the product costs in terms of purchase, marketing, return and shipping. [2]
* Outside my bubble: This pattern involves exposing users to different perspectives or opinions frequently that challenge their existing beliefs or preferences. For example, a news aggregator may show users articles from diverse sources or a music streaming service may suggest songs from genres that they usually don't listen to. [1; 56; 58; 67]
* Contrarian's companion: This pattern provides critique on consumed and posted content. For example, it shows views with different political leanings than the user's, and makes users reflect on their opinions. [27]
### Promoting Bright Patterns through Policy
Implementing policy-driven change is a difficult task, and it cannot be guaranteed that the policy solutions chosen will be effectively communicated and understood, properly executed, or result in the intended outcomes (Krishnan et al., 2017). While ambiguity is part of policy adoption, it can lead to numerous problems in implementation (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019). Policy making which relies on ambiguous terms can lead to non-optimal implementations, with businesses potentially mixing deceptive practices into their chosen implementations. Speaking about and defining actual patterns can help to avoid this issue.
A clear definition of 'bright patterns' is necessary for policymakers to start regulating the design of ethical user interfaces that prioritize user goals and well-being over their desires and business objectives. Defining and recognizing bright patterns can help policymakers establish legal guidelines for user interface design that prioritize these principles over business interests. In some jurisdictions, the regulation of user interface elements, which can be considered as bright patterns, has already begun.
The most well-known example of a policy-endorsed ethical UI pattern is the cookie consent pop-up in the European Union, which is required by the General Data Protection Regulation (GDPR) (Krishnan et al., 2019). However, the cookie consent debacle, where companies employed dark patterns to deceive users into accepting privacy settings they would not have chosen otherwise, shows how important it is to design these user interfaces with business buy-in and in user-friendly ways; otherwise, bright patterns risk being ineffective annoyances which businesses circumvent through manipulative practices (Krishnan et al., 2019).
Another recent example of legislation promoting ethical design is the screen-time limits in China. The government has implemented regulations on the design of user interfaces for online games to restrict the amount of time minors spend playing them (Krishnan et al., 2019). These regulations require game companies to implement features such as daily play-time limits and age-appropriate content.
Moreover, the establishment of clear definitions and guidelines for bright patterns can lead to a more honest and self-conscious UX design community. It is desirable that bright patterns are invented and designed through teams of UX designers applying user-centered design practices, with the intrinsic motivation of designers to build long-term trust and reputation. Governmental agencies rarely have the in-house power to do so. Leading companies need to set benevolent examples, as advocating for bright patterns from internal motivations will motivate more businesses to set good practice examples.
To effectively regulate and regularize bright patterns, policymakers and the UX design community must collaborate. UX designers can offer expertise and insight into how to design interfaces that prioritize user needs, while policymakers can provide guidance on how to create legal frameworks that promote fair and ethical practices in UX design (Krishnan et al., 2019).
Overall, while there are currently only a few specific user interface patterns that are regulated by law, the increasing interest in regulating other aspects of user interface design, particularly regarding dark patterns, is a great starting point for the policymaking, UX design, and HCI research communities to come together and innovate policy which does not just prevent overt harm but also promotes good.
## 4. Discussion
The emergence of dark patterns in user interface design raises significant ethical concerns. Businesses use these deceptive techniques to increase profits, often at the expense of users' autonomy, trust, and well-being. With the rise of regulations against dark patterns, it is now the time to define the opposite concept of bright patterns. Bright patterns, as we define them, use persuasion rather than deception to prioritize users' goals and well-being over their desires and business objectives. Bright patterns can promote trust, foster positive relationships between businesses and their users, and contribute to user autonomy, well-being, and overall experience.
Focusing on patterns, as in standardized design solutions, has several advantages for policy making over regulating design processes. Design processes can be complex and varied, making them difficult to regulate with a one-size-fits-all policy. A policy aimed at regulating design processes might be too broad and could lead to unintended consequences, such as stifling innovation. Regulating exact design patterns can provide clearer and more actionable guidance to designers, be more effective in promoting well-being to users, and be more flexible in responding to changing design practices.
As regulating bright patterns on their own can be a challenging task for policymakers, collaboration between sectors and fields is compulsory. The professional UX design community and the academic HCI community can contribute to this effort by providing insights on best practices, user-centered design, and ethics. Policymakers can use this information to establish guidelines and regulations that promote ethical user interface design. We hope that our short paper encourages future work on this necessary thread of work.
We acknowledge that the terms dark pattern and bright pattern build on dualistic views, equating bright with good and dark with evil [7]. The alternative terms good design, bad design, ethical design, and unethical design all describe different concepts. Harry Brignull is credited with coining the term 'dark patterns' [15]; he has since switched to using the term 'deceptive design', citing concerns about the precision and inclusivity of the term. The terms 'honest design' and 'transparent design' are close antonyms to dark patterns, but they do not adequately capture the ethical tension and the prioritization of user goals over user desires and business interests that bright patterns embody. Furthermore, we find the term 'design' to be ambiguous, as it can refer to the design process, the intended design, and the design outcome. Therefore, we suggest that when referring to ethical design practices that go beyond established moral standards, one may refer to _benevolent design_.
###### Acknowledgements.
This work originated from a class in philosophical and analytical approaches to societal implications of digital technologies. I thank Professor Helen Nissenbaum for her mentorship, and classmate Amritansh Kwatra for his help in outlining this thread of work.
|
2304.11344 | Results on gradients of harmonic functions on Lipschitz surfaces | We study various properties of the gradients of solutions to harmonic
functions on Lipschitz surfaces. We improve an exponential bound of Naber and
Valtorta on the size of the superlevel sets for the frequency function to a
sharp quadratic bound in this setting using complex analytic tools. We also
develop a propagation of smallness for gradients of harmonic functions,
settling an open question in this setting. Finally, we extend the estimate on
superlevel sets of the frequency to more general divergence-form elliptic PDEs
with bounded drift terms at the cost of a subpolynomial factor. | Benjamin Foster | 2023-04-22T07:52:44Z | http://arxiv.org/abs/2304.11344v2 | # Results on gradients of harmonic functions on Lipschitz surfaces
###### Abstract.
We study various properties of the gradients of solutions to harmonic functions on Lipschitz surfaces. We improve an exponential bound of Naber and Valtorta [17] on the size of the effective critical set to a sharp quadratic bound in this setting using complex analytic tools. We also develop a propagation of smallness for gradients of harmonic functions, settling an open question from [11] in this setting.
## 1. Introduction
Throughout, we will study the size of the gradient for harmonic functions on 2-dimensional domains with Lipschitz Riemannian metrics. We denote the Laplace-Beltrami operator by \(\Delta_{g}\), and a function \(u\) is said to be harmonic if it solves the equation \(\Delta_{g}u=0\). In local coordinates, if we let \(g_{ij}\) denote the components of the metric, \(g^{ij}\) denote the components of the inverse tensor, and \(|g|\) denote the determinant, the equation takes the form
\[\frac{1}{\sqrt{|g|(x)}}\partial_{i}(\sqrt{|g|(x)}g^{ij}(x)\partial_{j}u(x))=0 \qquad\text{in }B_{2}(0), \tag{1}\]
where we have employed Einstein summation convention. We denote the maximum of the ellipticity and Lipschitz constants by \(\lambda\), i.e.
\[|g_{ij}(x)-g_{ij}(y)|\leq\lambda|x-y|,\qquad\qquad\frac{1}{\lambda}|v|^{2}\leq \sum_{i,j}g_{ij}v_{i}v_{j}\leq\lambda|v|^{2} \tag{2}\]
hold uniformly across all \(x,y\in B_{2}(0)\) and all \(v\in\mathbb{R}^{2}\), for some \(\lambda>1\).
Standard regularity results imply that the solution \(u\) is \(C^{1}\)[12]. Its critical set \(\mathcal{C}(u)\), consisting of the points where \(\nabla u\) vanishes, is finite. In fact, for nonconstant harmonic functions on the unit disk, the number of critical points can be bounded by a multiple of the **frequency function1** of the solution [10]. It was shown more recently in [10] that this holds more generally for solutions to divergence form elliptic equations with Lipschitz coefficients and a bounded drift term, once the definition of the frequency is suitably modified.
Footnote 1: The frequency function will be defined more carefully in sections 2 and 7.
The frequency function of Almgren is a versatile tool that has been used in the study of elliptic equations2. It takes in a point in the domain and a scale parameter \(r\) and satisfies a monotonicity property in the scale parameter, converging to the vanishing order of the solution at the point as \(r\) tends to \(0\). When \(r\) is instead fixed, it captures how the solution grows in size from the ball of radius \(r\) to the ball of
radius \(2r\). Heuristically, the frequency function can be thought of as the analog of the "degree of a polynomial" in the setting of solutions to elliptic equations; see (8) for a precise formulation. We direct the interested reader to [10] for a more complete exposition on the properties of the frequency function.
Quantitative bounds on the size of the critical set in terms of the frequency of the solution have been a topic of research interest, with the current best known bounds in higher dimensions established in [11]. It is known that the critical set has dimension \(n-2\), and Lin conjectured in [12] that
\[\mathcal{H}^{n-2}(\mathcal{C}(u)\cap B_{1/2}(0))\leq CN^{2}, \tag{3}\]
where \(\mathcal{H}^{n-2}\) denotes the \((n-2)\)-dimensional Hausdorff measure and \(N\) is an upper bound for the frequency. The authors of [11] established Minkowski estimates on the volume of neighborhoods of the critical set by studying **effective critical sets**\(\mathcal{C}_{r}(u)\), which have the advantage of behaving much better under perturbation than the critical set3. Their estimates are exponential in \(N^{2}\) in general, although in dimension \(2\) they are only exponential in \(N\). In dimension two, we can improve their bound to a quadratic bound that is sharp. This is our first main result.
Footnote 3: \(\mathcal{C}_{r}(u)\) is defined more precisely in section 2
**Theorem 1**.: _Suppose \(u\) is a solution to \(\Delta_{g}u=0\) on the Euclidean disk of radius \(4s\) for some suitably large universal constant \(s\), depending on the Lipschitz and ellipticity constants of \(g\). Assume its frequency satisfies \(N_{u}(0,2s)\leq\Lambda\). Then we have the following volume estimate for the effective critical set_
\[\text{Vol}(\mathcal{C}_{r}(u)\cap B_{1/2}(0))\leq C\Lambda^{2}r^{2}, \tag{4}\]
_for some constant \(C\) depending on the Lipschitz and ellipticity constants of \(g\)._
A key property of the effective critical set at scale \(r\) is that it is covered by superlevel sets of the frequency function at scale \(r\) and contains a tubular neighborhood of the critical set. In higher dimensions, where sharp bounds on the size of the critical set have not been obtained, it can be helpful to informally think that the \((n-2)\)-dimensional parts of the critical set occur when the solution is locally very symmetric and close to being a function of two variables. This can be made precise via quantitative stratification arguments; see [12] for a non-quantitative stratification and [11], [11] for a discussion of the quantitative stratification. The effective critical set is open and more stable under perturbations, unlike the critical set which can change dimension when perturbed. Having control on the size of the effective critical set in the two-dimensional case could be useful in understanding the size and structure of the critical set in higher dimensions as a consequence of the aforementioned quantitative stratifications.
It has also been a topic of interest in elliptic PDEs to understand how solutions are quantitatively "close" to being constant or nonconstant in terms of the size of the set where the gradient is small, rather than the critical set. Results of interest can loosely be described by the estimate
\[\sup_{B_{1/2}(0)}|\nabla u|\leq C\sup_{E}|\nabla u|^{\alpha}\sup_{B_{1}(0)}| \nabla u|^{1-\alpha}, \tag{5}\]
where \(E\) is often taken to be some sublevel set of the gradient; we call this propagation of smallness for the gradient. The question is what properties of the set \(E\) allow us to deduce such an estimate and how the implicit constant \(C\) depends on
these properties. There are a number of well known results along these lines for both solutions to elliptic equations and their gradients, such as the Three Spheres Theorem. There has been interest in understanding how propagation of smallness works for arbitrarily wild sets. It was shown in [14] that for solutions to divergence form elliptic equations with Lipschitz coefficients smallness propagates for the solutions from arbitrarily wild sets with positive \((n-1+\delta)\)-dimensional Hausdorff content. The argument involved a novel induction on scales as well as some geometric-combinatorial lemmas from [10]. For gradients of solutions to elliptic equations, it is believed that there should be even better propagation of smallness properties; that is, smallness should propagate from arbitrarily wild sets with positive \((n-2+\delta)\)-dimensional Hausdorff content for any \(\delta>0\). The authors of [14] also gave a modified argument for gradients which proved propagation of smallness from sets with positive \((n-1-c_{n})\)-dimensional Hausdorff content for some small constant \(c_{n}>0\), but were not able to achieve the conjectured optimal propagation result.
In the two-dimensional case, we first use the various tools developed in the intermediate section to give a short proof of propagation of smallness for gradients of harmonic functions in the Euclidean case. This has also been studied using tools from potential theory in the past, see [13, Theorem 2.1] for instance. We then extend this to the case of Lipschitz Riemannian metrics by appealing to the existence theory of isothermal coordinates. In such coordinates, the metric is conformal to the Euclidean metric, allowing us to reduce to the harmonic case. The key is that we can control the derivative of the quasiconformal coordinate change map pointwise in terms of the Lipschitz constant \(\lambda\) of our original elliptic equation, which can be seen through an analysis of the construction of the quasiconformal coordinate change via the solution of the Beltrami equation. This allows us to do a local propagation of smallness for solutions in coordinate charts, and a simple chaining argument allows us to extend the propagation of smallness globally. Our second main result is the following.
**Theorem 2**.: _Let \(g\) be a Lipschitz Riemannian metric, where \(|g_{ij}(x)-g_{ij}(y)|\leq\lambda|x-y|\) and \(\lambda^{-1}|v|^{2}\leq\sum_{i,j}g_{ij}(x)v_{i}v_{j}\leq\lambda|v|^{2}\) hold for some \(\lambda>0\) and all \(x,y\in B_{1}(0)\) and all \(v\in\mathbb{R}^{2}\). Suppose that \(\Delta_{g}u=0\) holds for some \(u:B_{1}(0)\to\mathbb{R}\) with \(\|\nabla u\|_{L^{\infty}(B_{1}(0))}=1\). Suppose also that \(|\nabla u|\leq\epsilon\) on a set \(E_{\epsilon}\) which has \(\delta\)-dimensional Hausdorff content (for some \(\delta>0\)) equal to \(\beta>0\). Then_
\[|\nabla u(x)|\leq C\epsilon^{\gamma} \tag{6}\]
_holds for all \(x\in B_{1/2}(0)\), where \(C,\gamma>0\) depend only on \(\lambda,\delta,\beta\)._
Here, we have given a local version of the result, but applying this in sufficiently small coordinate charts for a Lipschitz surface and chaining extends the result to a global propagation of smallness, with constants depending only on the surface itself.
Throughout the paper, we use the dimension \(2\) hypothesis in several fundamental ways. The first is the well known fact that if \(u\) is a harmonic function on a subset of the plane, then the function \(F(x+iy)=u_{x}(x,y)-iu_{y}(x,y)\) is a holomorphic function with \(|F|=|\nabla u|\). This allows for the use of powerful tools from complex analysis. We mainly use the fact that the zeros of \(F\) can be factored out, so that \(F(z)=P(z)g(z)\) where \(P(z)\) is a polynomial and \(g(z)\) is nonvanishing. We can then study the frequency of polynomials and nonvanishing functions separately
to understand the frequency of gradients of arbitrary harmonic functions. The dimension 2 hypothesis is also used in order to change to isothermal coordinates, as these do not exist in general in higher dimensions.
The structure of the paper is as follows. In section 2, we discuss the conventions for the frequency of a solution and the effective critical set, giving slight variants on the normal definitions for technical reasons. In section 3, we study the frequency of solutions with nonvanishing gradients. In section 4, we study the effective critical set of polynomials. In section 5, we use the previous sections to derive a quadratic bound (in terms of frequency) on the size of the effective critical set of a harmonic function in the Euclidean case. In section 6, we give a simple proof of the propagation of smallness result for gradients of harmonic functions in the Euclidean case. In section 7, we reformulate frequency and other relevant quantities in the setting of more general Riemannian metrics. In section 8, we discuss how to extend the propagation of smallness for gradients and volume estimates for effective critical sets to the more general case of a Lipschitz Riemannian metric.
The author has become aware of a recent paper by Zhu [23] which obtains a similar result to Theorem 2 of this paper.
The author is grateful to Eugenia Malinnikova for numerous valuable discussions while working on this project. The author completed part of this work while visiting the Hausdorff Research Institute for Mathematics, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2047/1 - 390685813. The author is grateful for the institute's hospitality and support.
## 2. Conventions and Preliminaries
Throughout the paper, we will use \(C\) to denote various constants. Unless otherwise specified, these will be universal in the harmonic setting, and these will depend only on the ellipticity and Lipschitz constants in the elliptic setting. We will also use the notation \(A\lesssim B\) to denote \(A\leq CB\) for such constants \(C\) when it is convenient.
The key result will be a volume estimate on a tubular neighborhood of the critical set for harmonic functions in terms of the frequency. For technical reasons, we have slightly modified some of the definitions given in [17]. Given a nonconstant harmonic function \(u\), we denote its frequency by
\[N(x,r)=\log_{2}\frac{\fint_{B_{2r}(x)}|\nabla u(y)|^{2}\,dy}{\fint_{B_{r}(x)}| \nabla u(y)|^{2}\,dy}. \tag{7}\]
If \(u-u(x)=\sum_{d\geq 1}a_{d}P_{d}\) is the expansion into homogeneous harmonic polynomials about \(x\), then the frequency can equivalently be expressed as
\[N(x,r)=\log_{2}\frac{\sum_{d\geq 1}2^{2d-2}da_{d}^{2}r^{2d}}{\sum_{d\geq 1}da_{d }^{2}r^{2d}}. \tag{8}\]
We will assume \(r<1\) throughout the paper. Notice that the formulation (8) implies that frequency is monotone in \(r\) for fixed \(x\), just as is the case with the standard definition of frequency.
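As a quick sanity check on the equivalence of (7) and (8) (a numerical sketch of our own, not part of the argument, assuming the normalization \(P_{d}=\mathrm{Re}(z^{d})\)), one can compare a Monte Carlo evaluation of (7) against the closed form (8):

```python
import numpy as np

rng = np.random.default_rng(0)
a = {1: 0.5, 3: 2.0}   # coefficients a_d in u = sum_d a_d Re(z^d)

def grad_sq(z):
    # for u = sum_d a_d Re(z^d) we have u_x - i u_y = sum_d d a_d z^{d-1}
    F = sum(d * ad * z**(d - 1) for d, ad in a.items())
    return np.abs(F)**2

def avg_over_disk(r, n=400_000):
    # Monte Carlo average of |grad u|^2 over B_r(0)
    rho = r * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    return grad_sq(rho * np.exp(1j * theta)).mean()

r = 0.3
N_numeric = np.log2(avg_over_disk(2 * r) / avg_over_disk(r))
N_formula = np.log2(sum(2**(2*d - 2) * d * ad**2 * r**(2*d) for d, ad in a.items())
                    / sum(d * ad**2 * r**(2*d) for d, ad in a.items()))
print(N_numeric, N_formula)   # the two values agree up to Monte Carlo error
```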
**Proposition 3**.: \(N(x,r)\) _is monotone nondecreasing in \(r\)._
Proof.: For convenience, we replace \(r\) by \(\sqrt{r}\). Differentiating in \(r\) gives
\[\log(2)\partial_{r}N(x,\sqrt{r})=\frac{\sum_{d\geq 1}4^{d-1}d^{2}a_{d}^{2}r^{d-1}}{ \sum_{d\geq 1}4^{d-1}da_{d}^{2}r^{d}}-\frac{\sum_{d\geq 1}d^{2}a_{d}^{2}r^{d-1}}{ \sum_{d\geq 1}da_{d}^{2}r^{d}}. \tag{9}\]
Then monotonicity is equivalent to the inequality
\[\left(\sum_{d\geq 1}4^{d-1}d^{2}a_{d}^{2}r^{d-1}\right)\left(\sum_{d\geq 1}da_{ d}^{2}r^{d-1}\right)\geq\left(\sum_{d\geq 1}4^{d-1}da_{d}^{2}r^{d-1}\right) \left(\sum_{d\geq 1}d^{2}a_{d}^{2}r^{d-1}\right). \tag{10}\]
We can then divide both sides by \(\left(\sum_{d\geq 1}da_{d}^{2}r^{d-1}\right)^{2}\) to normalize, so without loss of generality \(\left(\sum_{d\geq 1}da_{d}^{2}r^{d-1}\right)=1\) and \(p(d)=da_{d}^{2}r^{d-1}\) defines a probability measure on the positive integers. Recall Chebyshev's inequality for sums, which says that for nondecreasing functions \(f,g\) and a probability measure \(p\), we have
\[\sum_{d}f(d)g(d)p(d)\geq\left(\sum_{d}f(d)p(d)\right)\left(\sum_{d}g(d)p(d) \right). \tag{11}\]
Taking \(f(d)=4^{d-1}\) and \(g(d)=d\) then implies (10), completing the proof.
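For the skeptical reader, here is a one-off numerical check of (11) in the form used above (our own illustration; the choice of measure is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
p = rng.random(6)
p /= p.sum()                            # a probability measure on d = 1,...,6
d = np.arange(1, 7)
f, g = 4.0**(d - 1), d.astype(float)    # both nondecreasing in d
lhs = (f * g * p).sum()
rhs = (f * p).sum() * (g * p).sum()
print(lhs >= rhs)                       # True, as (11) asserts
```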
We have taken the logarithm with base \(2\) in the definition so that if \(u-u(x)\) has vanishing order \(j\) at a point \(x\), then \(N(x,r)\geq 2j-2\) for all \(r\). In particular, we have that \(x\) is in the critical set if and only if \(\lim_{r\to 0}N(x,r)\geq 2\).
In the following discussion, we will let \(u\) be a solution to a more general elliptic equation \(\operatorname{div}(A\nabla u)=0\). In [10], the effective critical set at scale \(r\) was (roughly) defined to be the set of points such that the solution \(u\) was well approximated by a normalized linear function on the ball of radius \(r\). Subsequently, in [10], the effective critical set at scale \(r\) was defined to be
\[\tilde{\mathcal{C}}_{r}(u)=\left\{x:r^{2}\inf_{y\in B_{r}(x)}|\nabla u(y)|^{2}<C\fint_{\partial B_{2r}(x)}|u(y)-u(x)|^{2}\,dy\right\}. \tag{12}\]
A short calculation shows that all points in the original definition of the effective critical set are also in \(\tilde{\mathcal{C}}_{r}(u)\). As discussed in [10], for a suitable choice of the constant \(C\), this set is contained in the superlevel set
\[\tilde{\mathcal{C}}_{r}(u)\subset\{x:N(x,r)>1\}. \tag{13}\]
Also notice that \(\tilde{\mathcal{C}}_{r}(u)\) trivially contains an \(r\)-tubular neighborhood of the critical set. Thus, it will be enough to prove volume estimates for a fixed superlevel set of the frequency function. For our purposes, it will be convenient to consider a slightly smaller set as the effective critical set that will still contain a tubular neighborhood of the critical set. By the Caccioppoli inequality (see for instance [10, Lemma 2.1]), we have
\[\fint_{\partial B_{r}(x)}|u(y)-u(x)|^{2}\,dy\geq cr^{2}\fint_{B_{\alpha r}(x)}|\nabla u(y)|^{2}\,dy, \tag{14}\]
with the constant depending on \(0<\alpha<1\) and the ellipticity and Lipschitz constants of the elliptic equation which \(u\) solves. As a result, we can define
\[\mathcal{C}_{r}(u)=\left\{x:r^{2}\inf_{y\in B_{r}(x)}|\nabla u(y)|^{2}<C^{\prime }\int\limits_{B_{(1+\theta)r}(x)\setminus B_{(1-\theta)r}(x)}|\nabla u(y)|^{2} \,dy\right\}, \tag{15}\]
and we have the containments \(B_{r}(\mathcal{C}(u))\subset\mathcal{C}_{r}(u)\subset\tilde{\mathcal{C}}_{r} (u)\subset\{x:N(x,r)\geq 1\}\) for the choices of \(C,\theta\) that we will fix subsequently. The parameter \(\theta\) and the implied constant \(C^{\prime}\) are chosen to be small in terms of the Lipschitz and ellipticity constants of the operator \(\operatorname{div}(A\nabla\cdot)\); this will simplify passing between elliptic operators and the easier case of the Euclidean Laplacian.
We will also make use of the local equivalence of \(L^{2}\) and \(L^{\infty}\) norms for harmonic functions, which is a consequence of the Poisson formula for harmonic functions on the disk.
**Proposition 4**.: _Suppose \(\Delta u=0\) in the unit disk. Then there are universal constants \(C_{1},C_{2}\) such that_
\[C_{1}\|u\|_{L^{2}(B_{1/2}(0))}\leq\|u\|_{L^{\infty}(B_{1/2}(0))}\leq C_{2}\|u \|_{L^{2}(B_{1}(0))} \tag{16}\]
In particular, we can apply this to the partial derivatives of harmonic functions to get analogous equivalences for gradients of harmonic functions.
For discussing propagation of smallness, it is useful to frame this in terms of the Hausdorff content of a set. Unlike Hausdorff measure, Hausdorff content is always finite for bounded sets which makes it more suitable for propagating smallness quantitatively from open sets (such as sublevel sets of the gradient). Given \(\delta>0\), the \(\delta\)-dimensional Hausdorff content of a set \(E\) is defined to be
\[\mathcal{C}_{H}^{\delta}(E)=\inf\sum_{j}r_{j}^{\delta} \tag{17}\]
where the infimum is taken over all covers of \(E\) by balls \(B_{r_{j}}(x_{j})\). Unlike the Hausdorff measure, we do not require the diameters of the covering sets to become arbitrarily small.
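To build intuition for this definition, consider the toy computation below (ours): covering the segment \([0,1]\times\{0\}\) by \(n\) balls of radius \(1/(2n)\) and evaluating the cover sums for various \(\delta\).

```python
def cover_sum(n: int, delta: float) -> float:
    # n balls of radius 1/(2n) covering the unit segment
    return n * (0.5 / n) ** delta

for delta in (0.5, 1.0, 1.5):
    print(delta, [round(cover_sum(n, delta), 4) for n in (1, 10, 100)])
# for delta = 1 the sums stay at 0.5; for delta > 1 finer covers drive the
# content to 0; for delta < 1 refining only increases the sum, so the
# infimum is attained by coarse covers and the content is positive and finite
```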
## 3. Empty Effective Critical Sets for Nonvanishing Gradients
The first thing we want to show is that if the critical set is empty, then the effective critical set is empty for small scales. Before we make this precise, we first do a simple calculation to show that if the gradient of \(u\) is nonvanishing and has unit supremum on the unit ball, then on the ball of radius \(1/2\), its gradient is not too small. Precisely, we have the following lemma:
**Lemma 5**.: _Let \(u\) be a harmonic function on \(B_{2}(0)\) with \(\|\nabla u\|_{L^{\infty}(B_{1}(0))}=1\). Suppose its frequency satisfies \(N(0,1)\leq\Lambda\). If \(|\nabla u|\) is nonvanishing on the unit ball, then_
\[\big{|}\log|\nabla u(x)|\big{|}\leq C\Lambda+C \tag{18}\]
_for all \(x\in B_{1/2}(0)\), where \(C\) is universal._
Proof.: By monotonicity, we have that
\[\log_{2}\left(\frac{\fint_{B_{2}(0)}|\nabla u(y)|^{2}\,dy}{\fint_{B_{1/2}(0)}|\nabla u(y)|^{2}\,dy}\right)=N(0,1)+N(0,1/2)\leq 2\Lambda \tag{19}\]
Since \(u\) is harmonic, so is each partial derivative of \(u\), so we have local equivalence of \(L^{2}\) and \(L^{\infty}\) norms up to doubling of the ball by Proposition 4. In particular, this implies
\[\fint_{B_{2}(0)}|\nabla u|^{2}\geq c\|\nabla u\|_{L^{\infty}(B_{1}(0))}^{2}\geq c \tag{20}\]
for some universal constant. We also can trivially upper bound the average over the half ball by the supremum. Together, these inequalities imply
\[2\Lambda\geq\log(c)-\log\left(\sup_{B_{1/2}(0)}|\nabla u|\right) \tag{21}\]
Rearranging implies
\[\inf_{B_{1/2}(0)}-\log|\nabla u(x)|\leq-\log(c)+2\Lambda \tag{22}\]
Now, since \(\nabla u\) is nonvanishing, recall that \(-\log|\nabla u|\) is harmonic and by our normalization, it is nonnegative on the unit ball. Thus, applying Harnack's inequality on the half ball implies that the supremum over the half ball is comparable in size to \(\Lambda+|\log c|\), giving the claim since \(|\log|\nabla u(x)||=-\log|\nabla u(x)|\).
With this preliminary result established, it is not too difficult to show that if the critical set is empty, then so is the effective critical set, at least in the case where the gradient is nonvanishing.
**Proposition 6**.: _Let \(u\) be a harmonic function on the double of the unit ball with \(|\nabla u(x)|>0\) everywhere on the unit ball. Suppose that the frequency bound \(N(x,1)\leq\Lambda\) holds across the unit ball for some \(\Lambda>2\). Then there is a universal constant \(c>0\) such that_
\[N(x,c/\Lambda)\leq\frac{1}{2}+\frac{1}{\Lambda} \tag{23}\]
_for all \(x\) in the unit ball._
Proof.: Throughout, denote \(v=-2\log|\nabla u|\) which is a nonnegative bounded harmonic function by hypothesis. We have that
\[N(x,c/\Lambda)=\log_{2}\left(\frac{\fint_{B_{2c/\Lambda}(x)}\exp(-v(y))\,dy}{ \fint_{B_{c/\Lambda}(x)}\exp(-v(y))\,dy}\right)=v(y_{1})-v(y_{2}) \tag{24}\]
for some points \(y_{j}\) in \(B_{jc/\Lambda}(x)\). We recall the following quantitative version of the Harnack inequality: if \(w\) is harmonic on the unit ball then
\[\frac{1-|y|}{1+|y|}w(0)\leq w(y)\leq\frac{1+|y|}{1-|y|}w(0). \tag{25}\]
Applying this at the points \(y_{1},y_{2}\) for \(v\) and using that they are very close to \(x\), we get that there is some universal constant \(K\) such that the difference is controlled, i.e.
\[|v(y_{2})-v(y_{1})|\leq Kcv(x)/\Lambda \tag{26}\]
However, Lemma 5 showed that \(v(x)\leq C\Lambda+C\). Taking the original factor \(c\) small enough, this implies that the frequency is at most \(1/2+\Lambda^{-1}\), as desired.
In particular, we have shown that if the gradient is nonvanishing in the disk, then the effective critical set (which is contained in the superlevel sets of the frequency function) is empty at scales smaller than \(c/\Lambda\), provided that \(\Lambda\) is not too small. For the volume estimates we want, we will always be able to assume that \(\Lambda\) is large enough to absorb the additive constants arising in Lemma 5 and to assume that the frequency is at most \(1\) in Proposition 6.
## 4. Holomorphic Polynomial Frequency Control
As we are working in the two-dimensional case, we have the tools of complex analysis available. Given a harmonic function \(u\), we can form a holomorphic function \(F(x+iy)=u_{x}(x,y)-iu_{y}(x,y)\) with the property that \(|F|=|\nabla u|\) everywhere. One useful tool we can utilize is that we can factor out roots of \(F\). Since \(F\) is a holomorphic function on the disk of radius \(2\), we can write
\[F(z)=g(z)\prod_{j}(z-a_{j}) \tag{27}\]
where \(g(z)\) is nonvanishing in the unit disk and the \(a_{j}\)s are the roots of \(F\) in the unit disk counted with multiplicity. We know the product is finite by analyticity. In the previous section, we already investigated the nonvanishing case, so we will now study the polynomial case.
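As a small numerical illustration (ours) of the factorization (27), take \(u=\mathrm{Re}(z^{3}-z)\), so that \(F(z)=3z^{2}-1\) and the nonvanishing factor is the constant \(g(z)=3\):

```python
import numpy as np

roots = np.roots([3, 0, -1])       # zeros of F(z) = 3 z^2 - 1
z = 0.3 + 0.4j                     # a test point in the unit disk
F = 3 * z**2 - 1
print(F / np.prod(z - roots))      # the nonvanishing factor g(z) = 3

# sanity check that F really equals u_x - i u_y, via centered differences
h = 1e-6
u = lambda x, y: ((x + 1j * y)**3 - (x + 1j * y)).real
ux = (u(z.real + h, z.imag) - u(z.real - h, z.imag)) / (2 * h)
uy = (u(z.real, z.imag + h) - u(z.real, z.imag - h)) / (2 * h)
print(abs(ux - 1j * uy - F))       # approximately 0
```

We have the following volume estimate: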
**Proposition 7**.: _Let \(P(z)=\prod_{j=1}^{\Lambda}(z-z_{j})\) be a holomorphic polynomial on the disk, arising from a harmonic polynomial via \(P\sim u_{x}-iu_{y}\). There is a universal constant \(C\) such that the set where the frequency is greater than \(1/2\) has the following volume estimate:_
\[\text{Vol}(\{x\in B_{1/2}(0):N(x,r)>1/2\})\leq C\Lambda^{2}r^{2}. \tag{28}\]
Proof.: Note that we may assume that \(r<C\Lambda^{-1}\), as otherwise the bound is trivial due to the ball having finite measure. The frequency of \(u\) is controlled by \(\Lambda\), as can be seen by (8). Fix a point \(z_{0}\) in the unit disk which is distance at least \(4r\) from each \(z_{j}\). Then we have that for any \(w\in B_{2r}(z_{0})\) and \(z\in B_{r}(z_{0})\)
\[\frac{|w-z_{j}|}{|z-z_{j}|}\leq 1+\frac{|w-z|}{|z-z_{j}|}\leq 1+\frac{3r}{|z-z_{ j}|} \tag{29}\]
Optimizing in \(w\) and \(z\) leads to the estimate
\[N(z_{0},r) \leq\log_{2}\frac{\sup_{w\in B_{2r}(z_{0})}|P(w)|^{2}}{\inf_{z\in B _{r}(z_{0})}|P(z)|^{2}} \tag{30}\] \[\leq\sup_{z\in B_{r}(z_{0})}2\sum_{j=1}^{\Lambda}\log_{2}\left(1 +\frac{3r}{|z-z_{j}|}\right)\] (31) \[\leq\sup_{z\in B_{r}(z_{0})}C\sum_{j=1}^{\Lambda}\frac{r}{|z-z_{ j}|} \tag{32}\]
Let \(B(r)\) be the union of balls of radius \(4r\) centered at each \(z_{j}\), so that \(B(r)\) has volume at most \(C\Lambda r^{2}\). Let \(S(r)=B_{1}(0)\setminus B(r)\). We have by definition of the weak
\(L^{2}\) quasinorm that
\[\text{Vol}\left(\left\{z_{0}\in S(r):\sup_{z\in B_{r}(z_{0})}\sum_{j=1}^{\Lambda}\frac{Cr}{|z-z_{j}|}\geq 1/2\right\}\right)\leq\left\|\sup_{z\in B_{r}(z_{0})}\sum_{j=1}^{\Lambda}\frac{Cr}{|z-z_{j}|}\right\|_{L^{2,\infty}(S(r))}^{2}. \tag{33}\]
For a single term in the sum, we claim we have the estimate
\[\left\|\sup_{z\in B_{r}(z_{0})}\frac{Cr}{|z-z_{j}|}\right\|_{L^{2,\infty}(S(r) )}^{2}\leq Cr^{2}. \tag{34}\]
This can be seen by homogeneity and as a result of the well known fact that for any \(w\in\mathbb{R}^{2}\)
\[\left\|\frac{1}{|z-w|}\right\|_{L^{2,\infty}(\mathbb{R}^{2})}^{2}\leq C. \tag{35}\]
Although (35) does not involve the supremum over \(B_{r}(z_{0})\), it is clear from the definition of \(S(r)\) that
\[\sup_{z\in B_{r}(z_{0})}\frac{Cr}{|z-z_{j}|}\leq\frac{C^{\prime}r}{|z_{0}-z_{ j}|} \tag{36}\]
holds for all \(z_{0}\in S(r)\), so we can pass the estimate to the version of the function with the supremum.
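For completeness, the constant in (35) can be computed exactly (a standard calculation, included here for the reader's convenience): the superlevel set \(\{z:1/|z-w|>t\}\) is the disk \(B_{1/t}(w)\), so

\[\left\|\frac{1}{|z-w|}\right\|_{L^{2,\infty}(\mathbb{R}^{2})}^{2}=\sup_{t>0}t^{2}\left|\left\{z:\frac{1}{|z-w|}>t\right\}\right|=\sup_{t>0}t^{2}\cdot\frac{\pi}{t^{2}}=\pi.\]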
We recall that although the definition we gave for weak \(L^{2}\) is only a quasinorm, it is known that for \(p>1\), the quasinorm \(L^{p,\infty}\) is equivalent to a norm [10]. Thus, at the cost of a single multiplicative factor, we can apply the triangle inequality with arbitrarily many terms. In particular, this gives us the desired bound
\[\left\|C\sum_{j=1}^{\Lambda}\frac{r}{|z-z_{j}|}\right\|_{L^{2,\infty}(S(r))}^{ 2}\leq C\Lambda^{2}r^{2}. \tag{37}\]
We have established that the frequency's superlevel set is contained in the union of balls \(B(r)\), which has measure at most \(C\Lambda r^{2}\), and the subset of \(S(r)\) arising from the weak \(L^{2}\) estimate which has measure at most \(C\Lambda^{2}r^{2}\), which gives the claim.
As a consequence of this, we see that in the case that \(u\) is a harmonic polynomial, we already have the desired volume estimate on the effective critical set, since the effective critical set is contained in the set where \(N(x,r)\geq 1\), which is of course contained in the set where \(N(x,r)\geq 1/2\).
**Corollary 8**.: _Let \(P(z)=\prod_{j=1}^{\Lambda}(z-z_{j})\) be a holomorphic polynomial on the disk, arising from a harmonic polynomial via \(P\sim u_{x}-iu_{y}\). There is a universal constant \(C\) such that the effective critical set satisfies the volume estimate_
\[\text{Vol}(\mathcal{C}_{r}(u))\leq C\Lambda^{2}r^{2}. \tag{38}\]
This can be seen to be sharp by considering monomials \(z^{\Lambda}\) where \(\Lambda\) is a positive integer.
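To spell out the sharpness claim (a heuristic computation of ours): for \(P(z)=z^{\Lambda}\) and a point \(x\) with \(|x|\geq 4r\), every \(z\in B_{r}(x)\) and \(w\in B_{2r}(x)\) satisfy \(|w|/|z|=1+O(r/|x|)\), so by the same computation as in (30)-(32),

\[N(x,r)\approx 2\Lambda\log_{2}\left(1+O\left(\frac{r}{|x|}\right)\right)\approx\frac{C\Lambda r}{|x|},\]

which stays above \(1\) precisely when \(|x|\lesssim\Lambda r\). The superlevel set, and hence the effective critical set, therefore has measure comparable to \(\Lambda^{2}r^{2}\).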
## 5. More General Holomorphic Functions
Finally, we must deal with the general case where we have a factorization \(F(z)=g(z)P(z)\) where \(g\) is nonvanishing in the unit ball and \(P\) is a polynomial with roots in the unit ball. We will work with analogs of the doubling index for these; given a holomorphic function \(F\), define
\[N_{F}(z,r)=\log_{2}\frac{\fint_{B_{2r}(z)}|F(w)|^{2}\,dw}{\fint_{B_{r}(z)}|F(w)|^{2}\,dw}. \tag{39}\]
We first recall the fact that in two dimensions, controlling the frequency on a large disk controls the number of zeroes on a smaller disk.
**Proposition 9** (Theorem 3.4 in [1]).: _Suppose \(N_{F}(0,1)\leq\Lambda\). Then there exists some universal \(s>0\) such that \(F\) has at most \(2\Lambda\) roots on \(B_{s}(0)\), counting multiplicities._
As a result of this, we get a bound on the degree of \(P\) in terms of \(\Lambda\). We first claim a quasi-subadditivity property for the frequency given such a factorization \(F=Pg\), where \(P\) is monic.
**Lemma 10**.: _Suppose \(N_{F}(0,2t)\leq\Lambda\) where \(t=\max\{1/s,5\}\) with \(s\) as in Proposition 9 and \(F=Pg\) is a factorization into a monic polynomial whose roots all lie in \(B_{1}(0)\) and a holomorphic function which is nonvanishing on \(B_{1}(0)\). Then there exists some universal \(C>0\) such that \(N_{g}(0,t)\leq C\Lambda\)._
Proof.: We have by the previous lemma that the degree of \(P\) is at most \(2\Lambda\). Outside of the ball of radius \(2<t/2\), we have that \(1<|z-a_{j}|\) for any root \(a_{j}\) of \(P\) and inside the ball of radius \(k\), we have \(|z-a_{j}|\leq k+1\). We estimate
\[2^{3\Lambda} \geq\frac{\fint_{B_{4t}(0)}|P(z)|^{2}|g(z)|^{2}\,dz}{\fint_{B_{t/2}(0)}|P(z)|^{2}|g(z)|^{2}\,dz} \tag{40}\] \[\geq c\frac{\int_{B_{4t}(0)-B_{2t}(0)}|P(z)|^{2}|g(z)|^{2}\,dz}{\int_{B_{t}(0)-B_{t/2}(0)}|P(z)|^{2}|g(z)|^{2}\,dz}\] (41) \[\geq\frac{c}{(t+1)^{4\Lambda}}\frac{\int_{B_{4t}(0)-B_{2t}(0)}|g(z)|^{2}\,dz}{\int_{B_{t}(0)-B_{t/2}(0)}|g(z)|^{2}\,dz}\] (42) \[\geq\frac{c}{(t+1)^{4\Lambda}}\frac{\int_{B_{2t}(0)}|g(z)|^{2}\,dz}{\int_{B_{t}(0)}|g(z)|^{2}\,dz}\] (43) \[\geq\frac{c}{(t+1)^{4\Lambda}}2^{N_{g}(0,t)} \tag{44}\]
Here, we used the mean value inequality for the square of a modulus of a holomorphic function to control integrals over the half ball by integrals over a larger annulus. We also used the fact that outside the ball of radius \(2\), we have that \(|P(z)|\geq 1\) and inside the ball of radius \(k\), we have \(|P(z)|\leq(k+1)^{2\Lambda}\). The claim follows by rearranging and taking \(\Lambda\) sufficiently large compared to \(c\).
We now have tools for understanding the effective critical sets of polynomials, nonvanishing functions, and how to combine these estimates for more general functions. We are ready to prove Theorem 1 in the harmonic case.
**Proposition 11**.: _Suppose \(u\) is a harmonic function on the disk of radius \(4s\) for some suitably large universal constant \(s\). Assume its frequency satisfies \(N_{u}(0,2s)\leq\Lambda\). Then we have the following volume estimate for the effective critical set_
\[\text{Vol}(\mathcal{C}_{r}(u)\cap B_{1/2}(0))\leq C\Lambda^{2}r^{2} \tag{45}\]
_for some universal constant \(C\)._
Proof.: As usual, define the holomorphic function \(F(x+iy)=u_{x}(x,y)-iu_{y}(x,y)\). Based on our previous remarks, \(N_{F}(0,2s)\leq\Lambda\) and it suffices to show the desired volume estimate for the set of points where \(N_{F}(z,r)>1\). Moreover, we may assume \(r<c\Lambda^{-1}\) as otherwise, the volume estimate holds just by taking \(C\) large enough since \(B_{1/2}(0)\) is a set of finite measure. Factor \(F(z)=P(z)g(z)\) where \(P\) is a polynomial with all its roots in the unit disk and degree at most \(2\Lambda\) and \(g\) is nonvanishing in the unit disk. We have that \(N_{P}(0,1)\leq C\Lambda\), and it follows by Lemma 10 that \(N_{g}(0,1)\leq C\Lambda\). For simplicity, let \(\mathcal{C}(u)\) denote only the critical set of \(u\) inside the unit disk. Proposition 6 implies that if we choose \(c\) small enough so that \(r<c\Lambda^{-1}\) is sufficiently small, then \(N_{g}(z,r)<1/2\) for any \(z\) in the disk of radius \(1/2\). Now, we will reuse the estimate on \(P\) from the polynomial case. We have for \(z\) in \(B_{1/2}(0)\) that
\[2^{N_{F}(z,r)} =\frac{\fint_{B_{2r}(z)}|P(w)|^{2}|g(w)|^{2}\,dw}{\fint_{B_{r}(z)}|P(w)|^{2}|g(w)|^{2}\,dw} \tag{46}\] \[\leq\frac{\fint_{B_{2r}(z)}|g(w)|^{2}\,dw\,\sup_{B_{2r}(z)}|P(w)|^{2}}{\fint_{B_{r}(z)}|g(w)|^{2}\,dw\,\inf_{B_{r}(z)}|P(w)|^{2}}\] (47) \[\leq 2^{N_{g}(z,r)}\frac{\sup_{B_{2r}(z)}|P(w)|^{2}}{\inf_{B_{r}(z)}|P(w)|^{2}}. \tag{48}\]
Taking logarithms, the frequency from \(g\) contributes at most \(1/2\), and by Proposition 7, we can bound the contribution from the polynomial to the frequency by \(1/2\) everywhere except on a set of measure \(C\Lambda^{2}r^{2}\). In particular, this implies that we have
\[N_{F}(z,r)\leq 1 \tag{49}\]
holds for all \(z\) in \(B_{1/2}(0)\) outside of a set of measure \(C\Lambda^{2}r^{2}\), which gives the desired result.
Moreover, this estimate is sharp in \(\Lambda\). This can be seen by considering the holomorphic function \(F(z)=\exp(\Lambda z)\), which arises as the gradient of the harmonic function \(\Lambda^{-1}\exp(\Lambda x)\cos(\Lambda y)\). It can be seen to have frequency comparable to \(\Lambda\), and its effective critical set \(\tilde{\mathcal{C}}_{r}(u)\) is the full unit ball when \(r=C/\Lambda\) for a suitable choice of constant. Indeed, at a point \(p=(x,y)\) one has
\[\inf_{w\in B_{r}(p)}|F(w)|^{2}=e^{2\Lambda(x-r)}.\]
Computing the \(L^{2}\) average in a ball around \(p\) gives \(Ce^{2\Lambda x}\), so as \(r\to 0\), there is a transition between the effective critical set being the full ball and being empty below \(r=C^{\prime}/\Lambda\). Thus, comparing this with (12), we see the power of \(\Lambda\) cannot be improved in the volume estimate.
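The transition scale can also be observed numerically (a rough sketch of ours, with \(\Lambda=20\)): the ratio between the average of \(|F|^{2}\) on \(B_{r}(0)\) and its infimum there behaves like \(e^{2\Lambda r}\), so the defining inequality of the effective critical set fails at \(0\) once \(r\) drops below a scale proportional to \(1/\Lambda\).

```python
import numpy as np

L = 20.0                            # the frequency parameter Lambda
rng = np.random.default_rng(1)

def avg_over_inf(r, n=200_000):
    rho = r * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    avg = np.exp(2 * L * rho * np.cos(theta)).mean()   # average of |F|^2 on B_r
    inf = np.exp(-2 * L * r)                           # inf of |F|^2 on B_r
    return avg / inf

for r in [4/L, 2/L, 1/L, 0.5/L, 0.1/L]:
    print(f"r = {r:5.3f}: avg/inf = {avg_over_inf(r):12.2f}")
# the ratio collapses towards 1 once r drops below ~ 1/L, so for a fixed
# constant C the origin leaves the effective critical set at a scale ~ C'/L
```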
## 6. Propagation of Smallness for Gradients of Harmonic Functions
With the tools developed in previous sections, we can prove the desired propagation of smallness for gradients of harmonic functions. Given a function \(f\), we will use the following notation for its sublevel set
\[E_{a}(f)=\{z:|f(z)|<e^{-a}\} \tag{50}\]
We will need the following result of Cartan concerning polynomials.
**Lemma 12** ([1]).: _Let \(P\) be a monic polynomial of degree \(n\). Then for any \(\delta,a>0\), we have that there is a finite collections of balls \(\{B_{j}\}\) with radii \(r_{j}\) covering \(E_{a}(P)\cap B_{1}(0)\) and_
\[\sum_{j}r_{j}^{\delta}\leq Ce^{-a\delta/n} \tag{51}\]
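As a sanity check on the exponent (our remark, not part of [1]): for \(P(z)=z^{n}\) the sublevel set is exactly a disk,

\[E_{a}(z^{n})=\{z:|z|^{n}<e^{-a}\}=B_{e^{-a/n}}(0),\]

so the single-ball cover yields \(\sum_{j}r_{j}^{\delta}=e^{-a\delta/n}\), matching the right-hand side of (51) up to the constant.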
From here, obtaining the propagation of smallness is fairly straightforward.
**Theorem 13**.: _Let \(u\) be a harmonic function on the double of the unit disk, and assume its gradient has \(\|\nabla u\|_{L^{\infty}(B_{1}(0))}=1\). Suppose that_
\[\beta\leq\mathcal{C}_{\mathcal{H}}^{\delta}(E_{a}(|\nabla u|)\cap B_{1}(0)) \tag{52}\]
_for some \(\delta,a>0\). Then we have that_
\[|\nabla u(x)|\leq Ce^{-\gamma a} \tag{53}\]
_for all \(x\) in the disk of radius \(1/2\) and some \(\gamma,C>0\) depending only on \(\beta,\delta\)._
Proof.: As usual, consider the holomorphic function \(F(z)=u_{x}(x+iy)-iu_{y}(x+iy)\). Let \(N_{F}(0,1)=\Lambda\). We claim that \(\Lambda\gtrsim a\); then the conclusion of the theorem follows immediately by the normalization of \(F\) in the hypotheses, the definition of frequency, and the local equivalence of \(L^{2}\) and \(L^{\infty}\) norms as a consequence of Proposition 4. Take the usual factorization \(F(z)=P(z)g(z)\) where \(P\) is a monic polynomial whose roots lie in the unit disk and \(g\) is nonvanishing on the unit disk. We have the simple containment
\[\{z\in B_{1}(0):|F(z)|<e^{-a}\}\subset\{z\in B_{1}(0):|P(z)|<e^{- a/2}\}\] \[\cup\{z\in B_{1}(0):|g(z)|<e^{-a/2}\}.\]
Taking the Hausdorff content and applying Lemma 12, we have that
\[\beta\leq Ce^{-\delta a/n}+\mathcal{C}_{\mathcal{H}}^{\delta}(\{z\in B_{1}(0) :|g(z)|<e^{-a/2}\}) \tag{54}\]
where \(n\leq C\Lambda\) is the degree of \(P\). If \(n\gtrsim a\), then we are done, as then \(\Lambda\gtrsim a\) as well. If not, then the first term is at most \(\beta/2\) (for fixed \(\beta>0\) where the implied constant is chosen suitably), and we instead consider the frequency of \(g\), call it \(\rho\), which is at most \(C\Lambda\). We have by Proposition 6 that
\[|g(z)|\geq e^{-c\rho}. \tag{55}\]
In order for (54) to hold, we must have \(\rho\gtrsim a\), which in turn implies that \(\Lambda\gtrsim a\), as desired.
## 7. Preliminaries on Lipschitz Surfaces
In order to make sense of our main results, we need to understand the definition and properties of the frequency function in the setting of solutions to elliptic equations \(\operatorname{div}(A\nabla u)+\mathbf{b}\cdot\nabla u=0\). Following the example of [1] and [1], we assume \(A(x)\) is normalized to be the identity (at the point \(x\)) and define
\[N(x,r)=r\frac{\int_{B_{r}(x)}\langle A(y)\nabla u(y),\nabla u(y)\rangle dy}{ \int_{\partial B_{r}(x)}\mu(y)|u(y)-u(x)|^{2}dS(y)}. \tag{56}\]
Here, \(\langle\cdot,\cdot\rangle\) denotes the Euclidean dot product and
\[\mu(y)=|y|^{-2}\langle A(y)y,y\rangle. \tag{57}\]
This recovers the previous definition of frequency in the Euclidean case. When the elliptic operator is not the Euclidean Laplacian, this is no longer a monotone quantity in general, but it is almost monotone in the sense that there exists a constant \(C\) depending on the elliptic operator such that \(e^{Cr}N(x,r)\) is monotone in \(r\) for \(r\) smaller than some universal \(r_{0}\) depending on the elliptic operator, see [1, Theorem 2.1] for details. We will find it convenient to also consider the doubling index, a comparable quantity defined as
\[\hat{N}(x,r)=\log\frac{\sup_{y\in B_{2r}(x)}|\nabla u(y)|^{2}}{\sup_{y\in B_{r}(x)}|\nabla u(y)|^{2}}. \tag{58}\]
A version of the doubling index for solutions (as opposed to their gradients) was used extensively in several recent influential works such as [1] and [1]. It is unsurprising that as a quantity measuring doubling, it is intimately related to the frequency. In fact, we have comparability between these quantities up to changing the scale and an additive constant4
Footnote 4: This can be seen by a variant of the argument given in [1, Section 2] combined with a suitable version of Caccioppoli’s inequality.
\[cN(x,r)-c\leq\hat{N}(x,r)\leq CN(x,4r)+C, \tag{59}\]
where the constants depend on ellipticity and Lipschitz constants of the elliptic operator.
## 8. Proofs of Theorems 1 and 2 on Surfaces
As we are in the two-dimensional setting, isothermal coordinates provide a convenient way to extend the volume bounds for the effective critical set and the propagation of smallness to the Laplace-Beltrami operator on a closed Riemannian surface with at least Lipschitz regularity. On any Riemannian surface, there exist local coordinates called isothermal coordinates in which the Laplace-Beltrami operator takes the form of a scalar multiple of the Euclidean Laplacian. In our propagation of smallness, the constants obtained will also depend on the ellipticity and Lipschitz constants of the surface's metric, but we suppress this dependence from the notation in order to be more concise. The idea is simple: cover the surface with small coordinate charts such that the map changing to isothermal coordinates has bi-Lipschitz constant close to \(1\). Consequently, the frequency is perturbed only slightly and we can apply the result from the harmonic case on each small ball. The number of charts needed is bounded in terms of the ellipticity and Lipschitz constants of the manifold. This leads to the desired local volume bound for the effective critical set. The propagation of smallness for gradients can be deduced by using the fact that sublevel sets of the gradient of the solution are not distorted too much when passing to the isothermal coordinates, and a chaining argument can be done to propagate smallness to the entire surface. We can also obtain local bounds on the size of the effective critical set using similar techniques.
In local coordinates, we recall that the Laplace-Beltrami operator for a metric \(g\) is a divergence-form elliptic operator given by
\[\Delta_{g}u=\frac{1}{\sqrt{\det g}}\partial_{i}(\sqrt{\det g}g^{ij}\partial_{ j}u), \tag{60}\]
where we are using the Einstein summation convention and \(g^{ij}\) denotes the \((i,j)\)th component of the inverse of the metric tensor. The key tool is the following result, which controls distortion of the geometry in all directions under the coordinate change, provided the metric is Lipschitz close to being Euclidean.
**Proposition 14**.: _Let \(g\) be a Lipschitz Riemannian metric on \(B_{\eta}(0)\subset\mathbb{R}^{2}\) with Lipschitz constant controlled by \(\lambda\) such that \(g(0)=I\) is the Euclidean metric. Let \(F:B_{\eta}(0)\to U_{\eta}\subset\mathbb{R}^{2}\) be the map given by the isothermal change of coordinates. Then the eigenvalues of the differential \(DF\) have modulus in the interval \([1/2,2]\) if \(\eta\) is sufficiently small in terms of \(\lambda\)._
Proof.: The proof will rely on the fact that the existence of isothermal coordinates on surfaces can be rephrased in terms of solvability of the Beltrami equation. We will use many results from [1] in this proof, and we will frequently view \(\mathbb{R}^{2}\) as \(\mathbb{C}\), as in previous sections of the paper. If the metric \(g\) takes the matrix form
\[g=\begin{bmatrix}E&F\\ F&G\end{bmatrix},\]
then for
\[\mu=\frac{E-G+2iF}{E+G+2\sqrt{EG-F^{2}}}, \tag{61}\]
the existence of isothermal coordinates is equivalent to finding a homeomorphic solution to
\[\partial_{\bar{z}}\omega=\mu\partial_{z}\omega. \tag{62}\]
Here, we are extending the definition of \(\mu\) to all of \(\mathbb{C}\) by taking a Lipschitz extension with the same Lipschitz constant and \(L^{\infty}\) norm and \(\mu=0\) outside \(B_{2\eta}(0)\) (note that we can do both of these simultaneously since \(\mu(0)=0\), such as by interpolating linearly between the values on \(\partial B_{\eta}(0)\) and \(0\) on radial segments). Thus, we have that
\[|\tilde{\mu}(x)|\leq C(\lambda)\eta\,\chi_{B_{2\eta}(0)}, \tag{63}\]
where \(\chi_{E}\) denotes the indicator function of the set \(E\). It follows by [1, Equation 5.25] that
\[\mathcal{H}^{2}(B_{2\eta}(0))\leq C(\lambda)\eta^{2} \tag{64}\]
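As an illustration (our own toy example, not part of the construction above), one can evaluate (61) for a small Lipschitz perturbation of the Euclidean metric and confirm that \(|\mu|\) is small, of the order of the Lipschitz constant times the distance to the base point:

```python
import numpy as np

def mu_of_metric(E, F, G):
    # Beltrami coefficient (61) of the metric [[E, F], [F, G]]
    return (E - G + 2j * F) / (E + G + 2 * np.sqrt(E * G - F**2))

x, y = 0.05, -0.02        # a point near the center of a small chart
E = 1 + 0.3 * x           # g_11: Lipschitz perturbations of the identity
F = 0.1 * y               # g_12 = g_21
G = 1 - 0.2 * x           # g_22
print(abs(mu_of_metric(E, F, G)))   # small, and in particular < 1
```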
In particular, notice that by ellipticity and the definition of \(\mu\), it follows that the Lipschitz constant of \(\mu\) depends only on \(\lambda\). It is discussed in Chapter 5 of [1] (see specifically Theorem 5.1.1, equation (5.4) and the proof of Theorem 5.2.3) that when \(|\mu|<1\), then such a solution is given by
\[\omega(z)=z+\mathcal{C}[(I-\mu\mathcal{S})^{-1}\partial_{z}\mu]:=z+\sigma(z), \tag{65}\]
where \(\mathcal{C}\) denotes the Cauchy transform on \(\mathbb{C}\) and \(\mathcal{S}\) denotes the Beurling transform. We do not define these here, as we will only be using their \(L^{p}\) boundedness properties as a black box, but a more complete exposition can be found in [1, Chapter 4]. One can compute that the derivative of this mapping (with respect to \(z,\bar{z}\)) is given by
\[D\omega(z)=\begin{bmatrix}e^{\sigma}&\mu e^{\sigma}\\ \overline{\mu e^{\sigma}}&\overline{e^{\sigma}}\end{bmatrix}. \tag{66}\]
The eigenvalues of this mapping are \(\mathrm{Re}(e^{\sigma})\pm(\mathrm{Re}(e^{\sigma})^{2}+(|\mu|^{2}-1)|e^{2\sigma}|)^{1/2}\). When \(\eta=c(\lambda)\) is chosen sufficiently small, we have that \(|\mu|<c^{\prime}\ll 1\) is very small (due to the metric being Lipschitz close to the Euclidean metric on the ball), and we will first show that \(|\sigma|\) is also small when this happens.
To this end, notice that \(|\partial_{z}\mu(z)|\leq C(\lambda)\) holds almost everywhere by Rademacher's theorem and that \(\partial_{z}\mu\) is supported in \(B_{2\eta}(0)\). Thus, for \(1<p<\infty\), we have that
\[\|\partial_{z}\mu\|_{L^{p}}\leq C(\lambda)\eta^{2/p}. \tag{67}\]
The Beurling transform is bounded on \(L^{p}\) for \(1<p<\infty\) by the theory of Calderon-Zygmund operators (see [1, Theorem 4.5.3]), and its resolvent \((I-\mu\mathcal{S})^{-1}\) is also bounded on \(L^{p}\) with operator norm controlled by \(C^{\prime}_{p}(1-|\mu|)^{-1}\) (see [1, Chapter 5]) which we can bound in terms of \(\lambda\) when \(\eta\) is small enough. Thus, we have that
\[\|(I-\mu\mathcal{S})^{-1}\partial_{z}\mu\|_{L^{p}}\leq C^{\prime}_{p}\eta^{2/p }C(\lambda) \tag{68}\]
holds for all \(1<p<\infty\). However, it is shown in [1, Theorem 4.3.11] that the Cauchy transform maps \(L^{p}(\mathbb{C})\cap L^{q}(\mathbb{C})\to C_{0}(\mathbb{C})\) for conjugate pairs \(1<q<2<p<\infty\) with the estimate
\[\|\mathcal{C}\phi\|_{L^{\infty}}\leq\frac{2}{\sqrt{2-q}}\sqrt{\|\phi\|_{L^{p} }\|\phi\|_{L^{q}}}. \tag{69}\]
Taking \(\phi=(I-\mu\mathcal{S})^{-1}\partial_{z}\mu\), we deduce the estimate
\[\|\sigma\|_{L^{\infty}}\leq C^{\prime}_{p,q}C(\lambda)\eta \tag{70}\]
We can fix \(q=3/2\) and take \(\eta=(10C^{\prime}_{p,q}C(\lambda))^{-1}\) to conclude that \(|\sigma|\) is sufficiently small.
With this established, we return to getting a lower bound on the modulus of the eigenvalues. Recall throughout that we have upper and lower bounds on \(|e^{\sigma}|\) by the above argument. We will consider cases based on if the eigenvalues are real or complex and on whether \(e^{\sigma}\) has a large or small real part.
**Case 1**: Suppose that the eigenvalues are complex, i.e. that
\[\mathrm{Re}(e^{\sigma})^{2}+(|\mu|^{2}-1)|e^{2\sigma}|<0.\]
If \(|\mathrm{Re}(e^{\sigma})|^{2}>c_{\lambda}|e^{2\sigma}|\) for some constant \(c_{\lambda}\) depending only on \(\lambda\), then we are done since we have a lower bound on the real part of the eigenvalue and hence also on its modulus. Thus, we may assume the inequality fails for some small constant \(c_{\lambda}\). Taking it smaller than \((1-|\mu|^{2})/2\) (where \(|\mu|\) depends on \(\lambda\)), we see that we can lower bound the imaginary part of the eigenvalue, completing this case.
**Case 2**: Suppose that the eigenvalues are real, i.e. that
\[\mathrm{Re}(e^{\sigma})^{2}+(|\mu|^{2}-1)|e^{2\sigma}|\geq 0.\]
In the case where the discriminant is \(0\), we can write \(\mathrm{Re}(e^{\sigma})\) as a multiple of \(|e^{\sigma}|\) and get the desired lower bound, so we can assume the discriminant is positive.
In general, we will have lower bounds on \(\mathrm{Re}(e^{\sigma})^{2}\), so we just need to show that the summand arising from the discriminant is not too close to this quantity. If the discriminant is less than \(c_{\lambda}\mathrm{Re}(e^{\sigma})^{2}\) for \(c_{\lambda}<1\) then we are done and get a lower bound of \((1-c_{\lambda})\mathrm{Re}(e^{\sigma})^{2}\). Fix \(c_{\lambda}=|\mu|^{2}\). Then if the previous discriminant estimate fails, we can derive that
\[\mathrm{Re}(e^{\sigma})^{2}>\frac{1-|\mu|^{2}}{1-c_{\lambda}}|e^{2\sigma}|=|e^{ 2\sigma}|\]
which is a contradiction. Thus, we always have the desired lower bound on the modulus of the eigenvalues.
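A quick numerical spot-check (ours) that the matrix (66), with the complex conjugates in its second row, indeed has the eigenvalues used throughout the case analysis:

```python
import numpy as np

sigma, mu = 0.1 + 0.2j, 0.15 - 0.05j
es = np.exp(sigma)
D = np.array([[es, mu * es],
              [np.conj(mu * es), np.conj(es)]])
eig = np.linalg.eigvals(D)

disc = es.real**2 + (abs(mu)**2 - 1) * abs(np.exp(2 * sigma))
pred = np.array([es.real + np.sqrt(complex(disc)),
                 es.real - np.sqrt(complex(disc))])
print(sorted(eig, key=lambda w: w.imag))
print(sorted(pred, key=lambda w: w.imag))   # the two lists agree
```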
With this established, the propagation of smallness follows in a straightforward fashion.
Proof of Theorem 2.: This is already known in the case where the operator is the Euclidean Laplacian, so it suffices to reduce to that case. Around each point \(x\) in \(B_{1/2}(0)\), consider an elliptic disk with major axis having length at most \(\eta(2\lambda)^{-1}\), where \(\eta\) is as in the previous lemma, and the ellipse is chosen so that the linear change of coordinates taking \(g(x)\) to the identity matrix sends the corresponding boundary ellipse to the circle of radius \(\eta/2\). By subadditivity of the Hausdorff content, we can select one of these elliptic disks \(\Theta_{0}\) with the property that \(\mathcal{C}_{H}^{\delta}(\Theta_{0}\cap E_{\epsilon})\geq c(\lambda)\beta\). We can pick \(C(\lambda)\) many points on the boundary of \(\Theta_{0}\) and take the corresponding elliptic disks, \(\Theta_{1,j}\)s, so that \((1+c(\lambda))\Theta_{0}\) is covered by \(\Theta_{0}\) and the \(\Theta_{1,j}\)s (this is a consequence of ellipticity). We can then iterate this procedure until \(B_{1/2}(0)\) is covered by \(C(\lambda)\) many elliptic disks. Moreover, in this construction, we can ensure by ellipticity that we have a lower bound on the fraction of each elliptic disk that overlaps with the previous disks, call it \(b(\lambda)\). On \(\Theta_{0}\), we can consider its double and make a linear change of coordinates so that the double becomes a ball of radius at most \(\eta\). Changing to isothermal coordinates, we can then apply Theorem 13 to propagate smallness on a nearby sublevel set of the gradient, within the factor of \(2\) appearing in Proposition 14. In this step, it is essential that we have control on the eigenvalues of the differential of the coordinate change, as it guarantees that the sublevel set of the gradient has comparable \(\delta\)-Hausdorff content in the new coordinate system. Since we have good control over the derivative of the quasiconformal coordinate change, this gives us a slightly worse propagation of smallness back on \(\Theta_{0}\), with the various constants multiplied by quantities depending only on \(\lambda\). Thus, we can deduce
\[|\nabla u(x)|\leq C(\lambda)\epsilon^{\gamma(\lambda)} \tag{71}\]
holds for all \(x\in\Theta_{0}\). Now, consider the doubles of each of the \(\Theta_{1,j}\)s. We have that (71) holds for all \(x\) in a set with \(\delta\)-Hausdorff content bounded below by \(C(\lambda)\). This is a result of the fact that we ensured that there is a lower bound on the overlap between the various elliptic disks. We can propagate smallness using the same argument as for \(\Theta_{0}\), getting more factors in the constants depending on \(\lambda\). Chaining this argument iteratively completes the proof, since the number of steps required to cover the original disk of radius \(1/2\) is also controlled in terms of \(\lambda\).
We can also extend the volume estimates on the effective critical set to the case of Lipschitz manifolds.
Proof of Theorem 1.: As in the previous proof of Theorem 2, we can cover the half ball with elliptic disks with major axis having length at most \(\eta(2\lambda)^{-1}\), such that the linear change of coordinates taking the metric tensor to the identity at the center of each ellipse sends the corresponding boundary ellipse to the circle of radius \(\eta/2\). By almost monotonicity of the frequency function, the frequency on these smaller domains is at most \(C\Lambda\). By the comparability of frequency and doubling index, we have that the doubling index is also at most \(C\Lambda\). Applying the quasiconformal map, we get that our function \(\tilde{u}\) in the new coordinates is harmonic and has doubling index at most \(C\Lambda\). Appealing again to the comparability in (59), we see the frequency of \(\tilde{u}\) is bounded by \(C\Lambda\) on this Euclidean disk. The harmonic case of Theorem 1 implies
\[\operatorname{Vol}(\mathcal{C}_{r}(\tilde{u})\cap B_{\eta/2}(0))\leq C\Lambda ^{2}r^{2} \tag{72}\]
Applying the inverse of the quasiconformal map distorts this volume by a bounded constant factor, and by choosing the constants \(\theta,C\) in (15) appropriately, we have that its image covers \(\mathcal{C}_{r}(u)\) in the elliptic disk. More precisely, fix \(\theta\) to be \(1/2\) for harmonic functions, and choose \(\theta\) sufficiently small so that when changing coordinates back, the image of the annulus from the harmonic case covers the thinner annulus from the elliptic case (which depends on \(\theta\)); the multiplicative constant \(C\) should absorb any multiplicative factors from the Jacobian of the coordinate change so that the image of the harmonic effective critical set under the quasiconformal map covers the elliptic effective critical set. Repeating this argument for each such elliptic disk and summing the volume contributions completes the proof.
|
2306.10623 | Enhanced Masked Image Modeling for Analysis of Dental Panoramic
Radiographs | The computer-assisted radiologic informative report has received increasing
research attention to facilitate diagnosis and treatment planning for dental
care providers. However, manual interpretation of dental images is limited,
expensive, and time-consuming. Another barrier in dental imaging is the limited
number of available images for training, which is a challenge in the era of
deep learning. This study proposes a novel self-distillation (SD) enhanced
self-supervised learning on top of the masked image modeling (SimMIM)
Transformer, called SD-SimMIM, to improve the outcome with a limited number of
dental radiographs. In addition to the prediction loss on masked patches,
SD-SimMIM computes the self-distillation loss on the visible patches. We apply
SD-SimMIM on dental panoramic X-rays for teeth numbering, detection of dental
restorations and orthodontic appliances, and instance segmentation tasks. Our
results show that SD-SimMIM outperforms other self-supervised learning methods.
Furthermore, we augment and improve the annotation of an existing dataset of
panoramic X-rays. | Amani Almalki, Longin Jan Latecki | 2023-06-18T19:20:38Z | http://arxiv.org/abs/2306.10623v1 | # Enhanced Masked Image Modeling for Analysis of Dental Panoramic Radiographs
###### Abstract
The computer-assisted radiologic informative report has received increasing research attention to facilitate diagnosis and treatment planning for dental care providers. However, manual interpretation of dental images is limited, expensive, and time-consuming. Another barrier in dental imaging is the limited number of available images for training, which is a challenge in the era of deep learning. This study proposes a novel self-distillation (SD) enhanced self-supervised learning on top of the masked image modeling (SimMIM) Transformer, called SD-SimMIM, to improve the outcome with a limited number of dental radiographs. In addition to the prediction loss on masked patches, SD-SimMIM computes the self-distillation loss on the visible patches. We apply SD-SimMIM on dental panoramic X-rays for teeth numbering, detection of dental restorations and orthodontic appliances, and instance segmentation tasks. Our results show that SD-SimMIM outperforms other self-supervised learning methods. Furthermore, we augment and improve the annotation of an existing dataset of panoramic X-rays.
Amani Almalki and Longin Jan Latecki Department of Computer and Information Sciences, Temple University, Philadelphia, USA {amani.almalki,latecki}@temple.edu
Self-distillation, Self-supervised learning, Masked image modeling, Object detection, Instance segmentation
## 1 Introduction
Computer-assisted decisions are essential in dental practice to help dentists diagnose and plan treatments. Dental imaging is a valuable tool that facilitates diagnoses and treatment plans that would be impossible through clinical examination and patient history alone [1]. A dental panoramic X-ray is a two-dimensional radiograph that captures the patient's entire mouth from ear to ear in a single image, including the upper and lower jaws and the surrounding alveolar bone [2].
In dentistry, many teeth numbering systems provide a specific code for each tooth. In this study, we utilize the Fédération Dentaire Internationale (FDI) numbering system, which is internationally known among dental care providers. It is a two-digit code: the first digit labels each quadrant from 1 to 4 for permanent adult teeth, and the second digit is assigned to each tooth based on its location in the jaw, starting from the middle front tooth (number 1) and moving back to the third molar (number 8) [3].
Furthermore, dental restorations are used to restore the tooth's missing structure resulting from caries or trauma with full or partial coverage. Moreover, root canal fillings are utilized to fill the space of the root portion inside the tooth structure because of decay or other damage. In addition, orthodontic appliances apply force onto the teeth to be moved into the correct position; such appliances include but are not limited to bands, brackets, and retainers. The restorative materials and orthodontic appliances appear radiopaque in the X-rays and can be identified by dental practitioners [4].
Deep learning models are successful when trained with large amounts of data; however, only a very limited number of dental radiographs is available for training. To mitigate this problem, we propose a new combination of self-distillation and self-supervised learning for training a Swin Transformer [5] for dental panoramic X-ray analysis.
Recently, self-supervised learning methods based on masked image modeling (MIM), such as SimMIM [6], MAE [7] and UM-MAE [8], have proven effective for pre-training deep learning models such as Transformers [5, 9]. However, only SimMIM and UM-MAE are applicable to the Swin Transformer. The general idea of MIM methods is to mask some patches before they are fed into the Swin encoder and to predict the original patches, so that the model gains a deeper understanding of the images. The patch locations are important in dental panoramic X-rays for a predictable outcome: SimMIM keeps the patch locations known to both the encoder and decoder, while UM-MAE drops the location information, leaving it unknown to the encoder, which may induce inaccuracy. Therefore, SimMIM pre-training is selected in this study.
Inspired by [6, 7, 10], we hypothesize that the Swin encoder can be improved by transferring knowledge from the decoded visible patches to their encoded peers through self-distillation. We believe that the visible patches in the decoder contain more knowledge than those in the encoder. Moreover, similar to [6, 7] and unlike [10], we find that predicting the masked area only outperforms predicting all image pixels.
The proposed SD-SimMIM is trained on the same dataset as the downstream tasks, excluding the test dataset. We apply SD-SimMIM to dental panoramic X-rays for teeth numbering, detection of dental restorations and orthodontic appliances, and instance segmentation tasks. It is shown that SD-SimMIM performs better than other self-supervised learning methods.
Although previous studies investigated teeth numbering [11] and segmentation of dental restorations [12], there is no comprehensive dataset that also covers orthodontic appliance segmentation. We believe that including segmentation of orthodontic appliances increases the complexity of the computer vision problem because of the class quantity and class imbalance. Therefore, we augment the existing dataset introduced in [11] under dental expert supervision. We further expand the dataset by developing annotations for orthodontic appliances, including bands, brackets, and retainers. The labeling process led to a unique high-quality augmented dataset. Our data will be available, upon request, under the name **Dental** analysis (Dentalysis) annotations. Our main contributions are twofold:
* We introduce SD-SimMIM, a self-distillation enhanced SimMIM. It aims to boost the feature representation on top of SimMIM to alleviate the demands on large data for dental panoramic radiographs, and further help downstream tasks.
* The augmented dataset increases performance, while added labeling of orthodontics appliances extends the horizon of possible dental applications.
## 2 Methods
Fig. 1 illustrates our SD-SimMIM framework. It includes two modules, masked image modeling (MIM) and visible image modeling (VIM). MIM generates self-supervised learning on unlabeled data by masking some image patches, while VIM imposes self-distillation constraints on visible patches for better and more powerful encoder learning. Hence, VIM enhances the original SimMIM, particularly for dental panoramic radiographs.
### SimMIM
SimMIM framework includes four components: patchifying and masking, encoder, decoder, and prediction target.
**Patchifying and masking** design how to select the area to mask and how to implement masking of the selected area. Patchifying first divides the input image \(x\) into \(N\) patches and then flattens each patch to a token (a one-dimensional vector of visual features) of length \(D\), so that all patches are represented as \(v_{all}\in\mathbb{R}^{N\times D}\). Next, masking randomly divides the patches into two sets with respect to a masking ratio \(M\), namely \(v_{all}\in\mathbb{R}^{N\times D}\to v_{vis}\in\mathbb{R}^{N^{\prime}\times D},v_{M}\in\mathbb{R}^{\tilde{N}^{\prime}\times D}\), where \(N^{\prime}=N\times(1-M)\) and \(\tilde{N}^{\prime}=N\times M\). \(v_{all}\) is the input to the encoder and \(v_{M}\) are the labels.
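As a concrete illustration, the following is a minimal sketch of this patchify-and-mask step; the function name, tensor layout, and the use of a learnable mask token are our illustrative assumptions, not the paper's actual code:

```python
import torch

def patchify_and_mask(x, patch_size=16, mask_ratio=0.2, mask_token=None):
    """Split images into flattened patch tokens and randomly mask a fraction M of them."""
    B, C, H, W = x.shape
    p = patch_size
    # (B, N, D): N = (H/p)*(W/p) patches, each flattened to D = C*p*p features.
    patches = x.unfold(2, p, p).unfold(3, p, p)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
    N, D = patches.shape[1], patches.shape[2]
    n_masked = int(N * mask_ratio)                       # Ñ' = N * M
    idx = torch.rand(B, N).argsort(dim=1)                # a random permutation per image
    masked_idx = idx[:, :n_masked]
    v_M = torch.gather(patches, 1, masked_idx.unsqueeze(-1).expand(-1, -1, D))
    # SimMIM keeps all N tokens and their positions; masked positions are replaced
    # (here, by an assumed learnable mask token), so the encoder sees the full grid.
    v_all = patches.clone()
    if mask_token is not None:
        v_all.scatter_(1, masked_idx.unsqueeze(-1).expand(-1, -1, D),
                       mask_token.view(1, 1, -1).expand(B, n_masked, D).contiguous())
    return v_all, v_M, masked_idx
```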
**Encoder** takes \(v_{all}\) as input and extracts latent features from the visible patches. First, it maps the \(D\)-dimensional tokens to \(D^{\prime}\) dimensions with a linear projection; these patch tokens are then processed via Swin Transformer blocks to obtain latent representation vectors of the patches \(z_{vis}\in\mathbb{R}^{N^{\prime}\times D^{\prime}}\) and masked tokens \(z_{M}\in\mathbb{R}^{\tilde{N}^{\prime}\times D^{\prime}}\).
**Decoder** takes \(z_{all}\in\mathbb{R}^{N\times D^{\prime}}\) as input and learns a low-level representation from the visible patches for image reconstruction. The decoder output \(y_{all}\in\mathbb{R}^{N\times D^{\prime}}\) is then divided into \(y_{vis}\) and \(y_{M}\), the visible and masked tokens, respectively.
**Prediction target** defines the form of the original signals to predict. We take the original masked tokens after normalization, \(Y_{M}=Norm(v_{M})\), as our prediction target. The decoder applies a linear layer to align \(y_{M}\) with \(Y_{M}\), i.e. \(y_{M}\to y^{\prime}_{M}\). The \(L_{1}\) loss is computed between the predicted masked tokens \(y^{\prime}_{M}\) and the normalized original masked tokens \(Y_{M}\), as described in Eq. (1).
\[L_{1}=\ell_{1}(y^{\prime}_{M},Y_{M}),\quad y^{\prime}_{M},Y_{M}\in\mathbb{R}^{\tilde{N}^{\prime}\times D} \tag{1}\]
### Self-distillation
Knowledge distillation is the process of transferring knowledge from a large model to a smaller one [13]. Previous studies apply it to the vectors at various depths within the same network, either a convolutional neural network (CNN) [14] or a Vision Transformer (ViT) [10]. Knowledge is thus distilled from deep layers to shallow layers, augmenting the feature representation of the shallow layers. Considering this imbalance of knowledge, the knowledge contained in the visible tokens can be transferred from the decoder to the encoder through the same distillation paradigm. In particular, there are two types of latent representation vectors for visible tokens in SimMIM, i.e. \(z_{vis}\) outputted from the encoder and \(y_{vis}\) from the decoder. We treat \(z_{vis}\) as shallow features and \(y_{vis}\) as deeper features in the self-distillation framework [14]. We use a 3-layer MLP over these two vectors, resulting in probability distributions over a \(K\)-dimensional feature, denoted by \(q\) and \(p\), respectively. Each of them is normalized with a \(Softmax\) over the feature dimension. Thus, we learn
Figure 1: **Our SD-SimMIM framework.** Alongside the original SimMIM, we benefit from decoded visible patches (as the teacher) and transfer knowledge to their peers after encoding. (Best viewed in color)
to match these distributions by minimizing the cross-entropy loss as shown in Eq. (2).
\[\begin{split}& q=MLP(z_{vis}),\quad p=MLP(y_{vis})\\ & q^{\prime}=Softmax(q),\quad p^{\prime}=Softmax(p)\\ & L_{distill}=-p^{\prime}log(q^{\prime})\end{split} \tag{2}\]
The total loss is formulated as shown in Eq.(3).
\[L=\alpha L_{1}+(1-\alpha)L_{distill} \tag{3}\]
where \(\alpha\) is the empirically defined scaling factor (in this study, \(\alpha\) is equal to 0.2).
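To make Eqs. (1)-(3) concrete, here is a minimal sketch of the combined objective; the head modules, tensor names, and the choice to stop gradients through the teacher branch are our assumptions, not details taken from the paper:

```python
import torch
import torch.nn.functional as F

def sd_simmim_loss(y_pred_M, Y_M, z_vis, y_vis, head_q, head_p, alpha=0.2):
    """Total loss L = alpha * L1 + (1 - alpha) * L_distill, cf. Eqs. (1)-(3)."""
    # Reconstruction loss on the masked tokens only.
    l1 = F.l1_loss(y_pred_M, Y_M)
    # Self-distillation on the visible tokens: the encoder output z_vis is the
    # student, the decoder output y_vis the teacher; both pass through MLP heads.
    q_log = F.log_softmax(head_q(z_vis), dim=-1)   # student log-probabilities q'
    with torch.no_grad():                          # assumption: no gradient to the teacher
        p = F.softmax(head_p(y_vis), dim=-1)       # teacher probabilities p'
    distill = -(p * q_log).sum(dim=-1).mean()      # cross-entropy  -p' log q'
    return alpha * l1 + (1 - alpha) * distill
```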
## 3 Experiments
### Dataset
Detection, Numbering, and Segmentation (DNS) [11] is a dental panoramic X-ray dataset consisting of 543 annotated images with ground-truth segmentation labels, including numbering information based on the FDI teeth numbering system. Each image is 1991x1127 pixels. The dataset annotations from [12] do not contain any segmentation of orthodontic appliances. Therefore, we contribute to expanding the dataset by developing segmentations for orthodontic appliances and introducing three more classes, namely bands, brackets, and retainers. This process was carried out under the supervision of a dentist using the COCO-Annotator tool [15]. We attended weekly meetings where related issues and questions were discussed. In the end, the annotations were reviewed to ensure quality and avoid systematic and random errors. Fig. 2 presents samples of segmentation of orthodontic appliances. We believe this is the most inclusive dataset for segmenting teeth, dental restorations, and orthodontic appliances in dental panoramic radiographs. Our data will be available upon request, namely Dentalysis annotations.
### Evaluation metric
For all our experiments, we split the data into five folds, each containing about 20% of the images. One fold is fixed as the test set (111 images), and the other four folds (108 images each) compose the training and validation datasets in a cross-validation manner. This process is repeated five times. The evaluation metric we adopt is the Average Precision for object detection and instance segmentation models.
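One instantiation of this split protocol could look as follows (only the fold sizes are taken from the text; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)            # arbitrary seed for reproducibility
idx = rng.permutation(543)                # 543 annotated images in total
test_idx = idx[:111]                      # one fold fixed as the test set
folds = np.array_split(idx[111:], 4)      # four folds of 108 images each

for k in range(4):                        # 4-fold cross-validation on the rest
    val_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(4) if j != k])
    # train on train_idx, select on val_idx, report AP on test_idx
```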
### Implementation details
Our experiments are implemented based on the PyTorch [16] framework and trained with NVIDIA Tesla Volta V100 GPUs. In all experiments, the batch size equals the total number of training samples, which is 432. The input images are all resized to 800x600 pixels. We utilize the AdamW [17] optimizer in all experiments.
**Data augmentation.** We apply noise addition and horizontal flipping, which turns left teeth numbers into right teeth numbers and vice-versa.
**SD-SimMIM pre-training.** We follow a similar protocol to SimMIM [6] to train our SD-SimMIM. We use Swin-B [5] as the encoder and a lightweight decoder with a linear projection. The base learning rate is set to 8e-4, weight decay is 0.05, \(\beta\)1 = 0.9, \(\beta\)2 = 0.999, with a cosine learning rate scheduler. We use a random MIM with a patch size of 16x16 and a mask ratio of 20%. We apply the L2-normalization bottleneck [18] (dimension 256 for the bottleneck and \(K\) dimensions equals 4096) as the projection head in self-distillation. This model was pre-trained for 100 epochs with a warm-up for 10 epochs. The target image size is 800x600.
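These optimizer settings translate into PyTorch roughly as follows; `model` is a stand-in placeholder for the Swin-B encoder plus decoder, and the warm-up handling is a simplified assumption:

```python
import math
import torch

model = torch.nn.Linear(8, 8)   # placeholder for the Swin-B encoder + decoder

optimizer = torch.optim.AdamW(model.parameters(), lr=8e-4, weight_decay=0.05,
                              betas=(0.9, 0.999))

warmup_epochs, total_epochs = 10, 100
def lr_lambda(epoch):
    # Linear warm-up for 10 epochs, then cosine decay, per the settings above.
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * t))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```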
**Task fine-tuning.** We utilize single-scale training. The initial learning rate is 0.0001, and the weight decay is 0.05.
### Quantitative results
Table 1 shows the results of different methods on the dataset for teeth numbering, detection of dental restorations, and instance segmentation only. As a baseline (the first row, called Supervised), Swin-B [5] is trained on the dataset without self-supervised pre-training to demonstrate the improvement obtained by self-supervised learning. The original Swin-B was trained on the ImageNet dataset with 1000 classes, denoted as IN-1K. The CNN-based network PANet [11] reports a result that is worse than Swin-B; this can be explained by the difference in network capacity, since ResNet-50 is used as the backbone in PANet. As a comparison, the way SimMIM uses image reconstruction is clearly more suitable than UM-MAE for dental images. The reason may be attributed to the fact that the location of the patches is essential in dental radiographs for a predictable outcome: SimMIM keeps the location of the patches known to both the encoder and decoder, while UM-MAE drops the location information, which may induce inaccuracy. The proposed SD-SimMIM shows steady improvements over SimMIM and yields the best performance; hence, transferring decoder information to the encoder with self-distillation improves the outcomes of self-supervised learning. We also observe that, similar to [6, 7] and unlike [10], our results show that predicting the masked area only outperforms predicting all image pixels for both SimMIM and our SD-SimMIM.
Table 2 shows results after including the annotations of orthodontic appliances. The proposed SD-SimMIM method achieves the highest performance of 92.7% and 90.8% on detecting teeth, dental restorations and orthodontic appliances,
Figure 2: Samples of segmentation of orthodontic appliances: a) examples of bands (yellow arrows) and brackets (green arrows); b) a retainer (orange arrow). (Best viewed in color)
and instance segmentation, respectively. Again it is worth noting that the best performance is gained when computing the loss on the masked areas only.
### Qualitative results
To illustrate the effectiveness of adding self-distillation to simMIM, we provide some visualization examples. Firstly, we are curious about the results of image reconstruction. Fig. 3 presents two reconstruction examples using our SD-SimMIM. As shown, SD-SimMIM obtains a slightly better reconstruction than SimMIM. It proves that self-distillation reinforces the learning capability of the SimMIM encoder.
Secondly, Fig. 4 displays four different qualitative samples of improved performance when the Swin Transformer is pre-trained with SD-SimMIM for teeth numbering, detecting dental restorations, orthodontic appliances, and instance segmentation. Those improvements in detection and segmentation agree with the quantitative results in Section 3.4.
## 4 Conclusions
We propose SD-SimMIM, a novel self-distillation scheme that transfers knowledge from the decoder to the encoder to guide a more effective visual pre-training. The quantitative and qualitative results present the benefits of our SD-SimMIM, which is a promising tool for the analysis of dental radiographs. For future work, we will evaluate our SD-SimMIM on different downstream tasks such as detecting dental disease on dental bitewing radiographs.
## 5 Compliance with Ethical Standards
This research study was conducted retrospectively using human subject data made available in open access by [11]. Ethical approval was not required, as confirmed by the license attached to the open-access data.
## 6 Acknowledgments
We would like to express our deepest thanks to Dr. Abdulrahman Almalki, a dental expert from the University of Pennsylvania, for his valuable discussions related to dentistry. We would also like to thank Dr. Luciano Oliveira and his group from Ivisionlab at the Federal University of Bahia for sharing the dataset used in this study.
Table 1: Results of teeth numbering, detection of dental restorations, and instance segmentation only. * denotes that the \(L_{1}\) loss is computed on the whole image. \(AP^{box}\) and \(AP^{mask}\) indicate Average Precision for object detection and instance segmentation, respectively.

| Initialization | Backbone | Pre-train Data | \(AP^{box}\) | \(AP^{mask}\) |
| --- | --- | --- | --- | --- |
| Supervised | Swin-B | IN-1K w/ Labels | 80.3 | 79.2 |
| PANet [11] | ResNet-50 | IN-1K w/ Labels | 76.8 | 75.1 |
| UM-MAE [12] | Swin-B | IN-1K | 88.3 | 85.7 |
| SimMIM* | Swin-B | IN-1K | 89.9 | 88.5 |
| SimMIM [12] | Swin-B | IN-1K | 90.4 | 88.9 |
| SD-SimMIM* | Swin-B | IN-1K | 90.7 | 89.6 |
| SD-SimMIM | Swin-B | IN-1K | **92.4** | **90.2** |
Table 2: Results after including orthodontic appliances. * denotes that the \(L_{1}\) loss is computed on the whole image.

| Initialization | Backbone | Pre-train Data | \(AP^{box}\) | \(AP^{mask}\) |
| --- | --- | --- | --- | --- |
| Supervised | Swin-B | IN-1K w/ Labels | 81.9 | 80.1 |
| SimMIM* | Swin-B | IN-1K | 90.3 | 88.8 |
| SimMIM [12] | Swin-B | IN-1K | 90.8 | 89.4 |
| SD-SimMIM* | Swin-B | IN-1K | 91.2 | 90.0 |
| SD-SimMIM | Swin-B | IN-1K | **92.7** | **90.8** |
Figure 4: Qualitative results of detection and instance segmentation. Note that teeth detection and instance segmentation are missing (red arrows) when Swin Transformer is pre-trained with SimMIM compared to the ones produced by Swin Transformer pre-trained with SD-SimMIM architecture (green arrows). (Best viewed in color.)
Figure 3: Images reconstructed by SimMIM and SD-SimMIM. SD-SimMIM shows a clearly better reconstruction than SimMIM. The color boxes highlight their details. (Best viewed in color) |
2305.09622 | Prescribing nearly constant curvatures on balls | In this paper we address two boundary cases of the classical Kazdan-Warner
problem. More precisely, we consider the problem of prescribing the Gaussian
and boundary geodesic curvature on a disk of R^2, and the scalar and mean
curvature on a ball in higher dimensions, via a conformal change of the metric.
We deal with the case of negative interior curvature and positive boundary
curvature. Using a Ljapunov-Schmidt procedure, we obtain new existence results
when the prescribed functions are close to constants. | Luca Battaglia, Sergio Cruz-Blázquez, Angela Pistoia | 2023-05-16T17:20:19Z | http://arxiv.org/abs/2305.09622v2 | # Prescribing nearly constant curvatures on balls
###### Abstract.
In this paper we address two boundary cases of the classical Kazdan-Warner problem. More precisely, we consider the problem of prescribing the Gaussian and boundary geodesic curvature on a disk of \(\mathbb{R}^{2}\), and the scalar and mean curvature on a ball in higher dimensions, via a conformal change of the metric. We deal with the case of negative interior curvature and positive boundary curvature. Using a Ljapunov-Schmidt procedure, we obtain new existence results when the prescribed functions are close to constants.
Key words and phrases:Prescribed curvature, conformal metrics, Ljapunov-Schmidt construction 2020 Mathematics Subject Classification: Primary: 35J25. Secondary: 58J32 S.C. acknowledges financial support from the Spanish Ministry of Universities and Next Generation EU funds, through a _Margarita Salas_ grant from the University of Granada, by the FEDER-MINECO Grant PID2021-122122NB-I00 and by J. Andalucia (FQM-116). This work was carried out during his long visit to the University "Sapienza Universita di Roma", to which he is grateful. The three authors are partially supported by the group GNAMPA of the Istituto Nazionale di Alta Matematica (INdAM).
## 1. Introduction

A classical question in conformal geometry, going back to the work of Kazdan and Warner, is whether a given function can be realized as the curvature of a metric within a prescribed conformal class. On a compact surface \(M\) with boundary, writing the conformal metric as \(\tilde{g}=e^{u}g\), prescribing the Gaussian curvature \(K\) in the interior and the geodesic curvature \(H\) on the boundary amounts to solving

\[\left\{\begin{array}{ll}-\Delta_{g}u+2k_{g}=2Ke^{u}&\text{in}\quad M,\\ \partial_{\nu}u+2h_{g}=2He^{\frac{u}{2}}&\text{on}\quad\partial M,\end{array}\right. \tag{1.1}\]

where \(k_{g}\) and \(h_{g}\) denote the Gaussian curvature and the geodesic curvature of the background metric \(g\). On the unit disk endowed with the Euclidean metric, (1.1) reduces to

\[\left\{\begin{array}{ll}-\Delta u=2Ke^{u}&\text{in}\quad\mathbb{B}^{2},\\ \partial_{\nu}u+2=2He^{\frac{u}{2}}&\text{on}\quad\mathbb{S}^{1}.\end{array}\right. \tag{1.2}\]

We deal with the case of negative interior curvature \(K\) and positive boundary curvature \(H\), for which the relevant quantity is the function \(\mathfrak{D}_{2}:\mathbb{S}^{1}\to\mathbb{R}\) given by

\[\mathfrak{D}_{2}=\frac{H}{\sqrt{-K}}. \tag{1.3}\]

In higher dimensions \(n\geq 3\), writing \(\tilde{g}=u^{\frac{4}{n-2}}g\), the analogous problem of prescribing the scalar curvature \(K\) and the boundary mean curvature \(H\) takes the form

\[\left\{\begin{array}{ll}-\frac{4(n-1)}{n-2}\Delta_{g}u+k_{g}u=Ku^{\frac{n+2}{n-2}}&\text{in}\quad M,\\ \frac{2}{n-2}\partial_{\nu}u+h_{g}u=Hu^{\frac{n}{n-2}}&\text{on}\quad\partial M,\end{array}\right. \tag{1.4}\]

where now \(k_{g}\) and \(h_{g}\) are suitable normalizations of the scalar curvature and of the boundary mean curvature of \(g\).
In the literature we can find many partial results for this equation. The case of prescribing a scalar flat metric with constant boundary mean curvature is known as the Escobar problem, in strong analogy with the Yamabe problem. Its study was initiated by Escobar in [25, 26], with later contributions in [41, 42, 2, 43]. Different settings with constant curvatures are considered in [30, 31, 11, 16, 27, 1]. Some results are available for the case of nonconstant functions when one of them is equal to zero. Existence results for the scalar flat problem are given in [1, 12, 24, 45], while the works [7, 8, 36] concern the case with minimal boundaries.
On the other hand, the problem with nonconstant functions \(K\) and \(H\) has received comparatively little study. In this regard, we highlight [3], which contains perturbative results about nearly constant positive curvature functions on the unit ball of \(\mathbb{R}^{n}\).
The case of nonconstant \(K>0\) and \(H\) of arbitrary sign was also considered in [22] in the half sphere of \(\mathbb{R}^{3}\), and a blow-up analysis was carried out. As for negative \(K\), in [15] the authors study the equation (1.4) with \(K<0\) and \(H<0\) by means of a geometric flow, in the spirit of [10], but solutions are obtained up to Lagrange multipliers. Finally, in the recent work [19] the case with nonconstant functions \(K<0\) and \(H\) of arbitrary sign is treated on manifolds of nonpositive Yamabe invariant. Similarly to the two dimensional case, it is shown that the nature of the problem changes greatly depending on whether the function \(\mathfrak{D}_{n}:\partial M\to\mathbb{R}\) given by
\[\mathfrak{D}_{n}=\sqrt{n(n-1)}\frac{H}{\sqrt{-K}} \tag{1.5}\]
is less than one over the entire boundary or not. When \(\mathfrak{D}_{n}<1\) the energy functional becomes coercive and a global minimizer can be found. However, if \(\mathfrak{D}_{n}\geq 1\) somewhere on \(\partial M\), a min-max argument and a careful blow-up analysis are needed to recover existence of solutions, although only in dimension three.
In this paper, we will focus on the following perturbative version of (1.2) and (1.4):
\[\left\{\begin{array}{ll}-\Delta u=-2(1+\varepsilon K(x))e^{u}&\text{in } \mathbb{B}^{2},\\ \partial_{\nu}u+2=2\mathfrak{D}_{2}(1+\varepsilon H(x))e^{\frac{u}{2}}&\text {on }\mathbb{S}^{1},\end{array}\right.\] ( \[P_{\varepsilon}^{2}\] )
and if \(n\geq 3\)
\[\left\{\begin{array}{ll}-\frac{4(n-1)}{n-2}\Delta u=-(1+\varepsilon K)u^{ \frac{n+2}{n-2}}&\text{in }\quad\mathbb{B}^{n},\\ \frac{2}{n-2}\partial_{\nu}u+u=\frac{\mathfrak{D}_{n}}{\sqrt{n(n-1)}}(1+ \varepsilon H)u^{\frac{n}{n-2}}&\text{on }\quad\mathbb{S}^{n-1},\end{array}\right.\] ( \[P_{\varepsilon}^{n}\] )
where \(\mathfrak{D}_{n}>1\) is defined in (1.3) and (1.5), \(K:\mathbb{B}^{n}\to\mathbb{R}\) and \(H:\mathbb{S}^{n-1}\to\mathbb{R}\) are smooth with bounded derivatives, and the parameter \(\varepsilon\in\mathbb{R}\) is small.
Our main result for problem (\(P_{\varepsilon}^{2}\)) reads as follows:
**Theorem 1.1**.: _Assume \(\mathfrak{D}_{2}\neq\frac{2}{\sqrt{3}}\), let \(\psi:\mathbb{S}^{1}\to\mathbb{R}\) be defined by_
\[\psi(\xi):=\frac{2\pi}{\sqrt{\mathfrak{D}_{2}^{2}-1}}\left(\left(\mathfrak{D} _{2}-\sqrt{\mathfrak{D}_{2}^{2}-1}\right)K(\xi)-2\mathfrak{D}_{2}H(\xi)\right)\]
_and \(\Phi_{1}:\mathbb{S}^{1}\to\mathbb{R}\) be defined by_
\[\Phi_{1}(\xi):=\left(\mathfrak{D}_{2}-\sqrt{\mathfrak{D}_{2}^{2}-1}\right) \partial_{\nu}K(\xi)-2\mathfrak{D}_{2}(-\Delta)^{\frac{1}{2}}H(\xi)\]
_and \(\Phi_{m}\) be defined as in Definition 2.1. If one of the following holds true:_
1. \(\Phi_{1}(\xi)<0\) _at any global maximum_ \(\xi\) _of_ \(\psi\)_;_
2. \(\Phi_{1}(\xi)>0\) _at any global minimum_ \(\xi\) _of_ \(\psi\)_;_
3. \(\Phi_{1}(\xi)\neq 0\) _at any critical point_ \(\xi\) _of_ \(\psi\)_,_ \(\psi\) _is Morse and_ \[\sum_{\{\xi:\nabla\psi(\xi)=0,\,\Phi_{1}(\xi)<0\}}(-1)^{\operatorname{ind}_{ \xi}\nabla\psi}\neq 1;\]
4. \(\Phi_{1}(\xi)=\cdots=\Phi_{m-1}(\xi)=0\neq\Phi_{m}(\xi)\) _at any critical point_ \(\xi\) _of_ \(\psi\) _for some_ \(m\geq 2\)_,_ \(\psi\) _is Morse and_ \[\sum_{\{\xi:\nabla\psi(\xi)=0,\,\Phi_{m}(\xi)<0\}}(-1)^{\operatorname{ind}_{ \xi}\nabla\psi}\neq 1;\]
_then, Problem \((P_{\varepsilon}^{2})\) has a solution for \(|\varepsilon|\) small enough._
Our main result for problem \((P_{\varepsilon}^{n})\) reads as follows:
**Theorem 1.2**.: _Let \(\psi:\mathbb{S}^{n-1}\to\mathbb{R}\) be defined by_
\[\psi(\xi):=\operatorname{a}(\mathfrak{D}_{n})K(\xi)-\operatorname{b}( \mathfrak{D}_{n})H(\xi),\]
_with \(\operatorname{a}(\mathfrak{D}_{n}),\operatorname{b}(\mathfrak{D}_{n})\) as in (2.3), \(\Phi_{1}:\mathbb{S}^{n-1}\to\mathbb{R}\) be defined by_
\[\Phi_{1}(\xi):=\partial_{\nu}K(\xi),\]
_and \(\Phi_{m}:\mathbb{S}^{n-1}\to\mathbb{R}\) be defined as in Definition 2.1. If one of the following holds true:_
1. \(\Phi_{1}(\xi)>0\) _at any global maximum_ \(\xi\) _of_ \(\psi\)_;_
2. \(\Phi_{1}(\xi)<0\) _at any global minimum_ \(\xi\) _of_ \(\psi\)_;_
3. \(\Phi_{1}(\xi)\neq 0\) _at any critical point_ \(\xi\) _of_ \(\psi\)_,_ \(\psi\) _is Morse and_ \[\sum_{\{\xi:\nabla\psi(\xi)=0,\,\Phi_{1}(\xi)<0\}}(-1)^{\operatorname{ind}_{ \xi}\nabla\psi}\neq 1;\]
4. \(\Phi_{1}(\xi)=\cdots=\Phi_{m-1}(\xi)=0\neq\Phi_{m}(\xi)\) _at any critical point_ \(\xi\) _of_ \(\psi\) _for some_ \(m\geq 2\)_,_ \(\psi\) _is Morse and_ \[\sum_{\{\xi:\nabla\psi(\xi)=0,\,\Phi_{m}(\xi)<0\}}(-1)^{\operatorname{ind}_{ \xi}\nabla\psi}\neq 1;\]
_then, Problem \((P_{\varepsilon}^{n})\) has a solution for \(|\varepsilon|\) small enough._
Problems \((P_{\varepsilon}^{2})\) and \((P_{\varepsilon}^{n})\) share many similarities, not only in their geometric meaning, but also from an analytic point of view.
In fact, they both have critical terms in the interior and in the boundary nonlinearities: exponential nonlinearities in \((P_{\varepsilon}^{2})\) are critical in view of the Moser-Trudinger inequalities, whereas in \((P_{\varepsilon}^{n})\) we have the critical Sobolev exponent and the critical trace exponent
\[\frac{n+2}{n-2}=2^{*}-1,\qquad\frac{n}{n-2}=2^{\sharp}-1. \tag{1.6}\]
Moreover, since we are prescribing a negative curvature in the interior and a positive curvature in the boundary, the two nonlinear terms have different sign and are therefore in _competition_.
Theorem 1.1 seems to be the first result of prescribing both nearly constant curvatures on a disk. Similar results were recently obtained in [5] in the case of zero curvature in the interior and in [28] for the sphere. Theorem 1.2 is the counterpart of the result obtained in [3], where the authors perturb the positive constant curvature on the unit ball of \(\mathbb{R}^{n}\).
We also provide higher-order expansions of the reduced energy functional, which permit us to consider some cases of degenerate critical points as well. This is the case when the functionals \(\Phi_{m}(\xi)\) play a role in Theorems 1.1 and 1.2.
Such expansions require sharper estimates (see Proposition 3.4 and Appendix 6), in which both derivatives of \(K,H\) and nonlocal terms appear. In particular, nonlocal terms are present only if the order of the expansion is high enough, depending on the dimension. At first order, we only get the fractional Laplacian in the two-dimensional case, which is why \(\Phi_{1}(\xi)\) is defined differently in Theorems 1.1 and 1.2.
The definition of \(\Phi_{m}\) for \(m\geq 2\) is rather involved and it is therefore postponed to Definition 2.1.
Finally, we point out that, in Theorem 1.1, \(\Phi_{1}\) can be seen as the normal derivative (up to a constant) of the functional \(\psi\), which can be naturally extended from the circle to the closed disk. More precisely, for \(\xi\in\overline{\mathbb{B}^{2}}\), we set
\[\Psi(\xi):=\frac{2\pi}{\sqrt{\mathfrak{D}_{2}^{2}-1}}\left(\mathfrak{D}_{2}- \sqrt{\mathfrak{D}_{2}^{2}-1}\right)K(\xi)-2\mathfrak{D}_{2}\hat{H}(\xi)\qquad \text{where}\quad\left\{\begin{array}{ll}\Delta\hat{H}=0&\text{in }\mathbb{B}^{2}\\ \hat{H}=H&\text{in }\mathbb{S}^{1}\end{array}\right.\;;\]
therefore, in view of the Dirichlet-to-Neumann characterization of the fractional Laplacian, we have \(\Phi_{1}(\xi)=\partial_{\nu}\Psi(\xi)\) for any \(\xi\in\mathbb{S}^{1}\).
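This characterization can be checked mode by mode: for \(H(\theta)=\cos(k\theta)\), the harmonic extension is \(\hat{H}=r^{k}\cos(k\theta)\) and its outer normal derivative on \(\mathbb{S}^{1}\) is \(k\cos(k\theta)=(-\Delta)^{\frac{1}{2}}H\). A short symbolic verification of this (the variable names are ours):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
k = sp.symbols('k', positive=True, integer=True)

H_hat = r**k * sp.cos(k*theta)   # candidate harmonic extension of H = cos(k*theta)

# The Laplacian in polar coordinates vanishes, so H_hat is harmonic in the disk.
lap = sp.diff(H_hat, r, 2) + sp.diff(H_hat, r)/r + sp.diff(H_hat, theta, 2)/r**2
print(sp.simplify(lap))                  # 0

# The outer normal derivative on the unit circle equals (-Δ)^{1/2} H = k cos(kθ).
print(sp.diff(H_hat, r).subs(r, 1))      # k*cos(k*theta)
```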
Quite interestingly, this fact has no higher-dimensional counterpart in Theorem 1.2.

The assumption \(\mathfrak{D}_{2}\neq\frac{2}{\sqrt{3}}\) (i.e. \(\alpha_{\mathfrak{D}}\neq 0\) in Proposition 4.3) allows us to apply the degree argument to a function which also depends on the extra parameter that only appears in the 2D case (see (2) of Proposition 4.3). It would be interesting to understand whether this is a mere technical assumption or not, and also whether it has some geometrical meaning.
The plan of the paper is as follows.
In Section 2 we introduce some notation and preliminaries which we will use in the following; in Section 3 we study the energy functional associated to the system and show some of its crucial properties; in Section 4 we apply the Ljapunov-Schmidt finite dimensional reduction; in Section 5 we study the existence of critical points to the reduced energy functional; finally, in the Appendix we prove some crucial asymptotic estimates.
## 2. Notation and Preliminaries
We remind that \(\mathbb{B}^{n}\) will denote the unit ball of \(\mathbb{R}^{n}\), for \(n\geq 2\). We consider the well-known inversion map \(\mathscr{I}:\mathbb{R}^{n}_{+}\to\mathbb{B}^{n}\) defined by
\[\mathscr{I}(\bar{x},x_{n})=\left(\frac{2\bar{x}}{\left|\bar{x}\right|^{2}+(x_{ n}+1)^{2}},\frac{1-\left|\bar{x}\right|^{2}-x_{n}^{2}}{\left|\bar{x}\right|^{2}+(x_ {n}+1)^{2}}\right),\quad(\bar{x},x_{n})\in\mathbb{R}^{n-1}\times\mathbb{R}_{+}. \tag{2.1}\]
Straightforward computations show that \(\mathscr{I}\circ\mathscr{I}=\mathrm{Id}\), therefore \(\mathscr{I}^{-1}\) has the same expression.
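This involution property is also easy to confirm numerically; a quick sanity check (the sampled points and names are arbitrary):

```python
import numpy as np

def inversion(x):
    """The map (2.1): from the closed upper half-space to the closed unit ball."""
    xbar, xn = x[:-1], x[-1]
    denom = np.dot(xbar, xbar) + (xn + 1.0)**2
    return np.append(2.0*xbar/denom, (1.0 - np.dot(xbar, xbar) - xn**2)/denom)

rng = np.random.default_rng(1)
for _ in range(5):
    x = np.append(rng.normal(size=2), rng.uniform(0, 3))   # a point of R^3_+
    assert np.allclose(inversion(inversion(x)), x)         # I(I(x)) = x
```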
We point out that, up to the sign of the last coordinate, \(\mathscr{I}\) extends the stereographic projection from \(\partial\mathbb{R}^{n}_{+}\) to \(\mathbb{S}^{n-1}\) and, in dimension \(2\), it coincides with the Riemann map from the half-plane to the disk. In particular, \(\mathscr{I}\) is a conformal map and satisfies
\[\mathscr{I}^{\star}g_{\mathbb{B}^{n}}=\varrho\left|dx\right|^{2},\quad\varrho (\bar{x},x_{n})=\frac{4}{\left(\left|\bar{x}\right|^{2}+(x_{n}+1)^{2}\right)^ {2}}.\]
For convenience, we define \(\rho:\mathbb{R}^{n}_{+}\to\mathbb{R}_{+}\) by
\[\rho=\left\{\begin{array}{ll}\varrho^{\frac{n-2}{4}}&\text{if}\quad n\geq 3,\\ \log\varrho&\text{if}\quad n=2.\end{array}\right.\]
We point out that \(\rho\) satisfies (1.1) or (1.4) for some particular choices of the curvatures. More precisely, in dimension \(n=2\)
\[\left\{\begin{array}{ll}-\Delta\rho=0&\text{in}\quad\mathbb{R}^{2}_{+}\\ \partial_{\nu}\rho=2e^{\frac{\rho}{2}}&\text{on}\quad\partial\mathbb{R}^{2}_ {+},\end{array}\right. \tag{2.2}\]
while in dimensions \(n\geq 3\)
\[\left\{\begin{array}{ll}-\frac{4(n-1)}{n-2}\Delta\rho=0&\text{in}\ \mathbb{R}^{n}_{+},\\ \frac{2}{n-2}\partial_{\nu}\rho=\rho^{\frac{n}{n-2}}&\text{on}\ \partial\mathbb{R}^{n}_{+}.\end{array}\right.\]
For the reader's convenience, we collect here all the constants that appear in our computations. In the following we agree that \(\mathfrak{D}=\mathfrak{D}_{n}\).
**Definition 2.1**.: _Let \(n\geq 2\) and \(\mathfrak{D}>1\). We define_
\[\Lambda_{n}= \left\{\begin{array}{ll}4&\text{if}\quad n=2\\ 4n(n-1)&\text{if}\quad n\geq 3\end{array}\right.,\] \[\alpha_{n}= \left\{\begin{array}{ll}2&\text{if}\quad n=2\\ \frac{(n-2)^{2}}{8n(n-1)}&\text{if}\quad n\geq 3\end{array}\right.,\] \[\beta_{n}= \left\{\begin{array}{ll}2&\text{if}\quad n=2\\ 2\sqrt{\frac{n}{n-1}}&\text{if}\quad n\geq 3\end{array}\right.,\] \[a_{n,i,j}= \Lambda_{n}^{\frac{n}{2}}\int_{\mathbb{R}^{n}_{+}}\frac{|\bar{y}| ^{2i}y_{n}^{j}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1 \right)^{n}}d\bar{y}dy_{n},\]
\[b_{n,j}= \Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\frac{\mathfrak{D}}{\left(\mathfrak{D}^{2}-1\right)^{\frac{n-2j-1}{2}}}\int_{\partial\mathbb{R}_{+}^{n}}\frac{|\bar{y}|^{2j}}{\left(\left|\bar{y}\right|^{2}+1\right)^{n-1}}d\bar{y},\] \[c_{n,m}= \Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\left(\mathfrak{D}^{2}-1\right)^{\frac{m-n+1}{2}}\int_{\partial\mathbb{R}_{+}^{n}}|\bar{y}|^{m}\left(\frac{1}{\left(|\bar{y}|^{2}+1\right)^{n-1}}-\sum_{j=0}^{\frac{m-n}{2}}(-1)^{j}\frac{(n+j-2)!}{j!(n-2)!}\frac{1}{|\bar{y}|^{2(n+j-1)}}\right)d\bar{y},\] \[d_{n,j}= \Lambda_{n}^{\frac{n}{2}}\omega_{n-2}\int_{0}^{\pi}\sin^{j}t\,dt,\] \[e_{n}= \Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\omega_{n-2},\] \[A_{n,i,j}= \frac{1}{(j-2i)!(2i)!}\frac{(n-3)!!}{(2i)!!(n+2i-3)!!},\] \[B_{n,j}= \frac{(n-3)!!}{(2j)!(2j)!!(n+2j-3)!!},\] \[C_{n,m,i}= \frac{(m-i-1)!}{(n-1)!i!(m-n-2i)!}(\mathfrak{D}^{2}-1)^{i}\mathfrak{D}^{m-n-2i},\] \[D_{n,m}= \frac{\left(\frac{n+m+1}{2}-2\right)!}{\left(\frac{m-n+1}{2}\right)!(n-2)!}\left(\mathfrak{D}^{2}-1\right)^{\frac{m-n+1}{2}}.\]
\[\Phi_{n}(\xi):=(1+\xi_{n})^{n-1}\left((-1)^{n}\sum_{i=0}^{\lfloor\frac{n-1}{2} \rfloor}a_{n,i,n-1}A_{n,i,n-1}\partial_{\nu}^{n-1-2i}\Delta_{\tau}^{i}K(\xi)+ \mathsf{J}_{n,n-1}(H)\right).\]
_For \(m\geq n\) we set:_
\[\Phi_{2m-n+1}(\xi):= (1+\xi_{n})^{m}\left(\sum_{i=0}^{\lfloor\frac{m-n}{2}\rfloor}\sum _{j=0}^{\lfloor\frac{m}{2}\rfloor}(-1)^{m-n-i-1}A_{n,j,m}C_{n,m,i}d_{2m-n-2i-2 j}\partial_{\nu}^{m-2j}\Delta_{\tau}^{j}K(\xi)\right.\] \[+\left.\left\{\begin{array}{ll}0&\mbox{if $n$ even or $m$ odd}\\ (-1)^{\frac{m-n+1}{2}}e_{n}B_{n,\frac{m}{2}}D_{n,m}\Delta^{\frac{m}{2}}H(\xi) &\mbox{if $n$ odd and $m$ even}\end{array}\right.\right)\] \[\Phi_{2m-n+2}(\xi):= (1+\xi_{n})^{m}\left(\sum_{i=0}^{\lfloor\frac{m-n}{2}\rfloor}(- 1)^{m-n-i-1}C_{n,m,i}\mathsf{I}_{n,m,i}(K)\right.\] \[+\left.\left\{\begin{array}{ll}c_{n,m}B_{n,\frac{m}{2}}\Delta^{ \frac{m}{2}}H(\xi)&\mbox{if $n$ and $m$ even}\\ 0&\mbox{if $n$ and $m$ odd}\\ (-1)^{\frac{m-n+1}{2}}D_{n,m}\mathsf{J}_{n,m}(H)&\mbox{otherwise}\end{array} \right.\right)\]
The symbol \(a\lesssim b\) will be used to mean \(a\leq cb\) with \(c\) independent on the quantities. We denote as \(\partial_{\nu}\) the (outer) normal derivative of a function at a point on \(\mathbb{S}^{n-1}\) and as \(\Delta_{\tau}\) the tangential Laplacian.
For a multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{n}\) we denote:
\[|\alpha|:=\alpha_{1}+\cdots+\alpha_{n};\qquad\qquad x^{\alpha}:=x_{1}^{\alpha _{1}}\cdot\cdots\cdot x_{n}^{\alpha_{n}};\qquad\qquad\partial_{x_{\alpha}}:= \partial_{x_{1}}^{\alpha_{1}}\ldots\partial_{x_{n}}^{\alpha_{n}}.\]
### Conformal Metrics
Throughout this article we will use the conformal equivalence between \(\mathbb{R}^{n}_{+}\) and \(\mathbb{B}^{n}\) via the inversion map (2.1), often without explicitly specifying it. Therefore, it is important to keep in mind the conformal properties of the conformal Laplacian and of the conformal boundary operator.
If \(n\geq 3\) and \(\tilde{g}=\rho^{\frac{4}{n-2}}g\) is a conformal metric, then the conformal Laplacian and conformal boundary operators, defined by
\[L_{g}=-\frac{4(n-1)}{n-2}\Delta_{g}+k_{g},\quad B_{g}=\frac{2}{n-2}\partial_{ \nu}+h_{g},\]
are conformally invariant in the following sense:
\[L_{g}\varphi=\rho^{\frac{n+2}{n-2}}L_{\tilde{g}}\left(\frac{\varphi}{\rho} \right),\quad B_{g}\varphi=\rho^{\frac{n}{n-2}}B_{\tilde{g}}\left(\frac{ \varphi}{\rho}\right). \tag{2.4}\]
If \(n=2\), then the Laplace-Beltrami operator and the normal derivative satisfy the following conformal property: if \(\tilde{g}=e^{\rho}g\) is a conformal metric, then
\[\Delta_{e^{\rho}g}=e^{-\rho}\Delta_{g},\quad\nabla_{e^{\rho}g}\cdot\eta_{e^{ \rho}g}=e^{-\frac{\rho}{2}}\nabla\cdot\eta_{g}.\]
The following result establishes the conformal invariance of a certain geometric quantity that will be very much related to our energy functionals.
**Lemma 2.2**.: _Let \((M^{n},g)\) be a compact Riemannian manifold of dimension \(n\geq 3\) and \(\tilde{g}=\varphi^{\frac{4}{n-2}}g\) a conformal metric with \(\varphi\) smooth and positive. If we set \(\hat{f}=f\varphi^{-1}\), then_
\[\frac{4(n-1)}{n-2}\int_{M}\left(\nabla_{\tilde{g}}\hat{u}\cdot \nabla_{\tilde{g}}\hat{v}\right)dV_{\tilde{g}}+\int_{M}k_{\tilde{g}}\hat{u} \hat{v}\,dV_{\tilde{g}}+2(n-1)\int_{\partial M}h_{\tilde{g}}\hat{u}\hat{v}\,d \sigma_{\tilde{g}} \tag{2.5}\] \[= \frac{4(n-1)}{n-2}\int_{M}\left(\nabla_{g}u\cdot\nabla_{g}v \right)dV_{g}+\int_{M}k_{g}uv\,dV_{g}+2(n-1)\int_{\partial M}h_{g}uv\,d\sigma_ {g}.\]
Proof.: We will use the following basic identities:
\[dV_{\tilde{g}}=\varphi^{2^{*}}dV_{g},d\sigma_{\tilde{g}}=\varphi^{2^{\sharp}} d\sigma_{g},\nabla_{\tilde{g}}=\varphi^{-\frac{4}{n-2}}\nabla_{g},\]
where \(2^{*},2^{\sharp}\) are as in (1.6) and the relation between \(k_{\tilde{g}}\), \(k_{g}\), \(h_{\tilde{g}}\) and \(h_{g}\) given by (1.4). The first term in the left-hand side of (2.5) can be decomposed using the previous identities:
\[\int_{M}\left(\nabla_{\tilde{g}}\hat{u}\cdot\nabla_{\tilde{g}} \hat{v}\right)dV_{\tilde{g}}=\int_{M}\varphi^{2}\left(\nabla_{g}\hat{u}\cdot \nabla_{g}\hat{v}\right)dV_{g}\] \[= \int_{M}\left(\nabla_{g}u\cdot\nabla_{g}v\right)dV_{g}-\int_{M} \left(\hat{v}\nabla_{g}\varphi\cdot\nabla_{g}u+\hat{u}\nabla_{g}\varphi\cdot \nabla_{g}v-\hat{u}\hat{v}\left|\nabla_{g}\varphi\right|^{2}\right)dV_{g}. \tag{2.6}\]
On the other hand, integrating by parts on \(M\) and using (1.4):
\[\int_{M}k_{\tilde{g}}\hat{u}\hat{v}\,dV_{\tilde{g}}=\int_{M}\hat{ u}\hat{v}\left(k_{g}\varphi^{2}-\frac{4(n-1)}{n-2}(\Delta_{g}\varphi)\varphi \right)dV_{g}\] \[= \int_{M}k_{g}uv\,dV_{g}-2(n-1)\int_{\partial M}h_{\tilde{g}}\hat{ u}\hat{v}\varphi^{2^{\sharp}}d\sigma_{g}+2(n-1)\int_{\partial M}h_{g}uv\,d \sigma_{g}\] \[+ \frac{4(n-1)}{n-2}\int_{M}\left(\hat{v}\nabla_{g}\varphi\cdot \nabla_{g}u+\hat{u}\nabla_{g}\varphi\cdot\nabla_{g}v-\hat{u}\hat{v}\left| \nabla_{g}\varphi\right|^{2}\right)dV_{g}. \tag{2.7}\]
Finally, (2.5) can be obtained combining (2.6) and (2.7).
### Solutions of the unperturbed problems
By means of the inversion map and the classification results available for \(\mathbb{R}^{n}_{+}\), we can give an \(n\)-dimensional family of solutions for the problems \((P^{2}_{\varepsilon})\) and \((P^{n}_{\varepsilon})\) with \(\varepsilon=0\).
First, we consider the problem in \(\mathbb{B}^{2}\):
\[\left\{\begin{array}{ll}-\Delta u=-2e^{u}&\text{in}\quad\mathbb{B}^{2}\\ \partial_{\nu}u+2=2\mathfrak{D}e^{\frac{u}{2}}&\text{on}\quad\mathbb{S}^{1}. \end{array}\right.\] ( \[P^{2}_{0}\] )
By [46], a family of solutions of the problem in the half space
\[\left\{\begin{array}{ll}-\Delta u=-2e^{u}&\text{in}\quad\mathbb{R}^{2}_{+} \\ \partial_{\nu}u=2\mathfrak{D}e^{\frac{u}{2}}&\text{on}\quad\partial\mathbb{R}^{2 }_{+}.\end{array}\right.\]
is given by
\[U_{x_{0},\lambda}(x,y)=2\log\frac{2\lambda}{(x-x_{0})^{2}+(y+\mathfrak{D} \lambda)^{2}-\lambda^{2}},\]
for \(\lambda>0\) and \(x_{0}\in\mathbb{R}\). Let us call \(\hat{U}_{x_{0},\lambda}=(U_{x_{0},\lambda}-\rho)\circ\mathscr{I}^{-1}\). Taking into account equation (2.2) and the conformal properties of the Laplacian and normal derivative in
\(\mathbb{R}^{2}\), it is clear that
\[\left\{\begin{array}{ll}-\Delta\hat{U}=-2e^{\hat{U}}&\text{in}\quad\mathbb{B}^{2 }\\ \partial_{\nu}\hat{U}+2=2\mathfrak{D}e^{\frac{\hat{U}}{2}}&\text{on}\quad \mathbb{S}^{1}.\end{array}\right.\]
Therefore, a family of solutions for \((P_{0}^{2})\) is given by
\[\hat{U}_{x_{0},\lambda}(s,t)=2\log\frac{\lambda\left(x^{2}+(y+1)^{2}\right)}{( x-x_{0})^{2}+(y+\mathfrak{D}\lambda)^{2}-\lambda^{2}}, \tag{2.8}\]
with
\[x=x(s,t)=\frac{2s}{s^{2}+(t+1)^{2}},\quad y=y(s,t)=\frac{1-s^{2}-t^{2}}{s^{2}+( t+1)^{2}},\]
\(\lambda>0\) and \(x_{0}\in\mathbb{R}\).
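That \(U_{x_{0},\lambda}\) indeed solves the half-plane problem can be checked symbolically; a minimal sketch (our variable names; `D` plays the role of the constant \(\mathfrak{D}>1\)):

```python
import sympy as sp

x, y, x0 = sp.symbols('x y x_0', real=True)
lam = sp.symbols('lambda', positive=True)
D = sp.symbols('D', positive=True)           # stands for the constant D > 1

Q = (x - x0)**2 + (y + D*lam)**2 - lam**2    # positive on the closed half-plane for D > 1
U = 2*sp.log(2*lam/Q)

# Interior equation: -ΔU = -2 e^U in R^2_+.
print(sp.simplify(-sp.diff(U, x, 2) - sp.diff(U, y, 2) + 2*sp.exp(U)))   # 0

# Boundary condition (the outer normal on {y = 0} is -e_y): ∂_ν U = 2 D e^{U/2}.
print(sp.simplify((-sp.diff(U, y) - 2*D*sp.exp(U/2)).subs(y, 0)))        # 0
```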
Now, we address the unperturbed problem in \(\mathbb{B}^{n}\) for \(n\geq 3\):
\[\left\{\begin{array}{ll}-\frac{4(n-1)}{n-2}\Delta u=-u^{\frac{n+2}{n-2}}& \text{in }\mathbb{B}^{n},\\ \frac{2}{n-2}\partial_{\nu}u+u=\frac{\mathfrak{D}}{\sqrt{n(n-1)}}u^{\frac{n} {n-2}}&\text{on }\mathbb{S}^{n-1}.\end{array}\right.\]
Consider \(\mathbb{R}^{n}_{+}\) with its usual metric, and the problem
\[\left\{\begin{array}{ll}\frac{-4(n-1)}{n-2}\Delta u=-u^{\frac{n+2}{n-2}}& \text{in }\mathbb{R}^{n}_{+},\\ -\frac{2}{n-2}\partial_{x_{n}}u=\frac{\mathfrak{D}}{\sqrt{n(n-1)}}u^{\frac{n} {n-2}}&\text{on }\partial\mathbb{R}^{n}_{+}.\end{array}\right. \tag{2.9}\]
The results in [18] imply that all solutions of (2.9) have the form
\[U_{x_{0},\lambda}(\bar{x},x_{n})=\frac{(4n(n-1))^{\frac{n-2}{4}}\lambda^{\frac {n-2}{2}}}{\left(\left|\bar{x}-x_{0}\right|^{2}+(x_{n}+\lambda\mathfrak{D})^ {2}-\lambda^{2}\right)^{\frac{n-2}{2}}},\]
for any \(x_{0}\in\partial\mathbb{R}^{n}_{+}\) and \(\lambda>0\).
Then, by (2.4), we can write (2.9) as:
\[\left\{\begin{array}{ll}\rho^{\frac{n+2}{n-2}}\frac{-4(n-1)}{n-2}\Delta \left(\frac{u}{\rho}\right)=-u^{\frac{n+2}{n-2}}&\text{in }\mathbb{R}^{n}_{+},\\ \rho^{\frac{n}{n-2}}\left(\frac{2}{n-2}\partial_{\nu}\left(\frac{u}{\rho} \right)+\left(\frac{u}{\rho}\right)\right)=\frac{\mathfrak{D}}{\sqrt{n(n-1)}}u^ {\frac{n}{n-2}}&\text{on }\partial\mathbb{R}^{n}_{+}.\end{array}\right.\]
If we call \(\hat{u}=\left(\frac{u}{\rho}\right)\circ\mathscr{I}^{-1}\), it is clear that
\[\left\{\begin{array}{ll}\frac{-4(n-1)}{n-2}\Delta\hat{u}=-\hat{u}^{\frac{n +2}{n-2}}&\text{in }\mathbb{B}^{n},\\ \frac{2}{n-2}\partial_{\nu}\hat{u}+\hat{u}=\frac{\mathfrak{D}}{\sqrt{n(n-1)}} \hat{u}^{\frac{n}{n-2}}&\text{on }\mathbb{S}^{n-1},\end{array}\right.\]
which is exactly \((P_{0}^{n^{\prime}})\). Hence, a family of solutions of \((P_{0}^{n^{\prime}})\) is given by
\[\hat{U}_{x_{0},\lambda}(\bar{x},x_{n})=\lambda^{\frac{n-2}{2}}(n(n-1))^{\frac{ n-2}{4}}\left(\frac{\left|\bar{z}\right|^{2}+(z_{n}+1)^{2}}{\left|\bar{z}-x_{0} \right|^{2}+(z_{n}+\lambda\mathfrak{D})^{2}-\lambda^{2}}\right)^{\frac{n-2}{ 2}}, \tag{2.10}\]
with
\[\bar{z}=\bar{z}(\bar{x},x_{n})=\frac{2\bar{x}}{\left|\bar{x}\right|^{2}+(x_{n}+1)^ {2}},\quad z_{n}=z_{n}(\bar{x},x_{n})=\frac{1-\left|\bar{x}\right|^{2}-x_{n}^{2 }}{\left|\bar{x}\right|^{2}+(x_{n}+1)^{2}}.\]
In view of formulae (2.8) and (2.10) we set:
\[P_{x_{0},\lambda}(\bar{x},x_{n})=\frac{\Lambda_{n}\lambda^{2}\left(\left|\bar{ z}\right|^{2}+(z_{n}+1)^{2}\right)^{2}}{\left(\left|\bar{z}-x_{0}\right|^{2}+(z_{n}+ \lambda\mathfrak{D})^{2}-\lambda^{2}\right)^{2}},\]
with \(\bar{z},z_{n}\) as before, \(x_{0}\in\mathbb{R}^{n-1}\), \(\lambda>0\), and define
\[V_{x_{0},\lambda}=\left\{\begin{array}{ll}P_{x_{0},\lambda}\,^{\frac{n-2}{4 }}&\text{if}\quad n\geq 3,\\ \log P_{x_{0},\lambda}&\text{if}\quad n=2.\end{array}\right. \tag{2.11}\]
## 3. Properties of the Energy Functionals
We define the functionals \(J_{\varepsilon}^{n}:H^{1}\left(\mathbb{B}^{n}\right)\rightarrow\mathbb{R}\) by
\[J_{\varepsilon}^{2}(u)= \frac{1}{2}\int_{\mathbb{B}^{2}}\left|\nabla u\right|^{2}+2\int_ {\mathbb{S}^{1}}u+2\int_{\mathbb{B}^{2}}(1+\varepsilon K)e^{u}-4\mathfrak{D} \int_{\mathbb{S}^{1}}(1+\varepsilon H)e^{\frac{u}{2}}, \tag{3.1}\] \[J_{\varepsilon}^{n}(u)= \frac{1}{2}\int_{\mathbb{B}^{n}}\left|\nabla u\right|^{2}+\frac{ 1}{2}\int_{\mathbb{S}^{n-1}}u^{2}+\frac{(n-2)^{2}}{8n(n-1)}\int_{\mathbb{B}^{ n}}(1+\varepsilon K)\left|u\right|^{2^{*}}\] \[- \frac{(n-2)^{2}}{4\sqrt{n(n-1)^{3}}}\mathfrak{D}\int_{\mathbb{S}^ {n-1}}(1+\varepsilon H)\left|u\right|^{2^{\sharp}},\quad\text{if}\quad n\geq 3. \tag{3.2}\]
Observe that we can write
\[J_{\varepsilon}^{n}(u)=J_{0}^{n}(u)+\varepsilon\alpha_{n}\gamma^{n}(u),\]
with
\[\gamma^{n}(u)=\left\{\begin{array}{ll}\int_{\mathbb{B}^{2}}Ke^{u}-\beta_{2 }\mathfrak{D}\int_{\mathbb{S}^{1}}He^{\frac{u}{2}}&\text{if }n=2\\ \int_{\mathbb{B}^{n}}K\left|u\right|^{2^{*}}-\beta_{n}\mathfrak{D}\int_{ \mathbb{S}^{n-1}}H\left|u\right|^{2^{\sharp}}&\text{if }n\geq 3\end{array}\right.\]
with \(\alpha_{n},\beta_{n}\) as in Definition 2.1.
Let \(V_{x_{0},\lambda}\) be given by (2.11). We set
\[\Gamma(x_{0},\lambda)=\gamma^{n}\left(V_{x_{0},\lambda}\right)=\int_{\mathbb{ B}^{n}}KP_{x_{0},\lambda}\,^{\frac{n}{2}}-\beta_{n}\mathfrak{D}\int_{ \mathbb{S}^{n-1}}HP_{x_{0},\lambda}\,^{\frac{n-1}{2}}.\]
The first term of the energy is constant along our family of solutions:
**Proposition 3.1**.: _There exist constants \(\mathtt{E}_{n,\mathfrak{D}}\), independent on \(\lambda\) and \(x_{0}\), such that_
\[J_{0}^{n}(V_{x_{0},\lambda})=\mathtt{E}_{n,\mathfrak{D}},\quad\forall n\geq 2.\]
Proof.: Let us study the cases \(n=2\) and \(n\geq 3\) separately.
When \(n=2\), integrating by parts and using (\(P_{0}^{2}\)) and (2.2), we can see that:
\[\frac{1}{2}\int_{\mathbb{B}^{2}}\left|\nabla V_{x_{0},\lambda} \right|^{2}+2\int_{\mathbb{S}^{1}}V_{x_{0},\lambda}=\frac{1}{2}\int_{\mathbb{R} _{+}^{2}}\left|\nabla\left(U_{x_{0},\lambda}-\rho\right)\right|^{2}+2\int_{ \mathbb{R}}(U_{x_{0},\lambda}-\rho)e^{\frac{\rho}{2}}\] \[= -\frac{1}{2}\int_{\mathbb{R}_{+}^{2}}\Delta U_{x_{0},\lambda}(U_{ x_{0},\lambda}-\rho)+\frac{1}{2}\int_{\mathbb{R}}\partial_{\nu}U_{x_{0},\lambda}(U_{ x_{0},\lambda}-\rho)+2\int_{\mathbb{R}}(U_{x_{0},\lambda}-\rho)e^{\frac{\rho}{2}}\]
**Remark 3.2**.: _Passing to the half-space through the inversion map (2.1) and performing the change of variables \(\bar{x}=\lambda\bar{y}+x_{0}\), \(x_{n}=\lambda y_{n}\), we can write_

\[\Gamma(x_{0},\lambda)= \int_{\mathbb{R}_{+}^{n}}\frac{\Lambda_{n}^{\frac{n}{2}}\tilde{K}(\bar{x},x_{n})\lambda^{n}d\bar{x}dx_{n}}{\left(\left|\bar{x}-x_{0}\right|^{2}+(x_{n}+\lambda\mathfrak{D})^{2}-\lambda^{2}\right)^{n}}-\int_{\partial\mathbb{R}_{+}^{n}}\frac{\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\tilde{H}(\bar{x})\lambda^{n-1}d\bar{x}}{\left(\left|\bar{x}-x_{0}\right|^{2}+\lambda^{2}(\mathfrak{D}^{2}-1)\right)^{n-1}} \tag{3.5}\] \[= \int_{\mathbb{R}_{+}^{n}}\frac{\Lambda_{n}^{\frac{n}{2}}\tilde{K}(\lambda\bar{y}+x_{0},\lambda y_{n})}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}-\int_{\partial\mathbb{R}_{+}^{n}}\frac{\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\tilde{H}(\lambda\bar{y}+x_{0})}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y},\]

_where \(\tilde{K}=K\circ\mathscr{I},\tilde{H}=H\circ\mathscr{I}\)._
We are interested in the behaviour of \(\Gamma\) at infinity and when \(\lambda\to 0\).
**Proposition 3.3**.: _We have \(\lim_{\left|x_{0}\right|+\lambda\rightarrow+\infty}\Gamma(x_{0},\lambda)=\psi((0,-1))\), where \(\psi\) is the function introduced in Proposition 3.4._
Proof.: First, notice that
\[\lim_{\lambda+|x_{0}|\to+\infty}\mathscr{I}(\lambda\bar{x}+x_{0},\lambda x_{n})=\lim_{\lambda+|x_{0}|\to+\infty}\left(\frac{2(\lambda\bar{x}+x_{0})}{\left|\lambda\bar{x}+x_{0}\right|^{2}+(\lambda x_{n}+1)^{2}},\frac{1-|\lambda\bar{x}+x_{0}|^{2}-(\lambda x_{n})^{2}}{\left|\lambda\bar{x}+x_{0}\right|^{2}+(\lambda x_{n}+1)^{2}}\right)=(0,-1).\]
With that in mind, we fix \(\varepsilon>0\) small enough and write
\[\Gamma(x_{0},\lambda)= \int_{|y|>\varepsilon}\frac{\Lambda_{n}^{\frac{n}{2}}\tilde{K}( \lambda\bar{y}+x_{0},\lambda y_{n})}{\left(\left|\bar{y}\right|^{2}+(y_{n}+ \mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}-\int_{|\bar{y}|> \varepsilon}\frac{\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\tilde{H}( \lambda\bar{y}+x_{0},0)}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1 \right)^{n-1}}d\bar{y}\] \[+ \int_{|y|\leq\varepsilon}\frac{\Lambda_{n}^{\frac{n}{2}}\tilde{K }(\lambda\bar{y}+x_{0},\lambda y_{n})}{\left(\left|\bar{y}\right|^{2}+(y_{n}+ \mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}-\int_{|\bar{y}|\leq\varepsilon} \frac{\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\tilde{H}(\lambda\bar{y} +x_{0},0)}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d \bar{y}.\]
Then, taking limits when \(\lambda+|x_{0}|\to+\infty\),
\[\Gamma(x_{0},\lambda)= K(0,-1)\int_{|y|>\varepsilon}\frac{\Lambda_{n}^{\frac{n}{2}}d\bar{y}dy_{n}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}-H(0,-1)\int_{|\bar{y}|>\varepsilon}\frac{\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}d\bar{y}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}+O\left(\varepsilon^{n-1}\right)\] \[+ K(\mathscr{I}(x_{0},0))O\left(\varepsilon^{n}\right)-H(\mathscr{I}(x_{0},0))O\left(\varepsilon^{n-1}\right).\]
The claim follows by letting \(\varepsilon\to 0\).
The following result describes the behaviour of \(\Gamma\) around \(\lambda=0\). Its proof will be postponed to Appendix 6.
**Proposition 3.4**.: _Define \(\psi:\mathbb{S}^{n-1}\to\mathbb{R}\) by \(\psi(\xi):=\mathtt{a}(\mathfrak{D}_{n})K(\xi)-\mathtt{b}(\mathfrak{D}_{n})H(\xi)\), and let us write \(\xi=\mathscr{I}(x_{0})\in\mathbb{S}^{n-1}\). The following expansions hold, for any \(m\in\mathbb{N}\), when \(\lambda\ll 1\): If \(n=2\),_
\[\Gamma(x_{0},\lambda)= \psi(\xi)-\left(2\pi(1+\xi_{n})\lambda\left(\left(\mathfrak{D}-\sqrt{\mathfrak{D}^{2}-1}\right)\partial_{\nu}K(\xi)-2\mathfrak{D}(-\Delta)^{\frac{1}{2}}H(\xi)\right)\right.\] \[\left.-\lambda^{2}\log\frac{1}{\lambda}\Phi_{3}(\xi)+\lambda^{2}\Phi_{4}(\xi)+\cdots+\lambda^{m}\log\frac{1}{\lambda}\Phi_{2m-1}(\xi)+\lambda^{m}\Phi_{2m}(\xi)\right)(1+o(1));\]
_If \(n\geq 3\),_
\[\Gamma(x_{0},\lambda)= \psi(\xi)-\left(a_{n,0,1}(1+\xi_{n})\lambda\partial_{\nu}K(\xi)-\lambda^{2}\Phi_{2}(\xi)+\cdots+\lambda^{n-2}\Phi_{n-2}(\xi)\right.\] \[\left.+\lambda^{n-1}\log\frac{1}{\lambda}\Phi_{n-1}(\xi)+\lambda^{n-1}\Phi_{n}(\xi)+\cdots+\lambda^{m}\log\frac{1}{\lambda}\Phi_{2m-n+1}(\xi)+\lambda^{m}\Phi_{2m-n+2}(\xi)\right)(1+o(1)).\]
_Here \(\mathtt{a}(\mathfrak{D}_{n}),\mathtt{b}(\mathfrak{D}_{n}),a_{n,0,1},\Phi_{j}(\xi)\) are given in Definition 2.1._
## 4. The Linear Theory
In this section we develop the technicalities of the Ljapunov-Schmidt finite dimensional reduction. Most of the results presented here are well known in the literature on this method; therefore, details of the proofs will be skipped.
### The \(2-\)dimensional case
It is known (see [33]) that the solutions of the linear problem
\[\left\{\begin{array}{ll}-\Delta\psi+2e^{U_{x_{0},\lambda}}\psi=0&\text{in } \mathbb{B}^{2}\\ \partial_{\nu}\psi-\mathfrak{D}e^{\frac{U_{x_{0},\lambda}}{2}}\psi=0&\text{on } \mathbb{S}^{1}\end{array}\right.\]
are a linear combination of
\[\mathcal{Z}^{1}_{x_{0},\lambda}(z):=\partial_{x_{0}}U_{x_{0},\lambda}\quad\text{and}\quad\mathcal{Z}^{2}_{x_{0},\lambda}(z):=\partial_{\lambda}U_{x_{0},\lambda}.\]
Given \(\kappa>0\), set
\[\mathsf{C}_{\kappa}:=\left\{(t,x_{0},\lambda)\in\mathbb{R}\times\mathbb{R}^{n-1}\times(0,\infty)\ :\ \frac{1}{\kappa}\leq|t|+\lambda\leq\kappa,\ |x_{0}|\leq\kappa\right\}. \tag{4.1}\]
Arguing as in Theorem 3.3 of [6] we can prove that
**Proposition 4.1**.: _Fix \(p>1\) and \(\kappa>0\). For any \((x_{0},\lambda)\in\mathsf{C}_{\kappa}\) (see (4.1)) and \(\mathfrak{f}\in L^{p}\left(\mathbb{B}^{2}\right)\) and \(\mathfrak{g}\in L^{p}\left(\mathbb{S}^{1}\right)\) such that_
\[\int_{\mathbb{B}^{2}}\mathfrak{f}+\int_{\mathbb{S}^{1}}\mathfrak{g}=\int_{ \mathbb{B}^{2}}\mathfrak{f}\mathcal{Z}^{i}_{x_{0},\lambda}+\int_{\mathbb{S}^{ 1}}\mathfrak{g}\mathcal{Z}^{i}_{x_{0},\lambda}=0,\quad i=1,2,\]
_there exists a unique \(\phi\in H^{1}\left(\mathbb{B}^{2}\right)\) such that_
\[-2\int_{\mathbb{B}^{2}}e^{U_{x_{0},\lambda}}\phi+\mathfrak{D}\int_{\mathbb{S} ^{1}}e^{\frac{U_{x_{0},\lambda}}{2}}\phi=-2\int_{\mathbb{B}^{2}}e^{U_{x_{0}, \lambda}}\phi\mathcal{Z}^{i}_{x_{0},\lambda}+\mathfrak{D}\int_{\mathbb{S}^{1} }e^{\frac{U_{x_{0},\lambda}}{2}}\phi\mathcal{Z}^{i}_{x_{0},\lambda}=0,\ i=1,2, \tag{4.2}\]
_which solves the problem_
\[\begin{cases}-\Delta\phi+2e^{U_{x_{0},\lambda}}\phi=\mathfrak{f}&\text{in } \mathbb{B}^{2}\\ \partial_{\nu}\phi-\mathfrak{D}e^{\frac{U_{x_{0},\lambda}}{2}}\phi=\mathfrak{g }&\text{on }\mathbb{S}^{1}\end{cases}\]
_Furthermore_
\[\|\phi\|\lesssim\left(\|\mathfrak{f}\|_{L^{p}(\mathbb{B}^{2})}+\|\mathfrak{g} \|_{L^{p}(\mathbb{S}^{1})}\right).\]
#### 4.1.1. Rewriting the problem
We look for a solution of \((P^{2}_{\varepsilon})\) in the form
\[u=U_{x_{0},\lambda}+\tau+\phi,\ \text{with}\ \lambda>0,\ x_{0}\in\mathbb{R} \quad\text{and}\quad\tau=t\sqrt{\varepsilon},\ t\in\mathbb{R}\]
where \(\phi\) satisfies the orthogonality condition (4.2). We shall rewrite problem \((P^{2}_{\varepsilon})\) as a system
\[\left\{\begin{array}{ll}-\Delta\phi+2e^{U_{x_{0},\lambda}}\phi=\mathscr{E} _{in}+\mathscr{N}_{in}(\phi)+c_{0}+\sum\limits_{i=1,2}c_{i}\mathcal{Z}^{i}_{x_ {0},\lambda}&\text{in }\mathbb{B}^{2}\\ \partial_{\nu}\phi-\mathfrak{D}e^{\frac{U_{x_{0},\lambda}}{2}}\phi=\mathscr{E} _{bd}+\mathscr{N}_{bd}(\phi)+c_{0}+\sum\limits_{i=1,2}c_{i}\mathcal{Z}^{i}_{x_ {0},\lambda}&\text{on }\mathbb{S}^{1}\end{array}\right. \tag{4.3}\]
where \(c_{i}\)'s are real numbers.
The error that we pay by using this approximate solution is
\[\mathscr{E}_{in}:=-\varepsilon\mathcal{F}\left(U_{x_{0},\lambda}+\tau\right)\quad \text{and}\quad\mathscr{E}_{bd}:=\varepsilon\mathcal{G}\left(U_{x_{0},\lambda}+\tau\right)\]
and the non-linear part is
\[\mathscr{N}_{in}(\phi):= -\left[\mathcal{F}\left(U_{x_{0},\lambda}+\tau+\phi\right)-\mathcal{F}\left(U_{x_{0},\lambda}+\tau\right)-\mathcal{F}^{\prime}\left(U_{x_{0},\lambda}\right)\phi\right]-\left[\left(\mathcal{F}^{\prime}\left(U_{x_{0},\lambda}+\tau\right)-\mathcal{F}^{\prime}\left(U_{x_{0},\lambda}\right)\right)\phi\right] \tag{4.4}\] \[-\varepsilon K\left[\mathcal{F}\left(U_{x_{0},\lambda}+\tau+\phi\right)-\mathcal{F}\left(U_{x_{0},\lambda}+\tau\right)\right];\] \[\mathscr{N}_{bd}(\phi):= -\left[\mathcal{G}\left(U_{x_{0},\lambda}+\tau+\phi\right)-\mathcal{G}\left(U_{x_{0},\lambda}\right)-\mathcal{G}^{\prime}\left(U_{x_{0},\lambda}\right)\phi\right]-\left[\left(\mathcal{G}^{\prime}\left(U_{x_{0},\lambda}+\tau\right)-\mathcal{G}^{\prime}\left(U_{x_{0},\lambda}\right)\right)\phi\right]\] \[-\varepsilon H\left[\mathcal{G}\left(U_{x_{0},\lambda}+\phi\right)-\mathcal{G}\left(U_{x_{0},\lambda}\right)\right].\]
Here we set
\[\mathcal{F}(u):=2e^{u}\quad\text{and}\quad\mathcal{G}(u)=2\mathfrak{D}e^{\frac{ u}{2}}.\]
We have the following result:
**Proposition 4.2**.: _Fix \(\kappa>0\). There exists \(\varepsilon_{\kappa}>0\) such that for any \((x_{0},\lambda)\in\mathtt{C}_{\kappa}\) (see (4.1)) there exists a unique \(\phi=\phi(\varepsilon,x_{0},\lambda)\in H^{1}\left(\mathbb{B}^{2}\right)\) and \(c_{i}\in\mathbb{R}\) which solve (4.3). Moreover, \((x_{0},\lambda)\to\phi(\varepsilon,x_{0},\lambda)\) is a \(C^{1}-\)function and \(\|\phi\|\lesssim\varepsilon\)._
Proof.: The proof is standard and relies on a contraction mapping argument combined with the linear theory developed in Proposition 4.1 and the estimates for \(p>1\)
\[\|\mathscr{E}_{in}\|_{L^{p}(\mathbb{B}^{2})}\lesssim\varepsilon\quad\text{and }\quad\|\mathscr{E}_{bd}\|_{L^{p}(\mathbb{S}^{1})}\lesssim\varepsilon.\]
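Schematically (a sketch of the standard argument, not the precise functional setting), denoting by \(L_{x_{0},\lambda}^{-1}\) the solution operator provided by Proposition 4.1, one rewrites (4.3), up to the multipliers \(c_{i}\), as the fixed-point equation
\[\phi=T(\phi):=L_{x_{0},\lambda}^{-1}\left(\mathscr{E}_{in}+\mathscr{N}_{in}(\phi),\,\mathscr{E}_{bd}+\mathscr{N}_{bd}(\phi)\right);\]
since the nonlinear terms are at least quadratic in \(\phi\) (up to \(O(\varepsilon\|\phi\|)\) corrections), \(T\) maps a ball \(\{\|\phi\|\leq C\varepsilon\}\) into itself and is a contraction there for \(\varepsilon\) small.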
#### 4.1.2. The reduced energy
Let us consider the energy functional \(J_{\varepsilon}^{2}\) defined in (3.1), whose critical points produce solutions of \((P_{\varepsilon}^{2})\). We define the reduced energy
\[\widetilde{J}_{\varepsilon}^{2}(t,x_{0},\lambda):=J_{\varepsilon}^{2}\left(U_{x_{0},\lambda}+\tau+\phi\right),\qquad\tau=t\sqrt{\varepsilon},\]
where \(\phi\) is given in Proposition 4.2.
**Proposition 4.3**.: _The following are true:_
1. _If_ \((t,x_{0},\lambda)\) _is a critical point of_ \(\widetilde{J}_{\varepsilon}^{2}\)_, then_ \(U_{x_{0},\lambda}+\tau+\phi\) _is a solution to (_\(P_{\varepsilon}^{2}\)_)._
2. _The following expansion holds_ \[\widetilde{J}_{\varepsilon}^{2}(t,x_{0},\lambda)=\mathtt{E}_{2,\mathfrak{D}}-\varepsilon\left(\alpha_{\mathfrak{D}}t^{2}+\Gamma(x_{0},\lambda)\right)+o(\varepsilon)\] \(C^{1}-\)_uniformly in compact sets of_ \(\mathbb{R}\times\mathbb{R}\times(0,+\infty)\)_._ _Here_ \(\mathtt{E}_{2,\mathfrak{D}}\) _is a constant independent of_ \(x_{0}\)_,_ \(t\) _and_ \(\lambda\) _whose expression is given by (_3.3_),_ \(\Gamma\) _is defined in (_3.5_) and_ \[\alpha_{\mathfrak{D}}=\pi\left(\frac{\mathfrak{D}}{\sqrt{\mathfrak{D}^{2}-1}}-2\right).\]
Proof.: We use the choice \(\tau=t\sqrt{\varepsilon}\) and the fact that
\[\mathfrak{D}\int_{\mathbb{S}^{1}}e^{\frac{U_{x_{0},\lambda}}{2}}d\sigma-\int_{ \mathbb{B}^{2}}e^{U_{x_{0},\lambda}}dx=2\pi\quad\text{and}\quad\int_{\mathbb{S} ^{1}}e^{\frac{U_{x_{0},\lambda}}{2}}d\sigma=2\pi.\]
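In particular, combining the two identities gives
\[\int_{\mathbb{B}^{2}}e^{U_{x_{0},\lambda}}\,dx=\mathfrak{D}\int_{\mathbb{S}^{1}}e^{\frac{U_{x_{0},\lambda}}{2}}\,d\sigma-2\pi=2\pi\left(\mathfrak{D}-1\right),\]
so all the \(\phi\)-independent integrals entering the expansion are explicit.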
### The \(n-\)dimensional case
Recently, in [21], it has been proved that all the solutions to the linearized problem
\[\left\{\begin{array}{ll}-\frac{4(n-1)}{n-2}\Delta Z=-\frac{n+2}{n-2}U_{x_{0},\lambda}^{\frac{4}{n-2}}Z&\text{in }\mathbb{B}^{n},\\ \frac{2}{n-2}\partial_{\nu}Z+Z=\frac{n}{(n-2)\sqrt{n(n-1)}}\mathfrak{D}U_{x_{0},\lambda}^{\frac{2}{n-2}}Z&\text{on }\mathbb{S}^{n-1}\end{array}\right.\]
are a linear combination of the \(n\) functions
\[Z^{i}_{x_{0},\lambda}=\partial_{x_{0},i}U_{x_{0},\lambda},\ i=1,\ldots,n-1 \quad\text{and}\quad Z^{n}_{x_{0},\lambda}=\partial_{\lambda}U_{x_{0},\lambda}.\]
Given \(\kappa>0\), set
\[\mathtt{C}_{\kappa}:=\left\{(x_{0},\lambda)\in\mathbb{R}^{n-1}\times(0,\infty)\ :\ \frac{1}{\kappa}\leq\lambda\leq\kappa,\ |x_{0}|\leq\kappa\right\}. \tag{4.5}\]
Arguing as in [21], we can prove the following result.
**Proposition 4.4**.: _Fix \(\kappa>0\). For any \((x_{0},\lambda)\in\mathtt{C}_{\kappa}\) (see (4.5)), \(\mathfrak{f}\in L^{\frac{2n}{n+2}}\left(\mathbb{B}^{n}\right)\), and \(\mathfrak{g}\in L^{\frac{2(n-1)}{n}}\left(\mathbb{S}^{n-1}\right)\) such that_
\[\int_{\mathbb{B}^{n}}\mathfrak{f}\mathcal{Z}^{i}_{x_{0},\lambda}+\int_{ \mathbb{S}^{n-1}}\mathfrak{g}\mathcal{Z}^{i}_{x_{0},\lambda}=0,\quad i=1, \ldots,n,\]
_there exists a unique \(\phi\in H^{1}\left(\mathbb{B}^{n}\right)\) such that_
\[-\frac{n+2}{n-2}\int_{\mathbb{B}^{n}}U_{x_{0},\lambda}^{\frac{4}{n-2}}\phi\mathcal{Z}^{i}_{x_{0},\lambda}+\frac{n}{(n-2)\sqrt{n(n-1)}}\mathfrak{D}\int_{\mathbb{S}^{n-1}}U_{x_{0},\lambda}^{\frac{2}{n-2}}\phi\mathcal{Z}^{i}_{x_{0},\lambda}=0,\quad i=1,\ldots,n, \tag{4.6}\]
_which solves the problem_
\[\begin{cases}-\Delta\phi+\frac{n+2}{4(n-1)}U_{x_{0},\lambda}^{\frac{4}{n-2}}\phi=\mathfrak{f}&\text{in }\ \mathbb{B}^{n}\\ \partial_{\nu}\phi+\frac{n-2}{2}\phi-\frac{n}{2\sqrt{n(n-1)}}\mathfrak{D}U_{x_{0},\lambda}^{\frac{2}{n-2}}\phi=\mathfrak{g}&\text{on }\ \mathbb{S}^{n-1}\end{cases}\]
_Furthermore_
\[\|\phi\|\lesssim\left(\|\mathfrak{f}\|_{L^{\frac{2n}{n+2}}\left(\mathbb{B}^{n }\right)}+\|\mathfrak{g}\|_{L^{\frac{2(n-1)}{n}}\left(\mathbb{S}^{n-1}\right) }\right).\]
#### 4.2.1. Rewriting the problem
We look for a positive solution of \((P^{n}_{\varepsilon})\) as
\[u=U_{x_{0},\lambda}+\phi\ \text{with}\ \lambda>0,\ x_{0}\in\mathbb{R}^{n-1}\]
where \(\phi\) satisfies (4.6). We rewrite problem \((P^{n}_{\varepsilon})\) as a system
\[\left\{\begin{array}{ll}-\Delta\phi+\frac{n+2}{4(n-1)}U_{x_{0},\lambda}^{\frac{4}{n-2}}\phi=\mathscr{E}_{in}+\mathscr{N}_{in}(\phi)+\sum\limits_{i=1}^{n}c_{i}\mathcal{Z}^{i}_{x_{0},\lambda}&\text{in }\ \mathbb{B}^{n}\\ \partial_{\nu}\phi+\frac{n-2}{2}\phi-\frac{n}{2\sqrt{n(n-1)}}\mathfrak{D}U_{x_{0},\lambda}^{\frac{2}{n-2}}\phi=\mathscr{E}_{bd}+\mathscr{N}_{bd}(\phi)+\sum\limits_{i=1}^{n}c_{i}\mathcal{Z}^{i}_{x_{0},\lambda}&\text{on }\ \mathbb{S}^{n-1}\end{array}\right. \tag{4.7}\]
where the \(c_{i}\) are real numbers. Moreover, the error is given by
\[\mathscr{E}_{in}:=-\varepsilon\mathcal{F}\left(U_{x_{0},\lambda}\right)\quad \text{and}\quad\mathscr{E}_{bd}:=\varepsilon\mathcal{G}\left(U_{x_{0},\lambda}\right)\]
and the non-linear part is
\[\mathscr{N}_{in}(\phi):= -\left[\mathcal{F}\left(U_{x_{0},\lambda}+\phi\right)-\mathcal{F}\left(U_{x_{0},\lambda}\right)-\mathcal{F}^{\prime}\left(U_{x_{0},\lambda}\right)\phi\right]-\varepsilon K\left[\mathcal{F}\left(U_{x_{0},\lambda}+\phi\right)-\mathcal{F}\left(U_{x_{0},\lambda}\right)\right] \tag{4.8}\] \[\mathscr{N}_{bd}(\phi):= -\left[\mathcal{G}\left(U_{x_{0},\lambda}+\phi\right)-\mathcal{G}\left(U_{x_{0},\lambda}\right)-\mathcal{G}^{\prime}\left(U_{x_{0},\lambda}\right)\phi\right]-\varepsilon H\left[\mathcal{G}\left(U_{x_{0},\lambda}+\phi\right)-\mathcal{G}\left(U_{x_{0},\lambda}\right)\right]\]
Here we set
\[\mathcal{F}(u):=-\frac{n-2}{4(n-1)}(u^{+})^{\frac{n+2}{n-2}}\quad\text{and} \quad\mathcal{G}(u)=\frac{n-2}{2\sqrt{n(n-1)}}\mathfrak{D}(u^{+})^{\frac{n}{n- 2}}.\]
We have the following result:
**Proposition 4.5**.: _Fix \(\kappa>0\). There exists \(\varepsilon_{\kappa}>0\) such that for any \((x_{0},\lambda)\in\mathtt{C}_{\kappa}\) (see (4.5)) there exists a unique \(\phi=\phi(\varepsilon,x_{0},\lambda)\in H^{1}\left(\mathbb{B}^{n}\right)\) and \(c_{i}\in\mathbb{R}\) which solve (4.7). Moreover, \((x_{0},\lambda)\to\phi(\varepsilon,x_{0},\lambda)\) is a \(C^{1}-\)function and \(\|\phi\|\lesssim\varepsilon\)._
Proof.: The proof is standard and relies on a contraction mapping argument combined with the linear theory developed in Proposition 4.4 and the estimates
\[\|\mathscr{E}_{in}\|_{L^{\frac{2n}{n+2}}(\mathbb{B}^{n})}\lesssim\varepsilon\quad\text{and}\quad\|\mathscr{E}_{bd}\|_{L^{\frac{2(n-1)}{n}}(\mathbb{S}^{n-1})}\lesssim\varepsilon.\]
#### 4.2.2. The reduced energy
We consider the functional \(J_{\varepsilon}^{n}\) defined in (3.2). It is easy to see that its critical points are positive solutions to equation \((P_{\varepsilon}^{n})\). Now, we introduce the reduced energy
\[\widetilde{J}_{\varepsilon}^{n}(x_{0},\lambda):=J_{\varepsilon}^{n}\left(U_{x _{0},\lambda}+\phi\right),\]
where \(\phi\) is given in Proposition 4.5. It is quite standard to prove the following result
**Proposition 4.6**.: _The following assertions hold true_
1. _If_ \((x_{0},\lambda)\) _is a critical point of_ \(\widetilde{J}_{\varepsilon}^{n}\)_, then_ \(U_{x_{0},\lambda}+\phi\) _is a solution to_ \((P_{\varepsilon}^{n})\)_._
2. _Moreover, we have the following expansion_ \[\widetilde{J}_{\varepsilon}^{n}(x_{0},\lambda)=\mathtt{E}_{n,\mathfrak{D}}-\varepsilon\Gamma(x_{0},\lambda)+o(\varepsilon)\] \(C^{1}-\)_uniformly with respect to_ \((x_{0},\lambda)\) _in compact sets of_ \(\mathbb{R}^{n-1}\times(0,+\infty)\)_._ _Here_ \(\mathtt{E}_{n,\mathfrak{D}}\) _is a constant independent of_ \(x_{0}\) _and_ \(\lambda\)_, given by (_3.4_), and_ \(\Gamma\) _is the function defined in (_3.5_)._
## 5. Existence of critical points of \(\Gamma\)
In this section we are finally able to get critical points of the map \((x_{0},\lambda)\mapsto\Gamma(x_{0},\lambda)\), hence solutions to problems \((P_{\varepsilon}^{2})\), \((P_{\varepsilon}^{n})\).
We start with the following abstract result about critical points of maps defined on balls, depending on their boundary behavior.
**Proposition 5.1**.: _Let \(f:\mathbb{B}^{n}\to\mathbb{R}\) be a \(C^{1}\) map satisfying, as \(\xi\) goes to \(\mathbb{S}^{n-1}\),_
\[f(\xi)=f_{0}\left(\frac{\xi}{|\xi|}\right)+g(1-|\xi|)f_{1}\left(\frac{\xi}{|\xi |}\right)+o(g(1-|\xi|)),\]
_for some \(f_{i}:\mathbb{S}^{n-1}\to\mathbb{R}\) of class \(C^{1}\) and some increasing \(g:(0,1)\to(0,+\infty)\) of class \(C^{1}\) such that \(g(t)\underset{t\to 0}{\to}0\)._
_If one of the following holds true:_
1. \(f_{1}(\xi)>0\) _at any global maximum_ \(\xi\) _of_ \(f_{0}\)_;_
2. \(f_{1}(\xi)<0\) _at any global minimum_ \(\xi\) _of_ \(f_{0}\)_;_
3. \(f_{1}(\xi)\neq 0\) _at any critical point_ \(\xi\) _of_ \(f_{0}\)_,_ \(f_{0}\) _is Morse and_ \[\sum_{\{\xi:\nabla f_{0}(\xi)=0,\,f_{1}(\xi)>0\}}(-1)^{\operatorname{ind}_{ \xi}\nabla f_{0}}\neq 1;\]
4. \(f_{1}(\xi)=0\neq f_{2}(\xi)\) _at any critical point_ \(\xi\) _of_ \(f_{0}\)_, with_ \(f_{2}:\mathbb{S}^{n-1}\to\mathbb{R}\) _of class_ \(C^{1}\) _such that_ \[f(\xi)=f_{0}\left(\frac{\xi}{|\xi|}\right)+\tilde{g}(1-|\xi|)f_{2}\left(\frac{ \xi}{|\xi|}\right)+o\left(\tilde{g}(1-|\xi|)\right),\] _at critical points_ \(\xi\) _of_ \(f_{0}\)_, for some_ \(\tilde{g}\) _satisfying the same assumptions as_ \(g\)_,_ \(f_{0}\) _is Morse and_ \[\sum_{\{\xi:\nabla f_{0}(\xi)=0,\,f_{2}(\xi)>0\}}(-1)^{\operatorname{ind}_{ \xi}\nabla f_{0}}\neq 1;\]
_then \(f\) has at least one stable critical point._
Theorems 1.1 and 1.2 will follow without much difficulty from this proposition and Proposition 3.4.
Proof of Theorems 1.1, 1.2.: We only consider the case of Theorem 1.1, since the same arguments also work for Theorem 1.2.
Thanks to Proposition 4.3, we get a solution to the problem \((P^{2}_{\varepsilon})\) whenever \(\frac{\mathfrak{D}}{\sqrt{\mathfrak{D}^{2}-1}}-2\neq 0\), that is \(\mathfrak{D}\neq\frac{2}{\sqrt{3}}\), and \((x_{0},\lambda)\) is a stable critical point of \(\Gamma\). After composing with \(\mathscr{I}\), this is equivalent to getting a critical point of the map \(f(\xi)=\Gamma\left(\mathscr{I}^{-1}(\xi)\right)\), which is well-defined and smooth in the whole \(\overline{\mathbb{B}^{n}}\) thanks to Proposition 3.3.
In view of Proposition 3.4, \(f\) satisfies the assumptions of Proposition 5.1 with \(f_{0}=\psi\), \(g(t)=t\) and \(f_{1}=-2\pi\Phi_{1}\); in the last case, we have \(f_{2}=\Phi_{m}\) and \(\tilde{g}(t)=t^{\lfloor\frac{m+1}{2}\rfloor}\log^{\frac{1-(-1)^{m}}{2}}\frac{1}{t}\). Here, we used that \(\lambda=\frac{1-|\xi|}{1+\xi_{n}}+o(1-|\xi|)\) and that
\[a_{2,0,0}= 4\int_{\mathbb{R}^{2}_{+}}\frac{1}{(\tilde{y}^{2}+(y_{2}+ \mathfrak{D})^{2}-1)^{2}}d\bar{y}dy_{2}=\frac{2\pi}{\sqrt{\mathfrak{D}^{2}-1} }\left(\mathfrak{D}-\sqrt{\mathfrak{D}^{2}-1}\right)\] \[b_{2,0}= 4\frac{\mathfrak{D}}{\sqrt{\mathfrak{D}^{2}-1}}\int_{\partial \mathbb{R}^{2}_{+}}\frac{1}{\tilde{y}^{2}+1}d\bar{y}=\frac{4\pi\mathfrak{D}} {\sqrt{\mathfrak{D}^{2}-1}}\]
hence the two definitions of \(\psi\) given in Theorem 1.1 and Proposition 3.4 actually coincide. Since \(-2\pi<0\), the assumptions on \(K,H\) in Theorem 1.1 are equivalent to the ones in Proposition 5.1, hence they ensure existence of solutions.
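For the reader's convenience, the value of \(a_{2,0,0}\) can be checked directly: integrating first in \(\bar{y}\) by means of \(\int_{\mathbb{R}}\frac{d\bar{y}}{(\bar{y}^{2}+a^{2})^{2}}=\frac{\pi}{2a^{3}}\), with \(a^{2}=(y_{2}+\mathfrak{D})^{2}-1\), and then substituting \(u=y_{2}+\mathfrak{D}\), one finds
\[a_{2,0,0}=2\pi\int_{\mathfrak{D}}^{\infty}\frac{du}{\left(u^{2}-1\right)^{\frac{3}{2}}}=2\pi\left[-\frac{u}{\sqrt{u^{2}-1}}\right]_{\mathfrak{D}}^{\infty}=\frac{2\pi}{\sqrt{\mathfrak{D}^{2}-1}}\left(\mathfrak{D}-\sqrt{\mathfrak{D}^{2}-1}\right).\]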
To prove Proposition 5.1, we will compute the Leray-Schauder degree of the map \(f\).
Proof of Proposition 5.1.: First of all, \(f\) can be extended up to \(\mathbb{S}^{n-1}\) as \(f_{0}\). Since \(g\) vanishes at \(0\), this extension is continuous.
Assume (1) holds and take an absolute maximum point \(\xi_{0}\) for \(f\) on \(\overline{\mathbb{B}^{n}}\). To get a critical point for \(f\) on \(\mathbb{B}^{n}\) it suffices to show that \(\xi_{0}\not\in\mathbb{S}^{n-1}\).
If \(\xi_{0}\in\mathbb{S}^{n-1}\), we would have \(f_{1}(\xi_{0})>0\), therefore, for \(0<t\ll 1\) we would have
\[f((1-t)\xi_{0})=f_{0}(\xi_{0})+g(t)f_{1}(\xi_{0})+o(g(t))>f_{0}(\xi_{0}),\]
contradicting the fact that \(\xi_{0}\) is a maximum point.
If (2) holds, then the same argument shows that the minimum of \(f\) on \(\overline{\mathbb{B}^{n}}\) lies in the interior of \(\mathbb{B}^{n}\), therefore it is a critical point of \(f\).
Assume now that (3) holds. We consider the _double_ of \(\overline{\mathbb{B}^{n}}\), namely the manifold obtained by gluing two copies of \(\overline{\mathbb{B}^{n}}\) along the boundary: \(\frac{\overline{\mathbb{B}^{n}}\times\{0,1\}}{\sim}\), where \((\xi,0)\sim(\xi,1)\) for \(\xi\in\mathbb{S}^{n-1}\). This manifold is clearly diffeomorphic to \(\mathbb{S}^{n}\), hence we will identify it with \(\mathbb{S}^{n}\).
\(f\) can be naturally extended to \(\tilde{f}:\mathbb{S}^{n}\to\mathbb{R}\) as \(\tilde{f}(\xi,i)=f(\xi)\) for \(i=0,1\). The extension is continuous and, after a suitable rescaling of \(g\) close to \(0\), of class \(C^{1}\). Such a rescaling does not affect the presence of critical points of \(\tilde{f}\), \(f\) and \(f|_{\mathbb{S}^{n-1}}=f_{0}\), which we will now investigate.
We use the Euler-Poincaré formula to compute the Leray-Schauder degree of \(\tilde{f}\), which is a Morse function by assumption:
\[1+(-1)^{n}=\chi\left(\mathbb{S}^{n}\right) =\sum_{\{\xi\in\mathbb{S}^{n-1}:\nabla\tilde{f}(\xi)=0\}}(-1)^{ \operatorname{ind}_{\xi}\nabla\tilde{f}}+\sum_{\{\xi\not\in\mathbb{S}^{n-1}: \nabla\tilde{f}(\xi)=0\}}(-1)^{\operatorname{ind}_{\xi}\nabla\tilde{f}}\] \[=\sum_{\{\xi\in\mathbb{S}^{n-1}:\nabla\tilde{f}(\xi)=0\}}(-1)^{ \operatorname{ind}_{\xi}\nabla\tilde{f}}+2\sum_{\{\xi\in\mathbb{B}^{n}:\nabla f (\xi)=0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f}.\]
To deal with the critical points on \(\mathbb{S}^{n-1}\), we notice that they are exactly the critical points of \(f_{0}\), but their index may change, since each can be either a minimum or a maximum in the orthogonal direction; precisely:
\[f_{1}(\xi)>0\Rightarrow \operatorname{ind}_{\xi}\nabla\tilde{f}=\operatorname{ind}_{\xi} \nabla f_{0};\] \[f_{1}(\xi)<0\Rightarrow \operatorname{ind}_{\xi}\nabla\tilde{f}=\operatorname{ind}_{\xi} \nabla f_{0}+1.\]
Therefore, applying again the Euler-Poincaré formula, this time to \(f_{0}\) on \(\mathbb{S}^{n-1}\), we get:
\[1-(-1)^{n}= \chi\left(\mathbb{S}^{n-1}\right)\] \[= \sum_{\{\xi\in\mathbb{S}^{n-1}:\nabla f_{0}(\xi)=0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f_{0}}\]
\[=\sum_{\{\xi:\nabla f_{0}(\xi)=0,\,f_{1}(\xi)>0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f_{0}}+\sum_{\{\xi:\nabla f_{0}(\xi)=0,\,f_{1}(\xi)<0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f_{0}}.\]
By summing the previous equalities we get:
\[\sum_{\{\xi\in\mathbb{B}^{n}:\nabla f(\xi)=0\}}(-1)^{\operatorname{ind}_{\xi} \nabla f}=1-\sum_{\{\xi\in\mathbb{S}^{n-1}:\nabla f_{0}(\xi)=0,\,f_{1}(\xi)>0 \}}(-1)^{\operatorname{ind}_{\xi}\nabla f_{0}}.\]
The latter quantity is nonzero by assumption, therefore the set of critical points of \(f\) on \(\mathbb{B}^{n}\), over which we are taking the first sum, cannot be empty.
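Explicitly, writing \(S_{\pm}:=\sum_{\{\xi:\nabla f_{0}(\xi)=0,\,\pm f_{1}(\xi)>0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f_{0}}\), the two Euler-Poincaré identities read
\[1+(-1)^{n}=(S_{+}-S_{-})+2\sum_{\{\xi\in\mathbb{B}^{n}:\nabla f(\xi)=0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f}\quad\text{and}\quad 1-(-1)^{n}=S_{+}+S_{-},\]
and adding them yields \(2=2S_{+}+2\sum_{\{\xi\in\mathbb{B}^{n}:\nabla f(\xi)=0\}}(-1)^{\operatorname{ind}_{\xi}\nabla f}\), which is the displayed identity.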
Finally, if (4) holds one can argue similarly, considering \(f_{2}\) instead of \(f_{1}\) in the analysis of critical points on \(\mathbb{S}^{n-1}\).
## 6. Appendix: Proof of Proposition 3.4
By introducing a rotation in \(\mathbb{B}^{n}\) and moving to \(\mathbb{R}^{n}_{+}\) via \(\mathscr{I}\) we can give an expression for \(\Gamma\) which is more convenient for our computation.
**Lemma 6.1**.: _Let \(A:\mathbb{B}^{n}\to\mathbb{B}^{n}\) be the rotation corresponding, via \(\mathscr{I}\), to the translation \(T:x\mapsto x+x_{0}\) on the half-space, namely \(A=\mathscr{I}\circ T\circ\mathscr{I}^{-1}\). There holds:_
\[\Gamma(x_{0},\lambda)=\Lambda_{n}^{\frac{n}{2}}\int_{\mathbb{R}^{n}_{+}}\frac{ \tilde{K}_{A}(\lambda y)}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^ {2}-1\right)^{n}}d\bar{y}dy_{n}-\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D }\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A}(\lambda\bar{y})}{\left( \left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y},\]
_where \(\tilde{K}_{A}=K\circ A\circ\mathscr{I},\tilde{H}_{A}=H\circ A\circ\mathscr{I}\)._
Proof.: By doing a change of variables, we observe that
\[\Gamma(x_{0},\lambda)= \int_{\mathbb{B}^{n}}K_{A}(z)P_{x_{0},\lambda}(Az)^{\frac{n}{2}}- \beta_{n}\mathfrak{D}\int_{\mathbb{S}^{n-1}}H_{A}(z)P_{x_{0},\lambda}(Az)^{ \frac{n-1}{2}}\] \[= \int_{\mathbb{B}^{n}}K_{A}(z)P_{0,\lambda}(z)^{\frac{n}{2}}-\beta _{n}\mathfrak{D}\int_{\mathbb{S}^{n-1}}H_{A}(z)P_{0,\lambda}(z)^{\frac{n-1}{2}},\]
Here we are using that \(A\) is a rotation, together with its very definition, and we set \(K_{A}=K\circ A,H_{A}=H\circ A\). Finally, changing variables twice and using the definitions in Section 2:
\[\Gamma(x_{0},\lambda)\] \[= \Lambda_{n}^{\frac{n}{2}}\int_{\mathbb{R}^{n}_{+}}\frac{\tilde{K}_{A}(\lambda y)}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}-\Lambda_{n}^{\frac{n-1}{2}}\beta_{n}\mathfrak{D}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A}(\lambda\bar{y})}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}.\]
Proof of Proposition 3.4.: We start by estimating the boundary term, where some cancellations occur due to symmetry. We expand \(\tilde{H}_{A}(\lambda y)\) in \(\lambda\) up to order \(n-2\):
\[\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A}(\lambda\bar{y})}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}\] \[= \tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\frac{d\bar{y}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}+\sum_{1\leq|\alpha|\leq n-2}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\frac{\bar{y}^{\alpha}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}\] \[+ \underbrace{\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A}(\lambda\bar{y})-\sum_{|\alpha|\leq n-2}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{\alpha}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}}_{=:I}\] \[= H(\xi)\int_{\partial\mathbb{R}^{n}_{+}}\frac{d\bar{y}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}+\lambda^{2}\frac{1}{4(n-1)}\Delta\tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\frac{|\bar{y}|^{2}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}+\ldots\] \[+ \lambda^{2\lfloor\frac{n-2}{2}\rfloor}\frac{(n-3)!!}{\left(2\lfloor\frac{n-2}{2}\rfloor\right)!\left(2\lfloor\frac{n-2}{2}\rfloor\right)!!\left(n-3+2\lfloor\frac{n-2}{2}\rfloor\right)!!}\Delta^{\lfloor\frac{n-2}{2}\rfloor}\tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\frac{|\bar{y}|^{2\lfloor\frac{n-2}{2}\rfloor}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}+I\] \[= \frac{H(\xi)}{\left(\mathfrak{D}^{2}-1\right)^{\frac{n-1}{2}}}\int_{\partial\mathbb{R}^{n}_{+}}\frac{d\bar{y}}{\left(\left|\bar{y}\right|^{2}+1\right)^{n-1}}\] \[+ \lambda^{2}\frac{1}{\left(\mathfrak{D}^{2}-1\right)^{\frac{n-3}{2}}}\frac{1}{4(n-1)}\Delta\tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\frac{|\bar{y}|^{2}}{\left(\left|\bar{y}\right|^{2}+1\right)^{n-1}}d\bar{y}+\ldots\] \[+ \lambda^{2\lfloor\frac{n-2}{2}\rfloor}\frac{1}{\left(\mathfrak{D}^{2}-1\right)^{\frac{n-2\lfloor\frac{n-2}{2}\rfloor-1}{2}}}\frac{(n-3)!!}{\left(2\lfloor\frac{n-2}{2}\rfloor\right)!\left(2\lfloor\frac{n-2}{2}\rfloor\right)!!\left(n-3+2\lfloor\frac{n-2}{2}\rfloor\right)!!}\] \[\times \Delta^{\lfloor\frac{n-2}{2}\rfloor}\tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\frac{|\bar{y}|^{2\lfloor\frac{n-2}{2}\rfloor}}{\left(\left|\bar{y}\right|^{2}+1\right)^{n-1}}d\bar{y}\] \[+ I,\]
where we used the formula
\[\Delta^{j}|y|^{2j}=\frac{(2j)!!(n-3+2j)!!}{(n-3)!!} \tag{6.1}\]
and the vanishing, due to symmetry, of integrals of homogeneous polynomials of odd degree or of degree \(2j\) which are \(j\)-harmonic.
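As a quick consistency check of (6.1) (which is stated on \(\partial\mathbb{R}^{n}_{+}\simeq\mathbb{R}^{n-1}\)), take \(j=1\): then \(\Delta|\bar{y}|^{2}=2(n-1)\), in agreement with \(\frac{2!!\,(n-1)!!}{(n-3)!!}=2(n-1)\).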
Moreover, in view of the conformal properties of the Laplacian, one has
\[\Delta^{j}\tilde{H}_{A}(0)=(1+\xi_{n})^{2j}\Delta^{j}H(\xi), \tag{6.2}\]
hence the \(j^{\text{th}}\) term in the expansion equals
\[\lambda^{2j}\frac{1}{(\mathfrak{D}^{2}-1)^{\frac{n-2j-1}{2}}}\frac{(n-3)!!}{(2j)!( 2j)!!(n+2j-3)!!}(1+\xi_{n})^{2j}\Delta^{j}H(\xi)\int_{\partial\mathbb{R}^{n}_{+ }}\frac{|\bar{y}|^{2j}}{\left(\left|\bar{y}\right|^{2}+1\right)^{n-1}}d\bar{y}.\]
In the \(j^{\text{th}}\) order expansion, the remainder is actually \(o\left(\lambda^{j}\right)\) because we get
\[\int_{|\bar{y}|\leq\frac{1}{\lambda}}\frac{\tilde{H}_{A}(\lambda \bar{y})-\sum_{|\alpha|\leq j}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{ \bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{\alpha}}{\left(\left|\bar{y}\right|^ {2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}+\int_{|\bar{y}|>\frac{1}{\lambda} }\frac{\tilde{H}_{A}(\lambda\bar{y})-\sum_{|\alpha|\leq j}\frac{\lambda^{| \alpha|}}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{ \alpha}}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}d\bar{y}\] \[= \int_{|\bar{y}|\leq\frac{1}{\lambda}}\frac{O\left(\left|\lambda \bar{y}\right|^{j+1}\right)}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}- 1\right)^{n-1}}d\bar{y}+\int_{|\bar{y}|>\frac{1}{\lambda}}\frac{O\left(\left| \lambda\bar{y}\right|^{j}\right)}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^ {2}-1\right)^{n-1}}d\bar{y}\] \[= O\left(\lambda^{j+1}\log\frac{1}{\lambda}\right)+O\left(\lambda ^{n-1}\right).\]
In order to deal with higher order terms we need a different argument, since the same expansion would produce non-convergent integrals.
We split the cases \(n\) even and \(n\) odd.
If \(n\) is even, the main order term in the numerator of \(I\) is of odd degree, hence its integral vanishes. Therefore,
\[I= \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A} (\bar{x})-\sum_{|\alpha|\leq n-2}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha} }\tilde{H}_{A}(0)\bar{x}^{\alpha}}{\left(\left|\bar{x}\right|^{2}+\lambda^{2} \left(\mathfrak{D}^{2}-1\right)\right)^{n-1}}d\bar{x}\] \[= \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A} (\bar{x})-\sum_{|\alpha|\leq n-2}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha} }\tilde{H}_{A}(0)\bar{x}^{\alpha}-\frac{1}{(n-1)!}\sum_{|\alpha|=n-1}\partial _{\bar{x}^{\alpha}}\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{\left( \left|\bar{x}\right|^{2}+\lambda^{2}\left(\mathfrak{D}^{2}-1\right)\right)^{n -1}}d\bar{x}\] \[= \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A} (\bar{x})-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha} }\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{\left|\bar{x}\right|^{2(n-1)}}d \bar{x}\] \[+ O\left(\sum_{j=0}^{\frac{n-2}{2}}\Delta^{j}\tilde{H}_{A}(0) \lambda^{n-1}\right)\] \[= \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A} (\bar{x})-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha} }\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{\left|\bar{x}\right|^{2(n-1)}}d \bar{x}\] \[+ \frac{\lambda^{n}}{n!}\sum_{|\alpha|=n}\partial_{\bar{x}^{\alpha} }\tilde{H}_{A}(0)\int_{\partial\mathbb{R}^{n}_{+}}\bar{y}^{\alpha}\left( \frac{1}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}-\frac{1 }{\left|\bar{y}\right|^{2(n-1)}}\right)\]
\[+\underbrace{\int_{\partial\mathbb{R}^{n}_{+}}\left(\tilde{H}_{A}( \lambda\bar{y})-\sum_{|\alpha|\leq n}\frac{\lambda^{|\alpha|}}{|\alpha|!} \partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{\alpha}\right)\left(\frac{1 }{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}-\frac{1}{ \left|\bar{y}\right|^{2(n-1)}}\right)d\bar{y}}_{=:I^{\prime}}\] \[+o\left(\sum_{j=0}^{\frac{n-2}{2}}\Delta^{j}\tilde{H}_{A}(0) \lambda^{2j}\right)\] \[= \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A }(\bar{x})-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha }}\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{\left|\bar{x}\right|^{2(n-1)}}d \bar{x}\] \[+ \lambda^{n}\left(\mathfrak{D}^{2}-1\right)^{\frac{1}{2}}\frac{( n-3)!!}{n!n!!(2n-3)!!}(\Delta)^{\frac{n}{2}}\tilde{H}_{A}(0)\int_{\partial \mathbb{R}^{n}_{+}}|\bar{y}|^{n}\left(\frac{1}{\left(\left|\bar{y}\right|^{2} +1\right)^{n-1}}-\frac{1}{\left|\bar{y}\right|^{2(n-1)}}\right)d\bar{y}\] \[+ I^{\prime}+o\left(\sum_{j=0}^{\frac{n-2}{2}}\Delta^{j}\tilde{H} _{A}(0)\lambda^{2j}\right),\]
where we used again (6.1); one easily verifies that, due to the behaviors at \(0\) and at infinity, all the integrals converge, hence everything is well defined.
After changing variables, the main terms are now
\[\lambda^{n-1}(1+\xi_{n})^{n-1}\int_{\mathbb{S}^{n-1}}\frac{H(z)-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\xi_{\alpha}}H(\xi)(z-\xi)^{\alpha}}{|z-\xi|^{2(n-1)}}dz\] \[+ \lambda^{n}\left(\mathfrak{D}^{2}-1\right)^{\frac{1}{2}}\frac{(n-3)!!}{n!\,n!!\,(2n-3)!!}(1+\xi_{n})^{n}(\Delta)^{\frac{n}{2}}H(\xi)\int_{\partial\mathbb{R}^{n}_{+}}|\bar{y}|^{n}\left(\frac{1}{\left(\left|\bar{y}\right|^{2}+1\right)^{n-1}}-\frac{1}{\left|\bar{y}\right|^{2(n-1)}}\right)d\bar{y}\] \[+ I^{\prime}+o\left(\sum_{j=0}^{\frac{n-2}{2}}\Delta^{j}H(\xi)\lambda^{2j}\right),\]
where we used the fact that \(\frac{1}{\left|\bar{x}\right|^{2}}=\frac{\left|z+\xi\right|^{2}}{\left|z-\xi\right|^{2}}\) and again (6.2). The small \(o\) term collects the new quantities that arise when the terms of order \(\lambda^{n-1}\) are rewritten in the new variables.
The smallness of the remainder can be shown similarly as before, here and in the following.
Due to the asymptotic behavior of both factors, \(I^{\prime}\) can be dealt with similarly as \(I\) and one can iterate the argument. In particular, using the series expansion
\[\frac{1}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}=\sum _{j=0}^{\infty}(-1)^{j}\frac{(n+j-2)!}{j!(n-2)!}\frac{\left(\mathfrak{D}^{2}- 1\right)^{j}}{|\bar{y}|^{2(n+j-1)}},\]
we get, for any even \(m>n\), the following \(m^{\text{th}}\) order term:
\[\lambda^{m-1}(-1)^{\frac{m-n}{2}}(1+\xi_{n})^{m-1}\frac{\left(\frac{n+m}{2}- 2\right)!}{\left(\frac{m-n}{2}\right)!(n-2)!}\left(\mathfrak{D}^{2}-1\right)^ {\frac{m-n}{2}}\]
\[\times\int_{\mathbb{S}^{n-1}}\left(H(z)-\sum_{|\alpha|\leq m-1}\frac{1}{|\alpha|!}\partial_{\xi_{\alpha}}H(\xi)(z-\xi)^{\alpha}\right)\frac{|z+\xi|^{m-n}}{|z-\xi|^{n+m-2}}dz\] \[+\lambda^{m}\left(\mathfrak{D}^{2}-1\right)^{\frac{m-n+1}{2}}\frac{(n-3)!!}{m!\,m!!\,(n+m-3)!!}(1+\xi_{n})^{m}(\Delta)^{\frac{m}{2}}H(\xi)\] \[\times\int_{\partial\mathbb{R}^{n}_{+}}|\bar{y}|^{m}\left(\frac{1}{(|\bar{y}|^{2}+1)^{n-1}}-\sum_{j=0}^{\frac{m-n}{2}}(-1)^{j}\frac{(n+j-2)!}{j!(n-2)!}\frac{1}{|\bar{y}|^{2(n+j-1)}}\right)d\bar{y}\] \[+\int_{\partial\mathbb{R}^{n}_{+}}\left(\tilde{H}_{A}(\lambda\bar{y})-\sum_{|\alpha|\leq m}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{\alpha}\right)\] \[\times\left(\frac{1}{\left(\left|\bar{y}\right|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}-\sum_{j=0}^{\frac{m-n}{2}}(-1)^{j}\frac{(n+j-2)!}{j!(n-2)!}\frac{\left(\mathfrak{D}^{2}-1\right)^{j}}{|\bar{y}|^{2(n+j-1)}}\right)d\bar{y}\] \[+o\left(\sum_{j=0}^{\frac{m-2}{2}}\Delta^{j}H(\xi)\lambda^{2j}\right).\]
In particular, we point out that if \(n=2\) this is the main order term in the boundary estimates, and it equals
\[-\lambda\pi(1+\xi_{n})(-\Delta)^{\frac{1}{2}}H(\xi). \tag{6.3}\]
Let us now consider the case \(n\) odd. Here, the first term does not vanish and it gives rise to a logarithmic term. In fact,
\[I= \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A} (\bar{x})-\sum_{|\alpha|\leq n-2}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha }}\tilde{H}_{A}(0)\bar{x}^{\alpha}}{\left(\left|\bar{x}\right|^{2}+\lambda^{2} \left(\mathfrak{D}^{2}-1\right)\right)^{n-1}}d\bar{x}\] \[= \frac{\lambda^{n-1}}{(n-1)!}\sum_{|\alpha|=n-1}\partial_{\bar{x} ^{\alpha}}\tilde{H}_{A}(0)\int_{|\bar{x}|\leq 1}\frac{\bar{x}^{\alpha}}{\left( \left|\bar{x}\right|^{2}+\lambda^{2}\left(\mathfrak{D}^{2}-1\right)\right)^{n- 1}}d\bar{x}\] \[+\lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{ A}(\bar{x})-\sum_{|\alpha|\leq n-2}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha }}\tilde{H}_{A}(0)\bar{x}^{\alpha}-\frac{1}{(n-1)!}\sum_{|\alpha|=n-1} \partial_{\bar{x}^{\alpha}}\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{ \left(\left|\bar{x}\right|^{2}+\lambda^{2}\left(\mathfrak{D}^{2}-1\right) \right)^{n-1}}d\bar{x}\] \[= \lambda^{n-1}\left(\log\frac{1}{\lambda}+O(1)\right)\frac{(n-3)!! }{(n-1)!(n-1)!!(2n-4)!!}\Delta^{\frac{n-1}{2}}\tilde{H}_{A}(0)\omega_{n-2}\] \[+ \lambda^{n-1}\int_{\partial\mathbb{R}^{n}_{+}}\frac{\tilde{H}_{A} (\bar{x})-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha }}\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{\left(\left|\bar{x}\right|^{2}+ \lambda^{2}\left(\mathfrak{D}^{2}-1\right)\right)^{n-1}}d\bar{x}+O\left( \lambda^{n-1}\sum_{j=0}^{\frac{n-3}{2}}\Delta^{j}\tilde{H}_{A}(0)\right)\] \[= \lambda^{n-1}\log\frac{1}{\lambda}\frac{(n-3)!!}{(n-1)!(n-1)!!(2n- 4)!!}\Delta^{\frac{n-1}{2}}\tilde{H}_{A}(0)\omega_{n-2}(1+o(1))\]
\[+\lambda^{n-1}\int_{\partial\mathbb{R}_{+}^{n}}\frac{\tilde{H}_{A}(\bar{x})-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{x}^{\alpha}\chi_{|x|\leq 1}}{|\bar{x}|^{2(n-1)}}d\bar{x}\] \[-\int_{\partial\mathbb{R}_{+}^{n}}\left(\tilde{H}_{A}(\bar{x})-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{x}^{\alpha}\right)\left(\frac{\lambda^{n-1}}{|\bar{x}|^{2(n-1)}}-\frac{\lambda^{n-1}}{\left(|\bar{x}|^{2}+\lambda^{2}\left(\mathfrak{D}^{2}-1\right)\right)^{n-1}}\right)d\bar{x}\] \[+o\left(\sum_{j=0}^{\frac{n-3}{2}}\Delta^{j}\tilde{H}_{A}(0)\lambda^{2j}\right)\] \[= \lambda^{n-1}\log\frac{1}{\lambda}\frac{(n-3)!!}{(n-1)!(n-1)!!(2n-4)!!}(1+\xi_{n})^{n-1}\Delta^{\frac{n-1}{2}}H(\xi)\omega_{n-2}(1+o(1))\] \[+\lambda^{n-1}(1+\xi_{n})^{n-1}\int_{\mathbb{S}^{n-1}}\frac{H(z)-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{\xi_{\alpha}}H(\xi)(z-\xi)^{\alpha}}{|z-\xi|^{2(n-1)}}dz\] \[-\int_{\partial\mathbb{R}_{+}^{n}}\left(\tilde{H}_{A}(\lambda\bar{y})-\sum_{|\alpha|\leq n}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{\bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{\alpha}\right)\left(\frac{1}{|\bar{y}|^{2(n-1)}}-\frac{1}{\left(|\bar{y}|^{2}+\mathfrak{D}^{2}-1\right)^{n-1}}\right)d\bar{y}\] \[+o\left(\sum_{j=0}^{\frac{n-3}{2}}\Delta^{j}H(\xi)\lambda^{2j}\right);\]
iterating, for any odd \(m>n\) we get
\[\lambda^{m-1}\log\frac{1}{\lambda}(-1)^{\frac{m-n}{2}}\frac{(n-3 )!!}{(m-1)!(m-1)!!(n+m-4)!!}\frac{\left(\frac{n+m}{2}-2\right)!}{\left(\frac{ m-n}{2}\right)!(n-2)!}\left(\mathfrak{D}^{2}-1\right)^{\frac{m-n}{2}}\] \[\times (1+\xi_{n})^{m-1}\Delta^{\frac{m-1}{2}}H(\xi)\omega_{n-2}(1+o(1))\] \[+ \lambda^{m-1}(-1)^{\frac{m-n}{2}}(1+\xi_{n})^{m-1}\frac{\left( \frac{n+m}{2}-2\right)!}{\left(\frac{m-n}{2}\right)!(n-2)!}\left(\mathfrak{D}^ {2}-1\right)^{\frac{m-n}{2}}\] \[\times \int_{\mathbb{S}^{n-1}}\left(H(z)-\sum_{|\alpha|\leq m-1}\frac{1} {|\alpha|!}\partial_{\xi_{\alpha}}H(\xi)(z-\xi)^{\alpha}\right)\frac{|z+\xi|^ {m-n}}{|z-\xi|^{n+m-2}}dz\] \[+ \int_{\partial\mathbb{R}_{+}^{n}}\left(\tilde{H}_{A}(\lambda \bar{y})-\sum_{|\alpha|\leq m}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{ \bar{x}_{\alpha}}\tilde{H}_{A}(0)\bar{y}^{\alpha}\right)\] \[\times \left(\frac{1}{\left(|\bar{y}|^{2}+\mathfrak{D}^{2}-1\right)^{n-1} }-\sum_{j=0}^{\frac{m-n}{2}}(-1)^{j}\frac{(n+j-2)!}{j!(n-2)!}\frac{\left( \mathfrak{D}^{2}-1\right)^{j}}{|\bar{y}|^{2(n+j-1)}}\right)d\bar{y}\] \[+ o\left(\sum_{j=0}^{\frac{m-2}{2}}\Delta^{j}H(\xi)\lambda^{2j} \right).\]
The argument to estimate the interior terms is similar. We expand \(\tilde{K}_{A}(\lambda y)\) up to order \(n-1\), which is the highest power that can be integrated against \(P_{x_{0},\lambda}^{\frac{n}{2}}\).
\[\int_{\mathbb{R}_{+}^{n}}\frac{\tilde{K}_{A}(\lambda y)}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}\] \[= \tilde{K}_{A}(0)\int_{\mathbb{R}_{+}^{n}}\frac{d\bar{y}dy_{n}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}\] \[+\sum_{1\leq|\alpha|\leq n-1}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)\int_{\mathbb{R}_{+}^{n}}\frac{y^{\alpha}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}\] \[+\underbrace{\int_{\mathbb{R}_{+}^{n}}\frac{\tilde{K}_{A}(\lambda y)-\sum_{|\alpha|\leq n-1}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)y^{\alpha}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}}_{=:I^{\prime\prime}}\] \[= K(\xi)\int_{\mathbb{R}_{+}^{n}}\frac{d\bar{y}dy_{n}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}+\lambda\partial_{x_{n}}\tilde{K}_{A}(0)\int_{\mathbb{R}_{+}^{n}}\frac{y_{n}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}\] \[+\sum_{j=2}^{n-1}\lambda^{j}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}\frac{1}{(j-2i)!(2i)!}\frac{(n-3)!!}{(2i)!!(n+2i-3)!!}\partial_{x_{n}}^{j-2i}\Delta_{\bar{x}}^{i}\tilde{K}_{A}(0)\int_{\mathbb{R}_{+}^{n}}\frac{\left|\bar{y}\right|^{2i}y_{n}^{j-2i}}{\left(\left|\bar{y}\right|^{2}+(y_{n}+\mathfrak{D})^{2}-1\right)^{n}}d\bar{y}dy_{n}\] \[+ I^{\prime\prime}\]
where we decomposed the derivatives in \(\bar{x}\) and \(x_{n}\) as
\[\sum_{|\alpha|=j}\partial_{x_{\alpha}}=\sum_{i=0}^{j}\frac{j!}{(j-i)!i!}\sum_{ |\beta|=i}\partial_{x_{n}}^{j-i}\partial_{\bar{x}_{\beta}}\]
and used again cancellation by symmetry and (6.1). In the last step, we used (6.2) (in \(\bar{x}\)) and that
\[\partial_{x_{n}}^{j}=(-1)^{j}(1+\xi_{n})^{j}\partial_{\nu}.\]
In the case \(n=2\), since \(\int_{\mathbb{R}_{+}^{2}}\frac{y_{2}}{(\bar{y}^{2}+(y_{2}+\mathfrak{D})^{2}-1)^{2}}d\bar{y}dy_{2}=\frac{\pi}{2}\left(\mathfrak{D}-\sqrt{\mathfrak{D}^{2}-1}\right)\), putting this together with (6.3) we get the first order expansion
\[\Gamma(x_{0},\lambda)=\psi(\xi)-2\pi(1+\xi_{n})\lambda\left(\left(\mathfrak{D} -\sqrt{\mathfrak{D}^{2}-1}\right)\,\partial_{\nu}K(\xi)-2\mathfrak{D}(-\Delta )^{\frac{1}{2}}H(\xi)\right)+o(\lambda),\]
whereas when \(n\geq 3\) the first order expansion contains only the interior term:
\[\Gamma(x_{0},\lambda)=\psi(\xi)-a_{n,0,1}(1+\xi_{n})\lambda\partial_{\nu}K(\xi)+o(\lambda).\]
As for \(I^{\prime\prime}\), we get _local_ terms involving derivatives of \(\tilde{K}_{A}\) and _non-local_ terms similarly as before:
\[I^{\prime\prime}= \lambda^{n}\int_{\mathbb{R}_{+}^{n}}\frac{\tilde{K}_{A}(x)-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)x^{\alpha}}{\left(|\bar{x}|^{2}+(x_{n}+\lambda\mathfrak{D})^{2}-\lambda^{2}\right)^{n}}d\bar{x}dx_{n}\] \[= \frac{\lambda^{n}}{n!}\sum_{|\alpha|=n}\partial_{x_{\alpha}}\tilde{K}_{A}(0)\int_{|x|\leq 1}\frac{x^{\alpha}}{\left(|\bar{x}|^{2}+(x_{n}+\lambda\mathfrak{D})^{2}-\lambda^{2}\right)^{n}}d\bar{x}dx_{n}\] \[+ \lambda^{n}\int_{\mathbb{R}_{+}^{n}}\frac{\tilde{K}_{A}(x)-\sum_{|\alpha|\leq n-1}\frac{1}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)x^{\alpha}-\frac{1}{n!}\sum_{|\alpha|=n}\partial_{x_{\alpha}}\tilde{K}_{A}(0)x^{\alpha}\chi_{|x|\leq 1}}{\left(|\bar{x}|^{2}+(x_{n}+\lambda\mathfrak{D})^{2}-\lambda^{2}\right)^{n}}d\bar{x}dx_{n}\] \[= \lambda^{n}\left(\log\frac{1}{\lambda}+O(1)\right)\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\frac{(n-3)!!}{(n-2i)!(2i)!(2i)!!(n+2i-3)!!}\partial_{x_{n}}^{n-2i}\Delta_{\bar{x}}^{i}\tilde{K}_{A}(0)\omega_{n-2}\int_{0}^{\pi}\sin^{n-2i}tdt\] \[+ \lambda^{n}\int_{\mathbb{R}_{+}^{n}}\frac{\tilde{K}_{A}(x)-\sum_{|\alpha|\leq n}\frac{1}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)x^{\alpha}\chi_{|x|\leq 1}}{\left(|\bar{x}|^{2}+(x_{n}+\lambda\mathfrak{D})^{2}-\lambda^{2}\right)^{n}}d\bar{x}dx_{n}\] \[+ O\left(\lambda^{n}\sum_{j=0}^{n-1}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}\partial_{x_{n}}^{j-2i}\Delta_{\bar{x}}^{i}\tilde{K}_{A}(0)\right)\] \[= \lambda^{n}\log\frac{1}{\lambda}\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\frac{(n-3)!!}{(n-2i)!(2i)!(2i)!!(n+2i-3)!!}\partial_{x_{n}}^{n-2i}\Delta_{\bar{x}}^{i}\tilde{K}_{A}(0)\omega_{n-2}\int_{0}^{\pi}\sin^{n-2i}tdt(1+o(1))\] \[+ \lambda^{n}\int_{\mathbb{R}_{+}^{n}}\frac{\tilde{K}_{A}(x)-\sum_{|\alpha|\leq n}\frac{1}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)x^{\alpha}\chi_{|x|\leq 1}}{|x|^{2n}}d\bar{x}dx_{n}\] \[+ \underbrace{\int_{\mathbb{R}_{+}^{n}}\left(\tilde{K}_{A}(\lambda y)-\sum_{|\alpha|\leq n}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{x_{\alpha}}\tilde{K}_{A}(0)y^{\alpha}\right)\left(\frac{1}{\left(|\bar{y}|^{2}+\left(y_{n}+\mathfrak{D}\right)^{2}-1\right)^{n}}-\frac{1}{|y|^{2n}}\right)d\bar{y}dy_{n}}_{=:I^{\prime\prime\prime}}\] \[+ o\left(\sum_{j=0}^{n-1}\lambda^{j}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}\partial_{x_{n}}^{j-2i}\Delta_{\bar{x}}^{i}\tilde{K}_{A}(0)\right)\]
\[= \lambda^{n}\log\frac{1}{\lambda}\sum_{i=0}^{\lfloor\frac{n}{2}\rfloor}\frac{(n-3)!!}{(n-2i)!(2i)!(2i)!!(n+2i-3)!!}(-1)^{n}(1+\xi_{n})^{n}\partial_{\nu}^{n-2i}\Delta_{\tau}^{i}K(\xi)\omega_{n-2}\] \[\times \int_{0}^{\pi}\sin^{n-2i}tdt(1+o(1))\] \[+ \lambda^{n}(1+\xi_{n})^{n}\int_{\mathbb{B}^{n}}\frac{K(z)-\sum_{|\alpha|\leq n}\frac{1}{|\alpha|!}\partial_{\xi_{\alpha}}K(\xi)(z-\xi)^{\alpha}}{|z-\xi|^{2n}}dz\] \[+ I^{\prime\prime\prime}+o\left(\sum_{j=0}^{n-1}\lambda^{j}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}\partial_{\nu}^{j-2i}\Delta_{\tau}^{i}K(\xi)\right).\]
In order to iterate and find the next order terms, we again need a series expansion: we get
\[\frac{1}{\left(|\bar{y}|^{2}+\left(y_{n}+\mathfrak{D}\right)^{2}-1\right)^{n} }=\sum_{j=0}^{\infty}\frac{1}{|y|^{2(n+j)}}\sum_{i=0}^{\lfloor\frac{j}{2} \rfloor}(-1)^{j-i}\frac{(n+j-i-1)!}{(n-1)!i!(j-2i)!}\left(\mathfrak{D}^{2}-1 \right)^{i}\left(2\mathfrak{D}\right)^{j-2i}|y|^{2i}y_{n}^{j-2i},\]
which in turn comes from
\[\frac{1}{\left(at^{2}+bt+1\right)^{n}}=\sum_{j=0}^{\infty}t^{j}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}(-1)^{j-i}\frac{(n+j-i-1)!}{(n-1)!i!(j-2i)!}a^{i}b^{j-2i}.\]
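For completeness, the last identity is just the binomial series: starting from
\[\frac{1}{(1+x)^{n}}=\sum_{k=0}^{\infty}(-1)^{k}\binom{n+k-1}{k}x^{k},\qquad x=bt+at^{2},\]
one expands \(x^{k}=\sum_{i=0}^{k}\binom{k}{i}a^{i}b^{k-i}t^{k+i}\) and collects the coefficient of \(t^{j}\) (that is, \(k=j-i\) with \(0\leq i\leq\lfloor j/2\rfloor\)), using
\[\binom{n+j-i-1}{j-i}\binom{j-i}{i}=\frac{(n+j-i-1)!}{(n-1)!\,i!\,(j-2i)!}.\]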
Therefore, for \(m>n\) we get:
\[\lambda^{m}\log\frac{1}{\lambda}\sum_{i=0}^{\lfloor\frac{m-n}{2} \rfloor}(-1)^{m-n-i}\frac{(m-i-1)!}{(n-1)!i!(m-n-2i)!}\left(\mathfrak{D}^{2}- 1\right)^{i}\left(2\mathfrak{D}\right)^{m-n-2i}\] \[\times \sum_{j=0}^{\lfloor\frac{m}{2}\rfloor}\frac{(n-3)!!}{(m-2j)!(2j)! (2j)!!(n+2j-3)!!}(-1)^{m}(1+\xi_{n})^{m}\partial_{\nu}^{m-2j}\Delta_{\tau}^{j }K(\xi)\omega_{n-2}\] \[\times \int_{0}^{\pi}\sin^{2m-n-2i-2j}tdt(1+o(1))\] \[+ \lambda^{m}(1+\xi_{n})^{m}\sum_{i=0}^{\lfloor\frac{m-n}{2} \rfloor}(-1)^{m-n-i}\frac{(m-i-1)!}{(n-1)!i!(m-n-2i)!}\left(\mathfrak{D}^{2}- 1\right)^{i}\left(2\mathfrak{D}\right)^{m-n-2i}\] \[\times \int_{\mathbb{B}^{n}}\left(K(z)-\sum_{|\alpha|\leq m}\frac{1}{| \alpha|!}\partial_{\xi_{\alpha}}K(\xi)(z-\xi)^{\alpha}\right)\frac{|z+\xi|^{2 i}\left(1-|z|^{2}\right)^{m-n-2i}}{|z-\xi|^{2m}}dz\] \[+ \int_{\mathbb{R}_{+}^{n}}\left(\tilde{K}_{A}(\lambda y)-\sum_{| \alpha|\leq m}\frac{\lambda^{|\alpha|}}{|\alpha|!}\partial_{x_{\alpha}}\tilde {K}_{A}(0)y^{\alpha}\right)\left(\frac{1}{\left(|\bar{y}|^{2}+\left(y_{n}+ \mathfrak{D}\right)^{2}-1\right)^{n}}\right.\]
\[-\sum_{j=0}^{m-n}\frac{1}{|y|^{2(n+j)}}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}(-1)^{j-i}\frac{(n+j-i-1)!}{(n-1)!i!(j-2i)!}\left(\mathfrak{D}^{2}-1\right)^{i}(2\mathfrak{D})^{j-2i}\left|y\right|^{2i}y_{n}^{j-2i}\right)d\bar{y}dy_{n}\] \[+o\left(\sum_{j=0}^{m-1}\lambda^{j}\sum_{i=0}^{\lfloor\frac{j}{2}\rfloor}\partial_{\nu}^{j-2i}\Delta_{\tau}^{i}K(\xi)\right).\]
The proof is now complete, since all the quantities are the same as in Definition 2.1.
|
2304.06638 | How Useful are Educational Questions Generated by Large Language Models? | Controllable text generation (CTG) by large language models has a huge
potential to transform education for teachers and students alike. Specifically,
high quality and diverse question generation can dramatically reduce the load
on teachers and improve the quality of their educational content. Recent work
in this domain has made progress with generation, but fails to show that real
teachers judge the generated questions as sufficiently useful for the classroom
setting; or if instead the questions have errors and/or pedagogically unhelpful
content. We conduct a human evaluation with teachers to assess the quality and
usefulness of outputs from combining CTG and question taxonomies (Bloom's and a
difficulty taxonomy). The results demonstrate that the questions generated are
high quality and sufficiently useful, showing their promise for widespread use
in the classroom setting. | Sabina Elkins, Ekaterina Kochmar, Jackie C. K. Cheung, Iulian Serban | 2023-04-13T16:05:25Z | http://arxiv.org/abs/2304.06638v1 | # How Useful are Educational Questions Generated by Large Language Models?
###### Abstract
Controllable text generation (CTG) by large language models has a huge potential to transform education for teachers and students alike. Specifically, high quality and diverse question generation can dramatically reduce the load on teachers and improve the quality of their educational content. Recent work in this domain has made progress with generation, but fails to show that real teachers judge the generated questions as sufficiently useful for the classroom setting; or if instead the questions have errors and/or pedagogically unhelpful content. We conduct a human evaluation with teachers to assess the quality and usefulness of outputs from combining CTG and question taxonomies (Bloom's and a difficulty taxonomy). The results demonstrate that the questions generated are high quality and sufficiently useful, showing their promise for widespread use in the classroom setting.
Keywords:Controllable Text Generation Personalized Learning Prompting Question Generation
## 1 Introduction
The rapidly growing popularity of large language models (LLMs) has taken the AI community and general public by storm. This attention can lead people to believe LLMs are the right solution for every problem. In reality, the question of the usefulness of LLMs and how to adapt them to real-life tasks is an open one.
Recent advancements in LLMs have raised questions about their impact on education, including promising use cases [1, 8, 9, 10]. A robust question generation (QG) system has the potential to empower teachers by decreasing their cognitive load while creating teaching material. It could allow them to easily generate personalized content to fill the needs of different students by adapting questions to Bloom's taxonomy levels (i.e., learning goals) or difficulty levels. Already, interested teachers report huge efficiency increases using LLMs to generate questions [1, 8]. These improvements hinge on the assumption that the candidates are high quality and are actually judged to be useful by teachers generally. To the best of our knowledge, there has yet to be a study assessing how a larger group of teachers perceive a set of candidates from LLMs.1 We investigate if LLMs can
generate different types of questions from a given context that teachers think are appropriate for use in the classroom. Our experiment shows this is the case, with high quality and usefulness ratings across two domains and 9 question types.
## 2 Background Research
Auto-regressive LLMs are deep learning models trained on huge corpora of data. Their training goal is to predict the next word in a sequence, given all of the previous words [11]. An example of an auto-regressive LLM is the GPT family of models, such as GPT-3. Recently, GPT-3 has been fine-tuned with reinforcement learning to create a powerful LLM called InstructGPT, which outperforms its predecessors in the GPT family [6]. Using human-annotated data, the creators of InstructGPT use supervised learning to train a reward model, which acts as a reward signal to learn to choose preferred outputs from GPT-3.
An emerging paradigm for text generation is to prompt (or 'ask') LLMs for a desired output [5]. This works by feeding an input prompt or 'query' (with a series of examples for a one- or few-shot setting) to an LLM. This paradigm has inspired a new research direction called _prompt engineering_. One of the most common approaches to prompt engineering involves prepending a string to the context given to an LLM for generation [4]. For controllable text generation (CTG), such a prefix must contain a control element, such as a keyword that will guide the generation [5].
Questions are one of the most basic tools teachers use to educate. Because questioning is such a broad method, many organizational taxonomies exist that divide questions into groups in different ways. One popular example is Bloom's taxonomy [3], which divides educational material into categories based on students' learning goals. Another example is a difficulty-level taxonomy, which usually divides questions into 3 categories of easy, medium, and hard [7]. By combining CTG and these question taxonomies, we open doors for question generation by prompting LLMs to meet specifications of the educational domain.
## 3 Methodology
### Controllable Generation Parameters
Parameter settings used in this paper were guided by preliminary experimentation. Firstly, 'long' context passages (6-9 sentences) empirically appeared to improve generation. Secondly, the few-shot setting outperformed the zero-shot setting, with five-shot (i.e., with 5 context/related question type pairs included in the prompt) performing best. Few-shot generation is where prompts consist of an instruction (e.g., "Generate easy questions."), examples (e.g., set of n context/easy question pairs), and the desired task (e.g., context to generate from). Thirdly, there was not a large enough sample size to definitively say which question taxonomies are superior to use as control elements for CTG. Two representative taxonomies were chosen for the experiments in Section 3.2: Bloom's taxonomy [3] (which includes _remembering_, _understanding_, _applying_, _analyzing_,
evaluating_, and _creating_ question types) and a difficulty-level taxonomy (which includes _beginner_, _intermediate_, and _advanced_ question types) [7]. These taxonomies approach the organization of questions in different ways, by the learning goal and by complexity respectively. This creates an interesting comparison among the taxonomic categories to help explore the limits of the CTG approach.
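To make the five-shot setting concrete, the sketch below shows how such a prompt can be assembled (hypothetical code: the function name, the prompt wording, and the `query_model` call are illustrative, not the exact scripts or API used in this work).

```python
def build_few_shot_prompt(level, examples, context):
    """Assemble a few-shot prompt for one taxonomic level.

    `examples` is a list of (context, question) pairs exemplifying the
    target level; `context` is the passage to generate a question from.
    """
    parts = [f"Generate {level} questions."]              # instruction
    for ex_context, ex_question in examples:              # the 5 shots
        parts.append(f"Context: {ex_context}\nQuestion: {ex_question}")
    parts.append(f"Context: {context}\nQuestion:")        # the desired task
    return "\n\n".join(parts)

# One candidate per taxonomic category for each passage, e.g.:
# for level in ["remembering", "understanding", "applying", "analyzing",
#               "evaluating", "creating", "beginner", "intermediate", "advanced"]:
#     candidate = query_model(build_few_shot_prompt(level, shots[level], passage))
```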
### Teacher Assessment Experiment
**Question Generation.** The human assessment experiment was conducted with candidates generated in the machine learning (ML) and biology (BIO) domains. There are 68 'long' context passages (6-9 sentences) pulled from Wikipedia (31 are ML, 37 are BIO). Using hand-crafted examples for 5-shot prompting, InstructGPT was prompted to generate 612 candidate questions.6 Each passage has 9 candidates, one with each taxonomic category as the control element.
Footnote 6: The passages, few-shot examples, prompt format, taxonomic level definitions, annotator demographics and raw results are available: [https://tinyurl.com/y2hy8m4p](https://tinyurl.com/y2hy8m4p).
**Annotators.** There are two cohorts of annotators, BIO and ML. The 11 BIO annotators have biology teaching experience at least at a high school level, and were recruited on the freelance platform Upwork. The 8 ML annotators have CS, ML, AI, math or statistics teaching experience at a university level, and were recruited through word of mouth at McGill and Mila. All of the annotators are proficient English speakers and are from diverse demographics. Their teaching experience ranges from 1-on-1 tutoring to hosting lectures at a university. The experiments are identical for both cohorts. As such, the experiment is explained in a domain-agnostic manner. The results will be presented separately, as the goal of this work is not to show identical trends between the two domains, but that CTG is appropriate for education in general.
**Metrics.** Each annotator was trained to assess the generated candidates on two of four quality metrics, as well as a _usefulness_ metric. This division was done to reduce the cognitive load on an individual annotator. The quality metrics are: _relevance_ (binary variable representing if the question is related to the context), _adherence_ (binary variable representing if the question is an instance of the desired question taxonomy level); and _grammar_ (binary variable representing if the question is grammatically correct), _answerability_ (binary variable representing if there is a text span from the context that is an answer/leads to one). The _relevance_, _grammar_, _answerability_, and _adherence_ metrics are binary as they are objective measures, often seen in QG literature to assess typical failures of LLMs such as hallucinations or malformed outputs [5]. The subjective metric assessed, the _usefulness_ metric, is rated on a scale because it is more nuanced. This is defined by a teacher's answer to the question: "Assume you wanted to teach about context X. Do you think candidate Y would be useful in a lesson, homework, quiz, etc.?" This ordinal metric has the following four categories: _not useful_, _useful with major edits_ (taking more than a minute), _useful with minor edits_ (taking less than a minute), and _useful with no edits_. If a teacher rates a
question as _not useful_ or _useful with major edits_, we also ask them to select from a list of reasons why (or write their own).
**Reducing Bias.** We first conducted a pilot study to ensure the metrics and annotator training were unambiguous. We randomized the order of candidates presented and asked annotators to rate one metric at a time to avoid conflation. We included unmarked questions in order to ascertain if the annotators were paying attention. These questions were obviously wrong (e.g., a random question from a different context, a candidate with injected grammatical errors). Any annotators who did not agree on a minimum of 80% of these 'distractor' questions were excluded. The annotators' performance on these is discussed in Section 4.2.
## 4 Results and Analysis
### Generation Overlap
We observed overlaps within the generated candidates for this experiment. Specifically, despite having different control elements, sometimes the LLM generates the same question for a given context passage twice. As a result, out of 612 candidates, there are 540 unique ones (88.24% are unique). We believe this overlap is low enough so the generated candidates are still sufficiently diverse for a teacher's needs. It is important to keep in mind that this overlap is not reflected in the following results, as teachers were asked to rate every candidate independently. Future work by the authors will remove this independence assumption.
### Annotator Agreement
All of the participants annotated candidates from 6 context passages. In order to assess their agreement on the task, they annotated a 7\({}^{th}\) passage that was the same for all annotators in a given domain cohort. The results for each metric are reported in Table 1. In both domains, _relevance_, _grammar_, and _answerability_ have between 85% and 100% observed agreement. The _adherence_ metric has lower agreement, between 60% and 80%. Since this metric is more complex than the others and captures the annotators' interpretations of the question taxonomies, we consider this moderate agreement to be acceptable and expected.
Unlike the binary metrics, all candidates were rated on _usefulness_ by two annotators. As before, only one context passage, the agreement on which is presented in Table 1, was seen by all annotators. Section 4.4 discusses the aggregation of the _usefulness_ scores on the rest of the data. In both cohorts, the observed agreement on _usefulness_ is around 63%. This metric is defined according to a teacher's opinion, and as such is subjective. Thus, the lower agreement between annotators is to be expected. Using Cohen's \(\kappa\) to measure the agreement yields a \(\kappa=0.537\) for the ML cohort and a \(\kappa=0.611\) for the BIO cohort, which implies moderate and substantial agreement respectively [2]. Additionally, the agreement of the annotators on the included 'distractor' candidates for this metric (see Section 3.2) is \(\kappa=1\) (i.e., perfect agreement), which shows that the annotators agree on the fundamental task but might find different questions useful for their particular approach to teaching.
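The agreement statistics reported here are standard; the following is a minimal sketch of observed agreement and Cohen's \(\kappa\) for two annotators (illustrative code with toy labels, not the authors' analysis scripts).

```python
from collections import Counter

def observed_agreement(a, b):
    """Fraction of items on which the two annotators chose the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance, (p_o - p_e) / (1 - p_e)."""
    p_o = observed_agreement(a, b)
    counts_a, counts_b, n = Counter(a), Counter(b), len(a)
    # Chance agreement from the annotators' marginal label frequencies.
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

ann_1 = [4, 4, 3, 2, 4, 3]  # toy usefulness ratings, annotator 1
ann_2 = [4, 3, 3, 2, 4, 4]  # toy usefulness ratings, annotator 2
print(observed_agreement(ann_1, ann_2), round(cohens_kappa(ann_1, ann_2), 3))
```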
### Quality Metrics
Three quality metrics, _relevance_, _grammar_, and _answerability_, are consistently high for all generated candidates (see Table 1). The fourth quality metric, _adherence_, varies across the taxonomic categories as seen in Figure 1(a). This variation is similar within the two domains. As might be expected, the more objective categories are easier for the LLM to generate. For instance, the 'remembering' category alone has an _adherence_ of 83.3% for the ML cohort and 91.7% for the BIO cohort. This category is intended to ask for a student to recall a fact or definition. This might be simple for the LLM to replicate by identifying a relevant text span, and reflects the traditional QG task. By contrast, asking an LLM to generate a 'creating' question is a more open-ended problem, where a text span from the context may not be the answer. Accordingly, the model struggles on this less constrained task, and has an _adherence_ of only 40.0% for the ML cohort and 36.1% for the BIO cohort.
### _Usefulness_ Metric
The _usefulness_ metric's ordinal categories (see Section 3.2) are mapped from 1 (_not useful_) to 4 (_useful with no edits_). The average usefulness for all candidates is 3.509 for the ML cohort and 3.593 for the BIO cohort. Note that an individual candidate's usefulness is already the average score between two annotators' ratings, and the whole average usefulness is the average across all candidates. This is a highly promising result showing that on average teachers find that these generated candidates will be useful in a classroom setting.
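A minimal sketch of this two-stage aggregation, using the category-to-score mapping above (toy data; illustrative code only):

```python
SCORE = {"not useful": 1, "useful with major edits": 2,
         "useful with minor edits": 3, "useful with no edits": 4}

def average_usefulness(ratings):
    """`ratings` maps a candidate id to its pair of annotator labels."""
    per_candidate = [(SCORE[r1] + SCORE[r2]) / 2 for r1, r2 in ratings.values()]
    return sum(per_candidate) / len(per_candidate)

toy = {"q1": ("useful with no edits", "useful with minor edits"),
       "q2": ("useful with no edits", "useful with no edits")}
print(average_usefulness(toy))  # -> 3.75
```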
\begin{table}
\begin{tabular}{|c||c|c||c|c|} \hline
**Metric** & \(\mu\pm\sigma\) (ML) & Agreement \% (ML) & \(\mu\pm\sigma\) (BIO) & Agreement \% (BIO) \\ \hline Relevance & 0.967\(\pm\)0.180 & 100 & 0.972\(\pm\)0.165 & 100 \\ \hline Grammar & 0.957\(\pm\)0.204 & 92.6 & 0.970\(\pm\)0.170 & 100 \\ \hline Adherence & 0.674\(\pm\)0.470 & 62.2 & 0.691\(\pm\)0.463 & 79.9 \\ \hline Answerability & 0.914\(\pm\)0.282 & 94.4 & 0.930\(\pm\)0.256 & 86.7 \\ \hline Usefulness & 3.509\(\pm\)0.670 & 62.7 & 3.593\(\pm\)0.682 & 62.8 \\ \hline \end{tabular}
\end{table}
Table 1: The quality metrics’ mean (\(\mu\)), standard deviation (\(\sigma\)), and observed agreement (i.e., % of the time the annotators chose the same label).
Figure 1: Visualizations of the _usefulness_ and _adherence_ metrics.
There is no significant difference between the usefulness scores of any of the question taxonomy categories, though some variation is present (see Figure 1(b)). On average, each of the taxonomic categories is rated between _useful with minor edits_ and _useful with no edits_ (i.e., [3, 4]). Since _adherence_ differs across categories, it is also worth noting that a question which does not adhere to its intended taxonomic category can still be useful, just in a different way than intended. 56.8% of the time the reason cited for 'not useful' candidates is related to their grammar or phrasing. This can possibly be fixed by a filter that removes malformed questions, but doing so would lower the available diversity of questions.
## 5 Conclusion
This work takes steps to demonstrate the realistic usefulness of applying CTG to generate educational questions. The results show that CTG is a highly promising method that teachers find useful in a classroom setting. We do not include baselines because the goal is not to show these questions are better than others, only to show they are of high enough quality. Limitations include the single LLM considered, the independence assumption seen in Section 4.1, and the lack of comparison between human and machine-authored questions. The authors plan to explore these avenues in future work. Applying generated candidates to form real-world lessons and evaluate their impact will demonstrate their ultimate value. CTG could pave the way for a new approach to education and transform the experiences of millions of teachers and students.
## Acknowledgements
We'd like to thank Mitacs for their grant for this project, and CIFAR for their continued support. We are grateful to both the annotators for their time and the anonymous reviewers for their valuable feedback.
|
2303.15185 | A probabilistic view of wave-particle duality for single photons | One of the most puzzling consequences of interpreting quantum mechanics in
terms of concepts borrowed from classical physics, is the so-called
wave-particle duality. Usually, wave-particle duality is illustrated in terms
of complementarity between path distinguishability and fringe visibility in
interference experiments. In this work, we instead propose a new type of
complementarity, that between the continuous nature of waves and the discrete
character of particles. Using the probabilistic methods of quantum field
theory, we show that the simultaneous measurement of the wave amplitude and the
number of photons in the same beam of light is, under certain circumstances,
prohibited by the laws of quantum mechanics. Our results suggest that the
concept of ``interferometric duality'' could be eventually replaced by the more
general one of ``continuous-discrete duality''. | Andrea Aiello | 2023-03-27T13:21:25Z | http://arxiv.org/abs/2303.15185v4 | # A probabilistic view of wave-particle duality for single photons
###### Abstract
We describe a simple experiment exemplifying wave-particle duality in a light beam prepared in a single-photon state. By approaching the problem from the perspective of probability theory, we demonstrate that standard correlation functions fail to reveal an existing nonlinear dependence between certain wave and particle observables that can be simultaneously measured in the experiment. We circumvent this problem by using mutual information to quantify such nonlinear dependence. We find that the latter may be not at all negligible, depending on detectors' settings. This study sheds new light on wave-particle duality.
## 1 Introduction
In classical mechanics a physical system is characterized by a set of _degrees of freedom_, which defines its state or configuration at any fixed time [1]. Such a set may be either countable (finite or denumerable), or uncountable. The branch of classical mechanics that studies _discrete_ systems with a countable set of degrees of freedom, is traditionally called _particle mechanics_. Conversely, _continuous_ systems described by an uncountable set of degrees of freedom, are the subjects of _continuum mechanics_[2]. In particle mechanics, a system is described by a set of functions of _time_\(t\), the so-called generalized coordinates. In continuum mechanics a system is characterized by a set of functions of _spacetime_ points \((x,y,z,t)\), the components of scalar, vector or tensor fields. Thus, in classical mechanics a physical system is described either as discrete or continuous (or part discrete and part continuous), and the two descriptions are mutually exclusive1. Conversely, in quantum mechanics there is no such clear separation between continuous and discrete aspects of a physical system. This gives rise to the famous wave-particle duality in quantum mechanics (see, e.g., sec. 1.5 of Ref. [4], or [5, 6] and references therein).
Footnote 1: This does not mean, for example, that we cannot use coordinates to portray some characteristics of a field. Consider, for example, an electromagnetic wave-packet with energy density \(U(\mathbf{r},t)\). Such wave-packet is completely described by the electric and magnetic fields. However, we can introduce the “energy center of gravity of the field” \(\mathbf{R}=\mathbf{R}(t)\), defined by \(\mathbf{R}(t)=\int\mathbf{r}\,U(\mathbf{r},t)\,\mathrm{d}\mathbf{r}/\int U( \mathbf{r},t)\,\mathrm{d}\mathbf{r}\)[3], to picture the mean position and velocity \(\mathbf{V}=\mathrm{d}\mathbf{R}/\mathrm{d}t\) of the wave-packet. However, the coordinates \(\mathbf{R}(t)\) are _emergent_ quantities that are not necessary for the complete description of the system.
In this paper we study wave-particle duality for single-photon states of the electromagnetic field [7, 8], using a probabilistic approach. To be more specific, let us consider the following experiment. A collimated beam of light prepared in a single-photon Fock state [9] impinges upon a detection screen, as shown in Fig. 1. On this screen there are two spatially separated detectors, say \(D_{1}\) and \(D_{2}\). Each detector can be set either to measure the _W_ave amplitude \(\mathcal{W}\) of the electric field of the light falling on it, or to _C_ount the number \(\mathcal{C}\) of photons. Depending on how we set up these two detectors, we can measure three different pairs of observables, that is \((\mathcal{W}_{1},\mathcal{W}_{2}),\ (\mathcal{C}_{1},\mathcal{C}_{2}),\) and \((\mathcal{W}_{1},\mathcal{C}_{2})\). The last pair is particularly interesting because it represents the simultaneous measurement of a wave \((\mathcal{W}_{1})\) and a particle \((\mathcal{C}_{2})\) observable of the system. By calculating the joint probability distribution for the random variables \(W_{1}\) and \(C_{2}\) representing the results of measurements of \(\mathcal{W}_{1}\) and \(\mathcal{C}_{2}\), respectively, we find that these two variables are linearly uncorrelated but not independent [10]. This means that there is a "hidden" nonlinear dependence between \(W_{1}\) and \(C_{2}\), which cannot be detected by measuring ordinary correlation coefficients. To reveal such a dependence, we use the mutual information [11], a statistical measure that finds numerous applications in contemporary physics (see, e.g., [12] and references therein). In this way, we are able to demonstrate wave-particle duality in single-photon states. This is the main result of our work. We also derive a simple inequality that quantifies wave-particle duality by imposing a constraint on the probability that the single-photon state is measured either as a wave or as a particle.

Figure 1: A pictorial representation of the collimated light beam impinging upon the detection screen (gray surface). Blue and red spots on the screen depict the active surfaces of the two detectors \(D_{1}\) and \(D_{2}\).
The rest of this paper is organized as follows. In section 2 we quickly present a phenomenological quantum field theory of paraxial beams of light, and we jot down the quantum states of the field. In section 3 we build up and characterize the Hermitian quantum operators representing the wave \((\mathcal{W})\) and particle \((\mathcal{C})\) observables of the electromagnetic field. In section 4 we briefly review probability theory for quantum operators. Next, in section 5 first we write down and discuss the formulas for the joint probability distributions for the three pairs of random variables \((W_{1},W_{2}),\ (C_{1},C_{2}),\) and \((W_{1},C_{2})\) describing the results of the experiment pictured above. Then, we apply these formulas to the cases of vacuum and single-photon input states of the electromagnetic field. Finally, in section 6 we briefly summarise our results and draw some conclusions. Four appendices report detailed calculations of the results presented in the main text.
## 2 Quantum field theory of light
In this section we give a brief overview of the quantum field theory of paraxial light beams. We also define and illustrate the quantum states of the electromagnetic field that will be used later.
### Paraxial quantum field operators
Following closely [13], we consider a monochromatic paraxial beam of light of frequency \(\omega\), propagating in the \(z\) direction and polarized along the \(x\) axis of a given Cartesian coordinate system. In the Coulomb gauge, the electric field operator can be written as \(\hat{\mathbf{E}}(\mathbf{r},t)=\hat{\Phi}(\mathbf{x},z,t)\,\hat{\mathbf{e}}_{x}\), where \(\mathbf{r}=x\hat{\mathbf{e}}_{x}+y\hat{\mathbf{e}}_{y}+z\hat{\mathbf{e}}_{z} \!:=\!\mathbf{x}+z\hat{\mathbf{e}}_{z}\) is the position vector and, in suitably chosen units,
\[\hat{\Phi}(\mathbf{x},z,t)=\frac{1}{\sqrt{2}}\left[e^{-i\omega t }\hat{\phi}(\mathbf{x},z)+e^{i\omega t}\hat{\phi}^{\dagger}(\mathbf{x},z) \right], \tag{1}\]
with
\[\hat{\phi}(\mathbf{x},z)=\sum_{\mu}\hat{a}_{\mu}u_{\mu}(\mathbf{x},z). \tag{2}\]
Here and below \(\mathbf{x}=x\mathbf{e}_{x}+y\mathbf{e}_{y}\) is the transverse position vector, and the elements \(u_{\mu}(\mathbf{x},z)\) of the countable set of functions \(\{u_{\mu}(\mathbf{x},z)\}\), are the so-called spatial modes of the field. By hypothesis, these modes are solutions of the paraxial wave equation [14], and form a complete and orthogonal set of basis functions on \(\mathbb{R}^{2}\), i.e.,
\[\sum_{\mu}u_{\mu}(\mathbf{x},z)u_{\mu}^{*}(\mathbf{x}^{\prime},z)=\delta\left( \mathbf{x}-\mathbf{x}^{\prime}\right), \tag{3}\]
with \(\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)=\delta\left(x-x^{\prime} \right)\delta\left(y-y^{\prime}\right)\), and
\[\left(u_{\mu},u_{\mu^{\prime}}\right)=\delta_{\mu\mu^{\prime}}, \tag{4}\]
respectively. Here and hereafter we use the suggestive notation
\[(f,g)=\int_{\mathbb{R}^{2}}f^{*}(\mathbf{x},z)g(\mathbf{x},z)\,\mathrm{d} \mathbf{x}, \tag{5}\]
where \(\mathrm{d}\mathbf{x}=\mathrm{d}x\,\mathrm{d}y\). As usual, the annihilation and creation operators \(\hat{a}_{\mu}\) and \(\hat{a}_{\mu}^{\dagger}\), respectively, satisfy the bosonic canonical commutation relations
\[\left[\hat{a}_{\mu},\,\hat{a}_{\mu^{\prime}}^{\dagger}\right]=\delta_{\mu\mu^ {\prime}}. \tag{6}\]
Finally, we remark that from (1)-(5) it follows that the dimension of \(\hat{\Phi}(\mathbf{x},z,t)\) is the inverse of a length: \([\hat{\Phi}]=L^{-1}\).
### Quantum states of the electromagnetic field
Consider a classical paraxial beam of light carrying the electric field \(\mathbf{E}_{\mathrm{cl}}(\mathbf{r},t)=\Phi(\mathbf{x},z,t)\,\hat{\mathbf{e}}_{x}\), where
\[\Phi(\mathbf{x},z,t)=\frac{1}{\sqrt{2}}\left[e^{-i\omega t}\phi( \mathbf{x},z)+e^{i\omega t}\phi^{*}(\mathbf{x},z)\right]. \tag{7}\]
Here the scalar field \(\phi(\mathbf{x},z)\) is a solution of the paraxial wave equation normalized to \((\phi,\phi)=1\). By construction, the classical field \(\Phi(\mathbf{x},z,t)\) is equal to the expectation value of the quantum field \(\hat{\Phi}(\mathbf{x},z,t)\) with respect to the coherent state \(|\{\phi\}\rangle\), i.e., \(\Phi(\mathbf{x},z,t)=\langle\{\phi\}|\hat{\Phi}(\mathbf{x},z,t)|\{\phi\}\rangle\), where \(|\{\phi\}\rangle=\exp\bigl{(}\hat{a}^{\dagger}[\phi]-\hat{a}[\phi]\bigr{)}|0\rangle\), \(|0\rangle\) is the vacuum state of the electromagnetic field defined by \(\hat{a}_{\mu}|0\rangle=0\) for all \(\mu\),
\[\hat{a}^{\dagger}[\phi]=\bigl{(}\hat{\phi},\phi\bigr{)}=\sum_{\mu} \hat{a}_{\mu}^{\dagger}\phi_{\mu}, \tag{8}\]
with \(\phi_{\mu}=(u_{\mu},\phi)\)[15, 13], and (2) has been used. Note that since both the modes \(u_{\mu}(\mathbf{x},z)\) and the field \(\phi(\mathbf{x},z)\) are solutions of the paraxial wave equation, then the coefficients \(\phi_{\mu}\) are independent of \(z\). It is not difficult to show that \(\bigl{[}\hat{a}[\phi],\hat{a}^{\dagger}[\psi]\bigr{]}=(\phi,\psi)\), for any pair of normalized fields \(\phi(\mathbf{x},z)\) and \(\psi(\mathbf{x},z)\). The field \(\phi(\mathbf{x},z)\) also determines the (improperly called) wave function of the photon, defined by \(\langle 0|\hat{\Phi}(\mathbf{x},z,t)|1[\phi]\rangle=e^{-i\omega t}\phi( \mathbf{x},z)/\sqrt{2}\), where
\[|N[\phi]\rangle=\frac{\bigl{(}\hat{a}^{\dagger}[\phi]\bigr{)}^{N}}{\sqrt{N!}} |0\rangle, \tag{9}\]
denotes the \(N\)-photon Fock state with \(N=0,1,2,\ldots\), such that \(\hat{N}|N[\phi]\rangle=N|N[\phi]\rangle\),
\[\hat{N}=\sum_{\mu}\hat{a}_{\mu}^{\dagger}\hat{a}_{\mu}, \tag{10}\]
and \(\langle N[\phi]|M[\psi]\rangle=\bigl{(}\phi,\psi\bigr{)}^{N}\delta_{NM}\) (see Supplemental Material in [13] for further details).
## 3 Wave-like and particle-like operators
In this section, we will construct what we call the "wave operator" \(\hat{W}\) and the "particle operator" \(\hat{C}\), which represent, respectively, the amplitude \(\mathcal{W}\) of the electric field and the number \(\mathcal{C}\) of counted photons of some light beam. In our jargon, a wave operator is simply a Hermitian operator with a _continuous spectrum_, while a particle operator is a Hermitian operator with a _discrete spectrum_. Herein lies the great conceptual difference between classical and quantum mechanics. In the former, the continuous or discrete character of a physical system is a property of its description that we fix a priori. In the latter, on the other hand, the state of a physical system is always described by a ray in a Hilbert space, and there are certain physical quantities relative to the system, the so-called observables, some of which are discrete and others continuous in character. Hence the wave-particle duality.
### Wave-like operators
In quantum field theory, a mathematical object like \(\hat{\Phi}(\mathbf{x},z,t)\) defined by (1) does not really represent a proper observable, because it is not a Hermitian operator in the Hilbert space \(\mathcal{H}\) of the physical states of the electromagnetic field, but rather an "operator valued distribution" over the Euclidean spacetime \(\mathbb{R}^{2}\times\mathbb{R}\)[16]. This can be seen, for example, by showing that \(\hat{\Phi}(\mathbf{x},z,t)\) does not map the vacuum state \(|0\rangle\in\mathcal{H}\) into another state in \(\mathcal{H}\). To this end, let us define the vector \(|\psi\rangle=\hat{\Phi}(\mathbf{x},z,t)|0\rangle\). Then, it is not difficult to show that \(|\psi\rangle\notin\mathcal{H}\) because it does not have a finite norm:
\[\langle\psi|\psi\rangle =\lim_{\mathbf{x}^{\prime}\rightarrow\mathbf{x}}\langle 0|\hat{ \Phi}(\mathbf{x},z,t)\hat{\Phi}(\mathbf{x}^{\prime},z,t)|0\rangle\] \[=\frac{1}{2}\,\lim_{\mathbf{x}^{\prime}\rightarrow\mathbf{x}} \langle 0|\bigl{[}\hat{\phi}(\mathbf{x},z,t),\hat{\phi}^{\dagger}(\mathbf{x} ^{\prime},z,t)\bigr{]}|0\rangle\] \[=\frac{1}{2}\lim_{\mathbf{x}^{\prime}\rightarrow\mathbf{x}} \delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)\] \[=\infty, \tag{11}\]
where (3) has been used. This means that the quantum fluctuations (variance) of \(\hat{\Phi}(\mathbf{x},z,t)\) in the vacuum state blow up for \(\mathbf{x}^{\prime}\rightarrow\mathbf{x}\). Thus, to obtain a bona fide Hermitian operator defined on the vectors in \(\mathcal{H}\), we must smear out \(\hat{\Phi}(\mathbf{x},z,t)\) with a real-valued test function \(F(\mathbf{x},t)\in\mathbb{R}\)[17, 18, 19], namely to take
\[\hat{\Phi}[F]=\int_{\mathbb{R}^{2}\times\mathbb{R}}F(\mathbf{x},t)\hat{\Phi}( \mathbf{x},z,t)\,\mathrm{d}\mathbf{x}\,\mathrm{d}t. \tag{12}\]
In the case of free fields, we can choose \(F(\mathbf{x},t)=\delta(t-t_{0})f(\mathbf{x})\)[16, 20] where we normalize the real-valued function \(f(\mathbf{x})\) as
\[\int_{\mathbb{R}^{2}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}=1, \tag{13}\]
and \(t_{0}\) is any time. Without loss of generality, in the remainder we will set \(t_{0}=0\). Note that normalization condition (13) implies that the dimension of both \(f({\bf x})\) and \((f,f)\) is \(L^{-2}\). Then we define the smeared field operator \(\hat{W}\!:=\!\hat{\Phi}[f]\) by
\[\hat{W}\!:=\!\hat{\Phi}[f] = \int f({\bf x})\,\hat{\Phi}({\bf x},z,0)\,{\rm d}{\bf x} \tag{14}\] \[= \sum_{\mu}\Big{(}\hat{a}_{\mu}f_{\mu}^{*}+\hat{a}_{\mu}^{\dagger} f_{\mu}\Big{)}\,/\sqrt{2},\]
where \(f_{\mu}=(u_{\mu},f)=|f_{\mu}|e^{i\theta_{\mu}}\)[16, 20]. For example, \(f({\bf x})\) can be the Gaussian function
\[f({\bf x})=\frac{1}{(a\sqrt{\pi})^{2}}\,e^{-(x^{2}+y^{2})/a^{2}}, \tag{15}\]
where \(a>0\) is some length. In this case \(\hat{W}\) is a smoothed form of the field averaged over a region of area \(a^{2}\)[21].
More generally, the physical meaning of \(\hat{W}\) is that of a quadrature operator of the electric field, which can be measured by a homodyne detector [22]. To show this, first we write \(\hat{W}\) as \(\hat{W}=\big{(}f,\hat{\Phi}\big{)}\), and then we use the definition (8) to obtain
\[\hat{W}=\frac{1}{\sqrt{2}}\left(\hat{a}[f]+\hat{a}^{\dagger}[f]\right). \tag{16}\]
This is indeed the expression of the quadrature Hermitian operator of a single mode of the electromagnetic field [22, 23]. We remark that since the quadrature operator has a continuum of eigenvalues2, the smeared field \(\hat{W}\) has a _continuous spectrum_ too.
Footnote 2: See, for example, Eq. (11.8) in Ref. [22].
If one prefers to work with the original operators \(\hat{a}_{\mu}\) associated with the modes \(u_{\mu}({\bf x},z)\), then \(\hat{W}\) is given by a sum of quadrature operators. This can be seen by rewriting
\[\hat{W}=\sum_{\mu}|f_{\mu}|\,\hat{x}_{\mu}, \tag{17}\]
where by definition
\[\hat{x}_{\mu}\!:=\!\big{(}\hat{a}_{\mu}e^{-i\theta_{\mu}}+\hat{a}_{\mu}^{ \dagger}e^{i\theta_{\mu}}\big{)}/\sqrt{2}, \tag{18}\]
is the quadrature Hermitian operator of the field component with respect to the mode \(u_{\mu}({\bf x},z)\).
From (13) and (14) it follows that the dimension of \(\hat{W}=\hat{\Phi}[f]\) is \(L^{-1}\), as that of the original operator \(\hat{\Phi}({\bf x},z,0)\). For later purposes, it is useful to calculate the _finite_ variance \(\sigma^{2}\) of \(\hat{W}\) with respect to the vacuum state:
\[\sigma^{2}=\left\langle 0|\hat{W}^{2}|0\right\rangle=\left(f,f\right)/2. \tag{19}\]
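For instance, for the Gaussian test function (15) this variance can be evaluated in closed form; the following short computation reproduces the values used later in the text:

\[(f,f)=\frac{1}{\pi^{2}a^{4}}\int_{\mathbb{R}^{2}}e^{-2(x^{2}+y^{2})/a^{2}}\,\mathrm{d}\mathbf{x}=\frac{1}{\pi^{2}a^{4}}\,\frac{\pi a^{2}}{2}=\frac{1}{2\pi a^{2}},\qquad\text{so that}\qquad\sigma^{2}=\frac{(f,f)}{2}=\frac{1}{4\pi a^{2}}.\]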
In the remainder we will consider a set of \(M\) different test functions \(\{f_{n}({\bf x})\}=\{f_{1}({\bf x}),\ldots,f_{M}({\bf x})\}\), each normalized according to (13). The function \(f_{n}({\bf x})\) characterizes the action of detector \(D_{n}\), when the latter is set to measure the amplitude of the electric field of light falling on it (see, e.g., [24, 25] and SS9 of [26] for a thorough discussion about measurements of the strength of a quantum field).
In practice, each detector \(D_{n}\) has a limited active surface area. Let \({\cal D}_{n}\subset\mathbb{R}^{2}\), \((n=1,2,\ldots,M)\) denote the domain in the \(xy\)-plane occupied by the active surface of detector \(D_{n}\). In principle, \({\cal D}_{n}\) may have any shape; we only require that \({\cal D}_{n}\cap{\cal D}_{m}=\emptyset\) if \(m\neq n\). With \({\bf 1}_{n}({\bf x})\) we denote the indicator function of the domain \({\cal D}_{n}\) defined by
\[{\bf 1}_{n}({\bf x})=\left\{\begin{array}{ll}1,&\mbox{for ${\bf x}\in{\cal D }_{n}$},\\ 0,&\mbox{for ${\bf x}\not\in{\cal D}_{n}$}.\end{array}\right. \tag{20}\]
Note that, by definition,
\[{\bf 1}_{n}({\bf x}){\bf 1}_{m}({\bf x})=\delta_{nm}{\bf 1}_{n}({\bf x}). \tag{21}\]
Then, we can take \(f_{n}({\bf x})={\bf 1}_{n}({\bf x})f({\bf x}-{\bf x}_{n})\), where \(f({\bf x}-{\bf x}_{0})\) is any smooth function concentrated in the neighborhood of \({\bf x}_{0}\), such that
\[\int_{\mathbb{R}^{2}}f_{n}({\bf x})\,{\rm d}{\bf x} = \int_{{\cal D}_{n}}f({\bf x}-{\bf x}_{n})\,{\rm d}{\bf x} \tag{22}\] \[\approx \int_{\mathbb{R}^{2}}f({\bf x}-{\bf x}_{n})\,{\rm d}{\bf x} = 1.\]
With this choice we have \(f_{n}({\bf x})f_{m}({\bf x})=0\) for \(n\neq m\).
Using (14) with \(f=f_{n}\), we obtain \(M\) smeared fields \(\hat{W}_{n}\!:=\!\hat{\Phi}[f_{n}]\) at \(M\) spatially separated points \({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{M}\) in the \(xy\)-plane. We can then write
\[\hat{W}_{n}=\sum_{\mu}\Big{(}\hat{a}_{\mu}f_{n\mu}^{*}+\hat{a}_{\mu}^{\dagger} f_{n\mu}\Big{)}\,/\sqrt{2}, \tag{23}\]
where \(f_{n\mu}=(u_{\mu},f_{n})\), and
\[\sigma_{n}^{2}=(f_{n},f_{n})\,/2,\qquad(n=1,\ldots,M). \tag{24}\]
### Particle-like operators
We consider now the "intensity" operator \(\hat{\mathbf{l}}(\mathbf{x},z)\) defined by \(\hat{\mathbf{l}}(\mathbf{x},z)=\hat{\phi}^{\dagger}(\mathbf{x},z)\hat{\phi}( \mathbf{x},z)\). This quantity can be interpreted as a photon-number operator per unit transverse surface, because
\[\int_{\mathbb{R}^{2}}\hat{\mathbf{l}}(\mathbf{x},z)\,\mathrm{d} \mathbf{x}=\hat{N}, \tag{25}\]
where \(\hat{N}\) is defined by (10).
We then define the photon-counting operator \(\hat{C}_{n}\!:=\!\hat{\mathbf{l}}[\mathbf{1}_{n}],\ (n=1,\ldots,M)\) representing the action of detector \(D_{n}\) when the latter is set to count the number of photons impinging on it, as
\[\hat{C}_{n}\!:=\!\hat{\mathbf{l}}[\mathbf{1}_{n}] = \int_{\mathbb{R}^{2}}\mathbf{1}_{n}(\mathbf{x})\,\hat{\mathbf{l} }(\mathbf{x},z)\,\mathrm{d}\mathbf{x} \tag{26}\] \[= \big{(}\hat{\phi},\mathbf{1}_{n}\hat{\phi}\big{)}\] \[= \sum_{\mu,\nu}\hat{a}_{\mu}^{\dagger}\hat{a}_{\nu}\mathbf{1}_{n \mu\nu},\]
where
\[\mathbf{1}_{n\mu\nu}=(u_{\mu},\mathbf{1}_{n}u_{\nu}). \tag{27}\]
By diagonalizing the linear operator whose matrix elements are \(\mathbf{1}_{n\mu\nu}\), it is not difficult to find the discrete eigenvalues and eigenvectors of \(\hat{C}_{n}\), but it is not necessary to do this. We need only note that by definition the _discrete spectrum_ of \(\hat{C}_{n}\) gives the number of photons counted by detector \(D_{n}\). Note also that \(\hat{C}_{n}\) is dimensionless.
### Commutation relations
In the remainder we will need to use commutation relations for the wave and photon-counting operators \(\hat{W}_{n}\) and \(\hat{C}_{n}\). Such relations are calculated in Appendix A, and the results are:
\[\left[\hat{W}_{m},\,\hat{W}_{n}\right]=0, \tag{28}\] \[\left[\hat{C}_{m},\,\hat{C}_{n}\right]=0, \tag{29}\] \[\left[\hat{W}_{m},\,\hat{C}_{n}\right]=\frac{\delta_{nm}}{\sqrt{2}}\left\{(f_{n},\hat{\phi})-(\hat{\phi},f_{n})\right\}, \tag{30}\]
where \(m,n=1,2,\ldots,M\). Since all commutators above are zero for \(m\neq n\), all the wave and particle observables associated with different detectors are compatible and can be measured simultaneously.
## 4 Probability distributions
In random variable theory, the probability distribution or probability density function (p.d.f.) \(p_{\mathbf{Q}}(\mathbf{q})\) of an \(M\)-dimensional random variable \(\mathbf{Q}=(Q_{1},Q_{2},\ldots,Q_{M})\), can be written as \(p_{\mathbf{Q}}(\mathbf{q})=\langle\delta(\mathbf{Q}-\mathbf{q})\rangle\), where \(\langle\cdots\rangle\) denotes the average over all possible realizations of \(\mathbf{Q}\), and
\[\delta(\mathbf{Q}-\mathbf{q})=\prod_{n=1}^{M}\delta(Q_{n}-q_{n}), \tag{31}\]
[27]. Similarly, in quantum mechanics the spectral theorem [28] shows that given a Hermitian operator \(\hat{Q}\) and a vector state \(|\psi\rangle\) of norm 1, we can calculate the expectation value of any regular function \(F(\hat{Q})\) of \(\hat{Q}\) with respect to \(|\psi\rangle\), either as \(\langle F(\hat{Q})\rangle=\langle\psi|F(\hat{Q})|\psi\rangle\), or as
\[\langle F(\hat{Q})\rangle=\int_{\mathbb{R}}F(q)\,p_{Q}(q)\,\mathrm{d}q, \tag{32}\]
where the p.d.f. \(p_{Q}(q)\) of the random variable \(Q\) associated with the operator \(\hat{Q}\), is defined by
\[p_{Q}(q)=\langle\psi|\delta(\hat{Q}-q)|\psi\rangle, \tag{33}\]
(see, e.g., sec. 3-1-2 in [20], problem 4.3 in [21], or [28]). Using the Fourier representation of the Dirac delta function \(\delta(z)=\int_{\mathbb{R}}e^{i\alpha z}\,\mathrm{d}\alpha\,/(2\pi)\), it is straightforward to show that
\[p_{Q}(q)=\frac{1}{2\pi}\int_{\mathbb{R}}\langle\psi|e^{i\alpha\hat{Q}}|\psi \rangle e^{-i\alpha q}\,\mathrm{d}\alpha, \tag{34}\]
where \(\langle\psi|\exp(i\alpha\hat{Q})|\psi\rangle\) is the so-called quantum characteristic function [29].
The advantage of this formulation is that we can calculate \(p_{Q}(q)\) without knowing the spectrum of \(\hat{Q}\). Of course, if the latter were known, the calculation of \(p_{Q}(q)\) would be trivial. To see this, suppose that \(\hat{Q}\) is the position operator such that \(\hat{Q}|q\rangle=q|q\rangle\). Then, from (33) and a straightforward calculation, one obtains the well-known result
\[p_{Q}(q)=|\langle q|\psi\rangle|^{2}=|\psi(q)|^{2}\,. \tag{35}\]
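As a small numerical illustration of the recipe (34) (our addition), consider the smeared field \(\hat{W}\) in the vacuum state: since vacuum statistics are Gaussian, its characteristic function is \(\langle 0|e^{i\alpha\hat{W}}|0\rangle=e^{-\alpha^{2}\sigma^{2}/2}\), and Fourier inversion recovers the Gaussian p.d.f. obtained analytically in the next section:

```python
import numpy as np

sigma = 1.0
alpha, da = np.linspace(-40, 40, 8001, retstep=True)  # integration grid for alpha
chi = np.exp(-alpha**2 * sigma**2 / 2)                # vacuum characteristic function of W

w = np.linspace(-4, 4, 9)                             # field amplitudes to evaluate
# Fourier inversion, Eq. (34): p(w) = (1/2pi) * Integral chi(alpha) e^{-i alpha w} d alpha
p_num = np.array([np.sum(chi * np.exp(-1j * alpha * q)).real
                  for q in w]) * da / (2 * np.pi)
p_ref = np.exp(-w**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(np.max(np.abs(p_num - p_ref)))                  # agreement to high precision
```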
## 5 Measuring the wave-like and particle-like aspects of light
Consider three different experiments where a light beam prepared in the \(N\)-photon Fock state \(|N[\phi]\rangle\) impinges upon a screen where two detectors are
placed at two spatially separated points in the \(xy\)-plane, as shown in Fig. 1. In the first experiment the detectors are set up to measure the amplitudes \(\mathcal{W}_{1}\) and \(\mathcal{W}_{2}\) of the electric field of the light falling on them. In the second experiment the detectors are set up to count the number of photons \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Finally, in the third and last experiment detector \(D_{1}\) measures the electric field amplitude \(\mathcal{W}_{1}\), and detector \(D_{2}\) counts the number of photons \(\mathcal{C}_{2}\).
The outcomes of these experiments can be described by the three pairs of random variables \((W_{1},W_{2}),\ (C_{1},C_{2})\) and \((W_{1},C_{2})\), distributed according to
\[(W_{1},W_{2}) \sim p_{W_{1}W_{2}}(N,w_{1},w_{2}),\] \[(C_{1},C_{2}) \sim p_{C_{1}C_{2}}(N,c_{1},c_{2}),\]
and
\[(W_{1},C_{2})\sim p_{W_{1}C_{2}}(N,w,c),\]
where either \(N=0\) (vacuum state, we calculate it for comparison), or \(N=1\) (single-photon state, the case of interest). Using the methods outlined in Sec. 4, these three p.d.f.s are calculated in Appendices B, C and D, according to the formulas
\[p_{W_{1}W_{2}}(N,w_{1},w_{2})\\ =\langle N[\phi]|\delta\big{(}\hat{W}_{1}-w_{1}\big{)}\delta \big{(}\hat{W}_{2}-w_{2}\big{)}|N[\phi]\rangle, \tag{36}\]
\[p_{C_{1}C_{2}}(N,c_{1},c_{2})\\ =\langle N[\phi]|\delta\big{(}\hat{C}_{1}-c_{1}\big{)}\delta \big{(}\hat{C}_{2}-c_{2}\big{)}|N[\phi]\rangle, \tag{37}\]
and
\[p_{W_{1}C_{2}}(N,w,c)\\ =\langle N[\phi]|\delta(\hat{W}_{1}-w)\delta\big{(}\hat{C}_{2}-c \big{)}|N[\phi]\rangle, \tag{38}\]
where the \(N\)-photon Fock state \(|N[\phi]\rangle\) is defined by (9). Note that there is no operator ordering problem in the definitions (36)-(38) because all the operators involved do commute.
### Single detector
For illustration purposes, here we calculate the probability distributions of \(W\) and \(C\) alone, as if a single detector were present. In these two cases we have \(W\sim p_{W}(N,w)\) and \(C\sim p_{C}(N,c)\), where
\[p_{W}(N,w) =\langle N[\phi]|\delta(\hat{W}-w)|N[\phi]\rangle, \tag{39}\] \[p_{C}(N,c) =\langle N[\phi]|\delta(\hat{C}-c)|N[\phi]\rangle, \tag{40}\]
with \(N=0\) and \(N=1\).
#### 5.1.1 Vacuum state
In the simplest case of input vacuum state, that is \(N=0\), we have from (B.14) and (C.3), \(p_{W}(0,w)\!:=\!p_{0}(w)\) and \(p_{C}(0,c)\!:=\!p_{0}(c)\), respectively, where
\[p_{0}(w) =\left(2\pi\sigma^{2}\right)^{-1/2}\exp\left(-\frac{w^{2}}{2\sigma ^{2}}\right), \tag{41}\] \[p_{0}(c) =\delta(c), \tag{42}\]
and, as in (19), here \(\sigma^{2}=(f,f)/2\) fixes the variance of the smeared field \(\hat{W}\) in the vacuum state. As expected, \(p_{0}(w)\) and \(p_{0}(c)\) are the p.d.f.s of a continuous and a discrete random variable, respectively.
Equation (41) shows a well-known result from the quantum theory of _free fields_: the field amplitude in the ground (vacuum) state follows a Gaussian probability distribution that is centred on the value zero. The quantum field fluctuations are fixed by the smearing function via \(\sigma^{2}\). For example, if \(f(\mathbf{x})\) is given by (15), then \(\sigma^{2}=1/(4\pi a^{2})\). This implies that when the linear dimension \(a\) of the region in which the field amplitude is measured shrinks to zero, the quantum fluctuations become huge, eventually becoming infinite for \(a\to 0\)[21].
Equation (42) gives the trivial p.d.f. of a discrete-type random variable with a probability mass function that takes a single value: \(\mathrm{Prob}(C=0)=1\). Physically this means that in the vacuum state of the electromagnetic field, the probability to count one or more photons is equal to zero, as it should be.
#### 5.1.2 Single-photon state
Next, we write down the probability distributions (39) and (40) with respect to the single-photon state (\(N=1\)). From (B.39) and (C.23), we have
\[p_{W}(1,w) =(1-|s|^{2})p_{0}(w)+|s|^{2}p_{1}(w), \tag{43}\] \[p_{C}(1,c) =(1-P)\,\delta(c)+P\,\delta(c-1), \tag{44}\]
where \(p_{0}(w)\) and \(p_{1}(w)=p_{0}(w)w^{2}/\sigma^{2}\) are given by (41) and (B.40), respectively. Moreover, we have set
\[s=\frac{(\phi,f)}{(f,f)^{1/2}},\quad\text{and}\quad P=(\phi,\mathbf{1}\phi)\geq 0, \tag{45}\]
with \(\mathbf{1}(\mathbf{x})\) denoting the indicator function of the domain \(\mathcal{D}\) representing the active area of the single detector. As usual, \(f\) denotes the smearing function.
From (43) it is easy to calculate the average value \(\mathrm{E}[W]=0\), and the variance \(\sigma_{W}^{2}\) of \(W\),
\[\sigma_{W}^{2}=\mathrm{E}[W^{2}]-(\mathrm{E}[W])^{2}=\sigma^{2}(1+2|s|^{2}). \tag{46}\]
Equation (46) shows that the quantum fluctuations of the field in the single-photon state are always bigger than the fluctuations in vacuum. Similarly, from (44) it follows that
\[\mathrm{E}[C]=P=\mathrm{E}[C^{2}], \tag{47}\]
so that the variance \(\sigma_{C}^{2}\) of \(C\) is
\[\sigma_{C}^{2}=\mathrm{E}[C^{2}]-(\mathrm{E}[C])^{2}=P(1-P). \tag{48}\]
Equation (43) shows that \(p_{W}(1,w)\) is a so-called _mixture distribution_ (see, e.g., Sec. **3.5** of Ref. [10]), that is, a convex combination of the elementary distributions \(p_{0}(w)\) and \(p_{1}(w)\), with weights \(1-|s|^{2}\) and \(|s|^{2}\), respectively. From a physical point of view, this means that when we measure some amplitude \(w\) of the field, there is a probability \(1-|s|^{2}\) that this amplitude was sampled from the "vacuum distribution" \(p_{0}(w)\), and a probability \(|s|^{2}\) that it was sampled from the "single-photon distribution" \(p_{1}(w)\), instead. This ambiguity can be reduced in one direction or the other by varying \(|s|\). When \(|s|=0\) only the vacuum state contributes to \(p_{W}(1,w)\). This is clear. However, if \(|s|=1\), the contribution of the vacuum to \(w\) will be zero, and
\[p_{W}(1,w)=p_{1}(w)=p_{0}(w)\frac{w^{2}}{\sigma^{2}}. \tag{49}\]
This kind of distribution is known as the Maxwell distribution of speeds in statistical physics when \(w\geq 0\) (see, e.g., Sec. 7.10 in [30]). Note that at \(w=0\) we have \(p_{W}(1,0)=p_{0}(0)(1-|s|^{2})\), with \(1-|s|^{2}\leq 1\) from (45) and the Cauchy-Schwarz inequality. This \(s\)-dependent dip at \(w=0\) is the signature of the single-photon state with respect to the vacuum state, as illustrated by Fig. 2.
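A minimal sketch for reproducing the curves of Fig. 2 (assuming \(\sigma=1\)) is:

```python
import numpy as np
import matplotlib.pyplot as plt

sigma = 1.0
w = np.linspace(-4 * sigma, 4 * sigma, 801)
p0 = np.exp(-w**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)  # Eq. (41)

for s2 in (0.0, 0.25, 0.5, 0.75, 1.0):               # values of |s|^2
    pW = (1 - s2) * p0 + s2 * p0 * w**2 / sigma**2   # mixture of Eq. (43)
    plt.plot(w / sigma, pW, label=rf"$|s|^2 = {s2}$")

# The s-dependent dip at w = 0 and the common crossing at w = sigma are visible.
plt.xlabel(r"$w/\sigma$"); plt.ylabel(r"$p_W(1,w)$"); plt.legend(); plt.show()
```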
The p.d.f. (44) of the discrete random variable \(C\) gives \(\mathrm{Prob}(C=0)=1-P\), and \(\mathrm{Prob}(C=1)=P\), where \(P\) is given by (45), here rewritten as
\[P=\int_{\mathcal{D}}\left|\phi(\mathbf{x},z)\right|^{2}\,\mathrm{d}\mathbf{x}. \tag{50}\]
This expression shows that \(P\) is equal to the fraction of the intensity \(\left|\phi(\mathbf{x},z)\right|^{2}\) of the incident beam that falls upon the detector surface. This makes sense.
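As a hypothetical numerical example (the beam and detector parameters are ours), for a Gaussian beam centered on a circular detector of radius \(R\), the overlap integral (50) reduces to \(P=1-e^{-2R^{2}/w_{0}^{2}}\), which a direct quadrature confirms:

```python
import numpy as np

w0, R = 1.0, 0.8                       # assumed beam waist and detector radius
r, dr = np.linspace(0.0, R, 40001, retstep=True)
intensity = (2 / (np.pi * w0**2)) * np.exp(-2 * r**2 / w0**2)  # normalized |phi|^2

P = np.sum(intensity * 2 * np.pi * r) * dr   # Eq. (50) integrated over the disk
print(P, 1 - np.exp(-2 * R**2 / w0**2))      # both close to 0.722
```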
### Two detectors
When there are two detectors located in two different places in the \(xy\)-plane, as shown in Fig. 1, we can choose between three different possibilities of detection: \(a)\) wave-wave detection; \(b)\) particle-particle detection; and \(c)\) wave-particle detection. In the remainder of this section we will analyze in detail these three cases.
#### 5.2.1 Case a): wave-wave detection
For the vacuum state and \(M=2\), (B.15) gives
\[p_{W_{1}W_{2}}(0,w_{1},w_{2}) =p_{0}(w_{1})\,p_{0}(w_{2})\] \[:= p_{0}(w_{1},w_{2}), \tag{51}\]
where \(p_{0}(w)\) is defined by (41). Thus, the joint p.d.f. is the product of the marginal probability distributions \(p_{0}(w_{1})\) and \(p_{0}(w_{2})\). Therefore, the two random variables \(W_{1}\) and \(W_{2}\) defined by _both_ the operators \(\hat{W}_{1},\ \hat{W}_{2}\) and the quantum vacuum state \(|0\rangle\), are independent.

Figure 2: Plots of \(p_{W}(1,w)\) given by (43) for different values of \(|s|\in[0,1]\). The blue dot-dashed curve for \(|s|=0\) is equal to the p.d.f. for the vacuum state \(p_{W}(0,w)\). When the superposition between the cross section of the beam and the detector surface increases, the central value \(p_{W}(1,0)\) decreases to zero. Note that from (43) it directly follows that all the curves plotted here intersect at \(w=\sigma\).
For the single-photon state \(|1[\phi]\rangle\), equations (B.36)-(B.37) give
\[p_{W_{1}W_{2}}(1,w_{1},w_{2})=(1-|{\bf s}|^{2})\,p_{0}(w_{1},w_{2})+|{\bf s}|^{ 2}\,p_{1}(w_{1},w_{2}), \tag{52}\]
which is, as in the single-detector case, a mixture distribution. In this equation \({\bf s}=(s_{1},s_{2})\), with \(|{\bf s}|^{2}=|s_{1}|^{2}+|s_{2}|^{2}\), and
\[p_{1}(w_{1},w_{2})=p_{0}(w_{1},w_{2})\left|\frac{w_{1}}{\sigma_{1}}\,\frac{s_ {1}}{|{\bf s}|}+\frac{w_{2}}{\sigma_{2}}\,\frac{s_{2}}{|{\bf s}|}\right|^{2}, \tag{53}\]
where
\[\sigma_{n}^{2}=\frac{(f_{n},f_{n})}{2},\qquad(n=1,2), \tag{54}\]
and
\[s_{n}=\frac{(\phi,f_{n})}{(f_{n},f_{n})^{1/2}}=\frac{(\phi,f_{n})}{\sqrt{2}\, \sigma_{n}},\qquad(n=1,2). \tag{55}\]
By definition \(|s_{n}|\leq 1\). However, the existence of (52) imposes the further joint condition \(|s_{1}|^{2}+|s_{2}|^{2}\leq 1\). A pictorial representation of the distribution \(p_{W_{1}W_{2}}(1,w_{1},w_{2})\) is shown in Fig. 3.
The set of parameters that characterize the bivariate distribution (52) can be straightforwardly calculated. The average values are equal to zero, i.e., \({\rm E}[W_{1}]=0={\rm E}[W_{2}]\). The variances are:
\[\sigma_{W_{n}}^{2}:={\rm E}[W_{n}^{2}]=\sigma_{n}^{2}(1+2|s_{n}|^{2}), \tag{56}\]
with \(n=1,2\), and the covariance is
\[{\rm E}[W_{1}W_{2}]=\sigma_{1}\sigma_{2}\left(s_{1}s_{2}^{*}+s_{1}^{*}s_{2} \right). \tag{57}\]
If we choose the smearing functions \(f_{1}\) and \(f_{2}\) and the location of the two detectors such that \(\sigma_{1}=\sigma_{2}\,{:=}\,\sigma\) and \(s_{1}=s_{2}\,{:=}\,s\), then using (56)-(57) we find the following correlation coefficient:
\[\frac{{\rm E}[W_{1}W_{2}]-{\rm E}[W_{1}]\,{\rm E}[W_{2}]}{\sigma_ {W_{1}}\sigma_{W_{2}}} =\frac{2\,|s|^{2}}{1+2\,|s|^{2}}\] \[\leq\frac{1}{2}. \tag{58}\]
The latter inequality follows from the condition \(0\leq 1-|s_{1}|^{2}-|s_{2}|^{2}\), which becomes \(|s|^{2}\leq 1/2\) in the present case. A positive correlation coefficient between \(W_{1}\) and \(W_{2}\) means that when \(W_{1}\) increases then \(W_{2}\) also increases, and when \(W_{1}\) decreases then \(W_{2}\) also decreases. The minimum value \(0\) of the correlation coefficient (58) is attained when \(s=0\). This may occur in two different ways: either _a_) both detectors have finite active surface but are located outside the section of the beam on the detection screen, or _b_) the detectors are placed within the section of the beam, but they are point-like detectors with zero-size active surface. Case _a_) is trivial and implies \(p_{W_{1}W_{2}}(1,w_{1},w_{2})=p_{0}(w_{1},w_{2})\). Case _b_) is more interesting because it shows that the amplitudes of the field measured by any pair of point-like detectors are always uncorrelated. This is due to the fact that a quantum field wildly fluctuates when it is strongly localized. To see this, let us take \(f({\bf x})\) as in (15), so that \((f,f)=1/(2\pi a^{2})=2\sigma^{2}\).
Then, to achieve \(s_{1}=s_{2}=s\) we must assume that the two point-like detectors are located at \({\bf x}_{1}\) and \({\bf x}_{2}\) chosen in such a way that \(|\phi({\bf x}_{1},z)|=|\phi({\bf x}_{2},z)|\). In this case from (54) and \(a\approx 0\), it follows that
\[|s|^{2}=\frac{|(\phi,f)|^{2}}{2\,\sigma^{2}}\approx\sqrt{2\pi}\,a\,|\phi({\bf x }_{1},z)|^{2}. \tag{59}\]
Since \(|\phi({\bf x}_{1},z)|^{2}\) is always a finite quantity for any physically realisable light beam, then \(|s|^{2}\to 0\) when the size \(a\) of the detectors goes to zero. But when \(a\to 0\), the quantum fluctuations blow up because \(\sigma^{2}=1/(4\pi a^{2})\to\infty\).
It is interesting to note that when either \(s_{1}=0\) or \(s_{2}=0\), the two random variables \(W_{1}\) and \(W_{2}\) become independent. However, when both \(s_{1}\neq 0\) and \(s_{2}\neq 0\), then \(W_{1}\) and \(W_{2}\) are not independent although the two corresponding operators \(\hat{W}_{1}\) and \(\hat{W}_{2}\) do commute. This is a consequence of the non-localizability of the single-photon field \(\phi({\bf x},z)\), whose section in the \(xy\)-plane extends over both regions \({\cal D}_{1}\) and \({\cal D}_{2}\) covered by the active surfaces of the two detectors \(D_{1}\) and \(D_{2}\). In fact, the joint p.d.f. \(p_{W_{1}W_{2}}(1,w_{1},w_{2})\) is determined by _both_ the operators \(\hat{W}_{1}\), \(\hat{W}_{2}\)_and_ the quantum state \(|1[\phi]\rangle\). Therefore, it is the spatial transverse extension of the light field \(\phi({\bf x},z)\) that establishes a correlation between the two random variables \(W_{1}\) and \(W_{2}\). Very similar conclusions were reached in Ref. [31] where the authors investigated, in their own words, "the delocalized state formed by a photon" using homodyne tomography.
#### 5.2.2 Case b): particle-particle detection
For the vacuum state and \(M=2\), (C.2) gives
\[p_{C_{1}C_{2}}(0,c_{1},c_{2})=\delta(c_{1})\,\delta(c_{2}). \tag{60}\]
This equation simply shows that in the vacuum state we always count zero photons.
More interesting is the single-photon state case, for which (C.21) gives
\[p_{C_{1}C_{2}}(1,c_{1},c_{2})=(1-P_{1}-P_{2})\,\delta(c_{1})\,\delta(c_{2})+P _{1}\,\delta(c_{1}-1)\,\delta(c_{2})+P_{2}\,\delta(c_{1})\,\delta(c_{2}-1), \tag{61}\]
where
\[P_{n}=(\phi,{\bf 1}_{n}\phi)\leq 1,\qquad(n=1,2), \tag{62}\]
is the fraction of the intensity of the beam impinging upon the \(n\)th detector. Note that the first term in (61) enforces the constraint \(P_{1}+P_{2}\leq 1\). Using (61) it is not difficult to calculate
\[\text{E}[(C_{n})^{k}]=P_{n},\qquad(k\in\mathbb{N}), \tag{63}\]
with \(n=1,2\), and
\[\text{E}[C_{1}C_{2}]=0. \tag{64}\]
The latter two equations imply for the correlation coefficient,
\[\frac{\text{E}[C_{1}C_{2}]-\text{E}[C_{1}]\,\text{E}[C_{2}]}{ \sigma_{C_{1}}\sigma_{C_{2}}} =-\sqrt{\frac{P_{1}P_{2}}{(1-P_{1})(1-P_{2})}}\] \[\geq-1, \tag{65}\]
where (48) has been used. A negative covariance means that there is an inverse relationship between the random variables \(C_{1}\) and \(C_{2}\). In physical terms this means that when the number of photons counted by \(D_{1}\) increases, the one counted by \(D_{2}\) must decrease, and vice versa. This is a consequence of both the fixed number of photons in Fock states and the non-localizability of the electromagnetic field, which we have previously discussed. Differently from (58), here the correlation coefficient can achieve the minimum value \(-1\), which means perfect anticorrelation between \(C_{1}\) and \(C_{2}\). This occurs when \(P_{1}=P_{2}=1/2\), which means that we are using a split detector to count photons in each half of the beam.
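Indeed, inserting \(P_{1}=P_{2}=1/2\) into (65) gives

\[-\sqrt{\frac{P_{1}P_{2}}{(1-P_{1})(1-P_{2})}}=-\sqrt{\frac{(1/2)(1/2)}{(1/2)(1/2)}}=-1.\]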
#### 5.2.3 Case c): wave-particle detection
This is the last and most interesting case. By hypothesis, detector \(D_{1}\) measures the electric-field amplitude \({\cal W}_{1}\) of a portion of the impinging light beam, and detector \(D_{2}\) counts the number of photons \({\cal C}_{2}\) in a different portion of the same beam. For the vacuum state (D.2) gives
\[p_{W_{1}C_{2}}(0,w,c)=p_{0}(w)\,\delta(c), \tag{66}\]
as expected. This result is very simple and there is not much to say about it.
Conversely, for the single-photon state from (D.7) we have,
\[p_{W_{1}C_{2}}(1,w,c)=\left(1-|s|^{2}-P\right)p_{0}(w)\,\delta(c)+|s|^{2}\,p_{1}( w)\,\delta(c)+P\,p_{0}(w)\,\delta(c-1), \tag{67}\]
where \(p_{0}(w)\) and \(p_{1}(w)=p_{0}(w)w^{2}/\sigma^{2}\) are given by (41) and (B.40), respectively, and \(s\) and \(P\) are defined by (45). This is again a mixture distribution but, differently from (52), we have now three terms. The first two terms tell us that if for the pair of observables \((\mathcal{W}_{1},\mathcal{C}_{2})\) we measure the values \((w,0)\), then there is a probability equal to \(1-|s|^{2}-P\) that the value \(w\) has been sampled from the vacuum distribution \(p_{0}(w)\), and a probability \(|s|^{2}\) that it has instead been sampled from the single-photon distribution \(p_{1}(w)\). The last term shows that whenever we measure the pair of values \((\mathcal{W}_{1},\mathcal{C}_{2})=(w,1)\), then the value of \(w\) has been sampled from the vacuum distribution with certainty.
Interestingly, the demand for spatial separation between the two detectors together with their finite extent leads to the condition \(1-|s|^{2}-P\geq 0\), which results in the inequality
\[|s|^{2}+P\leq 1. \tag{68}\]
This simple expression somehow quantifies wave-particle duality in that it establishes a connection between the probability \(|s|^{2}\) that the photon reaches the wave detector \(D_{1}\), thus revealing its wave-like nature, and the probability \(P\) that it hits the particle detector \(D_{2}\), then manifesting its particle-like character. While in the particle-particle detection case the analogous relation \(P_{1}+P_{2}\leq 1\), due to the first term in (61), eventually leads to perfect anticorrelation between \(C_{1}\) and \(C_{2}\), in this case (68) does not signal a (linear) correlation between the random variables \(W_{1}\) and \(C_{2}\). To see this, we calculate
\[\mathrm{E}[W_{1}]=0,\qquad\mathrm{E}[C_{2}]=P, \tag{69}\]
and
\[\mathrm{E}[W_{1}^{2}] =\sigma^{2}\left(1+2|s|^{2}\right), \tag{70}\] \[\mathrm{E}[C_{2}^{2}] =P,\] (71) \[\mathrm{E}[W_{1}C_{2}] =0. \tag{72}\]
This implies
\[\sigma_{W_{1}}=\sqrt{\mathrm{E}[W_{1}^{2}]-(\mathrm{E}[W_{1}])^{2}}=\sigma\sqrt{1+2|s|^{2}}, \tag{73}\]
\[\sigma_{C_{2}}=\sqrt{\mathrm{E}[C_{2}^{2}]-(\mathrm{E}[C_{2}])^{2}}=\sqrt{P(1 -P)}, \tag{74}\]
with \(0\leq P\leq 1\).
From (69)-(74) it follows that the _linear_ correlation coefficient of \(W_{1}\) and \(C_{2}\) is zero:
\[\frac{\mathrm{E}[W_{1}C_{2}]-\mathrm{E}[W_{1}]\,\mathrm{E}[C_{2}]}{\sigma_{W_{ 1}}\,\sigma_{C_{2}}}=0. \tag{75}\]
To find the first nonzero correlation coefficient we must calculate the _quadratic_ (with respect to \(W_{1}\)) central moment of the wave-particle distribution \(p_{W_{1}C_{2}}(1,w,c)\), that is
\[\mathrm{E}\left[(W_{1}^{2}-\mathrm{E}[W_{1}^{2}])(C_{2}-\mathrm{E}[C_{2}]) \right]=-2\sigma^{2}|s|^{2}P. \tag{76}\]
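This value follows directly from the joint p.d.f. (67): the only term contributing to \(\mathrm{E}[W_{1}^{2}C_{2}]\) is the one proportional to \(\delta(c-1)\), so that, using (69)-(70),

\[\mathrm{E}[W_{1}^{2}C_{2}]=P\int_{\mathbb{R}}w^{2}p_{0}(w)\,\mathrm{d}w=P\sigma^{2},\qquad\mathrm{E}[W_{1}^{2}C_{2}]-\mathrm{E}[W_{1}^{2}]\,\mathrm{E}[C_{2}]=P\sigma^{2}-\sigma^{2}(1+2|s|^{2})P=-2\sigma^{2}|s|^{2}P.\]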
Thus, (75) and (76) show that there is no _linear_ dependence between the wave and particle random variables \(W_{1}\) and \(C_{2}\), but there is at least a quadratic one [10]. It is well known in probability theory that when random variables are correlated in a non-linear manner we have to consider more suitable measures of dependence, such as the mutual information [11]. The mutual information of \(W_{1}\) and \(C_{2}\) tells us how different the joint distribution \(p_{W_{1}C_{2}}(1,w,c)\) is from the product of the marginal distributions \(p_{W_{1}}(1,w)\) and \(p_{C_{2}}(1,c)\). In practice, mutual information quantifies the reduction in the average uncertainty about one random variable given the knowledge of another. Thus, large values of mutual information indicate a high reduction in uncertainty; small values of mutual information denote a low reduction; and zero mutual information means that the two random variables are independent. We stress that here the term "uncertainty" refers to the values taken by the random variables \(W_{1}\) and \(C_{2}\), and it should not be confused with the
well-known Heisenberg uncertainty, which refers to non-compatible, conjugate observables, which, differently from \(\mathcal{W}_{1}\) and \(\mathcal{C}_{2}\), cannot be measured simultaneously. Clearly, for conjugate observables a joint probability distribution cannot be calculated and, consequently, mutual information cannot be defined.3
Footnote 3: However, using Wigner’s functions and _linear entropy_, a different form of mutual information can be defined also for non-compatible observables [32, 33].
For a mixture of discrete and continuous variables the mutual information can be written as [34],
\[\mathrm{I}(W_{1};C_{2})=\mathrm{h}(W_{1})+\mathrm{H}(C_{2})-\mathcal{H}(W_{1},C_{2}), \tag{77}\]
where we have defined the continuous (differential), discrete and mixed entropies, \(\mathrm{h}(W_{1})\), \(\mathrm{H}(C_{2})\), and \(\mathcal{H}(W_{1},C_{2})\), respectively, as
\[\mathrm{h}(W_{1}) =-\int_{\mathbb{R}}p_{W_{1}}(1,w)\ln[p_{W_{1}}(1,w)]\,\mathrm{d}w, \tag{78}\] \[\mathrm{H}(C_{2}) =-P\ln P-(1-P)\ln(1-P), \tag{79}\]
and
\[\mathcal{H}(W_{1},C_{2})=-\sum_{i=0}^{1}\int_{\mathbb{R}}g_{i}(w)\ln[g_{i}(w) ]\,\mathrm{d}w. \tag{80}\]
The two functions \(g_{0}(w)\) and \(g_{1}(w)\) are defined by rewriting \(p_{W_{1}C_{2}}(1,w,c)\) as
\[p_{W_{1}C_{2}}(1,w,c) =\delta(c)\Big{[}(1-|s|^{2}-P)\,p_{0}(w)\] \[\quad+|s|^{2}p_{1}(w)\Big{]}+\delta(c-1)P\,p_{0}(w)\] \[:=\delta(c)\,g_{0}(w)\] \[\quad+\delta(c-1)\,g_{1}(w). \tag{81}\]
The quantities in (78)-(80) can be calculated explicitly, for example by using Mathematica [35]. We do not write down the formulae here as they are very complicated and not particularly enlightening. However, we plot \(\mathrm{I}(W_{1};C_{2})\) in Fig. 4. The existence of a nonzero mutual information witnesses the presence of a nonlinear relationship between the wave and the particle observables \(\mathcal{W}_{1}\) and \(\mathcal{C}_{2}\), respectively. This is the main result of this paper.
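As an independent numerical cross-check (our addition, assuming \(\sigma=1\)), the entropies (78)-(80) can also be evaluated by direct quadrature of the densities in (81):

```python
import numpy as np

def mutual_information(s2, P, sigma=1.0):
    """Numerical I(W1;C2) from Eqs. (77)-(81), for given |s|^2 = s2 and P."""
    w, dw = np.linspace(-12 * sigma, 12 * sigma, 20001, retstep=True)
    p0 = np.exp(-w**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)  # Eq. (41)
    p1 = p0 * w**2 / sigma**2
    g0 = (1 - s2 - P) * p0 + s2 * p1   # c = 0 branch of Eq. (81)
    g1 = P * p0                        # c = 1 branch of Eq. (81)
    pw = g0 + g1                       # marginal density of W1
    eps = 1e-300                       # guard against log(0)
    h_w = -np.sum(pw * np.log(pw + eps)) * dw                      # Eq. (78)
    H_c = -(P * np.log(P) + (1 - P) * np.log(1 - P))               # Eq. (79)
    H_m = -(np.sum(g0 * np.log(g0 + eps))
            + np.sum(g1 * np.log(g1 + eps))) * dw                  # Eq. (80)
    return h_w + H_c - H_m                                         # Eq. (77)

# Scan the boundary P = 1 - |s|^2, where the maximum is attained (see below).
best = max((mutual_information(s2, 1 - s2), s2)
           for s2 in np.linspace(0.05, 0.95, 91))
print(best)   # the maximum is close to 0.18, consistent with the text
```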
The maximum value of the mutual information is achieved for \(P=1-|s|^{2}\approx 0.47\), and it is given by \(\max[\mathrm{I}(W_{1};C_{2})]\approx 0.18\). To understand what this number means, we compare the maximum of \(\mathrm{I}(W_{1};C_{2})\) with the maximum of \(\mathrm{I}(C_{1};C_{2})\), the latter being given by
\[\mathrm{I}(C_{1};C_{2}) =-(1-P_{1})\ln(1-P_{1})\] \[\quad-(1-P_{2})\ln(1-P_{2})\] \[\quad+(1-P_{1}-P_{2})\ln(1-P_{1}-P_{2}). \tag{82}\]
In this case we have \(\max[\mathrm{I}(C_{1};C_{2})]=\ln 2\), which is the maximum value attainable by the mutual information of two dichotomic discrete random variables. Therefore,
\[\frac{\max[\mathrm{I}(W_{1};C_{2})]}{\max[\mathrm{I}(C_{1};C_{2})]}\approx 0.2 6\sim\frac{1}{3}. \tag{83}\]
This result tells us that the maximum value of the mutual information of \(W_{1}\) and \(C_{2}\), which are linearly uncorrelated, is about one third of the maximum of the mutual information of \(C_{1}\) and \(C_{2}\), which can be, instead, perfectly anticorrelated. Thus, the nonlinear correlation between \(W_{1}\) and \(C_{2}\) is by no means negligible.
Figure 4: Plot of \(\mathrm{I}(W_{1};C_{2})\) given by (77). Note that the domain of the function is the region of the \(|s|^{2}P\)-plane defined by \(1-|s|^{2}-P\geq 0\) (gray area in the plot). For point-like detectors we have \(P,|s|^{2}\ll 1\), so that \(\mathrm{I}(W_{1};C_{2})\approx P|s|^{4}/(1-P)\).

## 6 Conclusions

In this paper we have described a simple experiment that illustrates the so-called wave-particle duality in single-photon states. Unlike more conventional approaches, we have adopted a probabilistic framework which permitted us to detect a somewhat "hidden" nonlinear dependence between some wave and particle observables that are actually measured in the experiment. We believe that this work can stimulate the use of these probabilistic techniques in various applications of quantum optics.
## Acknowledgements
I acknowledge support from the Deutsche Forschungsgemeinschaft Project No. 429529648-TRR 306 QuCoLiMa ("Quantum Cooperativity of Light and Matter"). Many thanks to Valerio Scarani for useful comments on a preliminary version of this work.
|
2301.04829 | Federated Transfer-Ordered-Personalized Learning for Driver Monitoring
Application | Federated learning (FL) shines through in the internet of things (IoT) with
its ability to realize collaborative learning and improve learning efficiency
by sharing client model parameters trained on local data. Although FL has been
successfully applied to various domains, including driver monitoring
applications (DMAs) on the internet of vehicles (IoV), its usages still face
some open issues, such as data and system heterogeneity, large-scale
parallelism communication resources, malicious attacks, and data poisoning.
This paper proposes a federated transfer-ordered-personalized learning (FedTOP)
framework to address the above problems and test on two real-world datasets
with and without system heterogeneity. The performance of the three extensions,
transfer, ordered, and personalized, is compared by an ablation study and
achieves 92.32% and 95.96% accuracy on the test clients of two datasets,
respectively. Compared to the baseline, there is a 462% improvement in accuracy
and a 37.46% reduction in communication resource consumption. The results
demonstrate that the proposed FedTOP can be used as a highly accurate,
streamlined, privacy-preserving, cybersecurity-oriented, and personalized
framework for DMA. | Liangqi Yuan, Lu Su, Ziran Wang | 2023-01-12T06:12:04Z | http://arxiv.org/abs/2301.04829v2 | # Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application
###### Abstract
Federated learning (FL) shines through in the internet of things (IoT) with its ability to realize collaborative learning and improve learning efficiency by sharing client model parameters trained on local data. Although FL has been successfully applied to various domains, including driver monitoring application (DMA) on the internet of vehicles (IoV), its usages still face some open issues, such as data and system heterogeneity, large-scale parallelism communication resources, malicious attacks, and data poisoning. This paper proposes a federated transfer-ordered-personalized learning (FedTOP) framework to address the above problems and test on two real-world datasets with and without system heterogeneity. The performance of the three extensions, transfer, ordered, and personalized, is compared by an ablation study and achieves 92.32\(\%\) and 95.96\(\%\) accuracy on the test clients of two datasets, respectively. Compared to the baseline, there is a 462\(\%\) improvement in accuracy and a 37.46\(\%\) reduction in communication resource consumption. The results demonstrate that the proposed FedTOP can be used as a highly accurate, streamlined, privacy-preserving, cybersecurity-oriented, personalized framework for DMA.
Federated learning, internet of things (IoT), driver monitoring, privacy protection, personalization.
## I Introduction
With the rapid development of sensing, computing, and communication technologies, the internet of things (IoT) is a popular solution to problems in industry, agriculture, energy, transportation, etc. However, privacy issues in IoT are a significant concern that has often been raised due to the intrusive behavior of sensors [1]. The internet of vehicles (IoV), specifically, massively parallelizes vehicles and the various sensors they carry, including the global positioning system (GPS), radar, cameras, light detection and ranging (LiDAR), etc., enabling pedestrian detection [2], automated driving [3], mobility digital twins [4], and other transportation applications. Federated learning (FL) has received extensive attention for protecting user privacy by sharing only model weights and not users' raw data. FL is widely known for its successful business case in Google mobile keyboard prediction [5]. Nowadays, it has also become one of the mainstream and thriving solutions for privacy protection and efficient learning.
### _Federated Learning and Related Work_
FL is a potentially feasible solution to the privacy problem in IoT, as it avoids the proliferation, distribution, and exchange of local client data by sharing only model parameters after training the model on local client data. FL frameworks are widely used in healthcare [6, 7], industry [8, 9], IoV [10, 11], etc., due to their use of large-scale and personalized data in an efficient and privacy-preserving way. Although FL contributes significantly to massively parallel devices and computations, it still has a notable drawback in that it cannot efficiently handle non-independent and identically distributed (non-i.i.d.) data. The FL framework therefore needs to be customized according to the features, resources, and constraints of users, data, clients, and servers.
Non-i.i.d. data and heterogeneity have always been a challenge and a key research topic in FL [12, 13, 14]. Non-i.i.d. data is a common phenomenon for real-world clients that are scattered and not interoperable: taking IoV as an example, each driver is heterogeneous as a client. FedAvg [15], one of the first feasible methods proposed, has been the subject and center of research. FedAvg averages all local models to obtain the global model, so a local model may deviate far from the global optimum in the parameter space, which leads to some limitations of FedAvg. It is necessary to ensure that the local model does not deviate from the global model (to prevent overfitting) and, simultaneously, that the local model can effectively learn the local client dataset (to prevent underfitting). Based on FedAvg, FedProx [16] was proposed to limit the deviation of the local model from the global model by adding a proximal term, as summarized below.
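For reference, the FedAvg aggregation rule [15] and the FedProx local objective [16] can be summarized as

\[w^{t+1}=\sum_{k}\frac{n_{k}}{n}\,w_{k}^{t+1},\qquad w_{k}^{t+1}\approx\arg\min_{w}\;F_{k}(w)+\frac{\mu}{2}\left\|w-w^{t}\right\|^{2},\]

where \(n_{k}\) is the number of samples of client \(k\), \(n=\sum_{k}n_{k}\), \(F_{k}\) is the local loss, \(w^{t}\) is the current global model, and \(\mu\geq 0\) is the proximal penalty (\(\mu=0\) recovers FedAvg).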
Besides accuracy, the FL framework in IoT should also take into account communication and training resource constraints, cybersecurity, and ubiquity. Some of the recent surveys summarized challenges, threats, and solutions of the FL decentralization paradigm for IoT, including limited computing power, unreliable and limited availability, local training, accuracy, communication overhead, etc. [17, 18, 19, 20, 21, 22].
Transfer and edge learning are popular solutions to reduce communication resource consumption in FL frameworks. Zhang _et al._[23] presented a federated transfer learning framework to detect driver drowsiness, where transfer learning was employed to save communication cost in the FL framework. Su _et al._[24] introduced edge servers as a collaborative mechanism, where local models were first aggregated in the edge server and then sent to the global server to aggregate the global model. The benefit of the additional edge server was that communication between the massively parallel clients and the edge server was inexpensive because the edge server was geographically close to the clients. High latency and intermittent connections could thus be mitigated. In addition, the edge server could also provide personalized aggregated local models due to the similarity of geographically adjacent
clients.
Cyber attacks are a problem that cannot be ignored for FL frameworks. Sun _et al._[25] developed an attack method for FL frameworks in IoT, in which a bi-level optimization framework was proposed to compute optimal poisoning attacks on the FL framework, including direct, indirect, and hybrid attacks. Meanwhile, Zhang _et al._[26] utilized a generative adversarial network (GAN)-based approach to attack the FL framework, notably one in which the attacker did not need any prior knowledge to carry out the attack.
Personalization is a common approach for FL frameworks to improve applicability for diverse users [27]. Fallah _et al._[28] proposed a personalized variant of the FL, which allowed clients to perform several gradient descent iterations on an initial global model using local data to obtain a personalized local model. Wu _et al._[29] explored a cloud edge-based personalized FL framework for in-home health monitoring, which addressed the problem that a single global model performed poorly on a specific client. Since the global model could only capture the common features of all clients, it lacked the ability to analyze fine-grained information of specific clients.
### _Federated Learning in Driver Monitoring Applications_
Driver monitoring application (DMA) in IoV is adopted as the research direction in this paper due to its real and visual image data, valuable application scenarios, and relatively unexplored research area. DMA also faces challenges in terms of driver privacy issues, communication, and diverse and personalized driver behavior. Related DMA literature covers a wide variety of devices with algorithms to achieve different purposes, such as dangerous state detection [30], driver emotion recognition [31], driver lane change inference [32], etc. Compared to other methods [33, 34, 35], FL not only highlights efficient learning but also effectively protects the privacy of driver, passenger, and pedestrian biometric information, driving routes, and confidential driving areas such as military installations.
In this paper, we introduce and adapt FL to DMA. Although some FL frameworks exist for DMA, they all suffer from some critical problems. Doshi _et al._[36] proposed a FL edge-device framework to obtain a global model by aggregating feature representations and obtained considerable accuracy in recognizing driver activities. For the i.i.d. setting, the dataset was partitioned for each edge node in a random way, while for the non-i.i.d. setting, the dataset was assigned selectively. Zhao _et al._[37] proposed a FL framework to monitor fatigue driving, where the non-i.i.d. setting was simulated by controlling the number of images per client. The above FL frameworks for DMA did not really take into account the actual situation of the application but artificially created simulation scenarios. Therefore, there is an urgent need for realistic analysis and research on real-world DMA, considering that users (drivers) exist independently and are non-interoperable across different clients (vehicles). Moreover, in addition to the necessity of test datasets, the test client is also a critical evaluation criterion, which can reflect the universality of the FL framework. We summarize the existing oversights and challenges in current FL for DMA frameworks as follows.
* Clients in FL for DMA frameworks are often defined in unrealistic and arbitrary ways. A real and natural definition of a client is a driver or a vehicle.
* No prior work evaluates on testing clients (clients not involved in the training process), so the universality of the FL framework remains untested.
* In the DMA scenario, there is great diversity and individuality in driver behaviors, postures, and facial expressions, which calls for more personalized studies than other general IoV scenarios.
* Similarly, DMA also has diverse scenarios, including diverse vehicle models, interior colors, seat positions, etc., which greatly increases the learning difficulty.
### _Proposed Solution and Contribution_
In this paper, we aim to propose a FL framework applicable and specific to practical applications in IoV, especially DMA; a conceptual FL framework for IoV is illustrated in Fig. 1. Each local client, i.e., vehicle, includes a training module and a perception module. The training module uploads the model parameters to the server after learning and training on the local data. After aggregating and optimizing the parameters of the local client models, the server sends the global model parameters down to the perception module in each local client. Moreover, transfer learning can be used to reduce the number of trainable parameters, resulting in reduced communication consumption. The server can keep different global models for different scenarios, such as road types, weather types, and vehicle types, so that the models have better applicability.
Therefore, a federated transfer-ordered-personalized learning (FedTOP) framework is proposed to address the problems of accuracy, cybersecurity, communication resources, and diversified scenarios. In addition to the transfer-extension shown in Fig. 1, the FedTOP framework also enhances robustness and cybersecurity by dropping out, in an ordered manner, clients whose data may be overfitted or poisoned. Furthermore, the FedTOP framework remarkably improves accuracy by adapting to all clients through the personalized-extension. The contributions of this paper are:
Fig. 1: Structure illustration of a FL framework for IoV. The server interacts with the local client and saves different scenarios as different models. Transparent neurons are non-trainable parameters, and non-transparent neurons are trainable parameters.
* For realistic problems and usage scenarios in DMA, we propose a feasible FL framework FedTOP, realizing privacy protection, high accuracy, low communication requirements, cybersecurity, and pervasiveness. To the best of our knowledge, this is one of the first papers to establish a feasible FL framework for DMA.
* The proposed FedTOP framework is tested on two real-world driver monitoring datasets with and without system heterogeneity, systematically characterizing system heterogeneity in real-world datasets and achieving considerable accuracies of 92.32\(\%\) and 95.96\(\%\), respectively.
* The experiments highlight a realistic and natural client setup, i.e., drivers and vehicles are naturally formed as clients. Moreover, we innovatively propose evaluation criteria for training and testing clients to test the generalization ability of the proposed FedTOP on different clients.
* Through an ablation study, we demonstrate the performance and utility of the transfer, ordered, and personalized extensions. These detachable extensions can be selectively installed according to the task description, and the FL framework combined with different extensions can effectively adapt to different IoT application scenarios.
The remainder of this paper is organized as follows. The problem statement and proposed solution are described in Section II. The experimental setup, heterogeneity, and results are presented in Section III. Section IV discusses the performance of the three extensions of the proposed framework, and Section V summarizes the paper and outlines future work.
## II Methodologies
### _Problem Statement_
The FL framework protects privacy, increases training efficiency, and saves communication resources by sharing only model parameters in IoT. In this paper, the FL framework is used to solve a driver activity classification task in DMA. Clients in real-world IoT are independent and heterogeneous because each client serves only a minimal number of users. Considering more general application scenarios, the global model \(\omega\) aggregated from the training clients \(C\) needs to be compatible with non-training clients \(C^{\prime}\) in addition to \(C\). The data of each client \(D_{c}\) is non-i.i.d. when the data is not interoperable. We can consider a nested model
\[L_{c}=\omega_{c}(D_{c}), \tag{1}\]
where \(\omega_{c}\) is the classifier model corresponding to client \(c\in C\). \(D_{c}\in\mathbb{R}^{n_{c}\times i\times j\times d}\) is the image set with \(n_{c}\) samples, \(i\) rows, \(j\) columns, and \(d\) channels. \(L_{c}\in\mathbb{Z}^{n_{c}}\) is the corresponding label set. The global model \(\omega\) is obtained by aggregating, e.g., averaging, the weights of the local models,
\[\omega=\sum_{c\in C}p_{c}\omega_{c}=\mathbb{E}[\omega_{c}|c\in C], \tag{2}\]
where \(p_{c}\in[0,1]\) is a weight density function over clients with \(\sum_{c\in C}p_{c}=1\); \(p_{c}\) is assigned according to the number of samples on each client. Therefore, the optimization problem of the FL algorithm can be formulated as minimizing the global loss, which is equivalent to minimizing the weighted sum of the local losses,
\[\min_{\omega}\mathcal{L}(\omega)=\sum_{c\in C}p_{c}\mathcal{L}(\omega_{c})= \mathbb{E}[\mathcal{L}(\omega_{c})|c\in C], \tag{3}\]
where \(\mathcal{L}\) is the loss function, to be specified below.
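As a minimal sketch of the aggregation in Eqs. (2)-(3), the snippet below performs sample-weighted averaging of client models; it assumes each client returns a PyTorch `state_dict` together with its sample count \(n_{c}\), so that \(p_{c}=n_{c}/\sum_{c}n_{c}\), and the helper name `aggregate` is ours, not the authors'.

```python
import torch

def aggregate(client_states, client_sizes):
    """Sample-weighted average of client state_dicts (Eqs. 2-3)."""
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]          # p_c, summing to 1
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(w * s[key].float()       # E[w_c | c in C]
                                for w, s in zip(weights, client_states))
    return global_state
```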
For real-world classification tasks, we assume that the distribution of the local models in parameter space follows a multivariate Normal distribution \(\omega_{c}\sim\mathcal{N}\left(\mu_{\omega},\sigma_{\omega}^{2}\right)\), where \(\mu_{\omega}\) is the mean of all local models and \(\sigma_{\omega}^{2}\) is their variance. Fig. 2 shows the process by which the FL algorithm finds the optimal solution of the global model in parameter space. After the initial model is trained locally, communicated, and aggregated globally, the final global model obtained by averaging can be estimated as \(\hat{\omega}=\mu_{\omega}\). Especially in the large-scale parallel application scenarios of IoT, the law of large numbers implies that \(\hat{\omega}=\mu_{\omega}=\omega^{*}\) is an unbiased estimate.
However, there are still some defects in obtaining the global model through average aggregation. Firstly, there is enormous system heterogeneity in IoT, and the global model cannot ensure high accuracy for all clients. Secondly, we inevitably need a measure to mitigate system heterogeneity and defend against potential attacks and poisoning. As shown in Fig. 2, the farther the optimal local model is from the global model, the lower the accuracy, and vice versa. Therefore, it is conceivable that in an FL problem with heterogeneity, the clients' accuracies will also obey a Normal distribution.
### _Proposed Solution_
According to the problem statement, we propose the FedTOP algorithm to address all of the following issues. First, the aggregation of the global model needs to be more stable, which can be achieved by preventing the overfitting of local models. Second, considering the actual communication conditions in IoT, we employ transfer learning to reduce the number of trainable parameters and hence the communication requirements. Third, the global model should be able to resist interference, attacks, and data poisoning, which can be achieved by dropping out local models with large losses in an ordered manner. Fourth, a single global model cannot account for the situation of all clients, especially in the presence of data and system heterogeneity. Therefore, we personalize the global model to suit all training and testing clients.
Fig. 2: Illustration of how the FL algorithm finds the optimal global model solution in parameter space. The shaded areas are accuracy contour areas. The farther the optimal local model deviates from the global model, the lower the client accuracy. Local models enclosed by the same shaded area have similar accuracies.
Following FedProx [16], we use a proximal term to prevent the local models \(\omega_{c}\) from deviating from the global model \(\omega\). The proximal term \(\mathcal{L}_{p}\), which penalizes the distance between the local and global models, is added to the loss function,
\[\mathcal{L}_{p}=\frac{\mu}{2}\|\omega_{c}-\omega\|^{2}, \tag{4}\]
where \(\mu\) is the deviation coefficient, \(\omega_{c}\) are the local client model parameters, and \(\omega\) are the global model parameters. The overall loss function can then be written as
\[\mathcal{L}=\mathcal{L}_{l}+\mathcal{L}_{p}, \tag{5}\]
where \(\mathcal{L}_{l}\) is the loss between the true labels and the predicted labels, such as the negative log-likelihood loss used in our experiments.
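A minimal sketch of this combined loss in PyTorch is shown below; the helper name `fedtop_loss` and the value `mu=0.01` are illustrative assumptions rather than values fixed by the paper.

```python
import torch
import torch.nn.functional as F

def fedtop_loss(log_probs, target, local_model, global_params, mu=0.01):
    """L = L_l + (mu/2) * ||w_c - w||^2, i.e. Eqs. (4)-(5)."""
    nll = F.nll_loss(log_probs, target)   # L_l for LogSoftmax outputs
    prox = sum(torch.sum((p - g.detach()) ** 2)
               for p, g in zip(local_model.parameters(), global_params))
    return nll + 0.5 * mu * prox
```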
_Transfer-extension_ is a common and popular solution in many learning frameworks. It is particularly favored in FL frameworks because it effectively reduces both local client training resources and communication resources. In our experiments, the base model is a ResNet34 [38] pre-trained on ImageNet, where only the last residual block and the fully connected layer are trainable parameters. Although ImageNet is a large object classification dataset far removed from DMA images, the lower layers of convolutional neural networks (CNNs) are similar across tasks and serve to extract generic image features. Therefore, the upper layers, which produce high-level features and representations, are given more attention. The ratio of the reduced communication resource requirement is approximately equal to the ratio of non-trainable parameters to total parameters,
\[\text{Commun}_{\downarrow}\approx\frac{|\omega_{\text{non-trainable}}|}{| \omega|}=37.46\%, \tag{6}\]
where \(\text{Commun}_{\downarrow}\) is the reduced communication resource requirement, \(|\omega_{\text{non-trainable}}|\) is the number of non-trainable model parameters, and \(|\omega|\) is the total number of the model parameters. Therefore, the transfer-extension reduces the communication requirement by 37.46\(\%\) by decreasing the trainable parameters.
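The freezing scheme can be sketched as follows (a recent torchvision is assumed; the 10 output classes match SFDDD, and the printed ratio approximates Eq. (6) rather than reproducing 37.46\(\%\) exactly):

```python
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the whole backbone
    p.requires_grad = False
for p in model.layer4.parameters():   # unfreeze the last residual block
    p.requires_grad = True
# new classifier head (trainable by construction), with LogSoftmax as in FedProx
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 10),
                         nn.LogSoftmax(dim=1))

non_trainable = sum(p.numel() for p in model.parameters() if not p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Commun. reduction ~ {non_trainable / total:.2%}")  # cf. Eq. (6)
```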
_Ordered-extension_ drops out, in an ordered manner, clients with abnormally large losses, which may be subject to malicious attacks and poisoning, extensive data and system heterogeneity, or model underfitting. Such local clients should be discarded to enhance the applicability of the global model. The ordered-extension not only enhances accuracy and robustness but also secures the global model. After all clients upload their local model parameters and final training losses to the server, the server aggregates only the \(q\) (\(q\in\mathbb{N}\), \(q\leq|C|\)) local models with the lowest losses into the global model. The set of \(q\) local models can be expressed with the \(q\)-argmin operator, which returns the \(q\) arguments with the smallest values:

\[C_{q}=q\text{-}\arg\min_{c\in C}\mathcal{L}(\omega_{c}). \tag{7}\]
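In code, selecting \(C_{q}\) is a simple partial sort over the reported losses; a minimal sketch:

```python
def q_argmin_clients(client_losses, q):
    """Indices of the q clients with the smallest final training loss (Eq. 7);
    the remaining clients are dropped from this round's aggregation."""
    order = sorted(range(len(client_losses)), key=lambda c: client_losses[c])
    return order[:q]
```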
_Personalized-extension_ promotes, popularizes, and adapts the global model to the heterogeneity of all clients. As shown in Fig. 2, the global model cannot be applied to all clients due to the ubiquitous heterogeneity. The region of interest (ROI) of the model may vary with system heterogeneity, such as different camera angles, seat positions, and vehicle structures, resulting in differences in the relative position of the driver in the image. The personalized-extension therefore trains the global model for several additional epochs on each client to obtain a more personalized local model and improve accuracy. On the one hand, compared with the traditional FL algorithm, the personalized-extension significantly and effectively improves accuracy and confidence. On the other hand, compared to training only locally, the personalized FL algorithm improves training efficiency and avoids overfitting of the local model. In particular, the personalized FL algorithm generalizes to non-training clients \(C^{\prime}\), which may have minimal training resources: after receiving the global model, the non-training clients \(C^{\prime}\) can obtain a highly accurate and reliable local model with minimal training. The system diagram of the proposed FedTOP is shown in Fig. 3.

Fig. 3: The global model is shared with training and testing clients after iterative training and optimization on massively parallel training clients. Both training and testing clients are personalized locally and then evaluated on their respective testing sets. Clients that attack or poison the model, such as Client 2 with its large loss, are discarded.
In the proposed FedTOP framework, the clients communicate with the server for \(T\) rounds, and all clients \(C\) train for \(E\) epochs in parallel between communications. For our preliminary experiments, we set \(T=10\) and \(E=5\). For the transfer-extension, the local model is a transfer learning model based on ResNet34 pre-trained on ImageNet, with only the last residual block and fully connected layer set as trainable parameters. In addition, we add an extra fully connected layer to match the number of classification categories. Following FedProx, the activation function of the last layer is LogSoftmax, and the loss function \(\mathcal{L}_{l}\) is the negative log-likelihood loss. \(\omega^{1}\) denotes the initial model parameters. The proposed FedTOP is described in Algorithm 1, and the personalization process in Algorithm 2.
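A high-level sketch of this loop, reusing the `fedtop_loss`, `aggregate`, and `q_argmin_clients` helpers sketched earlier, is given below; `local_train` and the optimizer settings are illustrative assumptions, not the authors' exact Algorithm 1.

```python
import copy
import torch

def local_train(model, loader, global_params, epochs=5, lr=1e-3, mu=0.01):
    """One client's update: E epochs of SGD on the FedTOP loss."""
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss = torch.tensor(0.0)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = fedtop_loss(model(x), y, model, global_params, mu)
            loss.backward()
            opt.step()
    return model.state_dict(), loss.item()

def fedtop(model, client_loaders, T=10, E=5, q=None):
    """T communication rounds of local training, ordered dropout, aggregation."""
    for _ in range(T):
        global_params = [p.detach().clone() for p in model.parameters()]
        states, losses, sizes = [], [], []
        for loader in client_loaders:              # run in parallel in practice
            local = copy.deepcopy(model)           # each client starts from w
            state, loss = local_train(local, loader, global_params, epochs=E)
            states.append(state); losses.append(loss)
            sizes.append(len(loader))              # batch count as a proxy for n_c
        keep = q_argmin_clients(losses, q or len(client_loaders))
        model.load_state_dict(aggregate([states[i] for i in keep],
                                        [sizes[i] for i in keep]))
    return model  # afterwards, each client personalizes locally (Algorithm 2)
```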
## III Experiment and Results
Considering the data and system heterogeneity, experiments are conducted on two open real-world driver monitoring datasets, including State Farm Distracted Driver Detection (SFDDD) [39] and DriveAct [40]. In addition to comparing with FedProx as a baseline, this paper also compares the performance of the transfer, ordered, and personalized extensions through an ablation study.
### _Experiment Setup_
To compare the impact of system heterogeneity on FL frameworks, the proposed FedTOP is tested on driver monitoring datasets with and without system heterogeneity. The SFDDD dataset includes 26 drivers and 10 activities, and the DriveAct dataset includes 15 drivers and 12 activities. The SFDDD dataset exhibits system heterogeneity, that is, different drivers have different vehicles, seat positions, camera angles, etc., as shown in Fig. 4(a), 4(b), 4(c), and 4(d). The DriveAct dataset does not exhibit system heterogeneity, i.e., all subjects had their data collected in the same system: recorded from the same camera angle, different drivers read the same magazine in the same vehicle, as shown in Fig. 4(e), 4(f), 4(g), and 4(h).
To show the heterogeneity between different clients in the two datasets more clearly and visually, Fig. 5 presents histograms of sample images from the two datasets. The SFDDD dataset, with system heterogeneity, shows considerably larger differences between the histogram distributions than the DriveAct dataset without system heterogeneity, and the mean pixel value of the SFDDD images is larger. A possible reason is that the vehicle interiors in the DriveAct views are darker, so most pixel values are lower. Therefore, the FL framework may be more challenged by scene information, such as different vehicle interiors, when training on the SFDDD dataset.
Clients are naturally divided based on the drivers. In order to better demonstrate the role of the personalized-extension, each dataset is first divided into training clients and testing clients at a ratio of about 0.8 : 0.2, with \(|C_{\text{SFDDD}}|=20\), \(|C_{\text{SFDDD}}^{\prime}|=6\), \(|C_{\text{DriveAct}}|=12\), and \(|C_{\text{DriveAct}}^{\prime}|=3\). The data of each client is then divided into a training set, validation set, and testing set at a ratio of 0.7 : 0.15 : 0.15, respectively.
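A minimal sketch of this two-level split is given below (the exact driver-to-client assignment is not specified in the paper, so the random shuffles are assumptions):

```python
import random

def split_clients(drivers, train_frac=0.8, seed=0):
    """Split drivers (the natural clients) into training/testing clients,
    e.g. 20/6 for SFDDD and 12/3 for DriveAct."""
    drivers = list(drivers)
    random.Random(seed).shuffle(drivers)
    k = int(round(train_frac * len(drivers)))
    return drivers[:k], drivers[k:]

def split_samples(samples, seed=0):
    """Per-client 0.7 / 0.15 / 0.15 train / validation / test split."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    a, b = int(0.7 * n), int(0.85 * n)
    return samples[:a], samples[a:b], samples[b:]
```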
Fig. 4: Example activities of four drivers from each of the SFDDD and DriveAct datasets.
Fig. 5: Sampled client image histograms of SFDDD and DriveAct datasets.
After the global model is trained on the training sets of the training clients, the final trained global model is shared with all clients for personalization. Personalization of the global model is performed only on the training sets, while the personalized local models are tested on the unseen testing sets. The FL architectures are built on PyTorch and trained on an Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz and an Nvidia GeForce RTX(TM) 3080 GPU.
### _Ablation Study and Results_
We explore the role of each FedTOP extension on two real-world datasets through an ablation study. FedProx is used as a baseline for comparison. According to the experimental setup described in the previous subsection, the experimental results are shown in Table I.
The results and comparisons for the two datasets and three extensions are shown in Fig. 6, which corresponds to the training process of Algorithm 1. By observing the accuracy and loss curves on the two datasets, it can be concluded that the SFDDD dataset with system heterogeneity behaves fundamentally differently from the DriveAct dataset without system heterogeneity. The SFDDD dataset clearly requires more communication rounds to converge, while the DriveAct dataset converges quickly, especially at the first communication. Therefore, for real-world datasets, system heterogeneity can be mitigated with more communication rounds.

Fig. 6: Accuracy and loss curves of the FL framework and its extensions on the SFDDD and DriveAct datasets, i.e., the training process of Algorithm 1. Personalization does not affect the convergence of the global model in the FL framework.

Fig. 7: Testing accuracy of the training and testing clients on both the SFDDD and DriveAct datasets as it varies with the personalization epoch, i.e., the testing results of Algorithm 2.
By observing Fig. 6c, 6d, 6g, and 6h, it can be seen that the ordered-extension diminishes the stability of the system. Although discarding anomalous large-loss local models reduces the bias of the global model, it also increases its variance, resulting in reduced generalizability. By observing Fig. 6b, 6d, 6f, and 6h, we can see that the effect of the transfer-extension differs between the datasets with and without system heterogeneity. On the one hand, the transfer-extension increases the variance of the model on the SFDDD dataset and leads to slower and less stable convergence. On the other hand, the transfer-extension improves the convergence speed on DriveAct, and the convergence is more stable. A possible reason is that the transfer-extension retains only a small number of trainable parameters, so the neural network cannot effectively learn human behavioral features in the SFDDD dataset with system heterogeneity. For the DriveAct dataset without system heterogeneity, however, all factors except the driver are constant, and the local model does not need to attend to these identical pixels, only to the changing ones, including objects such as drivers, computers, and magazines. Therefore, for the DriveAct dataset, the transfer-extension effectively improves convergence and stability. The proposed FedTOP framework obtains 92.32\(\%\) and 95.96\(\%\) accuracy on the SFDDD and DriveAct datasets, respectively, with five epochs of personalization training. Compared to the FedProx baseline, FedTOP effectively improves the accuracy by 462\(\%\) while also reducing communication resources by 37.46\(\%\). The results demonstrate the feasibility of the proposed FedTOP in terms of communication resource saving, accuracy improvement, robustness, and cybersecurity.
### _Performance of Personalized-Extension_
The personalized-extension, as the most effective approach to improving accuracy, needs further discussion and analysis. Based on the division of training and testing clients in Section III-A, in this subsection we further discuss how the trained and aggregated global model is adapted to both training and testing clients. The results of the personalized-extension on the two datasets are shown in Fig. 7 for different personalization epochs, corresponding to Algorithm 2. The personalization process differs significantly between the datasets with and without system heterogeneity, similar to the results in Fig. 6. The clients in the DriveAct dataset show faster convergence, smaller accuracy variance, and higher final accuracy. On the contrary, the clients in the SFDDD dataset not only converge more slowly but also include an anomalous client with relatively low accuracy. A possible reason is that this anomalous client has large data and system heterogeneity, causing its optimal model to deviate significantly from the aggregated global model.
Fig. 8 further demonstrates, via class activation maps (CAM) [41], that the trained global model repositions the ROI during the personalized training process. The test client of the SFDDD dataset can be seen struggling with the personalization process. The trained global model focuses the ROI on the seat backrest; the driver's chest, hand, and knee; and the vehicle door. Due to the system heterogeneity present in the SFDDD dataset, the positions of the driver, seat, and steering wheel shown in Fig. 8a differ from those of other clients, shown in Fig. 4b, 4c, and 4d. Therefore, the initial ROI likely corresponds to the driver's position in other clients. Over the five personalization training epochs, the local model effectively repositions the ROI onto the driver, which is exactly what the personalized-extension is intended to achieve. Moreover, the personalization process also reduces the number of ROIs while concentrating more attention on a specific area.

Fig. 8: CAMs of the test clients in the SFDDD and DriveAct datasets during the personalization process. (a), (b), (c), and (d) show a test client in the SFDDD dataset, the same as in Fig. 4a. (e), (f), (g), and (h) show a test client in the DriveAct dataset, the same as in Fig. 4e.
On the contrary, for the test clients in the DriveAct dataset, the adjustment of the ROI is negligible. Note that the ROI does not necessarily have to cover the driver's body or an object such as the magazine. The ROI should cover the pixels that distinguish between different activities, such as static activities like reading the magazine and dynamic activities like fastening a seatbelt in the DriveAct activity setting. These ROIs focus on areas where large differences are likely to occur. The fact that the ROIs in the DriveAct dataset cover almost the same pixels throughout the personalization process further demonstrates the negative impact of system heterogeneity on the FL framework.
## IV Discussion
The two datasets used, SFDDD and DriveAct, still have some flaws. First, although the SFDDD dataset takes system heterogeneity into account, quite a few drivers collected data in the same vehicle; that is, the number of clients is greater than the number of users. Therefore, there are still some differences between the dataset and real-world data, which means that the proposed FedTOP may need more communication rounds to achieve similar accuracy on a real-world dataset. Second, no driver monitoring dataset with real poisoned data currently exists, so the effect of the ordered-extension cannot be fully demonstrated. Varying the modality, position, and angle of the camera, or generating fake data, can approximate poisoned data, but cannot substitute for the real thing. Moreover, due to road safety guidelines, current datasets contain only driving on safe roads or simulated driving; the drivers' posture, demeanor, facial concentration, etc., are therefore far from real driving behavior. Hence, there is an urgent need for a more realistic dataset that includes camera images from different positions and angles, different vehicle scenes, and more drivers driving on real roads.
For a FL framework in IoT, in addition to accuracy, factors such as communication requirements, robustness, fairness, and cybersecurity also need to be considered as evaluation criteria. Although the transfer and ordered extensions appear to reduce rather than improve accuracy in the current experimental results, they can potentially improve the overall performance of the FL framework; we therefore retain both extensions as future directions. The personalized-extension is an approach related to transfer learning and incremental learning. On the one hand, the local client learns incrementally from the trained global model, but does not intentionally retain previously learned knowledge. On the other hand, the global model is transferred to the client dataset as in the transfer-extension, while the low-level non-trainable weights remain pre-trained on ImageNet. Therefore, the proposed personalized-extension effectively uses the trained global model weights to fit different client data, e.g., by repositioning the ROIs. Although the personalized-extension requires additional local training for each client, it brings many benefits, including high accuracy, applicability to non-training clients, and customization. Conceivably, the personalized-extension can effectively address the problem of system heterogeneity, e.g., it can adapt to different cameras, camera angles, vehicle interiors, etc.
## V Conclusion
In this paper, we propose a FL framework, FedTOP, for DMA to address the issues of privacy preservation, efficient training, communication resource saving, poisoned data, and diversified scenarios. Through the ablation study, the impact, role, and performance of the three extensions (transfer, ordered, and personalized) are disclosed. Moreover, the experiments demonstrate dramatic differences between datasets with and without system heterogeneity. In addition to achieving 92.32\(\%\) and 95.96\(\%\) accuracy on the two datasets for testing clients, FedTOP reduces communication consumption by 37.46\(\%\) and potentially improves cybersecurity. The experimental results show that the proposed FedTOP is a highly accurate, lightweight, privacy-preserving, robust, cybersecure, and universally applicable FL framework for potential DMA.
Future work lies in the continued study of the extensions. For the ordered-extension, a possible plan is to introduce malicious local clients that attack and poison the global model. For example, subjects may not place the camera on the side as instructed but in front or behind instead. Such outliers may cause the global model to deviate significantly from the optimal solution; in that case, the ordered-extension can prevent the deviation of the global model by discarding the clients with the largest losses. For the transfer-extension, there is currently no general driver monitoring base model, so we used a model pre-trained on ImageNet. Future work could pre-train a dedicated driver model as the base model, which should yield better performance in DMA. Fig. 1 shows the envisioned FL framework for IoV, but the datasets used do not contain scenario information such as road, weather, or vehicle models. Therefore, we look forward to a well-developed real-world dataset that includes such scenario information, as well as data and system heterogeneity.
|
2307.06368 | Metallicity beats sSFR: The connection between superluminous supernova
host galaxy environments and the importance of metallicity for their
production | We analyse 33 Type I superluminous supernovae (SLSNe) taken from ZTF's Bright
Transient Survey to investigate the local environments of their host galaxies.
We use a spectroscopic sample of galaxies from the SDSS to determine the
large-scale environmental density of the host galaxy. Noting that SLSNe are
generally found in galaxies with low stellar masses, high star formation rates,
and low metallicities, we find that SLSN hosts are also rarely found within
high-density environments. Only $3\substack{+9 \\-1}$ per cent of SLSN hosts
were found in regions with 2 or more bright galaxies within 2 Mpc. For
comparison, we generate a sample of 662 SDSS galaxies matched to the
photometric properties of the SLSN hosts. This sample is also rarely found
within high-density environments, suggesting that galaxies with properties
required for SLSN production favour more isolated environments. Furthermore, we
select galaxies within the Illustris-TNG simulation to match SLSN host galaxy
properties in colour and stellar mass. We find that the fraction of simulated
galaxies in high-density environments quantitatively matches the observed SLSN
hosts only if we restrict to simulated galaxies with metallicity
$12+\log($O/H$) \leq 8.12$. In contrast, limiting to only the highest sSFR
galaxies in the sample leads to an overabundance of SLSN hosts in high-density
environments. Thus, our measurement of the environmental density of SLSN host
galaxies appears to break the degeneracy between low-metallicity or high-sSFR
as the driver for SLSN hosts and provides evidence that the most constraining
factor on SLSN production is low-metallicity. | Cressida Cleland, Sean L. McGee, Matt Nicholl | 2023-07-12T18:00:03Z | http://arxiv.org/abs/2307.06368v2 | Metallicity beats sSFR: The connection between superluminous supernova host galaxy environments and the importance of metallicity for their production
###### Abstract
We analyse 33 Type I superluminous supernovae (SLSNe) taken from ZTF's Bright Transient Survey to investigate the local environments of their host galaxies. We use a spectroscopic sample of galaxies from the SDSS to determine the large-scale environmental density of the host galaxy. Noting that SLSNe are generally found in galaxies with low stellar masses, high star formation rates, and low metallicities, we find that SLSN hosts are also rarely found within high-density environments. Only \(3^{+9}_{-1}\) per cent of SLSN hosts were found in regions with 2 or more bright galaxies within 2 Mpc. For comparison, we generate a sample of 662 SDSS galaxies matched to the photometric properties of the SLSN hosts. This sample is also rarely found within high-density environments, suggesting that galaxies with properties required for SLSN production favour more isolated environments. Furthermore, we select galaxies within the Illustris-TNG simulation to match SLSN host galaxy properties in colour and stellar mass. We find that the fraction of simulated galaxies in high-density environments quantitatively matches the observed SLSN hosts only if we restrict to simulated galaxies with metallicity \(12+\log(\mathrm{O/H})\leq 8.12\). In contrast, limiting to only the highest sSFR galaxies in the sample leads to an overabundance of SLSN hosts in high-density environments. Thus, our measurement of the environmental density of SLSN host galaxies appears to break the degeneracy between low-metallicity or high-sSFR as the driver for SLSN hosts and provides evidence that the most constraining factor on SLSN production is low-metallicity.
keywords: galaxies: evolution, star formation -- transients: supernovae
## 1 Introduction
While the detection rates of supernovae have increased rapidly over the last decade or so, thanks to wide-field high-cadence surveys such as Pan-STARRS (Chambers et al., 2016), ATLAS (Tonry et al., 2018) and ZTF (Bellm et al., 2019), superluminous supernovae (SLSNe) remain a rare and elusive class of transient event. SLSNe were originally classified as supernova events 10-100 times brighter than typical supernovae (Quimby et al., 2011; Gal-Yam, 2012), but are now classified by their unique spectra (e.g. Lunnan et al., 2018; Quimby et al., 2018; Angus et al., 2019). Their lightcurves differ from those of other supernovae in being both broader and brighter. There are two classes of SLSNe: Type I and Type II. Type I SLSNe (often referred to as simply SLSNe, a convention we adopt in this work) are dominated by OII absorption lines at maximum luminosity, while Type II SLSNe exhibit sharply peaked hydrogen emission lines from circumstellar interaction. The physical mechanisms which cause Type I SLSNe remain a topic of debate, with mechanisms such as a central engine in the form of a magnetar (see e.g. Kasen and Bildsten, 2010; Inserra et al., 2013; Nicholl et al., 2017) or interaction with the circumstellar medium (see e.g. Smith and McCray, 2007; Chevalier and Irwin, 2011; Chatzopoulos et al., 2012; Ginzburg and Balberg, 2012) being proposed as possible power sources for the increased luminosity.
We can use properties of their host galaxies to infer the conditions required for a SLSN to occur. This can help constrain the processes at work. SLSNe have been found mostly in low-mass galaxies, with very few being found in host galaxies with stellar mass \(>10^{8}\) M\({}_{\odot}\)(Schulze et al., 2018). This is surprising, as one might expect that with more stars comes more supernova events (Sullivan et al. (2006); Smith et al. (2012); Wiseman et al. (2021), see also Perley et al. (2016)). Various authors have found that SLSN host galaxies typically have high specific star-formation rates (\(\sim 10^{-9}\) yr\({}^{-1}\), Neill et al., 2011; Leloudas et al., 2015; Angus et al., 2016), leading to the idea that SLSNe may be the first stars to explode following a starburst. Leloudas et al. (2015) note that SLSN host galaxies have properties which are consistent with extreme emission line galaxies (EELGs), i.e. high-sSFR and also highly ionised gas. Meanwhile, Lunnan et al. (2014), Chen et al. (2017) and Perley et al. (2016) argue that the low-metallicity observed in SLSN hosts is the more important factor in the production of these rare transients, and that
their low metallicities can explain the extreme energies seen (Chen et al., 2013). Nevertheless, since dwarf galaxies tend to have both low metallicities and bursty star-formation histories, and the production of SLSNe appears averse to high-metallicity and low specific star-formation rate (Schulze et al., 2018), there is a degeneracy regarding which of these factors is more critical.
It is well known that the local environment of a galaxy, that is, whether it is located within a galaxy group, a galaxy cluster, or is isolated, has a profound effect on a number of galaxy properties, including the metallicity (Tremonti et al., 2004; Ellison et al., 2009; Cooper et al., 2009), star formation rate (Wetzel et al., 2013; Cleland and McGee, 2021) and overall evolution of the galaxy (Dressler, 1980). Gravitational interactions between galaxies can disturb the gas and dust which fuels star formation, leading to starburst phases; similarly, galaxies may merge, which also affects the star-formation rates of the galaxies involved. Conversely, high-density environments may cause the quenching of star formation in an infalling galaxy. It is therefore natural to investigate the local environment of SLSN host galaxies to uncover any environmental factors that may impact SLSN rates. Orum et al. (2020) find that up to 50 per cent of superluminous supernova host galaxies have at least one companion within 5 kpc, which they use to explain the increased star-formation rates required for SLSNe to occur. However, for the purposes of this work we are interested in the overall density of the local environment of the SLSN host galaxy. That is, a close companion galaxy of arbitrary stellar mass within kiloparsecs of the supernova host galaxy does not necessarily lead to the same environmental effects at larger scales (e.g., Mpc). In order to probe for such effects, we aim to use bright galaxies as a proxy for density, noting that this is a first-order approach, with the absolute magnitude of galaxies roughly tracing stellar mass. By investigating the environments of these galaxies in this way, we aim to break the degeneracy between the requirements of low metallicity and high specific star-formation rate for SLSN production.
In this work we identify isolated SLSNe and SLSNe in groups, and compare various host galaxy properties between them. We also compare against Type Ia supernova host galaxies, which occur in a wider variety of hosts. We then investigate these host galaxies further within the Illustris-TNG simulation suite, by exploring the metallicities and specific star-formation rates of a sample of simulated galaxies with observational properties similar to those of SLSN hosts.
The paper is structured as follows: in Section 2, we describe the data used and how it was obtained; in Section 3 we explain the results and discuss their implications; finally, in Section 4 we summarise our findings. For the computation of cosmological distances, we use a flat universe with \(H_{0}=70.2\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.277\).
## 2 Data
We obtain coordinates, redshifts and host photometry (\(g-i\) colours and \(M_{i}\) absolute magnitudes) of 55 Type I SLSNe (hereafter simply referred to as SLSNe) and 4816 Type Ia supernovae from the ZTF Bright Transient Survey1(BTS; Masci et al., 2019; Perley et al., 2020). This was the full sample of Type I SLSNe and Type Ia SNe available in the BTS at the time of analysis (September 2022). The BTS is used to obtain properties of supernova events and their host galaxies due to it being a well-defined sample with a limiting magnitude of \(\approx 18.5\) mag that selects for nearby events with a higher chance of host detection. The supernovae in this sample have been classified spectroscopically, by a variety of research groups. Table 1 lists and cites the classification group for each SLSN. The code Sherlock2 is used to cross-identify the host galaxy and its photometry (Smith et al., 2020); this information is available directly from the Lasair broker (Smith et al., 2019) for each source. Some sources have no catalogued host, and are marked as 'orphans'. See Figure 1 for examples of a SLSN source with a host detection (left panel) and a SLSN source where no host was detected (middle panel). We denote the SLSN host galaxy sample as SLSN-g, and the Type Ia supernova host galaxy sample as SN-Ia.
Footnote 1: [https://sites.astro.caltech.edu/ztf/bts/bts.php](https://sites.astro.caltech.edu/ztf/bts/bts.php)
Footnote 2: [https://github.com/thespacedoctor/sherlock](https://github.com/thespacedoctor/sherlock)
We obtain a spectroscopically-redshifted sample of 300000 random galaxies (Spec-z) from the Sloan Digital Sky Survey (SDSS) DR17 (Smee et al., 2013; Blanton et al., 2017; Abdurro'u et al., 2022), such that their redshifts are local and precisely-known, \(0<z<0.4\). We also obtain 16494 random photometrically-redshifted galaxies from SDSS DR17 (Fukugita et al., 1996; Gunn et al., 1998, 2006; Doi et al., 2010) to better match the parameter space of the SLSNe sample, with \(0<z<0.4\), such that \(z_{\rm err}<0.1\) and \(z_{\rm err}/z<0.5\). Both samples were obtained using the SDSS query service CasJobs3. For each sample of supernova hosts, we restrict to sources within the on-sky footprint of the spectroscopic sample, resulting in 35 SLSNe (18 with detected hosts) and 3450 Type Ia supernovae.
Footnote 3: [http://cas.sdss.org/dr17/](http://cas.sdss.org/dr17/)
### Matched sample generation
Since the number of SLSN host galaxies is so low, it is desirable to generate a matched galaxy sample, based on SLSN host galaxy properties. This means we can make statistical comparisons of SLSN host-like galaxies, but on a much larger sample of galaxies. The properties of the SLSN host galaxies are based on their photometry, so we generate a sample of galaxies that are matched based on photometric properties such as absolute magnitude and colour. We generate this matched sample to the SLSN host galaxies from the SDSS photometric galaxies (Cooper et al., 2009). For each SLSN host galaxy,
\begin{table}
\begin{tabular}{l c c c c c c} Sample & \# obj. & Redshift & Median(\(z\)) & Magnitude & Median(\(M_{i}\)) & Description \\ \hline \hline SLSN-g & 33 & \(0.064\leq z\leq 0.39\) & 0.159 & \(-22.74\leq M_{i}\leq-15.32\) & -18.41 & SLSN host galaxies \\ \hline SDSS-m & 662 & \(0.036\leq z\leq 0.312\) & 0.117 & \(-22.196\leq M_{i}\leq-16.709\) & -19.632 & SDSS photometric-z galaxies matched to SLSN-g \\ \hline SN-Ia & 3268 & \(0.001\leq z\leq 0.156\) & 0.065 & \(-26.56\leq M_{i}\leq-11.56\) & -20.76 & Type Ia supernova host galaxies \\ \hline Spec-z & 300000 & \(0.0\leq z\leq 0.4\) & 0.144 & \(-31.214\leq M_{i}\leq-10.006\) & -22.385 & SDSS spectroscopic-z galaxies \\ \hline \end{tabular}
\end{table}
Table 1: Descriptions of each of the samples used. The ranges shown in the redshift and magnitude columns refer to the minima and maxima of that property for each sample, not necessarily the selection constraints.
we randomly search for a photometric galaxy within a sphere in parameter space according to Equation 1:
\[\left(\frac{\Delta\rm colour}{0.2}\right)^{2}+\left(\frac{\Delta M_{i}}{1.2}\right) ^{2}+\left(\frac{\Delta z}{0.04}\right)^{2}=1, \tag{1}\]
where \(\Delta\rm colour\), \(\Delta M_{i}\) and \(\Delta z\) refer to the differences between the \((g-i)\) colour, absolute magnitude, and redshift of the SLSN host galaxy and the photometric galaxy, and a galaxy is accepted as a match if the left-hand side is \(\leq 1\). The scaling factor in the denominator of each term in Equation 1 comes from taking the range of each property and multiplying by a factor of 0.1. This factor was chosen by visual inspection to ensure an adequate chance of finding an appropriate match while reducing mismatches in parameter space; applying it gives each property the same weight in parameter space. For each SLSN host galaxy, we repeatedly draw random SDSS galaxies until a unique galaxy satisfying Equation 1 is found, up to a maximum of 1000 draws. This process is repeated over the SLSN host galaxies 10000 times, resulting in a matched sample (SDSS-m) of 662 galaxies. Note that varying the multiplicative factor in Equation 1, or the number of repetitions performed for each galaxy, has no quantitative effect on the results discussed in Section 3.1. For this matching procedure, we only use the 18 SLSN sources with detected hosts and reliable host photometry. The rest of the analysis depends on the positions and redshifts of each SLSN source directly, and so the entire sample of 33 SLSNe is used.
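A minimal sketch of this matching procedure follows; the dictionary field names are placeholders, and a galaxy counts as a match when it lies inside the ellipsoid whose boundary is Equation 1.

```python
import numpy as np

def is_match(host, gal):
    """True if gal lies inside the ellipsoid of Eq. (1) around the SLSN host."""
    d2 = (((gal["colour"] - host["colour"]) / 0.2) ** 2
          + ((gal["Mi"] - host["Mi"]) / 1.2) ** 2
          + ((gal["z"] - host["z"]) / 0.04) ** 2)
    return d2 <= 1.0

def matched_sample(hosts, photo_gals, max_tries=1000, repeats=10000, seed=42):
    """Random search for unique photometric matches to each host galaxy."""
    rng = np.random.default_rng(seed)
    matched = set()
    for _ in range(repeats):
        for host in hosts:
            for _ in range(max_tries):
                i = int(rng.integers(len(photo_gals)))
                if i not in matched and is_match(host, photo_gals[i]):
                    matched.add(i)
                    break
    return matched  # indices forming the SDSS-m sample
```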
The details of each sample are listed in Table 1. Histograms of the distributions of the colour, absolute magnitude, and redshift of SLSN-g and SDSS-m are plotted in Figure 2. Kolmogorov-Smirnov tests validate that the distributions match well, with the added benefit of over an order of magnitude more galaxies in the matched sample than SLSN host galaxies.
## 3 Results and Discussion
### Observations
Figure 1: PanSTARRS \(i\)-band cut-out images (Flewelling et al., 2020) of SLSN sources and of a SN Type Ia host. Panel a) shows object ZTF21aasarmti, panel b) shows object ZTF22aazrjdc and panel c) shows object ZTF18aangbkx. The magenta circle in each panel depicts the location of the source. In panel b), no host galaxy was found for the source. The host galaxy in panel c) is in a higher-density environment than the SLSN hosts; see Section 3.1 for details. The absolute magnitude of the host galaxy in panel a) is \(M_{i}=-21.36\) and its redshift is \(z=0.193\). The absolute magnitude of the host galaxy in panel c) is \(M_{i}=-22.67\) and its redshift is \(z=0.070\). The redshift of the SLSN source in panel b) is \(z=0.200\). The size of each of the images is \(62.50^{\prime\prime}\).

Figure 2: Histograms of the colours, absolute magnitudes, and redshifts of galaxies in SLSN-g with reliable photometry compared to the matched SDSS sample. KS-tests result in p-values of 0.46, 0.36, and 0.75 respectively, validating that the SLSN-g and SDSS-m samples match each other.

Using bright, well-observed galaxies as a proxy for density, we count the number of bright (\(M_{i}<-21.5\)) Spec-z galaxies that lie at an on-sky projected distance of 2 Mpc or less from the target galaxy, within \(\Delta z=0.005\), based on the typical velocity dispersion of a galaxy group/cluster (Yang et al., 2007). We denote this quantity as \(N_{2}\). We use \(M_{i}<-21.5\) as the threshold for bright galaxies as this magnitude is the completeness limit of the entire spectroscopic sample over this redshift range, and we use 2 Mpc as it is the upper end of galaxy group diameters (Yang et al., 2007). Note that varying the \(M_{i}\) threshold has no qualitative effect on the results. We calculate \(N_{2}\) for SLSN-g, SDSS-m, and for SN-Ia as a control. SN-Ia works as a control sample because these galaxies are much less confined to a specific region of parameter space than SLSNe, and the photometry is easily comparable to SLSN-g since both samples come from ZTF. A 2-d histogram showing these results is shown in Figure 3, with \(N_{2}\) shown as a function of absolute magnitude. Note the log scale in the \(z\)-axis. This quantity is also plotted for SLSN-g and SDSS-m in dark purple and light purple, respectively. We can see that while the distribution of \(N_{2}\) for SN-Ia occupies a large and varied region of parameter space (note that 70 per cent of SN-Ia still have \(N_{2}=0\)), \(N_{2}\leq 2\) for SLSN-g and \(N_{2}\leq 3\) for SDSS-m, with the majority of galaxies having \(N_{2}=0\). This is not simply a consequence of the reduced numbers of SLSN-g compared to SN-Ia; binned by magnitude, the fraction of SLSN-g with any bright neighbour (i.e. \(N_{2}\geq 1\)) reaches a maximum of about 10 per cent4, which is only half of the minimum fraction of SN-Ia with any neighbour in the same bins. Similarly, SDSS-m only reaches a maximum of about 10 per cent.
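The \(N_{2}\) statistic can be sketched as follows with astropy, adopting the cosmology of this work; the catalogue field names are assumptions, and converting the angular separation with the proper-distance scale is one reasonable reading of the projected 2 Mpc criterion.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.2, Om0=0.277)  # cosmology adopted in this work

def n2(target, bright, r_mpc=2.0, dz=0.005):
    """Count bright (M_i < -21.5) Spec-z galaxies within a 2 Mpc projected
    radius and |dz| < 0.005 of the target galaxy."""
    near = np.abs(bright["z"] - target["z"]) < dz
    c_t = SkyCoord(target["ra"] * u.deg, target["dec"] * u.deg)
    c_b = SkyCoord(bright["ra"][near] * u.deg, bright["dec"][near] * u.deg)
    theta = c_t.separation(c_b)
    d_proj = (cosmo.kpc_proper_per_arcmin(target["z"])
              * theta.to(u.arcmin)).to(u.Mpc)
    return int(np.sum(d_proj.value < r_mpc))
```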
Footnote 4: This number is entirely dependent on the choice of binning since \(N_{2}\approx 1\) for SLSN-g in any magnitude bin
We consider galaxies with \(N_{2}\geq 2\) as being in a high-density environment, and everything else as being in a low-density environment. This number is chosen under the assumption that a typical galaxy group or cluster halo is of the order of a few Mpc in diameter (Yang et al., 2007), so any more than two bright (large) galaxies would constitute a high-density environment. We emphasise that this is a generous constraint, set in an attempt to maximise the number of SLSN hosts in high-density environments. An example of a SN-Ia host galaxy with a high \(N_{2}\) number (\(N_{2}=8\)) is shown in the right panel of Figure 1. We plot SLSN-g, SDSS-m, and SN-Ia on a colour-magnitude diagram in Figure 4, separated by high- and low-density. Contour lines map out the SN-Ia galaxies that are in high-density environments. We see that the high-density SLSN-g and SDSS-m galaxies are found on the outer regions of these contour lines. That is, they preferentially avoid the region in phase-space where high-density environments occur. This result, along with the finding that the matched sample galaxies are also almost exclusively found in low-density environments, indicates that the galactic properties of SLSN hosts are the constraining factor in the density of the environment of these galaxies. In other words, the type of galaxies in which SLSNe are found (blue, low-mass, low-metallicity) are rarely found in high-density environments. The fraction of SLSNe that occur in high-density environments is \(1/33=0.03^{+0.06}_{-0.01}\), and the fraction of matched galaxies in high-density environments is even lower, at \(4/662=0.006^{+0.005}_{-0.002}\).
Figure 4: A colour-magnitude diagram comparing SLSN-g and SDSS-m to SN-Ia. Low-density SLSN-g galaxies are plotted as purple circles, high-density SLSN-g galaxies as orange pluses, low-density SDSS-m galaxies as purple crosses, and high-density SDSS-m galaxies as orange triangles; the distributions of low- and high-density SN-Ia galaxies are plotted as filled purple contours and orange contour lines, respectively. Open purple circles represent the SLSN-g sources with unreliable host photometry. Purple and orange points show individual SN-Ia galaxies where they are not captured by the contours.
Figure 3: A scatter plot of the number of bright Spec-z galaxies within 2 Mpc of a target galaxy against absolute magnitude, compared to a 2d histogram of the same distribution for Type Ia SNe host galaxies. SLSN-g is shown as dark purple circles and the matched sample, SDSS-m, is shown as light purple triangles. Normalised histograms are included on the \(x-\) and \(y-\)axes and show the absolute magnitude and \(N_{2}\) distributions for SN-Ia, SDSS-m and SLSN-g.
### Simulations
Since these galaxies are so faint, the reliable host photometry or spectroscopy required to accurately measure physical properties like metallicity and star-formation rate is often unavailable. It can therefore be useful to use simulations to distinguish between real physical properties and observational constraints. In Section 3.3 we utilize the IllustrisTNG suite of simulations5(Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2018; Naiman et al., 2018; Marinacci et al., 2018; Nelson et al., 2019). IllustrisTNG is a set of large-scale cosmological, gravo-magnetohydrodynamical simulations based on the Arepo code (Springel, 2010). We use the TNG-100 simulation, which has a volume of \(106.5^{3}\) Mpc\({}^{3}\). This simulation provides the best balance of large haloes (i.e., \(M_{h}>10^{12}\) M\({}_{\odot}\)) and smaller resolution subhaloes (i.e. \(M_{h}<10^{9}\) M\({}_{\odot}\)), although qualitatively similar results were found with the use of TNG-300. From this simulation, we use stellar masses, star-formation rates, gas-phase metallicity abundances, galaxy colours, group halo masses, and information about group membership.
Footnote 5: [https://www.tng-project.org/data/](https://www.tng-project.org/data/)
### IllustrisTNG
To investigate the tendency of SLSN host-like galaxies to avoid high-density environments further, we make use of the IllustrisTNG suite of simulations. From the TNG100-1 simulation, we obtain stellar masses, gas metallicities, star-formation rates, and \(g\) and \(i\) photometry on all 437121 subhaloes in the most recent redshift snapshot. Each subhalo resides in a parent halo with a total halo mass \(M_{200}\), defined as the total mass of a sphere whose mean density is 200 times the critical density of the Universe. In line with the properties of SLSN host galaxies, we define a sample of blue, low-mass galaxies (BLM), with \((g-i)<1\) and \(10^{7}<M_{*}/\mathrm{M}_{\odot}<10^{9}\). This stellar mass range has an upper limit in line with observations of SLSN host galaxies (e.g. Schulze et al., 2018), and a lower limit to avoid contamination from haloes not yet formed into galaxies. Note that at this stellar mass range, virtually all galaxies (\(\sim 98\) per cent) have \((g-i)<1\) and also are star-forming. For each subhalo, we retrieve the number of subhaloes (\(N_{\mathrm{subs}}\)) in its parent halo. For a galaxy to be in a group, we require \(N_{\mathrm{subs}}>1\) and \(M_{200}>10^{12}\) M\({}_{\odot}\). This is to ensure we avoid including field galaxies with smaller (non-galaxy) haloes associated with them. Everything else is assigned as being in the field.
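The selection can be sketched as below, assuming the subhalo and halo catalogues are already loaded into arrays; the field names are placeholders rather than the TNG column names.

```python
import numpy as np

def select_blm(subhalos, halos):
    """Flag blue low-mass (BLM) subhaloes and their group membership."""
    blm = ((subhalos["g"] - subhalos["i"] < 1.0)        # (g - i) < 1
           & (subhalos["mstar"] > 1e7)                  # M* in Msun
           & (subhalos["mstar"] < 1e9))
    parent = subhalos["parent_halo"]                    # index into halo catalogue
    in_group = (halos["nsubs"][parent] > 1) & (halos["m200"][parent] > 1e12)
    return blm, blm & in_group

# e.g. the group fraction among BLM galaxies (the text finds 35.5 per cent):
# blm, blm_group = select_blm(subhalos, halos)
# print(blm_group.sum() / blm.sum())
```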
In Figure 5, we plot histograms of the log of the metallicity of the BLM sample and of the log of its sSFR, separated by group membership. It is immediately clear that this sample is biased towards low metallicities, as expected from observations of SLSN host galaxies (see Figure 11 in Chen et al. (2015), and Figure 8 in Perley et al. (2016)). Notably, there are far fewer BLM galaxies in groups than in the field, with group galaxies making up 35.5 per cent of the BLM sample. Additionally, the BLM galaxies that reside in groups lie at the high end of the metallicity distribution. We also see that BLM galaxies typically have intermediate sSFR, as seen in the properties of SLSN host galaxies. However, BLM galaxies in groups dominate over field galaxies at high sSFR.
Thus, in agreement with our observational results in the previous section, the simulation results qualitatively suggest that the reason SLSN-g are not found in dense environments is that the galaxies that host SLSNe, being blue and low-mass, are not often found in groups. Quantitatively, however, there is a potential discrepancy. From the simulation results in Figure 5 we measure that 35.5 per cent of such BLM galaxies are found in groups. Observationally, this would imply that about 12 (235) of the SLSN-g (SDSS-m) galaxies should be in high-density environments, compared to the observed 1 (4).
Figure 5: Histograms of metallicity (left) and sSFR (right) in IllustrisTNG for all blue low-mass (BLM) galaxies (black), BLM field galaxies (purple) and BLM group galaxies (orange).

Figure 6: Fractions of BLM-f (purple) and BLM-g (orange) of the total BLM sample with respect to metallicity. The gray dashed line shows the metallicity at which group galaxies account for 3 per cent of the entire BLM sample, in accordance with observations, with upper and lower limits shaded in gray.

This potential discrepancy points to an additional requirement in our simulated sample to properly recover the observed environmental distribution: that of low metallicity. It is known that SLSNe require host galaxies with low metallicities (e.g. Lunnan et al., 2014), and in the simulation results of Figure 5, the lower metallicity ranges are dominated by field galaxies. We illustrate this further in Figure 6, where we plot the fractions of field galaxies and group galaxies with respect to the total BLM sample as a function of metallicity. Field galaxies clearly dominate at \(Z<Z_{\odot}\), and group galaxies dominate at \(Z>Z_{\odot}\); here we assume \(Z_{\odot}=8.69\) following Asplund et al. (2009). We also plot the metallicity at which the fraction of observed SLSN host galaxies in groups is 0.03. We can then apply generous constraints by considering the uncertainties on the SLSN occurrence rate above. By computing beta distribution confidence intervals at \(c=0.68\) (Cameron, 2011) on the fraction of observed SLSNe in groups, we find this fraction to be \(0.03^{+0.09}_{-0.02}\). In Figure 6 these confidence intervals are shaded in gray. Even with such a wide confidence interval, the highest metallicity that is consistent with the observed environmental density of SLSN hosts, and therefore the apparent upper limit for SLSN production, is \(12+\log\)(O/H) \(=8.12\). This is approximately 0.2 dex lower than the apparent threshold reported in Schulze et al. (2018), \(12+\log\)(O/H) \(\sim 8.3\), and 0.3 dex lower than the threshold reported in Chen et al. (2017), \(12+\log\)(O/H) \(\sim 8.4\).
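The quoted interval follows from the beta-distribution quantiles of Cameron (2011); a minimal sketch with scipy:

```python
from scipy.stats import beta

def binomial_ci(k, n, c=0.683):
    """Bayesian binomial confidence interval (Cameron 2011) for k of n."""
    lo = beta.ppf((1.0 - c) / 2.0, k + 1, n - k + 1)
    hi = beta.ppf(1.0 - (1.0 - c) / 2.0, k + 1, n - k + 1)
    return lo, hi

# fraction of SLSN hosts in high-density environments: 1 of 33
lo, hi = binomial_ci(1, 33)
print(f"fraction = {1/33:.3f}, 68 per cent interval = [{lo:.3f}, {hi:.3f}]")
```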
In addition, we find that only about 26 per cent of BLM galaxies in groups reside in haloes of \(M_{h}\geq 10^{13}\) M\({}_{\odot}\). This supports the idea that the typical galaxies that host SLSNe will rarely be found in large groups or clusters.
The distributions in Figure 5 provide evidence that low metallicity, rather than high sSFR, is the constraining requirement for SLSN production. Focusing on the right panel, we see that the distribution of BLM group galaxies in sSFR is more evenly spread than in metallicity (left panel). In particular, noting that SLSN host galaxies require sSFR \(\gtrsim 10^{-9}\) yr\({}^{-1}\) (Schulze et al., 2018), there is actually an excess of BLM group galaxies at this sSFR. This means that if high sSFR were the predominant factor for SLSN production, rather than low metallicity, we would see many more SLSN host galaxies in denser environments: \(\approx 24\) out of the 33 SLSN events in this sample. Since this is not the case, our analysis suggests that low metallicity is likely the more important requirement.
## 4 Conclusions
Using SLSN sources from the ZTF BTS, we calculate whether their host galaxies are found in high-density or low-density large-scale environments. We use bright, spectroscopically redshifted galaxies with absolute magnitude \(M_{i}<-21.5\) as tracers of density, and classify a source as in a high-density environment if there is 2 or more bright galaxies within 2 Mpc. We test our findings by comparing the SLSN sample with a sample of Type Ia supernovae, whose host galaxies span a wider range of galaxy colour and absolute magnitude. We also create another sample, derived from SDSS photometrically-redshifted galaxies, matched to the SLSN host galaxies in galaxy colour, absolute magnitude, and redshift. Our main findings may be summarised as follows:
* SLSN host galaxies are almost always located in low-density environments, with all but 2 (of 33) host galaxies having no bright galaxies within 2 Mpc. The photometrically selected matched sample (in luminosity, colour, and redshift) shows similar results. This is in contrast to Type Ia supernova host galaxies, 8.5 per cent of which have 2 or more bright neighbours. This is consistent with the fraction of SNe Ia found in clusters compared to field galaxies, according to Larison et al. (2023). In any relevant magnitude bin, the maximum high-density fraction is \(<10\) per cent for the SLSN and matched samples, while for the SN-Ia sample the minimum is \(>20\) per cent.
* In our analysis of the IllustrisTNG simulations, over 70 per cent of blue, low-mass galaxies are found in the field, rather than in groups. These galaxies have low metallicities and high star-formation rates, which are typical of SLSN host galaxies.
* Crucially, we find that in order to quantitatively match the rate of SLSN host galaxies found in dense environments, an additional condition on the metallicity of the host in the simulation is required. Taking the uncertainties into account, SLSN production is only favoured in galaxies with \(12+\log\)(O/H) \(\lesssim 8.12\). In contrast, selecting only high-sSFR galaxies would lead to an over-representation of high-density hosts in the simulation.
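As a concrete restatement of the density criterion used above (two or more bright \(M_{i}<-21.5\) tracers within 2 Mpc), the schematic sketch below encodes the classification; it is a reimplementation for illustration, not the survey pipeline, and the input arrays of separations and tracer magnitudes are assumed to be pre-computed:

```python
# Schematic density classifier: a host is 'high-density' if at least two
# bright (M_i < -21.5) tracer galaxies lie within 2 Mpc.
import numpy as np

def classify_environment(sep_mpc, tracer_abs_mag,
                         r_max=2.0, mag_cut=-21.5, n_min=2):
    sep_mpc = np.asarray(sep_mpc)
    tracer_abs_mag = np.asarray(tracer_abs_mag)
    bright_near = (tracer_abs_mag < mag_cut) & (sep_mpc < r_max)
    return "high-density" if np.count_nonzero(bright_near) >= n_min else "low-density"

print(classify_environment([0.5, 1.8, 3.0], [-22.0, -21.9, -23.0]))  # high-density
```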
Simulations suggest that galaxies capable of producing SLSNe and having a high sSFR would preferentially be found in groups, whereas such galaxies with low metallicity would preferentially be found in the field; this indicates that the metallicity of a galaxy is the more important factor in the occurrence of these explosive transients. This result will be strengthened with more observations of SLSNe and their host galaxies, which will be achieved over the coming years when new high-cadence surveys come online, such as LSST at the Vera Rubin Observatory.
## Acknowledgements
The authors thank the anonymous reviewer for their helpful and thoughtful comments on this paper. CC acknowledges support from the School of Physics and Astronomy at the University of Birmingham. SLM acknowledges support from STFC grant ST/S000305/1 and UK Space Agency Grants No. ST/Y000692/1 and ST/X002071/1. MN is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381). Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns
Figure 6: Fractions of BLM-f (purple) and BLM-g (orange) of the total BLM sample with respect to metallicity. The gray dashed line shows the metallicity at which group galaxies account for 3 per cent of the entire BLM sample, in accordance with observations, with upper and lower limits shaded in gray.
Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany.
## Data Availability
The data used for this analysis is available at the links in the text above where the data is described.
|
2304.07526 | On the interplay between activity, elasticity and liquid transport in
self-contractile biopolymer gels | Active gels play an important role in biology and in inspiring biomimetic
active materials, due to their ability to change shape and size and to create their
own morphology; the relevant mechanics behind these changes is driven by
self-contraction and liquid flow. Here, we couple contraction and liquid flow
within a nonlinear mechanical model of an active gel disc to discuss how
contraction dynamics inherits length scales which are typical of the liquid
flow processes. The cylindrically symmetric model we present, which
recapitulates our previous theoretical modeling in its basic lines, reveals that
when liquid flow is also taken into account, the aspect ratio of the disc is
not the only geometrical parameter which characterizes the contraction dynamics
of the gel. The analyses we present provide important insights into the
dependence of contraction dynamics on geometry and allow us to make progress
in designing materials which can be adapted for different applications in soft
robotics. | Anne Bernheim-Groswasser, Gefen Livne, Paola Nardinocchi, Filippo Recrosi, Luciano Teresi | 2023-04-15T10:23:36Z | http://arxiv.org/abs/2304.07526v2 | # On the interplay between activity, elasticity and liquid transport
###### Abstract
Active gels play an important role in biology and in inspiring biomimetic active materials, due to their ability to change shape and size and to create their own morphology; the relevant mechanics behind these changes is driven by self-contraction and liquid flow. Here, we couple contraction and liquid flow within a nonlinear mechanical model of an active gel disc to discuss how contraction dynamics inherits length scales which are typical of the liquid flow processes. The cylindrically symmetric model we present, which recapitulates our previous theoretical modeling in its basic lines, reveals that when liquid flow is also taken into account, the aspect ratio of the disc is not the only geometrical parameter which characterizes the contraction dynamics of the gel. The analyses we present provide important insights into the dependence of contraction dynamics on geometry and allow us to make progress in designing materials which can be adapted for different applications in soft robotics.
## I Introduction
Self-contractile active gels are usually generated by polymerizing actin in the presence of cross-linkers and clusters of myosin as molecular motors[1; 2; 3; 4; 5]. The mechanics of active gels presents interesting characteristics: self-contractions generate internal stresses and stiffen the material, thereby driving the network into a highly nonlinear, stiffened regime [2]; morphing from flat to curved geometries can be expected when thin discs of active gels are considered [5]; boundaries affect morphing [3].
The first models [6; 7; 8] of these materials were based on a physical description of the contraction dynamics within the framework of active generalized hydrodynamics: transient force dipoles are generated by myosin pulling on actin chains and creating active contractile stresses. These models describe the contraction dynamics very accurately at the network mesh scale, but are less concerned with coupling that dynamics to the nonlinear mechanics of active gels, which is also strongly affected by liquid flow [5].
Recently, the mechanics of active gels have been at the centre of a few theoretical studies, set within the framework of nonlinear mechanics, where the interactions between elastic stresses and liquid flow have been investigated [9; 10; 11; 12; 13]. In [10], a dynamic cross-linking mechanism is introduced to take into account the active behaviour of the gel. It drives an evolution of the mechanical stiffness of the polymeric network and an increase of the strain energy. The approach exploited in [9; 11; 12; 13] by some of the authors is quite different: the activity in the gel is modeled as an external remodeling force that drives the microscopic reorganization of the network due to activity and competes with the passive deformation of the gel due to elasticity and liquid flow [14]. Network remodeling drives both the evolution of the mechanical stiffness of the polymer and the chain shortening, which are two of the main mechanisms [5; 7] evidenced in the experiments [5].
In the present work, we start from that approach [13] to focus on the interactions between activity, elasticity and liquid diffusion in active gels, which are largely unexplored. The variety of phenomena to be understood is wide, and robust macroscopic models of contractile networks can inspire further experiments to improve the control of the active characteristics of the gel and of its relevant mechanics.
The point we discuss here concerns the competing roles of contraction and liquid flow in driving the mechanics of the active gel. We refer to a specific problem, inspired by the work presented in Ref. 5, where the contraction dynamics of an active gel disc, whose geometry is defined by radius and thickness, has been followed and described in great detail. Through the analysis of the problem, we show how: (i) gel dynamics inherits length scales which are typical of the liquid flow processes; (ii) two different regimes characterize the dynamics of the disc, which can be ascribed to gel contraction and to liquid flow; (iii) the aspect ratio of the disc (radius to thickness) impacts the gel dynamics and also affects the stress distribution.
The model is presented in Secs. II and III. Sec. IV describes the equilibrium states of the active gel and Sec. V deals with contraction dynamics.
## II Active volume and polymer fraction
Unlike passive polymer gels, active gels have the ability to remodel their mesh by self-contractions. The key elements of our model of active gel are here contrasted with the standard Flory-Rehner model of passive gels, which is at the basis of the stress-diffusion theories describing the chemo-mechanical interactions in swollen gels [12; 15; 16; 17; 18; 19].
A key variable in the Flory-Rehner model is the polymer fraction \(\phi\), defined as the ratio between the volume of the polymer \(V_{p}\) and the total volume \(V\):
\[\phi=\frac{V_{p}}{V},\quad\text{with}\quad V=V_{p}+V_{s},\] (II.1)
where \(V_{s}\) is the volume of solvent content. This formula is based on the assumption that a given mass of polymer occupies a constant volume \(V_{p}\), be it dry or not; thus, any volume increase must be entirely due to the solvent volume \(V_{s}\). Moreover, the Flory-Rehner model assumes that the polymer chains are not stretched at dry state, and that it is the solvent absorption that stretches these chains. Equilibrium is given by a balance between the elastic energy, which prefers unstretched chains, and the mixing energy that favours swelling and thus requires stretching to accommodate more solvent.
The active gel model removes the assumption of constant polymer volume, and considers the volume that can be occupied by a given mass of dry polymer as an additional state variable, named _active volume_\(V_{a}\). The volume \(V_{a}\) can vary because of a change of the mean free-length of the polymer chains, that is, of the average mesh size \(\xi_{a}\) measured at dry conditions; thus, \(V_{a}\) can be considered as a coarse-grained modeling of the microscopic arrangement of the polymer chains. It turns out that a change of \(V_{a}\) also describes a change of the effective stiffness of the gel. For the active gel model, the polymer fraction is measured by
\[\phi=\frac{V_{a}}{V},\quad\text{with}\quad V=V_{a}+V_{s}.\] (II.2)
The hypothesis that the polymer chains are not stretched at dry state is maintained; thus, the thermodynamical equilibrium is still a consequence of the balance between elastic energy and mixing energy. The new formula \(\phi=V_{a}/(V_{a}+V_{s})\) describes interactions between activity and solvent content. For example, we might have the same polymer fraction \(\phi\) with different pairs \(V_{a}\), \(V_{s}\):
\[\phi=\frac{V_{ao}}{V_{ao}+V_{so}}=\frac{V_{a1}}{V_{a1}+V_{s1}}\quad\Rightarrow \quad\frac{V_{a1}}{V_{ao}}=\frac{V_{s1}}{V_{so}}\,,\] (II.3)
as \(1/\phi=1+V_{so}/V_{ao}=1+V_{s1}/V_{a1}\). From (II.3), it follows that a contraction of the polymer network yields a proportional reduction of its solvent content. For very soft gels, as is our case, \(\phi\) can be very small and a small volume contraction of \(V_{a}\) can yield a huge expulsion of solvent volume \(V_{s}\). As an example, by assuming \(V_{ao}=1\) mm\({}^{3}\) and \(V_{so}=1000\) mm\({}^{3}\), we have \(\phi=1/1001\); a contraction that halves the polymer volume yields \(V_{a1}=0.5\) mm\({}^{3}\) and \(V_{s1}=500\) mm\({}^{3}\). Following our example, the average mesh size \(\xi_{a}\) corresponding to the two active volumes \(V_{ao}\) and \(V_{a1}=V_{ao}/2\) scales as \(\xi_{a1}/\xi_{ao}=(1/2)^{1/3}\simeq 0.8\). It is worth remembering that \(\xi_{a}\) is the mesh size of the unstretched chains, which determines the so-called spontaneous metric of the network, whereas the actual mesh size \(\xi\) is related to the actual swollen volume and determines the current metric of the network: \(\xi\propto(V_{a}+V_{s})^{1/3}\). Both \(\xi_{a}\) and \(\xi\) may be very different from the reference mesh size \(\xi_{d}\) of the dry polymer, due to activity and liquid flow, see figure 2.
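The arithmetic of this example is easily checked numerically; the following minimal sketch (not part of the original model implementation) restates equations (II.2)-(II.3) for the numbers just quoted:

```python
# Worked example of eqs. (II.2)-(II.3): halving V_a at fixed polymer
# fraction halves the solvent content; mesh size scales as V_a**(1/3).
V_ao, V_so = 1.0, 1000.0               # mm^3
phi = V_ao / (V_ao + V_so)             # polymer fraction, eq. (II.2)
V_a1 = 0.5 * V_ao                      # contraction halving the active volume
V_s1 = V_so * (V_a1 / V_ao)            # eq. (II.3): proportional solvent loss
mesh_ratio = (V_a1 / V_ao) ** (1.0 / 3.0)
print(f"phi = {phi:.6f}, V_s1 = {V_s1:.0f} mm^3, xi_a1/xi_ao = {mesh_ratio:.2f}")
# -> phi = 0.000999, V_s1 = 500 mm^3, xi_a1/xi_ao = 0.79
```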
In the mathematical model, the microscopic arrangement of the polymer chains (from now on, the _remodeling_) is driven by a new evolution equation with its own source term, which is represented by the external remodeling force that maintains the system steady, or that drives it out of equilibrium [20]. It affects the solvent flow in the gel: a contraction of the polymer mesh yields a liquid flow towards the boundary of the body, favouring its release. Indeed, as we shall shortly review in the following, the polymer fraction \(\phi\) depends on \(V_{a}\) through a balance equation of solvent concentration, driven by a Flory-Rehner thermodynamics, which is so affected by gel activity.
## III Stresses, liquid fluxes and self-contractions
The active gel model is formulated in the framework of 3D continuum physics, see [9; 11] for details, which allows one to set up initial-boundary value problems well suited to describe real experiments. Inspired by the experiments in [5], we consider a disc-like continuum body: at the initial time, it is a fully swollen, flat gel disc \(\mathcal{B}_{o}\), having radius \(R_{o}\) and thickness \(H_{o}\), which is \(\lambda_{o}\) times larger than the corresponding dry disc \(\mathcal{B}_{d}\) (see figure 1, panel a).
The region \(\mathcal{B}_{d}\) is assumed as the reference configuration of the active gel disc and the mathematical model describes the state of the gel by using three state variables: the solvent concentration per unit of dry volume \(c_{d}:\mathcal{B}_{d}\times\mathcal{T}\rightarrow\mathcal{R}\) (\([c_{d}]=\)mol/m\({}^{3}\)); the mechanical displacement \(\mathbf{u}:\mathcal{B}_{d}\times\mathcal{T}\rightarrow\mathcal{V}\) (\([\mathbf{u}]=\)m); the active contractions \(\mathbf{F}_{a}:\mathcal{B}_{d}\times\mathcal{T}\rightarrow\texttt{Lin}\) (\([\mathbf{F}_{a}]=\)1), usually called _remodeling tensor_. Here, \(\mathcal{R}\), \(\mathcal{V}\), and \(\texttt{Lin}\) denote a scalar, a vector and a tensor, respectively; \(\mathcal{T}\) is the time interval under study (see [11; 12] for details).
Solvent concentration \(c_{d}\) and displacement \(\mathbf{u}\) are the standard state variables based on Flory-Rehner model; the active contraction \(\mathbf{F}_{a}\) is the new variable used to describe active gels. The tensor \(\mathbf{F}_{a}\) is the 3D equivalent of the volume \(V_{a}\) mentioned in the previous section: it
describes not only the change in volume, but also the macroscopic changes in length and angles of the polymeric network due to self-contractions (see figure 2). The time-dependent symmetric tensor field \(\mathbf{C}_{a}=\mathbf{F}_{a}^{T}\mathbf{F}_{a}\) accounts for the reduction of the free length of the polymer chains, due to self-contraction, and describes the spontaneous metric of the gel.
Given the deformation gradient \(\mathbf{F}=\mathbf{I}+\nabla\mathbf{u}\), the key relations (II.2) are now represented in terms of Jacobian determinants
\[\phi=\frac{J_{a}}{J},\quad\text{with}\quad J=\det\mathbf{F}=J_{a}+\Omega\,c_{d },\quad J_{a}=\det\mathbf{F}_{a}\,;\] (III.4)
it holds \(\xi_{a}/\xi_{d}\simeq J_{a}^{1/3}\) and \(\xi/\xi_{d}\simeq J^{1/3}\). Equations (III.4) imply that any actual volume change \(J\) is the sum of a volume change of the active mesh \(J_{a}\), plus the volume of the solvent \(\Omega\,c_{d}\). Polymer fraction \(\phi\) delivers a measure of the gel density, which increases when solvent content decreases and depends on the volume change of the active mesh, as equations (III.4) indicate.
The deformation of the actual mesh with respect to the unstretched one is measured by \(\mathbf{F}_{e}=\mathbf{F}\,\mathbf{F}_{a}^{-1}\), called the elastic deformation, and the symmetric tensor field \(\mathbf{C}_{e}=\mathbf{F}_{e}^{T}\mathbf{F}_{e}\) describes the so-called elastic metric, which affects the stress distribution in the network.
### Model equations under cylindrical symmetry
We exploit the cylindrical symmetry that greatly simplifies the evolution equations of the problem; thus, the reference disc \(\mathcal{B}_{d}\) is represented by its vertical cross section \(\mathcal{S}_{d}\) spanned by the radial coordinate \(r\in(0,R_{d})\) and the vertical one \(z\in(0,H_{d})\). With this, the displacement \(\mathbf{u}\) has two non trivial components: the radial \(u\) and the vertical \(w\) component; within the class of remodeling tensors \(\mathbf{F}_{a}\) which are cylindrically symmetric, we choose a diagonal one \(\mathbf{F}_{a}=\text{diag}(\gamma_{r},\gamma_{\theta},\gamma_{z})\).
Hence, the state variables of the problem are reduced to the following six scalar fields: the solvent concentration \(c_{d}\), the two displacements \((u,w)\), and the three contractions \((\gamma_{r},\gamma_{\theta},\gamma_{z})\); each field is a function of the coordinates \((r,z)\) and the time \(\tau\). Moreover, we assume that the derivatives \(u_{,z}\) and \(w_{,r}\) can be neglected; it follows that the deformation gradient \(\mathbf{F}\) simplifies to \(\mathbf{F}=\text{diag}(\lambda_{r},\lambda_{\theta},\lambda_{z})\) with the radial, hoop and vertical deformations defined as
\[\lambda_{r}=1+u_{,r}\,,\quad\lambda_{\theta}=1+u/r,\quad\lambda_{z}=1+w_{,z}\,\] (III.5)
respectively. Under the symmetry assumption, the volumetric constraint (III.4) takes the form
\[\lambda_{r}\lambda_{\theta}\lambda_{z}=1+\Omega\,c_{d}\,.\] (III.6)
The active chemo-mechanical state of the active gel is ruled by a set of three balance equations, which can be rationally derived from basic principles [9]: balance of solvent content, of forces, and of remodeling forces. The first two balance equations, under the cylindrical symmetry hypotheses, reduce to the following three scalar equations
\[\begin{split}&-\dot{c}_{d}=h_{r,r}+\frac{h_{r}}{r}+h_{z,z}\,,\\ & s_{r,r}+\frac{s_{r}-s_{\theta}}{r}=0\,,\\ & s_{z,z}=0\,.\end{split}\] (III.7)
In equations (III.7), \(h_{r}\) and \(h_{z}\) are the radial and vertical components of the solvent flux, whereas \(s_{r}\), \(s_{\theta}\) and \(s_{z}\) are
Figure 2: The characteristic states of an active gel: the three cartoons might be considered as representative volume elements. (a) Dry-reference meshwork (red) of size \(\xi_{d}\) with crosslinks (blue dots). (b) Dry-contracted meshwork: mesh size \(\xi_{a}\) is reduced with respect to \(\xi_{d}\), and crosslink density is higher; the polymer chains are considered unstretched. (c) Swollen meshwork: liquid molecules (light blue dots) swell the dry-contracted meshwork: the elastic energy is proportional to the stretch \(\xi/\xi_{a}\) between the contracted mesh and the swollen one.
the radial, hoop and vertical components of the reference stress (also called Piola-Kirchhoff stress).
Flux, chemical potential \(\mu\) and stresses are related to the stretches \(\lambda_{i}\) and the contractions \(\gamma_{i}\) (\(i=r,\theta,z\)), by constitutive equations, whose derivation is fully described in many texts and papers (see [21; 17; 22]). Shortly, liquid transport in the elastic solid is described by a kinetic law, based on the assumption that the liquid molecules diffuse in the gel and the coefficients of diffusion can be different in the radial and vertical direction but independent of the deformation and the concentration. In the end, the liquid flux is related to the gradient of the chemical potential by the following equations
\[h_{r}=-\frac{D_{r}\,c_{d}}{R\,T\,\lambda_{r}^{2}}\,\mu_{,r}\quad\text{and} \quad h_{z}=-\frac{D_{z}\,c_{d}}{R\,T\,\lambda_{z}^{2}}\,\mu_{,z}\,\] (III.8)
where \(D_{r}\) and \(D_{z}\) are the coefficients of diffusion in the radial and vertical direction, \(R\) and \(T\) are the gas constant and the temperature, respectively, and \(\mu\) is the chemical potential of the solvent in the gel:
\[\mu=R\,T\,g(J_{e})+\Omega\,p\,,\] (III.9)
with
\[g(J_{e})=\left[\log\left(\frac{J_{e}-1}{J_{e}}\right)+\frac{\chi+J_{e}}{J_{e} ^{2}}\right]\,,\quad J_{e}=\det{\bf F}_{e}=\frac{J}{J_{a}}\,.\] (III.10)
Therein, \(\Omega\) is the molar volume of the liquid (\([\Omega]=\text{m}^{3}/\text{mol}\)) and \(\chi\) is the non-dimensional dis-affinity parameter[15]. The pressure field \(p\) is the Lagrange multiplier of the constraint \(J=J_{a}+\Omega\,c_{d}\) (equation (III.4)). Finally, the stresses are given by
\[s_{r} =G\,\lambda_{r}\,\frac{\gamma_{\theta}\gamma_{z}}{\gamma_{r}}-p \,\lambda_{\theta}\lambda_{z}\,,\] \[s_{\theta} =G\,\lambda_{\theta}\frac{\gamma_{r}\gamma_{z}}{\gamma_{\theta} }-p\,\lambda_{r}\,\lambda_{z}\,,\] (III.11) \[s_{z} =G\,\lambda_{z}\frac{\gamma_{r}\gamma_{\theta}}{\gamma_{z}}-p\, \lambda_{r}\,\lambda_{\theta}\,,\]
where \(G\) is the shear modulus of the dry polymer network (\([G]=\)J/m\({}^{3}\)). The (actual) Cauchy stresses are: \(\sigma_{r}=s_{r}/\lambda_{\theta}\lambda_{z}\), \(\sigma_{\theta}=s_{\theta}/\lambda_{r}\lambda_{z}\) and \(\sigma_{z}=s_{z}/\lambda_{\theta}\lambda_{r}\).
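For reference, the constitutive stresses (III.11) translate directly into code; the sketch below (an illustrative transcription under the paper's notation, not validated simulation code) also checks that the stress-free initial state of Sec. IV gives vanishing stresses:

```python
# Piola-Kirchhoff stresses of eq. (III.11) and their Cauchy counterparts.
def stresses(lam, gam, p, G=135.0):
    """lam = (lam_r, lam_t, lam_z), gam = (gam_r, gam_t, gam_z)."""
    lr, lt, lz = lam
    gr, gt, gz = gam
    s_r = G * lr * gt * gz / gr - p * lt * lz
    s_t = G * lt * gr * gz / gt - p * lr * lz
    s_z = G * lz * gr * gt / gz - p * lr * lt
    cauchy = (s_r / (lt * lz), s_t / (lr * lz), s_z / (lt * lr))
    return (s_r, s_t, s_z), cauchy

# Stress-free isotropic check: lam_i = lambda_o = 10, gam_i = 1,
# p = G / lambda_o = 13.5 Pa (cf. eq. IV.25).
piola, cauchy = stresses((10.0, 10.0, 10.0), (1.0, 1.0, 1.0), p=13.5)
print(piola, cauchy)  # all components vanish
```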
The third balance equation, which describes the time evolution of the spontaneous metric delivered by the self-contractions \(\gamma_{i}\), reduces to three scalar equations [23]:
\[\frac{\dot{\gamma}_{r}}{\gamma_{r}} =\frac{1}{\eta_{r}}\left(\beta_{r}-E_{r}\right),\] \[\frac{\dot{\gamma}_{\theta}}{\gamma_{\theta}} =\frac{1}{\eta_{\theta}}\left(\beta_{\theta}-E_{\theta}\right),\] (III.12) \[\frac{\dot{\gamma}_{z}}{\gamma_{z}} =\frac{1}{\eta_{z}}\left(\beta_{z}-E_{z}\right).\]
The evolution of the self-contractions \(\gamma_{i}\) is driven by the differences \((\beta_{i}-E_{i})\) (\(i=r,\theta,z\)). Therein, \(\beta_{i}\) describes the effect of molecular motors on the mesh, is a control parameter of the model and will be denoted as _active stress_ from now on. On the other hand, the three functions \(E_{i}\) are the components of the Eshelby tensor, which is completely determined by the elasto-chemical state of the gel through the Flory-Rehner free-energy and the stress state in the gel as:
\[E_{i}=e_{y}-J\,\sigma_{i}\,,\quad(i=r,\theta,z)\] (III.13)
with
\[e_{y}=\frac{R\,T}{\Omega}\,J_{a}\left(f_{c}(J_{e})+m\,f_{e}({\bf C}_{e}) \right)-c_{d}\,\mu\,.\] (III.14)
Therein, \(f_{c}\) and \(f_{e}\) are the dimensionless mixing and elastic free-energy:
\[f_{c}(J_{e}) =(J_{e}-1)\text{log}(1-\frac{1}{J_{e}})+\chi(1-\frac{1}{J_{e}})\,,\] \[f_{e}({\bf C}_{e}) =\frac{1}{2}(\text{tr}{\bf C}_{e}-3)\,.\] (III.15)
So, equations (III.12)-(III.15) show how the interplay between activity, elasticity and liquid transport depends on the effective stresses \((\beta_{i}-E_{i})\) and on the frictions \((\eta_{r},\eta_{\theta},\eta_{z})\) of the mesh, that is, the resistance of the mesh to remodel in the three directions. Frictions bring into the model one or more characteristic times, which affect the mesh remodeling and through it the whole process. Large frictions yield small contraction rates, under the same effective stresses.
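To make the role of friction concrete, a toy explicit-Euler integration of one remodeling equation (III.12), with the effective stress \((\beta-E)\) held fixed, shows how larger \(\eta\) slows the contraction. This is a deliberately simplified sketch, not the coupled finite-element model, and the chosen effective stress is an arbitrary illustrative value:

```python
# Toy integration of gamma_dot/gamma = (beta - E)/eta with constant
# effective stress; units: (beta - E) in J/m^3, eta in Pa s -> rate in 1/s.
import numpy as np

def integrate_remodeling(gamma0, eff_stress, eta, dt=0.01, t_end=50.0):
    n = int(t_end / dt)
    gamma = np.empty(n + 1)
    gamma[0] = gamma0
    for i in range(n):
        gamma[i + 1] = gamma[i] * (1.0 + dt * eff_stress / eta)
    return gamma

for eta in (1e5, 2e5):  # doubling the friction halves the contraction rate
    g = integrate_remodeling(1.0, eff_stress=-1e4, eta=eta)
    print(f"eta = {eta:.0e} Pa s -> gamma(50 s) = {g[-1]:.3f}")
```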
As a first working hypothesis, we assume \(\beta_{i}\) to be uniform and isotropic: \(\beta_{r}=\beta_{\theta}=\beta_{z}=\beta\). We also assume that the disc is neither constrained nor loaded and, as chemical boundary conditions, we assume that the whole disc boundary is permeable and chemical equilibrium holds at the boundary, that is,
\[\mu=\mu_{e}\quad\text{on}\quad\partial{\cal S}_{d}\,,\] (III.16)
where \(\mu_{e}\) is the difference between the chemical potential of the bath and that of pure water (\(\mu_{e}=0\) corresponds to a pure water bath). Finally, the initial conditions for the displacements \(u,w\), the concentration \(c_{d}\) and the contractions \(\gamma_{i}\) (\(i=r,\theta,z\)) are the following:
\[u=(\lambda_{o}-1)\,r,\quad w=(\lambda_{o}-1)\,z,\quad c_{d}=c_{do},\quad \gamma_{i}=1\,.\] (III.17)
It means that the deformation \(f_{o}\) from the reference region \({\cal B}_{d}\) to the initial region \({\cal B}_{o}\) is \(f_{o}(X)=\lambda_{o}\,X\) for any \(X\in{\cal B}_{d}\) (see figure 1).
### Details of Finite Element Analysis
Equations (III.6), (III.7) and (III.12), together with the boundary (III.16) and initial (III.17) conditions, are rewritten in weak form and implemented in the software COMSOL Multiphysics by using the Weak-Form physics interface. The computational domain is the rectangular domain \(\mathcal{S}_{d}\), which is meshed with triangular elements whose maximum mesh size is \(H_{d}/10\), yielding about 200K dofs. Lagrangian polynomials are used as shape functions: polynomials of order 4 for the displacement and the solvent concentration, of order 3 for the volumetric constraint, of order 2 for the boundary conditions (also implemented in weak form) and of order 1 for the remodeling variables. The whole set of coupled equations is solved using the Newton method with variable damping as the nonlinear solver, the direct solver Pardiso as the linear solver, and the BDF method of order 1-2 as the time-dependent solver. The time-dependent analysis starts at the initial state \(\mathcal{B}_{o}\) and stops at a final equilibrium state \(\mathcal{B}_{1}\), which is pre-selected, as we discuss in the next section.
## IV Initial and final equilibrium states
The definition of the steady states where contraction dynamics and liquid transport start and finish is an important issue. Here, we get some data on the conditions of the gel discs at the initial and final states in the experiments which have inspired us [5], and reproduce those conditions in the numerical model.
Firstly, we define a steady state as a solution of the balance equations (III.7), (III.12) with \(\dot{c}_{d}=0\) and \(\dot{\gamma}_{i}=0\) (\(i=r,\theta,z\)). Such a state is controlled by the pair \((\mu_{e},\beta)\), that is, by the conditions
\[\mu=\mu_{e}\quad\text{and}\quad E_{i}=\beta\quad(i=r,\theta,z).\] (IV.18)
We study the contraction dynamics between the initial steady state \(\mathcal{B}_{o}\), which is represented by a black dot in the diagram of figure 4, and a final steady state \(\mathcal{B}_{1}\) (red or blue dots in the diagram of figure 4), corresponding to a time \(\tau=\tau_{1}\).
We assume that at the steady states \(\mathbf{F}\) and \(\mathbf{F}_{a}\) are uniform and spherical, that is \(\mathbf{F}=\lambda\,\mathbf{I}\), \(\mathbf{F}_{a}=\gamma\,\mathbf{I}\), and that initial and final states are stress-free. With this, equations (III.11), (III.9) and (III.13) deliver a representation form for both the chemical potential and the Eshelby components, in terms of \(J_{a}\) and \(J\): \(\mu=\mu(J/J_{a})=\mu(J_{e})\) and \(E_{i}=e_{y}(J_{a},J_{e})\). Equations (IV.18) deliver the relationships between the values of \(J_{a}\) and \(J\) at the initial and final states and the pair \((\mu_{e},\beta)\) which determines those values:
\[\mu_{e}=\mu(J_{e})\quad\text{and}\quad\beta=e_{y}(J_{a},J_{e})\,.\] (IV.19)
In the following, we adopt the following notation: \(J_{o}\) and \(J_{1}\) denote the values of \(J\) at \(\mathcal{B}_{o}\) and \(\mathcal{B}_{1}\), and the same we do for all the other quantities.
The evolution of the system from \(\mathcal{B}_{o}\) to \(\mathcal{B}_{1}\), that is, the contraction-liquid transport dynamics, is triggered by defining time laws for the two controls, each of which has its own characteristic dynamics. For the motors, the characteristic time depends on the binding/unbinding kinetics of the motors to the actin filaments. For the chemical potential, the characteristic time reflects the mixing kinetics of possibly free biopolymer chains and the liquid in the bath. We set
\[\begin{split}&\mu_{e}=\mu_{e}(\tau)=\mu_{o}+(\mu_{1}-\mu_{o})\, \mathrm{s}(\tau/\tau_{\mu})\,,\\ &\beta=\beta(\tau)=\beta_{o}+(\beta_{1}-\beta_{o})\,\mathrm{s}( \tau/\tau_{\beta})\,,\end{split}\] (IV.20)
where \(\mathrm{s}(\cdot)\) is a smoothed step function running from 0 to 1 in the interval \((0,1)\) and \(\tau_{\mu}\) and \(\tau_{\beta}\) (both less than \(\tau_{1}\)) are the characteristic time of the controls (see Table 1). Thus, \(\beta(0)=\beta_{o}\), and \(\beta(\tau_{\beta})=\beta_{1}\), and analogously for \(\mu_{e}\).
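The ramps (IV.20) are straightforward to reproduce; in the sketch below the cubic smoothstep is an assumption (the paper only requires a smoothed step rising from 0 to 1 on \((0,1)\)), and the values ramped are those of protocol (a) from Sec. IV:

```python
# Ramp controls of eq. (IV.20) with an assumed cubic smoothstep s(x).
import numpy as np

def smoothstep(x):
    x = np.clip(x, 0.0, 1.0)
    return 3.0 * x**2 - 2.0 * x**3

def control(tau, v0, v1, tau_c):
    """Ramp a control from v0 to v1 over the characteristic time tau_c."""
    return v0 + (v1 - v0) * smoothstep(tau / tau_c)

tau = np.array([0.0, 10.0, 20.0, 100.0, 200.0])      # s
print(control(tau, v0=-8e7, v1=-4e6, tau_c=20.0))    # beta(tau), protocol (a)
```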
#### iv.0.1 Material parameters
The values assigned to the initial thickness and aspect ratio have been inspired by [5], and the successive parametric analyses always consider values of \(AR\) and \(H_{o}\) not too far from those. Moreover, we considered a highly swollen initial state of the gel, which motivated our choice for the Flory parameter \(\chi\) and the shear modulus \(G\). Finally, as it has been observed in [5] that the characteristic time of the process is about 200 s, and the characteristic time of the discharge velocity is about 40 s, we tuned the values assigned to the diffusion constants and to the friction so as to qualitatively match the characteristic times of the dynamics.
Then, we fix the material and geometrical parameters as in Table 1.
\begin{table}
\begin{tabular}{l l} \hline
shear modulus & \(G=135\) Pa \\ \hline
Flory parameter & \(\chi=0.4\) \\ \hline
water molar volume & \(\Omega=1.8e-5\) m\({}^{3}\)/mol \\ \hline
temperature & \(T=293\) K \\ \hline
energy ratio & \(m=G\,\Omega/R\,T=1e-6\) \\ \hline
diffusivity & \(D_{r}=D_{z}=1e-3\) m\({}^{2}\)/s \\ \hline
friction & \(\eta=1e5\) Pa s \\ \hline
initial radius & \(R_{o}=1500\,\mu\)m \\ \hline
initial swollen volume \& stretch ratio & \(J_{o}=1000\), \(\lambda_{o}=10\) \\ \hline
initial aspect ratio & \(AR=2\,R_{o}/H_{o}=20\sim 40\) \\ \hline
initial thickness & \(H_{o}=150\,\mu\)m \(\sim 75\,\mu\)m \\ \hline
final active volume/initial volume & \(J_{a1}=0.05\) \\ \hline
control time for \(\beta\) & \(\tau_{\beta}=20\) s \\ \hline
control time for \(\mu\) & \(\tau_{\mu}=100\) s \\ \hline
\end{tabular}
\end{table}
Table 1: Material and geometrical parameters
#### Initial state
We assume a fully swollen state as initial state of the gel, characterized by an unstretched mesh size \(\xi_{a}\) equal to the reference mesh size \(\xi_{d}\). From an experimental point of view, it means that self contraction and liquid release are going to be initiated; from the modeling point of view, it means that
\[\mu_{eo}=\mu_{o}=0\ \text{J/mol}\quad\text{and}\qquad J_{ao}=1\,.\] (IV.21)
Given these values, we can use equations (IV.19) to get the initial swollen state \(J_{o}\) and the value of the active stress \(\beta_{o}\) which maintains it: from
\[0=\mu(J/J_{ao})\quad\text{and}\quad\beta_{o}=e_{y}(J_{ao},J_{eo})\,,\] (IV.22)
we get \(J_{o}\) and \(\beta_{o}\). In particular, being \(J_{eo}=J_{o}=\lambda_{o}^{3}\), equation (IV.22)\({}_{1}\) takes the form
\[0=\left[\log\left(\frac{\lambda_{o}^{3}-1}{\lambda_{o}^{3}}\right)+\frac{ \chi+\lambda_{o}^{3}}{\lambda_{o}^{6}}\right]+\frac{m}{\lambda_{o}}\,,\] (IV.23)
and can be solved for \(\lambda_{o}\), the free-swelling stretch ratio at \(\mathcal{B}_{o}\). Therein, \(m=G\Omega/R\,T\) is the ratio between the elastic energy and the mixing energy. Equation (IV.22)\({}_{2}\) determines the active stress \(\beta_{o}\) (J/m\({}^{3}\)) corresponding to null self-contraction (\(\xi_{a}=\xi_{d}\)) and to the free swelling stretch \(\lambda_{o}\):
\[\frac{\Omega}{RT}\beta_{o}=(\lambda_{o}^{3}-1)\left(\frac{\lambda_{o}^{3}-1}{ \lambda_{o}^{6}}\chi-\frac{1}{\lambda_{o}^{3}}\right)+m\left(\frac{1}{\lambda _{o}}+\frac{\lambda_{o}^{2}}{2}-\frac{3}{2}\right)\,.\] (IV.24)
It is worth noting that equation (IV.23) is quite standard in stress-diffusion theories based on a Flory-Rehner thermodynamics [24; 25]; it is easy to verify that, given \(\mu_{o}\), the free-swelling stretch \(\lambda_{o}\) increases as \(m\) decreases, as shown in figure 1 (panel b), where the relation between \(\lambda_{o}\) and \(m\) has been represented. On the contrary, equation (IV.24) does not belong to standard stress-diffusion theory, and is peculiar to the present augmented model. Figure 1 (panel b) also shows the dependence of \(\beta_{o}\) on \(\lambda_{o}\).
Finally, equations (IV.23) and (IV.24) deliver the following initial values of \(J\), \(J_{e}\), \(c_{d}\), \(p\) and \(\beta\):
\[\begin{split}& J_{eo}=J_{o}=1000\,,\\ & c_{do}=(J_{o}-1)/\Omega=5.5e7\ \text{mol/m${}^{3}$},\\ & p_{o}=G\,(1/J_{eo})^{1/3}=13.6\ \text{Pa},\\ &\beta_{o}=e_{y}(1,J_{eo})=-8e7\ \text{J/m${}^{3}$}.\end{split}\] (IV.25)
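These numbers can be verified by solving (IV.23) numerically; the sketch below (an independent check, not the authors' code; the root bracket is an assumption) recovers \(\lambda_{o}\simeq 10\) and values close to those in (IV.25):

```python
# Numerical check of the free-swelling condition (IV.23), Table 1 parameters.
import numpy as np
from scipy.optimize import brentq

m, chi = 1e-6, 0.4
G, Omega = 135.0, 1.8e-5  # Pa, m^3/mol

def residual(lam):
    J = lam**3
    return np.log((J - 1.0) / J) + (chi + J) / J**2 + m / lam

lam_o = brentq(residual, 2.0, 50.0)       # assumed bracket for the root
J_o = lam_o**3
c_do = (J_o - 1.0) / Omega                # mol/m^3, cf. (IV.25)
p_o = G * (1.0 / J_o)**(1.0 / 3.0)        # Pa, cf. (IV.25)
print(f"lambda_o = {lam_o:.2f}, J_o = {J_o:.0f}, "
      f"c_do = {c_do:.2e} mol/m^3, p_o = {p_o:.1f} Pa")
```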
#### iv.2.3 Final states
In the experiments, the attainment of final steady states has been observed, when self-contraction and liquid transport stop. In the modeling, we studied the conditions to obtain final steady states which are not too far, in terms of some characteristic elements, from those experimental final states.
We considered two different protocols: (a), where only the active stresses \(\beta\neq\beta_{o}\) drive the active contractions and liquid transport; (b), where both the active stresses \(\beta\neq\beta_{o}\) and a change in the chemical potential of the bath from \(\mu_{o}\) to \(\mu_{e}\) drive the active contractions and liquid transport. In both protocols, based on the outcomes of the experiments presented in [5], we assume that the unstretched mesh size is contracted by \(\xi_{a}/\xi_{d}=J_{a1}^{1/3}\simeq 0.38\) with respect to the dry mesh size, and set the final value \(J_{a1}\) of \(J_{a}\) so as to produce that result: \(J_{a1}=(\xi_{a}/\xi_{d})^{3}=0.05\).
_Protocol a._ Assuming that
\[\mu_{e1}=\mu_{o}=\mu_{1}=0\,\text{J/mol}\,,\quad J_{a1}=0.05\,,\] (IV.26)
equations (IV.19) deliver the final swelling ratio \(J_{1}\) and the active stress \(\beta_{1}\) needed to maintain it:
\[\mu(J/J_{a1})=0,\,e_{y}(J_{a1},J_{e1})=\beta_{1}\quad\Rightarrow\quad J_{1}, \ \beta_{1}\] (IV.27)
Given our parameters, for case (a) we have the following characteristic values of the final state:
\[\begin{split}& J_{e1}=J_{1}/J_{a1}=J_{eo}=1000\,,\quad J_{1}=50\,,\\ & c_{d1}=(J_{1}-J_{a1})/\Omega=2.8e6\ \text{mol/m${}^{3}$},\\ &\beta_{1}=e_{y}(J_{a1},J_{e1})=-4e6\ \text{J/m${}^{3}$}.\end{split}\] (IV.28)
It is worth noting that at the initial state, in the absence of contraction (\(J_{a}=1\)), we have \(J_{o}=J_{eo}=1000\), whereas at the final contracted state \(\mathcal{B}_{1}\) we get \(J_{1}=50\) (while \(J_{e1}=1000\)), that is, a much smaller swollen volume under the same chemical conditions. It means that the model includes an effective bulk stiffening of the gel due to self-contraction, that is, to motor activity, which has already been recognized as crucial in other works [6].
_Protocol b._ Typically, in a lab, the chemical potential of the bath is not controlled. We can suppose it is constant, as in protocol (a); alternatively, since it is possible that some chains of the gel, which are not perfectly cross-linked, are released during the gel contraction, we can assume that it varies [26]. This motivated our choice to also follow protocol (b). We assume that the final swelling ratio \(J_{1}\) is half the value of case (a), while \(J_{a1}\) is the same as before:
\[J_{1}=25\,,\quad J_{a1}=0.05\,.\] (IV.29)
Now, equations (IV.19) are used to identify the final chemical potential \(\mu_{1}\) and the active stress \(\beta_{1}\) needed to maintain this final state:
\[\mu(J_{1}/J_{a1})=\mu_{1},\,e_{y}(J_{a1},J_{e1})=\beta_{1}\,\Rightarrow\,\mu_{1 },\ \beta_{1}\,.\] (IV.30)
Given the parameters, for case (b) we have the following characteristic values of the final state:
\[J_{e1}=J_{1}/J_{a1}=500,\] \[p_{1}=G\,(J_{a1}/J_{1})^{1/3}=17.1\ \text{Pa},\] \[c_{d1}=(J_{1}-J_{a1})/\Omega=1.4e6\ \text{mol}/\text{m}^{3},\] (IV.31) \[\mu_{1}=\mu(J_{a1},J_{e1})=-6.7e-4\ \text{J/mol},\] \[\beta_{1}=e_{y}(J_{a1},J_{e1})=-4e6\ \text{J}/\text{m}^{3}.\]
We note that for the two cases the value of \(\beta_{1}\) is the same, but the de-swollen volume \(J_{1}\) is quite different (50 vs 25): for case (b), liquid transport and release are driven by both the mesh contraction and the change in the chemical conditions of the bath, whereas for case (a) the only driving force is the gel activity.
## V Contraction dynamics
Our idea is that geometry greatly affects contraction dynamics through liquid transport, which has its own characteristic length, unlike self-contraction dynamics, which has no characteristic length since motor activity is homogeneous across the system.
The key geometrical parameter in a disc is its aspect ratio \(AR\); hence, we start investigating the effects of \(AR\) on the contraction dynamics with two complementary studies: 1) at fixed radius \(R_{o}=1.5\) mm, varying \(H_{o}\); 2) at fixed thickness \(H_{o}=0.10\) mm, varying \(R_{o}\). The investigated range of \(AR\) is described in Table 2: the analysis spans discs with aspect ratios from 20 (thick discs) to 45 (thin discs). The study is carried out under the conditions of protocol (a).
All the experiments start with \(J_{o}=1000\), a highly swollen initial state, and \(J_{ao}=1\), and evolve towards the new steady values \(J_{1}=50\) and \(J_{a1}=0.05\). As stated above, these values correspond to a mesh-size reduction \(\xi_{a1}/\xi_{ao}=0.05^{1/3}\simeq 0.38\), where \(\xi_{a1}\) represents the final mesh size at zero stress, see Section IV.
In the regime under study, the system reaches the new equilibrium state at a time \(\tau_{1}\simeq 200\) s, that is, we have \(\tau_{\beta}\ll\tau_{1}\) and dynamics is ruled by the redistribution of water, which has a length scale that is the disc thickness \(H_{o}\).
To present our results, we focus on: evolution paths in the plane \((\bar{J},\bar{J}_{a})\); velocities of the lateral boundary of the disc, _i.e._, the radial velocity; radius and thickness reduction. The averages \(\bar{J}\) and \(\bar{J}_{a}\) of the fields \(J(r,z,\tau)\) and \(J_{a}(r,z,\tau)\) are introduced to give a global view of the contraction dynamics and, due to the cylindrical symmetry of the system, are averaged on the cross section
\begin{table}
\begin{tabular}{l l l} \hline AR & \(R_{o}(H_{o}=0.1)\) & \(H_{o}(R_{o}=1.5)\) \\ \hline
20 & 1.0 & 0.15 \\ \hline
25 & 1.25 & 0.12 \\ \hline
30 & 1.50 & 0.1 \\ \hline
35 & 1.75 & 0.086 \\ \hline
40 & 2.0 & 0.075 \\ \hline
45 & 2.25 & 0.066 \\ \hline \end{tabular}
\end{table}
Table 2: Data about aspect ratios; values of \(R_{o}\) and \(H_{o}\) are in mm
Figure 4: Swelling-contraction diagram \(J_{a}\) versus \(J/J_{o}\) at equilibrium and stress-free states. The isolines \(\mu=\mu_{o}\) and \(\mu=\mu_{1}\) are identified by straight lines in this diagram, whereas the isolines \(\beta=\beta_{o}\) and \(\beta=\beta_{1}\) are hyperbolas. The black dot corresponds to the initial state, whereas the red and blue dots correspond to the final states attained under cases (a) and (b), respectively.
of area \(R_{d}\cdot H_{d}\). Changes in volume, boundary velocities and changes in radius and thickness have a large effect on the global change in shape of the disc. They are visible in experiments and can be measured, if the appropriate tests are performed. Finally, we analyse the stress state of the gel during the contraction process.
### Dynamics in the plane \((\bar{J},\bar{J}_{a})\)
The main features of the contraction dynamics are well represented by the curves \(\tau\mapsto(\bar{J}(\tau),\bar{J}_{a}(\tau))\), which are plotted in the plane \((\bar{J},\bar{J}_{a})\). That plane allows us to glance at the quasi-static stress-free path characteristic of an evolution which occurs as a sequence of equilibrium states (straight dashed line). Thinner discs (higher \(AR\)) show an evolution in the plane which is closer to the stress-free path. Under the same contraction dynamics, liquid transport is faster for those discs and allows them to quickly recover the original stress-free state. On the contrary, for thicker discs (lower \(AR\)) the evolution path is very far from the quasi-static regime: namely, motor-induced contraction is faster than the water transport across the gel pores, which makes the gel highly stressed during its evolution.
Figures 5 and 6 show evolution paths for different \(AR\), for varying \(H_{o}\) at constant \(R_{o}\) (figure 5) and varying \(R_{o}\) at constant \(H_{o}\) (figure 6). In the first case, it is shown that decreasing the thickness \(H_{o}\), that is, the characteristic length scale across which water flows, decreases the characteristic time scale of water transport (from blue to yellow solid lines). Interestingly, in the second case, that is, changing \(AR\) by varying the radius at constant thickness, we get a series of fully overlapped curves, thus confirming that the important length scale for water exit is \(H_{o}\).
### Gel contraction velocity
Through the aforementioned studies, we also investigate the effects of \(AR\) on the radial contraction velocity \(\dot{R}(\tau)\) of the lateral boundary of the gel disc.
The radial contraction velocity \(\dot{R}(\tau)\) is determined from the average current radius \(R(\tau)\)
\[R(\tau)=\Lambda_{r}(\tau)R_{d}\quad\text{such that}\quad\dot{R}(\tau)=\dot{ \Lambda}_{r}(\tau)\,R_{d}\,,\] (V.32)
where \(\Lambda_{r}\) is an average stretch defined as
\[\Lambda_{r}(\tau)=1+\frac{1}{H_{d}}\int_{0}^{H_{d}}\frac{u(R_{d},z,\tau)}{R_{ d}}\,dz\,.\] (V.33)
From (V.32) and (V.33), the radial contraction velocity can be also rewritten as \(\dot{R}(\tau)=\dot{\Lambda}_{r}(\tau)\,\frac{H_{d}}{2}\,AR\). It is easy to verify that the average stretch \(\Lambda_{r}\) also corresponds to the average \(\bar{\lambda}_{r}(\tau)\) of the radial deformation \(\lambda_{r}(r,z,\tau)\) on the cross section \(\mathcal{S}_{d}\) of area \(R_{d}\cdot H_{d}\).
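In post-processing, the averages (V.32)-(V.33) reduce to a quadrature along the lateral boundary and a time derivative; a schematic sketch (an assumed post-processing recipe, not the COMSOL implementation) reads:

```python
# Average radial stretch (V.33) from boundary displacements u(R_d, z, tau)
# sampled on a z-grid, and radial velocity (V.32) by finite differences.
import numpy as np

def radial_stretch(u_boundary, z, R_d, H_d):
    """Lambda_r = 1 + (1/H_d) * int_0^{H_d} u(R_d, z)/R_d dz."""
    return 1.0 + np.trapz(u_boundary / R_d, z) / H_d

def radial_velocity(Lambda_r, tau, R_d):
    """R(tau) = Lambda_r(tau) * R_d  =>  dR/dtau = dLambda_r/dtau * R_d."""
    return np.gradient(Lambda_r, tau) * R_d
```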
The numerical results obtained for a constant radius show that the radial velocity \(\dot{R}(\tau)\) is characterized by two time scales, one characterizing the phase in which the velocity increases and the other the phase in which it decreases (figure 7). During the growth phase, the curves fit a linear law, that is, \(\dot{R}(\tau)\propto\tau/\tau_{r}\), with \(\tau_{r}\) the characteristic time of rising. During the decreasing phase, the curves fit an exponential law \(\dot{R}(\tau)\propto v_{max}\exp(-\,\tau/\tau_{decay})\), with \(\tau_{decay}\) the characteristic time of decay (see Table 3).
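The two-regime behaviour just described can be captured by a piecewise model; the sketch below (an assumed fitting recipe, not the authors' analysis script, demonstrated on synthetic data) joins the linear rise to the exponential decay at the peak time \(\tau_{p}\):

```python
# Piecewise rise/decay model for the radial contraction speed.
import numpy as np
from scipy.optimize import curve_fit

def rise_decay(tau, v_max, tau_p, tau_decay):
    return np.where(tau < tau_p,
                    v_max * tau / tau_p,
                    v_max * np.exp(-(tau - tau_p) / tau_decay))

# Synthetic demo: recover known parameters from a sampled curve.
tau = np.linspace(0.0, 200.0, 400)
v = rise_decay(tau, 8.0, 20.0, 40.0)
popt, _ = curve_fit(rise_decay, tau, v, p0=(5.0, 15.0, 30.0))
print(popt)  # -> approximately [8.0, 20.0, 40.0]
```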
The inset in the figure shows that the maximum radial velocity \(v_{max}\), attained at peak time \(\tau_{p}\), depends on the geometric parameters \(R_{o}\) and \(H_{o}\).
Actually, the analysis of the equations (V.32) and (V.33) shows that when \(AR\) changes with \(H_{d}\) (or, equivalently,
Figure 5: Plane \((\bar{J},\bar{J}_{a})\): evolution path at constant radius \(R_{o}=1.5\)mm for different values of the aspect ratio AR. Lower AR correspond to evolution path far from equilibrium; higher AR correspond to paths which tend to the quasi-static stress-free path (dashed line).
Figure 6: Plane \((\bar{J},\bar{J}_{a})\): evolution path at constant thickness \(H_{o}=0.1\)mm for different values of the aspect ratio AR. All the paths are superimposed and the master curve is the one corresponding to \(AR=30\) in figure 5.
with \(H_{o}\), since the initial free swelling is homogeneous), at constant \(R_{o}\), the dependence of \(\dot{R}\) on \(AR\) is also affected by \(H_{d}\) and cannot be linear. The same equations show that, at constant \(H_{d}\), the dependence of \(\dot{R}\) on \(AR\) is simply linear. This is what the inset in figure 7 shows for the maximum velocity \(v_{max}\), relative to the study at varying radius.
Moreover, we can split the average stretch \(\Lambda_{r}\) into an elastic component \(\Lambda_{e}\) and an active component \(\Lambda_{a}\), related to the analogous multiplicative decomposition of the deformation gradient \(\mathbf{F}=\mathbf{F}_{e}\mathbf{F}_{a}\) and of the radial deformation \(\lambda_{r}\). Thus, the stretching velocity \(\dot{\Lambda}_{r}\) can be additively split in two summands:
\[\dot{R}=\left(\dot{\Lambda}_{a}\,\Lambda_{e}+\Lambda_{a}\,\dot{\Lambda}_{e} \right)R_{d}\,,\] (V.34)
where \(\Lambda_{a}\) and \(\Lambda_{e}\) are defined as the averages of the active \(\gamma_{r}\) and elastic \(\lambda_{r}/\gamma_{r}\) radial deformations, with the first due to self-contraction and the second driven by liquid transport. Equation (V.34) highlights the existence of two time scales for \(\dot{R}\): for \(\tau\leq\tau_{\beta}\) the stretching velocity is dominated by the time evolution of \(\beta(\tau)\), while for \(\tau\geq\tau_{\beta}\) it is dominated by solvent release; we have
\[\dot{R} \simeq \dot{\Lambda}_{a}\,\Lambda_{e}\,R_{d}\ \ \tau<\tau_{\beta},\,\text{contraction-dominated regime}\,,\] \[\dot{R} \simeq \Lambda_{a}\,\dot{\Lambda}_{e}\,R_{d}\ \ \tau>\tau_{\beta},\,\text{ liquid-dominated regime}\,.\] (V.35)
Equation (V.35)\({}_{1}\) shows that during the contraction-dominated regime, that is, for \(t<\tau_{\beta}=20\) s, the radial velocity \(\dot{R}\) changes with the same rate as \(\Lambda_{a}\), which depends on \(\beta\), as figures 7 and 8 show (compare the coloured lines with the dashed black line in both figures). On the other hand, equation (V.35)\({}_{2}\) shows that during the liquid-dominated regime, that is, for \(t>\tau_{\beta}=20\) s, the radial velocity \(\dot{R}\) changes with the rate of \(\Lambda_{e}\), which depends on liquid transport and on the \(AR\) of the disc, as figure 7 shows.
Figures 7 and 8 show also clearly that the maximal velocity is reached when contraction is maximal - as was suggested in [5] (see figure 4f in [5]).
It is worth noting that the remodeling action \(\beta\), needed to change the target mesh size, does not change further once it has attained its maximal value. Beyond that, the system evolves towards its steady state by releasing liquid; the steady state is reached when the active stresses applied by the motors are balanced by network elasticity, such that the system reaches a stress-free configuration.
It is also worth noting that the behaviour of the \(\dot{R}\) versus time curves in the liquid-dominated regime differs between the studies at constant \(R_{o}\) (figure 7) and constant \(H_{o}\) (figure 8), as for these geometries the thickness is the important length scale.
### Stress distribution
Stress analysis in the active disc can be relevant, as stresses might drive mechanical instabilities, which lead to a variety of different shapes at the end of the contraction[5; 27; 28; 29; 30]. The analysis of instabilities is beyond the scope of the present work, and will mark our future efforts. However, through the aforementioned studies, we might gain interesting clues about shape transitions by investigating the effects of \(AR\) on the evolution of radial and hoop stresses in the disc, which may drive further experiments.
We only report results for the case of constant radius. We compare the stress state in a thick (\(AR\simeq 20\)) and a thin (\(AR\simeq 45\)) disc. Panels A) and B) of figure 9 show the existence of two stress patterns: stress is constant in a core region (beige) and varying in a peripheral one (cyan). As bulk contraction \(\beta\) is homogeneous and isotropic in the whole disc, these two regions are determined by the dynamics of liquid transport. In particular, the width of the peripheral region is of the order of the thickness because the solvent in this region can escape from both the lateral boundary and the top and bottom surfaces. In contrast, for the solvent in the core the shortest path to exit the gel disc is through the top and bottom surfaces.
In particular, in figure 9, for \(AR=20\) we have essentially \(\sigma_{r}<0\) along all the radius (see panel A), and \(\sigma_{\theta}\) varying from negative to positive, (see panel A); for \(AR=45\) we have \(\sigma_{r}>0\) along all the radius (see panel B) and \(\sigma_{\theta}\) varying from positive to negative (see panel B). Corresponding to our values of \(AR\), we have \(H_{\rm thin}\simeq 0.04\,R_{d}\) and \(H_{\rm thick}=0.1\,R_{d}\). The stress distribution for the two cases is typical of that found in frustrated dome-like or saddle-like discs (see figure (9), panels C and D)[28; 29; 30].
That is a preliminary requirement for observing instability patterns which can deliver domes or saddles, depending on other key factors, which are not investigated in the present paper.
### Evolution of aspect ratio during contraction
Finally, the geometry of the gel body suggested investigating the possibility of having in-plane frictions \(\eta_{r}\) and \(\eta_{\theta}\) different from the vertical friction \(\eta_{z}\). Frictions are related to the resistances of the mesh to remodel, which can be expected to be different. Our conjecture needs to be validated, and the analysis may stimulate further experiments in this direction.
As noted at the end of Section II, the system is controlled by the pair \((\mu_{e},\beta)\), and here we also analyse the combined effects of varying the chemical potential \(\mu_{e}\) and active force \(\beta\) (protocol b).
We model the motor activity by introducing a uniform and isotropic active stress \(\beta\). Nevertheless, during gel contraction, the radial and vertical stretches might differ locally and each one of them can vary in time and space. We use the average values \(R(\tau)\) and \(H(\tau)\), defined as \(H(\tau)=\Lambda_{z}(\tau)\,H_{d}\) with
\[\Lambda_{z}(\tau)=1+\frac{1}{R_{d}}\int_{0}^{R_{d}}\frac{w(r,H_{d},\tau)}{H_{ d}}\,dr\,,\] (V.36)
to describe the change in the aspect ratio of the disc. At any time \(\tau\), the ratio \(H(\tau)/H_{o}\) can be plotted against the ratio \(R(\tau)/R_{o}\) to illustrate the evolution path of the radial and vertical stretches, that is, the curve \(\tau\mapsto(R(\tau)/R_{o},H(\tau)/H_{o})\) in the plane \((R/R_{o},H/H_{o})\). In figure 10, the curve has been represented for a disc with \(AR=22\) and \(R_{o}=1.5\) mm. In that plot, the dashed line represents an isotropic evolution, during which the aspect ratio remains constant during network contraction.
For each of the two analyzed cases a) (red) and b) (blue), we show two curves, one corresponding to equal frictions (diamond), \(\eta_{r}=\eta_{\theta}=\eta_{z}\), and the other with different horizontal and vertical frictions (asterisk), \(\eta_{r}=\eta_{\theta}=2\,\eta_{z}\). We note that the evolution is very sensitive to friction, while the differences between case a) and b) are less noticeable. For all simulations, the system evolves via a characteristic path. It departs from the isotropic contraction path, but in the case with equal frictions the steady state configuration ends on the dashed line (i.e., on the isotropic path), while the case with different frictions ends far from it. In particular, when \(\eta_{r}=\eta_{z}\), the contraction is almost isotropic until \(H/H_{o}=R/R_{o}\sim 0.8\); then, radial contraction is faster, and eventually the vertical one becomes faster. When \(\eta_{r}=2\,\eta_{z}\), vertical contraction is much faster than the radial one, and the final state is not isotropic.
Figure 9: Effect of \(AR\) on stress distribution for simulations at constant radius. Panels A) and B) show the radial \(\sigma_{r}\) (red) and hoop \(\sigma_{\theta}\) (blue) stresses versus the non-dimensional radius \(r/R_{d}\) at \(\tau=20\) s, for \(AR=20\) and \(AR=45\). A) \(AR=20\): the hoop stress is negative in the core (beige) and positive at the periphery (cyan), a typical pattern of a frustrated dome-like shape. B) \(AR=45\): the hoop stress is positive in the core and negative at the periphery, a typical pattern of a frustrated saddle-like shape.
## VI Conclusions and future directions
We discussed the interplay between elasticity, liquid transport and self-contractions in active gel discs from the perspective of continuum mechanics. It has been shown that, even if contraction dynamics does not have a characteristic length, the aspect ratio of active gel discs may greatly affect the changes in shape, due to the dependence of contraction dynamics on liquid transport, which is system-size dependent.
To keep the model simple, the numerical model has been developed under the hypothesis of cylindrical symmetry, which precludes observing disc morphings that are not compatible with the cylindrical symmetry. We are planning to give up this symmetry hypothesis and investigate the build-up of stresses in the disc, which may drive instability patterns and, consequently, a variety of steady shapes of the gel. This was beyond the scope of the present work and will mark our future efforts.
Giving up the symmetry hypothesis also makes the identification of the determinants of possible changes in shape more interesting; controlling these would open the possibility of obtaining actuators based on self-contractile gels, a promising field which can be set within the framework presented here.
**ACKNOWLEDGMENTS**
This work has been supported by MAECI (Ministry of Foreign Affairs and International Cooperation) and MOST (Ministry of Science and Technology - State of Israel) through the project PAMM. F.R. also thanks INDAM-GNFM for being supported with Progetti Giovani GNFM 2020.
|
2310.19877 | Magnetotransport in spin-orbit coupled noncentrosymmetric and Weyl
metals | Recently, chiral anomaly (CA) has been proposed to occur in spin-orbit
coupled noncentrosymmetric metals (SOC-NCMs), motivating CA to be a Fermi
surface property rather than a Weyl node property. Although the nature of the
anomaly is similar in both SOC-NCMs and Weyl systems, here we point out
significant fundamental differences between the two. We show that the different
nature of the orbital magnetic moment (OMM) in the two systems leads to
non-trivial consequences -- particularly the sign of the longitudinal
magnetoconductance always remains positive in a SOC non-centrosymmetric metal,
unlike a Weyl metal that displays either sign. Furthermore,we investigate the
planar Hall effect and the geometrical contribution to the Hall effect in the
two systems and point out significant differences in the two systems. We
conduct our analysis for magnetic and non-magnetic impurities, making our study
important in light of current and upcoming experiments in both SOC-NCMs and
Weyl metals. | Gautham Varma K, Azaz Ahmad, Sumanta Tewari, G. Sharma | 2023-10-30T18:00:02Z | http://arxiv.org/abs/2310.19877v1 | # Magnetotransport in spin-orbit coupled noncentrosymmetric and Weyl metals
###### Abstract
Recently, chiral anomaly (CA) has been proposed to occur in spin-orbit coupled noncentrosymmetric metals (SOC-NCMs), motivating CA to be a Fermi surface property rather than a Weyl node property. Although the nature of the anomaly is similar in both SOC-NCMs and Weyl systems, here we point out significant fundamental differences between the two. We show that the different nature of the orbital magnetic moment (OMM) in the two systems leads to nontrivial consequences -- particularly, the sign of the longitudinal magnetoconductance always remains positive in a SOC non-centrosymmetric metal, unlike a Weyl metal that displays either sign. Furthermore, we investigate the planar Hall effect and the geometrical contribution to the Hall effect in the two systems and point out significant differences between them. We conduct our analysis for magnetic and non-magnetic impurities, making our study important in light of current and upcoming experiments in both SOC-NCMs and Weyl metals.
## I Introduction
Chiral anomaly roots its origin in high-energy physics [1; 2]. It refers to the non-conservation of left and right-handed Weyl fermions separately in the presence of external gauge fields. Over the past decade, its unexpected appearance in solid-state systems has caused great excitement in the condensed matter community [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Specifically, Weyl fermions, discovered as electronic excitations in specific systems (termed as Weyl semimetals (WSMs)), can manifest the anomaly that can be detected via relatively simple transport [10; 11; 12; 13; 14; 15; 16] or optical [32; 33; 34; 35; 36; 37; 38] measurements. The key requirement is that the elementary excitations should be chiral and relativistic in odd spatial dimensions [39].
The realization of the anomaly has recently been extended to certain other systems distinct from Weyl semimetals [40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. Specifically, it has been suggested that the anomaly can be realized in spin-orbit-coupled (SOC) non-centrosymmetric metals (NCMs) that host nonrelativistic fermions with only one relevant band touching point [48]. The effect of the anomaly on charge and thermal transport properties of SOC-NCMs has recently been studied, and it has been suggested that the anomaly results in positive longitudinal magnetoconductance (LMC) [48; 49], akin to Weyl semimetals. The sign of LMC has been a subject of much debate and exploration in WSMs. It is expected to crucially depend on the nature of impurities, the strength of the magnetic field, and the strength of the intervalley scattering. Under strong magnetic fields, due to Landau quantization, the LMC sign depends on the nature of scattering impurities [50; 51; 52; 53; 54; 55].
Recently, we pointed out that the sign of LMC is, in fact, more nuanced [56]. LMC in Weyl systems can typically be expressed as \(\sigma_{zz}=\sigma_{0}+\sigma_{zz}^{(2)}(B-B_{0})^{2}\). 'Strong-sign-reversal' is characterized by the reversal of orientation of the magnetoconductance parabola with respect to the magnetic field, while in 'weak-sign-reversal,' the magnetoconductivity depends on the direction of the magnetic field and is not correlated with the orientation of the LMC parabola. Fig. 1 (c) shows a schematic description of strong and weak-sign-reversal of LMC. Specifically, in the case of weak-sign-reversal, LMC is linear near zero magnetic field, while the vertex of the parabola (\(B_{0}\)) is shifted from the origin, but the quadratic coefficient of LMC (\(\sigma_{zz}^{(2)}\)) remains positive. In the case of strong-sign-reversal, importantly, the quadratic coefficient \(\sigma_{zz}^{(2)}\) becomes negative. When Landau quantization can be ignored under weak magnetic fields, quasiclassical Boltzmann analysis suggests that sufficiently strong intervalley scattering can reverse the sign of LMC from positive to negative (strong-sign-reversal) [19; 20; 57]. Whether or not the longitudinal magnetoconductance in SOC-NCMs shows similar characteristics also remains an important and pertinent question in the field. Furthermore, the focus of all the previous works has been particularly on point-like scalar non-magnetic impurities. The fate of LMC in both spin-orbit-coupled and Weyl metals in the presence of (pseudo)magnetic impurities remains to be determined.
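The weak/strong distinction above lends itself to a simple numerical test. The following sketch (our own illustration, not code from any cited work) fits the quadratic form \(\sigma_{zz}=\sigma_{0}+\sigma_{zz}^{(2)}(B-B_{0})^{2}\) to an LMC curve and classifies the outcome from the sign of \(\sigma_{zz}^{(2)}\) and the location of the vertex \(B_{0}\):

```python
import numpy as np

def classify_lmc(B, sigma_zz, tol=1e-3):
    """Fit sigma_zz(B) = s0 + s2*(B - B0)^2 and classify the sign-reversal type."""
    a, b, _ = np.polyfit(B, sigma_zz, 2)   # sigma = a*B^2 + b*B + c
    B0 = -b / (2.0 * a)                    # vertex of the parabola
    if a < 0:
        return "strong-sign-reversal", a, B0   # parabola orientation flips
    if abs(B0) > tol:
        return "weak-sign-reversal", a, B0     # vertex shifted away from B = 0
    return "normal positive LMC", a, B0

# Synthetic example: positive quadratic coefficient with a shifted vertex.
B = np.linspace(-5.0, 5.0, 101)
print(classify_lmc(B, 1.0 + 0.02 * (B - 1.5) ** 2))  # -> weak-sign-reversal
```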
In SOC-NCMs, we focus on the vicinity of one nodal point surrounded by two Fermi surfaces as depicted in Fig. 2 (b). This is in contrast to the two separate nodal points and Fermi surfaces we are concerned with in WSMs (Fig. 2 (a)). The role of intranode scattering in WSMs is replaced by intraband scattering in SOC-NCMs. This scattering preserves the chirality of the scattered quasiparticles. Internode scattering in WSMs is equivalent to interband scattering in SOC-NCMs, reversing the quasiparticle chirality. Internode scattering in WSMs requires the transfer of large momentum of the order of separation between the Weyl nodes, which is usually weaker than intranode scattering requiring a small momentum transfer. In contrast, in SOC-NCMs, the momentum transfer with interband scattering is not necessarily small, as both the Fermi surfaces surround a single nodal point. Thus, interband scattering is expected to be at least as significant in SOC-NCMs as it is in WSMs, and its exploration remains an open question.
In this work, we probe the role of interband scattering in SOC-NCMs and show that in the quasiclassical low-field regime, unlike WSMs, the sign of LMC is _not_ sensitive to the relative strength of the interband scattering. Longitudinal magnetoconductance in SOC-NCMs is found to be always positive, irrespective of the strength of the interband scattering. We trace the reason to the orbital magnetic moment (OMM) in SOC-NCMs, which is of equal magnitude and sign in both bands, as compared to the case of WSMs, where the OMM has equal magnitudes but opposite signs at the two nodes (see Fig. 2). We examine how the subtle difference in OMM can lead to drastic differences in other transport properties, such as the planar Hall conductivity, and also give rise to a finite geometrical contribution to the Hall conductivity in SOC-NCMs. Furthermore, we also analyze all the properties of both WSMs and SOC-NCMs in the presence of both point-like scalar and magnetic impurities, which has remained an open problem so far.
## II Model and Formalism
We begin with the following extended model of a spin-orbit coupled non-centrosymmetric metal that can be expressed near the high-symmetry point as
\[H_{\mathrm{soc}}(\mathbf{k})=\frac{\hbar^{2}k^{2}}{2m}\sigma_{0}+\hbar\vartheta \mathbf{k}\cdot\sigma+\hbar\vartheta(k_{x}t_{x}+k_{z}t_{z})\sigma_{0} \tag{1}\]
Here, \(m\) is the effective electron mass. The second term represents the spin-orbit coupling term, and \(\sigma\) denotes the vector of Pauli matrices in the spin space. The third
Figure 1: Spin texture of (a) SOC-NCM in the \(k_{z}=0\) plane, and (b) a Rashba coupled system. The arrows point in the direction of the spin expectation value \(\mathbf{S}^{\lambda}\). Blue and red arrows are for the outer and inner Fermi surfaces, respectively. (c) Schematic figure representing weak-sign-reversal (WSR) and strong-sign-reversal (SSR) compared to normal LMC in Weyl systems.
Figure 3: (a) Orientation of magnetic field in the \(xz-\)plane. (b) Symbolic interpretation for the types of impurity studied in this manuscript (Eq. 7).
Figure 2: Quasiparticle scattering in WSMs (a) and SOC-NCMs (b). Unlike WSMs, quasiparticle scattering in SOC-NCMs occurs between the Fermi surfaces (FSs) associated with a single nodal point. The two FSs in SOC-NCMs have opposite Berry curvature (\(\mathbf{\Omega}^{\lambda}(\mathbf{k})\)), but crucially, unlike WSMs, have the same orbital magnetic moment (\(\mathbf{m}^{\lambda}(\mathbf{k})\)). Blue and yellow arrows represent the internode (interband for SOC-NCMs) and intranode (intraband for SOC-NCMs) scattering, respectively, in WSMs. Here, \(\lambda\) is the band/node index. The oval shape of the Fermi surfaces is due to the coupling of the orbital magnetic moment to an external magnetic field.
term in the Hamiltonian tilts the dispersion along a particular direction, and the dimensionless parameters \(t_{x}\) and \(t_{z}\) represent the tilting along \(x\)- and \(z\)-direction, respectively. Similar to the case of WSMs, the tilt term may arise naturally in the bandstructure in SOC-NCMs or may model the effect of strain in the material.
It is instructive to also compare the above Hamiltonian to a Rashba coupled system given by
\[H_{\rm Rashba}(\mathbf{k})=\frac{\hbar^{2}k^{2}}{2m}\sigma_{0}+\alpha_{R}(k_ {x}\sigma_{y}-k_{y}\sigma_{x}), \tag{2}\]
where \(\alpha_{R}\) is the Rashba coefficient. The spin-texture for both \(H_{\rm SOC}\) and \(H_{\rm Rashba}\) can be evaluated as \(\mathbf{S}^{\lambda}=\langle u^{\lambda}(\mathbf{k})|\mathbf{\sigma}|u^{\lambda}( \mathbf{k})\rangle\), where \(|u^{\lambda}(\mathbf{k})\rangle\) is the spinor wavefunction and \(\lambda\) is the band-index. The spin-texture for both the above Hamiltonians (Eq. 1 and Eq. 2) is given in Fig. 1. The spin rotates as we traverse along the Fermi surface. Importantly, the spins in the two Fermi surfaces point in the opposite direction to each other, indicating their opposite chirality.
To compare our results with WSM, we use the following prototype model of a two-node time-reversal symmetry broken WSM.
\[H_{\rm wsm}=\left(\sum_{\chi=\pm 1}\hbar v_{F}\chi\mathbf{k}\cdot\mathbf{\sigma} \right)+\hbar v_{F}(k_{x}t_{x}+k_{z}t_{z})\sigma_{0}. \tag{3}\]
Here, \(\chi\) is the chirality and \(v_{F}\) is the Fermi velocity. The Hamiltonian in Eq. 1 has the following energy dispersion:
\[\epsilon^{\lambda}(\mathbf{k})=\frac{\hbar^{2}k^{2}}{2m}+\lambda\hbar\vartheta k+\hbar\vartheta(k_{x}t_{x}+k_{z}t_{z}) \tag{4}\]
Here, \(\lambda=\mp 1\) is the band index. The corresponding eigenvectors are: \(\left|u^{\lambda}\right\rangle^{T}=[\lambda e^{-i\phi}\cos(\theta/2),\sin( \theta/2)]\). We assume that the Fermi energy \(\epsilon_{F}\) lies above the nodal point \(\mathbf{k}=0\), and thus we have two Fermi surfaces corresponding to the two energy bands as shown in Fig. 2 (b). The Berry curvature (\(\mathbf{\Omega}_{\mathbf{k}}^{\lambda}\)) for both these surfaces has equal magnitudes and opposite signs, just like the Fermi surfaces in the vicinity of two nodal points in WSMs. Interestingly, the orbital magnetic moment (\(\mathbf{m}_{\mathbf{k}}^{\lambda}\)) carries the same sign and magnitude, distinct from WSMs where the signs are reversed. In the presence of an external magnetic field (\(\mathbf{B}\)), the orbital magnetic moment couples to the dispersion as \(-\mathbf{m}_{\mathbf{k}}^{\lambda}\cdot\mathbf{B}\) leading to the oval-shaped Fermi surfaces as shown in Fig. 2 (b). In WSMs, the coupling is opposite, and thus, the shapes of the surfaces are reversed.
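As a quick consistency check, Eq. 4 can be verified by numerically diagonalizing the \(2\times 2\) Hamiltonian of Eq. 1. The sketch below is our own check (the tilt values are arbitrary illustrative choices); it compares the numerical eigenvalues with the analytic dispersion:

```python
import numpy as np

hbar, m, theta = 1.0546e-34, 1e-32, 5e5    # SI; parameter values from the text
tx, tz = 0.1, 0.1                           # illustrative tilts (our choice)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_soc(k):
    """2x2 Hamiltonian of Eq. (1) at wavevector k = (kx, ky, kz)."""
    kin = hbar**2 * np.dot(k, k) / (2 * m) + hbar * theta * (k[0]*tx + k[2]*tz)
    return kin * np.eye(2) + hbar * theta * (k[0]*sx + k[1]*sy + k[2]*sz)

k = np.array([2e8, 1e8, -1e8])              # a generic k-point (in 1/m)
numeric = np.sort(np.linalg.eigvalsh(H_soc(k)))
kn = np.linalg.norm(k)
analytic = np.sort([hbar**2*kn**2/(2*m) + lam*hbar*theta*kn
                    + hbar*theta*(k[0]*tx + k[2]*tz) for lam in (-1, 1)])
assert np.allclose(numeric, analytic)       # eigenvalues reproduce Eq. (4)
```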
We study charge transport in the presence of perturbative electric and magnetic fields using the quasiclassical Boltzmann formalism. This is valid in the limits of weak magnetic fields, \(B\ll B_{c}\), where \(eB_{c}\hbar/2m\epsilon_{F}=1\). The non-equilibrium distribution function \(f_{\mathbf{k}}^{\lambda}\) obeys the following steady-state equation:
\[\dot{\mathbf{r}}_{\mathbf{k}}^{\lambda}\cdot\nabla_{\mathbf{r}}f_{\mathbf{k}} ^{\lambda}+\dot{\mathbf{k}}^{\lambda}\cdot\nabla_{\mathbf{k}}f_{\mathbf{k}}^{ \lambda}=I_{\rm coll}[f_{\mathbf{k}}^{\lambda}]. \tag{5}\]
Here, \(f_{\mathbf{k}}^{\lambda}=f_{0}+g_{\mathbf{k}}^{\lambda}\), with \(f_{0}\) being the Fermi-Dirac distribution and \(g_{\mathbf{k}}^{\lambda}\) is the deviation due to the presence of the external fields. We restrict ourselves to the first order in the electric field, i.e., \(g_{\mathbf{k}}^{\lambda}=-e\left(\frac{\partial f_{0}}{\partial\mu}\right)_{ \epsilon_{F}}\mathbf{E}\cdot\mathbf{\Lambda}_{\mathbf{k}}^{\lambda}\). The collision integral (\(I_{\rm coll}\)) in Eq. 5 is chosen in such a way that it can incorporate both interband and intraband scattering, given by
\[I_{\rm coll}[f_{\mathbf{k}}^{\lambda}]=\sum_{\lambda^{\prime}}\sum_{\mathbf{k }^{\prime}}\mathbf{W}_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{ \prime}}(f_{\mathbf{k}^{\prime}}^{\lambda^{\prime}}-f_{\mathbf{k}}^{\lambda}), \tag{6}\]
where the scattering rate \(\mathbf{W}_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{\prime}}\) is calculated using Fermi's golden rule:
\[\mathbf{W}_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{\prime}}=\frac{2 \pi n}{\mathcal{V}}\left|\left\langle u^{\lambda^{\prime}}(\mathbf{k}^{\prime}) \right|U_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{\prime}}\left|u^{ \lambda}(\mathbf{k})\right\rangle\right|^{2}\!\delta(\epsilon_{\mathbf{k}^{ \prime}}^{\lambda^{\prime}}-\epsilon_{F}) \tag{7}\]
Here, '\(\mathrm{n}\)' is the impurity concentration, '\(\mathcal{V}\)' is the system volume, \(\left|u^{\lambda}(\mathbf{k})\right\rangle\) is the spinor wavefunction, \(U_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{\prime}}\) is the scattering potential profile, and \(\epsilon_{F}\) is the Fermi energy. Here we choose \(U_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{\prime}}\) in such a manner that it can include both magnetic and non-magnetic point-like scattering centers. In general \(U_{\mathbf{k}\mathbf{k}^{\prime}}^{\lambda\lambda^{\prime}}=U^{\lambda \lambda^{\prime}}\sigma_{i}\) with \(i=0,1,2,3\), where \(U^{\lambda\lambda^{\prime}}\) distinguishes the interband (\(\lambda\neq\lambda^{\prime}\)) and intraband (\(\lambda=\lambda^{\prime}\)) scattering. Here, we work in the geometry represented in Fig. 3 (a), i.e., we fix the direction of the electric field along the \(z-\)direction and rotate the magnetic field in the \(xz-\)plane that makes an angle \(\gamma\) with respect to the \(x-\)axis. Further calculation details for the solution of the distribution function \(f_{\mathbf{k}}^{\lambda}\) are presented in the Appendix. Finally, the current is evaluated as \(\mathbf{j}=-e\sum\limits_{\chi}\sum\limits_{\mathbf{k}}\dot{\mathbf{r}}^{ \chi}f_{\mathbf{k}}^{\chi}\), and the conductance tensor \(\hat{\sigma}\) is given by \(j_{\alpha}=\sigma_{\alpha\beta}E_{\beta}\). Unless otherwise specified, we choose the following values for our calculations: \(m=10^{-32}\)kg, \(\vartheta=5\times 10^{5}\)ms\({}^{-1}\), \(v_{F}=10^{6}\)ms\({}^{-1}\), \(\epsilon_{\rm F}=50\)meV.
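For these parameter values, the field scale \(B_{c}\) bounding the quasiclassical regime follows directly from \(eB_{c}\hbar/2m\epsilon_{F}=1\); a quick arithmetic check (our own, in SI units):

```python
e, hbar = 1.602e-19, 1.0546e-34            # C, J*s
m, eps_F = 1e-32, 50e-3 * 1.602e-19        # kg, J (50 meV)

B_c = 2 * m * eps_F / (e * hbar)           # from e*B_c*hbar / (2*m*eps_F) = 1
print(f"B_c ~ {B_c:.1f} T")                # ~9.5 T; 'weak field' means B << B_c
```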
## III Results
### Longitudinal Magnetoconductance
We first discuss longitudinal magnetoconductance for SOC-NCM and compare it with a standard WSM. We examine the behavior of each impurity type (magnetic and non-magnetic) individually. In Fig. 4 (b) we plot the LMC in SOC-NCM as a function of the magnetic field for different values of the relative interband scattering strengths \(\alpha\) (the ratio of interband scattering strength to intraband scattering strength), for non-magnetic \(\sigma_{0}-\)impurities as well as \(\sigma_{z}-\)magnetic impurities. The LMC is always positive for any value of \(\alpha\). This is in striking contrast to WSMs where LMC changes sign (strong-sign-reversal [56]) when \(\alpha>\alpha_{c}\) (Fig. 4 (a)).
In WSMs, the sign of the OMM is different at the Fermi surfaces of the two nodes. This breaks the symmetry between them, and thus also between their chiral partners. This has been attributed to result in a strong-sign-reversal of LMC [19; 20; 56]. On the other hand, the OMM shifts the energy dispersion in SOC-NCMs at both Fermi surfaces by the same amount, as shown in Fig. 2 (b). The symmetry between the chiral partners of the two Fermi surfaces thus remains intact, and therefore no sign-reversal is observed. Since the magnetic \(\sigma_{z}\) impurity does not flip the chirality of the quasiparticles in either SOC-NCMs or WSMs, we observe the same effect on LMC as for a non-magnetic impurity (\(\sigma_{0}\)). The \(\sigma_{x}\) and \(\sigma_{y}\) impurities, on the other hand, flip the chirality of the quasiparticles. We obtain quadratic and positive LMC for both WSMs and SOC-NCMs for \(\sigma_{x}\) and \(\sigma_{y}\) impurities (not plotted explicitly).
Next, we examine LMC as a function of the parameter \(t_{z}\) and \(\alpha\) for \(\sigma_{0}\) and \(\sigma_{z}\) impurities in Fig. 5. We obtain strikingly different behavior for WSMs and SOC-NCMs. For WSMs, the zero-LMC contour \(\alpha_{c}(t_{z})\), which separates positive and negative LMC, is now a function of \(t_{z}\) (Fig. 5 (a)). This change of sign corresponds to strong-sign-reversal. For SOC-NCMs, the zero-LMC contour appears for nonzero values of \(t_{z}\) and this change of sign is associated with weak-sign-reversal (Fig. 5 (b)). Specifically, the \(t_{z}\)-term tilts the parabola along a particular direction but does not flip its orientation.
In Fig. 5 (c) and Fig. 5 (d) we examine the behavior of LMC as a function of \(\alpha\) and \(t_{z}\) for \(\sigma_{x}\) and \(\sigma_{y}-\)magnetic impurities. In WSMs, we observe weak-sign-reversal for large values of the tilt parameter \(t_{z}\), and no sign-reversal for smaller values of \(t_{z}\). Furthermore, increasing internode scattering restores positive LMC. This is in sharp contrast to the effect of \(\sigma_{0}\) and \(\sigma_{z}\) impurities in WSMs, where we observe strong-sign-reversal and decreasing internode scattering restoring positive LMC. In SOC-NCMs, for \(\sigma_{x}\) and \(\sigma_{y}\) impurities, the effect of weak-sign-reversal is more pronounced as shown in Fig. 5 (d). Again, larger interband scattering restores positive LMC. This feature is understood as follows. The \(\sigma_{x}\) (or \(\sigma_{y}\)) impurities flip the chirality of the fermions; further imposing interband scattering back-flips the reversed chirality, and thus interband \(\sigma_{x}\) scattering behaves like intraband scattering.
In Fig. 6 we compare the behavior of LMC of WSM with that of SOC-NCM in the presence of the tilt parameter \(t_{x}\). In WSM, just like the case when \(t_{z}\neq 0\), we find that the behavior in the presence of non-magnetic and \(\sigma_{z}-\)impurities is similar; we observe strong-sign-reversal.
Figure 4: LMC in Weyl semimetals and SOC-NCMs as a function of the magnetic field for different values of the relative interband (internode for WSMs) scattering strengths \(\alpha\). As we move in the direction of the arrow from the blue to the green curve, \(\alpha\) is increased from 0.35 to 1.25. We obtain the same behavior for a non-magnetic impurity profile i.e., \(U^{\lambda\lambda^{\prime}}_{\mathbf{k}\mathbf{k}^{\prime}}=U^{\lambda\lambda^ {\prime}}\sigma_{0}\), as well as magnetic impurity i.e, \(U^{\lambda\lambda^{\prime}}_{\mathbf{k}\mathbf{k}^{\prime}}=U^{\lambda\lambda^ {\prime}}\sigma_{z}\). (a) For WSM there is strong-sign-reversal above \(\alpha>\alpha_{c}\), (b) For SOC-NCM, there is no sign-reversal for any interband scattering strength.
Figure 5: LMC in Weyl semimetals and SOC-NCMs as a function of the relative interband (internode for WSMs) scattering strengths \(\alpha\) and the parameter \(t_{z}\). The dashed black line shows the contour separating positive and negative LMC regions.
Figure 6: LMC in WSMs and SOC-NCMs in the presence of \(t_{x}\) parameter. The dashed black line shows the contour separating positive and negative LMC regions.
The zero-LMC contour \(\alpha_{c}(t_{x})\) is qualitatively similar to the case of nonzero \(t_{z}\), but nevertheless exhibits quantitative differences. In SOC-NCM, weak-sign-reversal is observed. Unlike the \(t_{z}\neq 0\) case, we observe quantitative differences between non-magnetic and \(\sigma_{z}-\)impurities (Fig. 6 (b) and Fig. 6 (c)). Surprisingly, WSMs in the presence of \(\sigma_{x}\) or \(\sigma_{y}\) impurities exhibit neither weak-sign-reversal nor strong-sign-reversal in the presence of the \(t_{x}\) parameter (Fig. 6 (d)). This can again be understood from the fact that either of these impurity types changes the role of internode scattering and that tilting the Weyl cone in a direction orthogonal to the direction of the magnetic field does not add an overall linear component to the magnetoconductivity. For SOC-NCM, we observe weak-sign-reversal, with quantitative differences between \(\sigma_{x}\) and \(\sigma_{y}\) impurities.
### Planar Hall conductance
We next discuss the planar Hall conductance in SOC-NCM and compare the results with a standard WSM. In WSMs, the PHC can be expressed as [56]
\[\sigma_{xz}(B)=\sigma_{xz}^{(2)}(B-B_{0})^{2}+\sigma_{xz}^{(0)}, \tag{8}\]
where \(B_{0}\) is the vertex of the parabola, and \(\sigma_{xz}^{(2)}\) is the quadratic coefficient. The above form allows us to generalize PHC away from the origin, i.e., \(B_{0}\neq 0\). The angular dependence for WSM is \(\sin{(2\gamma)}\) for a point-like non-magnetic impurity profile. We find that this dependence is retained for magnetic impurities pointing in the \(z-\)direction as well (Fig. 7 (a)), and tilting the Weyl cones (\(t_{z}\neq 0\)) only has a quantitative effect. In contrast, SOC-NCMs have a qualitatively different dependence on \(t_{z}\) (Fig. 7 (b)). We observe that unlike in WSMs, the planar Hall conductance in SOC-NCMs exhibits weak-sign-reversal as a function of the parameter \(t_{z}\). For \(\sigma_{x}\) and \(\sigma_{y}\) impurities, PHC in both WSMs and SOC-NCMs exhibits weak-sign-reversal and similar qualitative behavior (Fig. 7 (c) and (d)). In both systems, interband (internode) scattering is found to have no significant qualitative effect on the planar Hall conductance.
For the nodes tilted along the \(x-\)direction, we observe qualitatively very different behavior. For \(\sigma_{0}\) point-like impurities, the \(\sin{2\gamma}\) trend is observed irrespective of the value of \(t_{x}\), as expected for Weyl cones that are oriented in the same direction. We find qualitatively similar behavior irrespective of the impurity type (Fig. 8 (a)). Note that if the Weyl cones were oriented opposite to each other, one instead finds a \(\sin{\gamma}\) behavior of PHC [16]. In the case of SOC-NCMs, one finds a transition from \(\sin{2\gamma}\) trend to \(\sin{\gamma}\) as the parameter \(t_{x}\) is increased from zero in either direction (Fig. 8). Like WSMs, we observe qualitatively similar behavior for both magnetic (any direction) and non-magnetic impurities.
### Anomalous contribution to the Hall Conductance
In WSMs, the nonvanishing anomalous Hall conductance (AHC) has been attributed to the presence of a finite vector \(\mathbf{k}_{0}\) that separates Weyl cones of opposite chiralities. The net AHC is given by \(\sigma_{xy}^{a}=e^{2}k_{0}/\hbar\). In the presence of time-reversal symmetry, multiple such vectors add up to zero, and AHC is zero. It is noteworthy that the intrinsic AHC contribution of one node, which is given by the integral of the Berry curvature of the filled band up to the Fermi surface, exactly cancels the contribution of the other node. The nonzero AHC in TR-broken WSMs is understood by considering a gapped 2D Chern insulator \(H(\mathbf{k}_{\perp},k_{z})\) that undergoes a topological
Figure 8: Planar Hall conductivity for WSM and SOC-NCM for different impurity types in the presence of parameter \(t_{x}\). Plots are appropriately normalized.
Figure 7: Planar Hall conductivity for WSM and SOC-NCM for different impurity types in the presence of parameter \(t_{z}\). Plots are appropriately normalized.
phase transition at the Weyl node [7].
In SOC-NCMs, we particularly focus on a single nodal point. One might expect the two Fermi surfaces that enclose the nodal point to cancel out each other's contributions to the anomalous Hall conductivity, but as we show next, the presence of an external magnetic field induces a finite anomalous contribution to the Hall conductivity. In the presence of an external magnetic field, Zeeman coupling will introduce an additional term in the Hamiltonian given by [58; 59; 60]
\[H_{z}=-g\mu_{B}\mathbf{\sigma}\cdot\mathbf{B}. \tag{9}\]
This causes an opposite energy shift in the two bands. Furthermore, the orbital magnetic moment produces an anomalous shift in the energy dispersion (\(\epsilon_{\mathbf{k}}^{\lambda}\rightarrow\epsilon_{\mathbf{k}}^{\lambda}-\mathbf{m}_{\mathbf{k}}^{\lambda}\cdot\mathbf{B}\)). Both of these effects, in concurrence, lead to a finite and measurable anomalous contribution to the Hall conductance, which at zero temperature is calculated as
\[\sigma_{xy}^{a}=\frac{e^{2}}{\hbar}\sum_{\lambda=\pm 1}\int\frac{d^{3} \mathbf{k}}{(2\pi)^{3}}\mathcal{D}_{\mathbf{k}}^{\lambda}\theta(\epsilon_{F} -\epsilon_{\mathbf{k}}^{\lambda})\Omega_{z}^{\lambda}(\mathbf{k}), \tag{10}\]
where \(\mathcal{D}_{\mathbf{k}}^{\lambda}=(1+e\mathbf{B}\cdot\mathbf{\Omega}_{\mathbf{k}}^{\lambda}/\hbar)\). In Fig. 9 we plot the geometrical contribution to the Hall conductivity for SOC-NCM as evaluated from Eq. 10. Interestingly, the behavior is non-monotonic with respect to the magnetic field, and for low enough magnetic field, the anomalous contribution is found to be independent of the strength of the spin-orbit coupling parameter. The non-monotonicity can be understood as follows. With increasing magnitude of the magnetic field, the net Berry curvature contribution increases, but it eventually decreases for larger magnetic fields, as the magnitude of the Berry curvature itself reduces. This can be easily tested in current and upcoming transport experiments in SOC-NCMs.
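Equation 10 can be evaluated directly on a momentum grid. Below is a crude numerical sketch (our own, not the authors' code): it uses the Berry curvature \(\mathbf{\Omega}^{\lambda}=-\lambda\mathbf{k}/2k^{3}\) quoted in the Appendix, absorbs the Zeeman term of Eq. 9 exactly into the spin-orbit vector for \(\mathbf{B}=B\hat{z}\), and, for brevity, omits both the orbital-moment shift \(-\mathbf{m}_{\mathbf{k}}^{\lambda}\cdot\mathbf{B}\) and the field-induced correction to \(\mathbf{\Omega}\); it therefore only illustrates the structure of the calculation.

```python
import numpy as np

e, hbar = 1.602e-19, 1.0546e-34
m_eff, theta, g, muB = 1e-32, 5e5, 2.0, 9.274e-24
eps_F = 50e-3 * e

def omega_z(k, lam):
    """z-component of Omega^lambda(k) = -lambda * k / (2 k^3)."""
    k2 = np.sum(k * k, axis=-1)
    return -lam * k[..., 2] / (2.0 * k2 ** 1.5)

def eps_band(k, lam, B):
    """Band energy with the Zeeman term absorbed into the SOC vector (B || z);
    the orbital-moment shift -m.B is omitted in this crude demo."""
    k2 = np.sum(k * k, axis=-1)
    hx, hy = hbar * theta * k[..., 0], hbar * theta * k[..., 1]
    hz = hbar * theta * k[..., 2] - g * muB * B
    return hbar**2 * k2 / (2 * m_eff) + lam * np.sqrt(hx**2 + hy**2 + hz**2)

def sigma_xy_anomalous(B, kmax=3e8, n=120):
    ax = np.linspace(-kmax, kmax, n)            # grid avoids k = 0 exactly
    k = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
    dk3 = (ax[1] - ax[0]) ** 3
    total = 0.0
    for lam in (-1, 1):
        Om = omega_z(k, lam)
        D = 1.0 + e * B * Om / hbar             # D_k^lambda for B = B z_hat
        total += np.sum(D * Om * (eps_band(k, lam, B) < eps_F)) * dk3
    return e**2 / hbar * total / (2 * np.pi) ** 3

print(sigma_xy_anomalous(B=1.0))                # -> 0 as B -> 0; finite otherwise
```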
## IV Conclusions and Discussions
Chiral anomaly is a Fermi surface property with similar characteristics in Weyl and spin-orbit coupled non-centrosymmetric metals. It manifests itself in the measurement of the longitudinal magnetoconductance and the planar Hall conductance. However, in striking contrast to WSMs, where the sign of the LMC is sensitive to the internode scattering strength, the sign of the LMC in SOC-NCMs is independent of the interband scattering strength and always remains positive. The reason is traced to the subtle difference in the orbital magnetic moment in WSMs and SOC-NCMs. The orbital magnetic moment in SOC-NCMs has equal magnitude and sign in both bands, but has opposite signs at the two nodes in WSMs. This difference also yields drastic differences in other transport properties, such as the planar Hall conductivity. We also examined all the properties in the presence of a tilt parameter (\(t_{z}\) and \(t_{x}\)) and for different impurity types (magnetic and non-magnetic). The behavior for \(\sigma_{0}\) and \(\sigma_{z}\) impurities was found to be qualitatively similar, as they do not flip the chirality. On the other hand, \(\sigma_{x}\) and \(\sigma_{y}\) impurities flip the chirality and behave qualitatively similarly to each other. Lastly, we predict that the combination of the anomalous orbital magnetic moment and the Zeeman field gives rise to a geometrical contribution to the Hall conductivity in SOC-NCMs that is non-monotonic in the magnetic field. Our study is highly pertinent in light of current and upcoming experiments in the field of spin-orbit coupled noncentrosymmetric and Weyl metals.
_Acknowledgements_ G.V.K. and A.A. acknowledge financial support from IIT Mandi HTRA. G.S. acknowledges support from IIT Mandi Seed Grant. We thank Shubhanshu Karoliya for his technical support.
## Appendix A Maxwell Boltzmann transport theory
Due to Berry phase effects, in the presence of electric and magnetic fields, the semiclassical dynamics of the Bloch electrons are modified and governed by the following equation [19; 28].
\[\dot{\mathbf{r}}^{\lambda} =\mathcal{D}^{\lambda}\left(\frac{e}{\hbar}\left(\mathbf{E} \times\mathbf{\Omega}^{\lambda}\right)+\frac{e}{\hbar}(\mathbf{v}^{\lambda}\cdot \mathbf{\Omega}^{\lambda})\mathbf{B}+\mathbf{v}_{\mathbf{k}}^{\lambda}\right)\] \[\dot{\mathbf{p}}^{\lambda} =-e\mathcal{D}^{\lambda}\left(\mathbf{E}+\mathbf{v}_{\mathbf{k} }^{\lambda}\times\mathbf{B}+\frac{e}{\hbar}\left(\mathbf{E}\cdot\mathbf{B} \right)\mathbf{\Omega}^{\lambda}\right). \tag{11}\]
where \(\mathbf{v}_{\mathbf{k}}^{\lambda}=\frac{1}{\hbar}\frac{\partial e^{\lambda}( \mathbf{k})}{\partial\mathbf{k}}\) is the band velocity, \(\mathbf{\Omega}^{\lambda}=-\lambda\mathbf{k}/2k^{3}\) is the Berry curvature, and \(\mathcal{D}^{\lambda}=(1+e\mathbf{B}\cdot\mathbf{\Omega}^{\lambda}/\hbar)^{-1}\) is the factor modifying the density of the states in the presence of the Berry curvature. The self-rotation of the Bloch wave packet also gives rise to an orbital magnetic moment, \(\mathbf{m}_{\mathbf{k}}^{\lambda}\)[61]. In the presence of a magnetic field, the orbital magnetic moment shifts the energy dispersion as \(\epsilon_{\mathbf{k}}^{\lambda}\rightarrow\epsilon_{\mathbf{k}}^{\lambda}- \mathbf{m}_{\mathbf{k}}^{\lambda}\cdot\mathbf{B}\). Using Eq. 11 and Eq. 6 and retaining terms only up to linear order in electric and magnetic
Figure 9: Geometrical contribution to the Hall conductivity showing a non-monotonic behavior. The legends indicate the strength of spin-orbit coupling parameter \(\vartheta\) in units of \(10^{5}\)ms\({}^{-1}\). We chose \(g=2\).
fields, the Boltzmann transport equation becomes
\[\left[\left(\frac{\partial f^{\lambda}_{0}}{\partial\epsilon^{ \lambda}_{\mathbf{k}}}\right)\mathbf{E}\cdot\left(\mathbf{v}^{\lambda}_{\mathbf{ k}}+\frac{e\mathbf{B}}{\hbar}(\mathbf{\Omega}^{\lambda}\cdot\mathbf{v}^{\lambda}_{ \mathbf{k}})\right)\right]\] \[=-\frac{1}{e\mathcal{D}^{\lambda}}\sum_{\lambda^{\prime}}\sum_{ \mathbf{k}^{\prime}}W^{\lambda\lambda^{\prime}}_{\mathbf{kk}^{\prime}}(g^{ \lambda}_{\mathbf{k}^{\prime}}-g^{\lambda}_{\mathbf{k}}) \tag{10}\]
We have fixed the direction of the electric field along the \(z\)-direction, and the magnetic field is rotated in the \(xz\)-plane (see Fig. 3). Therefore, \(\mathbf{E}=E(0,0,1)\) and \(\mathbf{B}=B(\cos\gamma,0,\sin\gamma)\). In this case, only the \(z\)-component of \(\mathbf{\Lambda}\) is relevant. Therefore Eq. 10 reduces to,
\[\mathcal{D}^{\lambda}(k)\left[v^{\lambda,z}_{\mathbf{k}}+\frac{eB \sin\gamma}{\hbar}(\mathbf{v}^{\lambda}_{\mathbf{k}}\cdot\mathbf{\Omega}^{ \lambda}_{k})\right]=\sum_{\lambda^{\prime}\mathbf{k}^{\prime}}\mathbf{W}^{ \lambda\lambda^{\prime}}_{\mathbf{kk}^{\prime}}(\Lambda^{\lambda^{\prime}}_{ \mathbf{k}^{\prime}}-\Lambda^{\lambda}_{\mathbf{k}}). \tag{11}\]
We define the valley scattering time (\(\tau^{\lambda}_{\mathbf{k}}\)) as follows
\[\frac{1}{\tau^{\lambda}_{\mathbf{k}}(\theta,\phi)}=\sum_{\lambda^{\prime}} \mathcal{V}\int\frac{d^{3}\mathbf{k}^{\prime}}{(2\pi)^{3}}(\mathcal{D}^{ \lambda^{\prime}}_{\mathbf{k}^{\prime}})^{-1}\mathbf{W}^{\lambda\lambda^{ \prime}}_{\mathbf{kk}^{\prime}} \tag{12}\]
\(\mathbf{W}^{\lambda\lambda^{\prime}}_{\mathbf{kk}^{\prime}}\) is defined in Eq. 7 and the corresponding overlap of the Bloch wave-functions is \(\mathcal{G}^{\lambda\lambda^{\prime}}_{i}(\theta,\phi)=[1+\lambda\lambda^{\prime}\xi_{i}(\cos\theta\cos\theta^{\prime}+\alpha_{i}\sin\theta\sin\theta^{\prime}\cos\phi\cos\phi^{\prime}+\beta_{i}\sin\theta\sin\theta^{\prime}\sin\phi\sin\phi^{\prime})]\) with \(i=0,1,2,3\) (see Tab. 1). Taking the Berry phase into account and the corresponding change in the density of states, \(\sum_{k}\longrightarrow\mathcal{V}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\mathcal{D}^{\lambda}(k)\), Eq. 11 becomes
\[h^{\lambda}_{\mu}(\theta,\phi)+\frac{\Lambda^{\lambda}_{\mu,i}( \theta,\phi)}{\tau^{\lambda}_{\mu,i}(\theta,\phi)}\\ =\sum_{\lambda^{\prime}}\mathcal{V}\int\frac{d^{3}\mathbf{k}^{ \prime}}{(2\pi)^{3}}\mathcal{D}^{\lambda^{\prime}}(k^{\prime})\mathbf{W}^{ \lambda\lambda^{\prime}}_{\mathbf{kk}^{\prime}}\Lambda^{\lambda^{\prime}}_{ \mu,i}(\theta^{\prime},\phi^{\prime}) \tag{13}\]
Here \(h^{\lambda}_{\mu}(\theta,\phi)=\mathcal{D}^{\lambda}_{\mathbf{k}}[v^{\lambda}_{z,\mathbf{k}}+eB\sin\gamma(\mathbf{\Omega}^{\lambda}_{k}\cdot\mathbf{v}^{\lambda}_{\mathbf{k}})]\). In the zero-temperature limit, for a constant Fermi energy surface, Eq. 12 and the RHS of Eq. 13 reduce to integrations over \(\theta^{\prime}\) and \(\phi^{\prime}\):
\[\frac{1}{\tau^{\lambda}_{\mu,i}(\theta,\phi)}=\mathcal{V}\sum_{\lambda^{\prime }}\Pi^{\lambda\lambda^{\prime}}\iint\frac{(k^{\prime})^{3}\sin\theta^{\prime}} {|\mathbf{v}^{\lambda^{\prime}}_{k^{\prime}}\cdot\mathbf{k}^{\prime\lambda^{ \prime}}|}d\theta^{\prime}d\phi^{\prime}\mathcal{G}^{\lambda\lambda^{\prime}} _{i}(D^{\lambda^{\prime}}_{\mathbf{k}^{\prime}})^{-1} \tag{14}\]
\[\mathcal{V}\sum_{\lambda^{\prime}}\Pi^{\lambda\lambda^{\prime}}\iint f^{\lambda^{\prime}}(\theta^{\prime},\phi^{\prime})\mathcal{G}^{\lambda\lambda^{\prime}}_{i}d\theta^{\prime}d\phi^{\prime}\times[d^{\lambda^{\prime}}-h^{\lambda^{\prime}}_{\mu}(\theta^{\prime},\phi^{\prime})\\ +a^{\lambda^{\prime}}\cos\theta^{\prime}+b^{\lambda^{\prime}}\sin\theta^{\prime}\cos\phi^{\prime}+c^{\lambda^{\prime}}\sin\theta^{\prime}\sin\phi^{\prime}] \tag{15}\]
where \(\Pi^{\lambda\lambda^{\prime}}=N|U^{\lambda\lambda^{\prime}}|^{2}/4\pi^{2}\hbar^{2}\) and \(f^{\lambda}(\theta,\phi)=\frac{k^{3}}{|\mathbf{v}^{\lambda}_{\mathbf{k}}\cdot\hat{\mathbf{k}}^{\lambda}|}\sin\theta\,(\mathcal{D}^{\lambda}_{\mathbf{k}})^{-1}\tau^{\lambda}_{\mu}(\theta,\phi)\). Using the ansatz \(\Lambda^{\lambda}_{\mathbf{k}}=[d^{\lambda}-h^{\lambda}_{\mathbf{k}}+a^{\lambda}\cos\theta+b^{\lambda}\sin\theta\cos\phi+c^{\lambda}\sin\theta\sin\phi]\tau^{\lambda}_{\mu}(\theta,\phi)\), the above equation is written in the following form:
\[d^{\lambda}+a^{\lambda}\cos\theta+b^{\lambda}\sin\theta\cos\phi+c^{\lambda}\sin\theta\sin\phi\\ =\sum_{\lambda^{\prime}}\mathcal{V}\Pi^{\lambda\lambda^{\prime}}\iint f^{\lambda^{\prime}}(\theta^{\prime},\phi^{\prime})d\theta^{\prime}d\phi^{\prime}\\ \times[d^{\lambda^{\prime}}-h^{\lambda^{\prime}}_{\mathbf{k}^{\prime}}+a^{\lambda^{\prime}}\cos\theta^{\prime}+b^{\lambda^{\prime}}\sin\theta^{\prime}\cos\phi^{\prime}+c^{\lambda^{\prime}}\sin\theta^{\prime}\sin\phi^{\prime}] \tag{16}\]
When the aforementioned equation is written out explicitly (for each value of \(i\)), it yields seven simultaneous equations that must be solved for eight variables. Particle number conservation provides the remaining restriction:
\[\sum_{\lambda}\sum_{\mathbf{k}}g^{\lambda}_{\mathbf{k}}=0 \tag{17}\]
For the eight unknowns (\(d^{\pm 1},a^{\pm 1},b^{\pm 1},c^{\pm 1}\)), Eqs. 16 and 17 are solved simultaneously with Eq. 14. Due to the intricate structure of the equations, all two-dimensional integrals with respect to \(\theta^{\prime}\) and \(\phi^{\prime}\), as well as the solution of the simultaneous equations, are carried out numerically. |
2304.03768 | SparseFormer: Sparse Visual Recognition via Limited Latent Tokens | Human visual recognition is a sparse process, where only a few salient visual
cues are attended to rather than traversing every detail uniformly. However,
most current vision networks follow a dense paradigm, processing every single
visual unit (e.g., pixel or patch) in a uniform manner. In this paper, we
challenge this dense paradigm and present a new method, coined SparseFormer, to
imitate human's sparse visual recognition in an end-to-end manner. SparseFormer
learns to represent images using a highly limited number of tokens (down to 49)
in the latent space with sparse feature sampling procedure instead of
processing dense units in the original pixel space. Therefore, SparseFormer
circumvents most of dense operations on the image space and has much lower
computational costs. Experiments on the ImageNet classification benchmark
dataset show that SparseFormer achieves performance on par with canonical or
well-established models while offering better accuracy-throughput tradeoff.
Moreover, the design of our network can be easily extended to the video
classification with promising performance at lower computational costs. We hope
that our work can provide an alternative way for visual modeling and inspire
further research on sparse neural architectures. The code will be publicly
available at https://github.com/showlab/sparseformer | Ziteng Gao, Zhan Tong, Limin Wang, Mike Zheng Shou | 2023-04-07T17:59:58Z | http://arxiv.org/abs/2304.03768v1 | # SparseFormer: Sparse Visual Recognition via Limited Latent Tokens
###### Abstract
Human visual recognition is a _sparse_ process, where only a few salient visual cues are attended to rather than traversing every detail uniformly. However, most current vision networks follow a _dense paradigm_, processing every single visual unit (_e.g._, pixel or patch) in a uniform manner. In this paper, we challenge this dense paradigm and present a new method, coined _SparseFormer_, to imitate human's _sparse_ visual recognition in an end-to-end manner. SparseFormer learns to represent images using a highly limited number of tokens (down to \(49\)) in the latent space with sparse feature sampling procedure instead of processing dense units in the original pixel space. Therefore, SparseFormer circumvents most of dense operations on the image space and has much lower computational costs. Experiments on the ImageNet classification benchmark dataset show that SparseFormer achieves performance on par with canonical or well-established models while offering better accuracy-throughput tradeoff. Moreover, the design of our network can be easily extended to the video classification with promising performance at lower computational costs. We hope that our work can provide an alternative way for visual modeling and inspire further research on sparse neural architectures. The code will be publicly available at [https://github.com/showlab/sparseformer](https://github.com/showlab/sparseformer).
+
Footnote †: \(\boxtimes\) Corresponding Author.
## 1 Introduction
Designing neural architectures for visual recognition has long been an appealing yet challenging topic in the computer vision community. Convolutional neural networks (CNNs) [28, 18, 20, 42, 33] use convolutional filters on every unit of images or feature maps to build features. Recently proposed vision transformers [11, 54, 32, 60] use attention operation on each patch or unit to dynamically interact with other units to mimic human attention mechanism. Both convolution-based and Transformer-based architectures need to traverse every unit, like pixel or patch on the grid map, to densely perform operations. This dense per-unit traversal originates from sliding windows [39], reflecting the assumption that foreground objects may appear uniformly with respect to spatial locations in an image.
However, as humans, we do not need to examine every detail in a scene to recognize it. Instead, we are fast enough to first roughly find discriminative regions of interest with several glimpses and then recognize textures, edges, and high-level semantics within these regions [10, 37, 23, 46]. This contrasts greatly with existing visual networks, where the convention is to exhaustively traverse every visual unit. The dense paradigm incurs soaring computational costs with larger input resolutions and does not directly provide details about what a vision model is looking at in an image.
In this paper, we propose a new vision architecture, coined **SparseFormer**, to explore _sparse visual recognition_ by explicitly imitating the perception mechanism of human eyes. Specifically, SparseFormer learns to represent an image by latent transformers along with a highly limited number of tokens (_e.g._, down to \(49\)) in the latent space _from the very beginning_. Each latent token is associated with a region of interest (RoI) descriptor, and the token RoI can be refined across stages. Given an image, SparseFormer first extracts image features by a lightweight early convolution module. A latent focusing transformer adjusts token RoIs to focus on foregrounds and sparsely extracts image features according to these token RoIs to build latent token embeddings in
Figure 1: **Dense** versus our proposed **sparse** paradigm for visual recognition. The dense paradigm requires traversing \(H\times W\) units to perform convolution or attention, while our proposed sparse paradigm performs transformers over only \(N\) latent tokens where \(N\ll H\times W\).
an iterative manner. SparseFormer then feeds tokens with these region features into a larger and deeper network, a standard transformer encoder in the latent space, to enable precise recognition. The transformer operations are only applied over the limited tokens in the latent space. Since the number of latent tokens is _highly limited_ (_e.g._, down to \(49\)) and the feature sampling procedure is _sparse_ (_i.e._, based on direct bilinear interpolation), it is reasonable to call our architecture a _sparse_ approach for visual recognition. The overall computational cost of SparseFormer is _almost irrelevant_ to the input resolution except for the early convolution part, which is lightweight in our design. Moreover, SparseFormer can be supervised _solely with classification signals_ in an _end-to-end_ manner without requiring separate prior training with localizing signals.
As an initial step toward sparse visual recognition, the aim of SparseFormer is to explore an alternative paradigm for vision modeling, rather than to chase state-of-the-art results with bells and whistles. Nonetheless, SparseFormer still delivers very promising results on the challenging ImageNet classification benchmark, on par with dense counterparts but at lower computational costs. Since most operators of SparseFormer are performed on tokens in the latent space rather than in the dense image space, and the number of tokens is limited, the memory footprint is lower and throughput is higher compared to dense architectures, yielding a better accuracy-throughput trade-off, especially in the low-compute region. For instance, with ImageNet-1K training, the tiny variant of SparseFormer with \(2.0\)G FLOPs gives \(81.0\) top-1 accuracy at a throughput of \(1270\) images/s, while Swin-T with \(4.5\)G FLOPs achieves \(81.3\) at a lower throughput of \(726\) images/s. Visualizations of SparseFormer validate its ability to differentiate foregrounds from backgrounds in an end-to-end manner using only classification signals. Furthermore, we also investigate several scaling-up strategies of SparseFormer for better performance.
The simple design of SparseFormer can also be easily extended to video classification, which is more data-intensive and computationally expensive for dense vision models but is suitable for the SparseFormer architecture. Experimental results on the challenging video classification Kinetics-400 [7] benchmark show that our extension of SparseFormer in video classification yields promising performance with lower computation than dense architectures. This highlights the efficiency and effectiveness of the proposed sparse vision architecture with denser input data.
## 2 Related Work
**Convolutional neural networks.** AlexNet [28] pioneered convolutional neural networks for general visual recognition by using stacked local convolutions on dense pixels to build semantic features. Since then, CNNs have dominated visual understanding with improvements in several aspects: pathway connections [18, 20], convolutional filter configurations [42, 53, 33], multiple branches [52, 51], and normalizations [22, 63]. To cover discriminative details, all convolutional networks require dense convolution applied over pixels. Although pooling layers are used to downsize feature maps spatially, exhaustive convolution over dense units still slows down networks, particularly for large inputs. The case becomes more severe when dealing with data-intensive video inputs, as the dense paradigm for videos introduces a significantly heavier computational burden [55, 7]. Even with factorized convolutions [56, 41, 67] or sampling strategies [59, 14], the computational burden remains significant.
**(Efficient) vision transformers.** Recently proposed Vision Transformer (ViT) [11] makes use of a fully Transformer-based architecture originating in NLP [58] to enable visual recognition with a larger model capacity. ViT first splits an input image into patches of a moderate number (_e.g._, 196) and feeds these patches as tokens into a standard Transformer for recognition. Attention operators inside ViT learn dynamic and long-term relationships between patches with a minimum of inductive bias. This makes ViT a model with greater capacity and friendly to datasets of massive samples like JFT [49]. Since ViT, attention-based methods have deeply reformed vision understanding, and many works attempted to find better attention operators for computer vision [32, 69, 60, 12, 17, 64, 57]. Among these, Swin Transformer [32] introduces shifting windows for vision transformers and truncates the attention range inside these windows for better efficiency. Reducing the number of tokens in the attention operator in the Pyramid or Multi-Scale Vision Transformers (PVT or MViT) [60, 12] also maintains a better trade-off between performance and computation.
However, despite their ability to stress semantic visual areas dynamically, attention in vision transformers still requires dense per-unit modeling and also introduces computation up to the quadratic level with respect to input resolution. Zhang _et al._[70] use the sparse attention variant first proposed in NLP [3, 27] to speed up vision transformers for larger inputs. Another attempts for faster vision transformer inference is to keep only discriminative tokens and reduce the number of tokens at the inference stage via various strategies and patch score criteria [30, 13, 5, 44, 68].
In contrast to sparsifying attention operators or reducing the number of visual tokens in pre-trained models at the inference stage, SparseFormer learns a highly limited number of latent tokens to represent an image and performs transformers solely on these tokens in the latent space efficiently. Since the number of latent tokens is highly constrained and irrelevant to the input resolution, the computational cost of SparseFormer is low, controllable, and practical.
**Perceiver architectures and detection transformers.** Perceiver architectures [25, 24] aim to unify different modal
inputs using latent transformers. For image inputs, one or more cross attentions are deployed to transform image grid-form features into latent tokens, and then several Transformer stages are applied to these latent tokens. Our proposed SparseFormer is greatly inspired by this paradigm, where sparse feature sampling can be regarded as "cross attention" between the image space and the latent space. However, SparseFormer differs from Perceiver architectures in several aspects: First, in SparseFormer, the "cross attention" or sparse feature sampling is performed sparsely and based on direct bilinear interpolation. SparseFormer does not need to traverse every pixel in an image to build a latent token, while Perceiver architectures require an exhaustive traversal. Second, the number of latent tokens in Perceiver is large (_i.e_., 512). In contrast, the number of tokens in SparseFormer is much smaller, down to \(49\), only about \(0.1\times\) that of Perceiver. As a result, the proposed SparseFormer enjoys better efficiency than Perceiver.
Recently proposed detection transformers (DETR models) [6, 72, 50] also use cross attention to directly represent object detections by latent tokens from encoded image features. SparseFormer shares similar ideas with DETR to represent an image by latent tokens but in a more efficient manner. SparseFormer does not require a heavy CNN backbone or a Transformer encoder to encode image features first. The "cross attention" layer in SparseFormer is based on the image bilinear interpolation and is also efficient, inspired by the adaptive feature sampling in AdaMixer [15] as a detection transformer. Furthermore, SparseFormer does not require localization signals in object detection, and it can be optimized to spatially focus tokens in an end-to-end manner solely with classification signals.
**Glimpse models.** In the nascent days of neural vision understanding research, glimpse models [38, 43] are proposed to imitate human visual perception by capturing several glimpses over an image for recognition. These glimpse models only involve computation over limited parts of an image, and their computation is fixed. While glimpse models are efficient, they are commonly non-differentiable regarding where to look and necessitate workarounds like the expectation-maximization algorithm [8] or reinforcement learning to be optimized. Furthermore, the performance of glimpse models has only been experimented with promising results on rather small datasets, like MNIST [29], and the potential to extend to larger datasets has not yet been proven.
Our presented SparseFormer architecture can be seen as an extension of glimpse models, where multiple glimpses (latent tokens) are deployed in each stage. Since bilinear interpolation is adopted, the entire pipeline can be optimized in an end-to-end manner. Furthermore, the proposed method has been shown to be empirically effective on large benchmarks.
## 3 SparseFormer
In this section, we describe the SparseFormer architecture in detail: we first introduce SparseFormer as a method for the image classification task and then we discuss how to extend it to video classification with minimal additional efforts. As SparseFormer performs most of the operations over tokens in the latent space, we begin with the definition of latent tokens. Then we discuss how to build latent tokens and how to perform recognition based on them.
**Latent tokens.** Different from dense models, which involve per-pixel or per-patch modeling in the original image space, SparseFormer recognizes an image in a _sparse_ way by learning a limited number of tokens in the latent space and applying transformers to them. Similar to tokens or queries in the Transformer decoder [58], a token in SparseFormer is an embedding \(\mathbf{t}\in\mathbb{R}^{d}\) in the latent space. To explicitly model the spatial focusing area, we associate each latent token \(\mathbf{t}\) in SparseFormer with an RoI descriptor \(\mathbf{b}=(x,y,w,h)\), where \(x\), \(y\), \(w\), and \(h\) are the center coordinates, width, and height, respectively. We choose to normalize the components of \(\mathbf{b}\) by the image size, resulting in a range of \([0,1]\). Thus, a latent token consists of an embedding \(\mathbf{t}\) and an RoI descriptor \(\mathbf{b}\) as its geometric property. The entire set of latent tokens in SparseFormer can be described as follows:
\[\mathbf{T}=\{(\mathbf{t}_{1},\mathbf{b}_{1}),(\mathbf{t}_{2},\mathbf{b}_{2}), \cdots,(\mathbf{t}_{N},\mathbf{b}_{N})\}, \tag{1}\]
where \(N\) is the number of latent tokens, and both the embedding \(\mathbf{t}\) and the RoI \(\mathbf{b}\) can be refined throughout the network. The initial \(\mathbf{t}\) and \(\mathbf{b}\) are learnable parameters of SparseFormer, and their initialization will be described in the experiment section.
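In code, the token set of Eq. 1 amounts to two learnable tensors. A minimal PyTorch sketch follows (our own naming; the whole-image RoI initialization is our assumption, since the paper defers initialization details to the experiment section):

```python
import torch
import torch.nn as nn

class LatentTokens(nn.Module):
    """N latent tokens: embeddings t in R^d plus RoIs (x, y, w, h) in [0, 1]."""
    def __init__(self, num_tokens=49, dim=256):
        super().__init__()
        self.embed = nn.Parameter(0.02 * torch.randn(num_tokens, dim))
        # Assumed initialization: every RoI starts as the whole image.
        self.roi = nn.Parameter(torch.tensor([0.5, 0.5, 1.0, 1.0])
                                .repeat(num_tokens, 1))
```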
### Building Latent Tokens
SparseFormer includes two successive parts for building tokens in the latent space, the focusing Transformer and the cortex Transformer. The focusing Transformer addresses the challenge of sparsely extracting image features, decoding them into latent tokens, and adjusting token RoIs. The subsequent cortex Transformer accepts these latent token embeddings as inputs and models them with a standard Transformer encoder.
**Sparse feature sampling.** In sampling stages, a latent token in SparseFormer generates \(P\) sampling points in the image space according to its geometric property, _i.e_., the token RoI. These sampling points are explicitly described as image coordinates (_i.e_., \(x\) and \(y\)) in an image and can be directly used for feature sampling via bilinear interpolation. Since bilinear interpolation takes \(\mathcal{O}(1)\) time for every sampling point, we term this procedure as _sparse feature sampling_. In contrast, cross attention requires \(\mathcal{O}(n)\) time to traverse the input, where \(n\) is the input size. To produce
such sampling points, SparseFormer first produces relative offsets for each latent token RoI and then derives absolute sampling locations based on these RoIs. SparseFormer uses a learnable linear layer to generate a set of sampling offsets for a token conditionally on its embedding \(\mathbf{t}\):
\[\left\{\left(\triangle x_{i},\triangle y_{i}\right)\right\}_{P}=\mathrm{Linear}( \mathbf{t}), \tag{2}\]
where the \(i\)-th sampling offset \(\left(\triangle x_{i},\triangle y_{i}\right)\) indicates the relative position of a sampling point \(i\) with respect to a token RoI. The linear layer contains layer normalization [2] over \(\mathbf{t}\), which is omitted here and below for more clarity. These offsets are then translated to absolute sampling locations \((\tilde{x},\tilde{y})\) in an image with the RoI \(\mathbf{b}=(x,y,w,h)\):
\[\begin{cases}\tilde{x}_{i}=x+0.5\cdot\triangle x_{i}\cdot w,\\ \tilde{y}_{i}=y+0.5\cdot\triangle y_{i}\cdot h,\end{cases} \tag{3}\]
for every \(i\). To stabilize training, we perform standard normalization over \(\left\{\left(\triangle x_{i},\triangle y_{i}\right)\right\}_{P}\) by three standard deviations to keep most of the sampling points inside the RoI.
SparseFormer can directly and efficiently extract image features through bilinear interpolation based on these explicit sampling points, without the need for dense traversal over the grid. Given an input image feature \(\mathbf{I}\in\mathbb{R}^{C\times H\times W}\) with \(C\) channels, the shape of the sampled feature matrix \(\mathbf{x}\) for a token is \(\mathbb{R}^{P\times C}\). The computational complexity of this sparse feature sampling procedure is \(\mathcal{O}(N\cdot P\cdot C)\), with the number of latent tokens \(N\) given and independent of the input image size \(H\times W\).
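A minimal PyTorch sketch of this sampling step is given below (our own illustration of Eqs. 2-3, not the released implementation; `F.grid_sample` realizes the bilinear interpolation, and the three-standard-deviation normalization follows our reading of the text):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseSampler(nn.Module):
    """Each latent token emits P offsets relative to its RoI (Eq. 2), maps them
    to absolute coordinates (Eq. 3), and reads image features bilinearly."""
    def __init__(self, dim=256, num_points=36):
        super().__init__()
        self.num_points = num_points
        self.to_offsets = nn.Linear(dim, num_points * 2)  # LN over t omitted here

    def forward(self, tokens, rois, feat):
        # tokens: (B, N, d); rois: (B, N, 4) as (x, y, w, h) in [0, 1]
        # feat:   (B, C, H, W) image features from the early convolution
        B, N, _ = tokens.shape
        off = self.to_offsets(tokens).view(B, N, self.num_points, 2)
        # Standard normalization by three standard deviations (our reading).
        off = (off - off.mean(2, keepdim=True)) / (3 * off.std(2, keepdim=True) + 1e-6)
        xy, wh = rois[..., :2], rois[..., 2:]
        pts = xy.unsqueeze(2) + 0.5 * off * wh.unsqueeze(2)     # Eq. (3), in [0, 1]
        grid = pts.view(B, N * self.num_points, 1, 2) * 2 - 1   # grid_sample coords
        sampled = F.grid_sample(feat, grid, align_corners=False)  # (B, C, N*P, 1)
        return sampled.view(B, -1, N, self.num_points).permute(0, 2, 3, 1)  # (B, N, P, C)
```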
**Adaptive feature decoding.** Once image features \(\mathbf{x}\) have been sampled for a token, the key question for our sparse architecture is how to effectively decode them and build a token in the latent space. While a linear layer of \(\mathbb{R}^{P\times C}\rightarrow\mathbb{R}^{d}\) can be a simple method to embed features into the latent token, we have found it to be rather ineffective. Inspired by [15], we use the adaptive mixing layer to decode sampled features to leverage spatial and channel semantics in an _adaptive_ way. Specifically, we use a lightweight network \(\mathcal{F}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{C\times C+P\times P}\) conditionally on the token embedding \(\mathbf{t}\) to produce _adaptive_ channel decoding weight \(M_{c}\in\mathbb{R}^{C\times C}\) and spatial decoding weight \(M_{s}\in\mathbb{R}^{P\times P}\). We then decode sampled features \(\mathbf{x}\) in the channel axis and spatial axis with adaptive weights in the following order:
\[[M_{c}|M_{s}] =\mathcal{F}(\mathbf{t}),M_{c}\in\mathbb{R}^{C\times C},M_{s}\in \mathbb{R}^{P\times P}, \tag{4}\] \[\mathbf{x}^{(1)} =\mathrm{GELU}(\mathbf{x}^{(0)}M_{c})\in\mathbb{R}^{P\times C},\] (5) \[\mathbf{x}^{(2)} =\mathrm{GELU}(M_{s}\mathbf{x}^{(1)})\in\mathbb{R}^{P\times C}. \tag{6}\]
Here, \(\mathbf{x}^{(0)}\) represents the sampled feature \(\mathbf{x}\), while \(\mathbf{x}^{(2)}\) is the output of the adaptive feature decoding process. GELU refers to the Gaussian Error Linear Unit activation function [19], and learnable biases before GELU are omitted for clarity. We choose two successive linear layers without activation functions as our \(\mathcal{F}\), where the hidden dimension is \(d/4\), for efficiency. The final output, \(\mathbf{x}^{(2)}\), is passed through a learnable linear layer to dimension \(d\) and then added back to the token embedding to update it. Adaptive feature decoding can be viewed as a token-wise spatial-channel factorization of dynamic convolution [26], adding convolutional inductive bias for SparseFormer.
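Concretely, Eqs. 4-6 can be written as the following PyTorch sketch (our own naming; the learnable per-token biases before GELU are dropped here, mirroring the simplification in the text):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveDecoding(nn.Module):
    """Token-conditional channel and spatial mixing of sampled (P, C) features."""
    def __init__(self, dim=256, feat_dim=96, num_points=36):
        super().__init__()
        self.C, self.P = feat_dim, num_points
        # Lightweight F: two linear layers with hidden width d/4, no activation.
        self.weight_gen = nn.Sequential(
            nn.Linear(dim, dim // 4),
            nn.Linear(dim // 4, feat_dim**2 + num_points**2))
        self.out = nn.Linear(num_points * feat_dim, dim)

    def forward(self, t, x):
        # t: (B, N, d) token embeddings; x: (B, N, P, C) sampled features
        B, N = t.shape[:2]
        w = self.weight_gen(t)
        Mc = w[..., :self.C**2].reshape(B, N, self.C, self.C)   # channel mixing
        Ms = w[..., self.C**2:].reshape(B, N, self.P, self.P)   # spatial mixing
        x = F.gelu(x @ Mc)                   # Eq. (5)
        x = F.gelu(Ms @ x)                   # Eq. (6)
        return t + self.out(x.flatten(2))    # project to d, update the token
```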
The adaptive feature decoding method enables SparseFormer to reason conditionally on the embedding \(\mathbf{t}\) about
Figure 2: **The overall architecture** of the proposed SparseFormer. The MHSAs and FFNs are multi-head self-attention over latent tokens and feed-forward networks applied on these tokens, respectively. Note that different from the illustration aimed for details, the cortex Transformer is actually much deeper and wider than the focusing Transformer. We omit layer normalization here for more clarity. All of these operations are performed in the latent token space, except for image feature sampling in the original image space.
_what a token expects to see_. Since SparseFormer performs feature sampling and decoding several times, the token embedding \(\mathbf{t}\) contains information about _what a token has seen before_. Thus, a token can infer _where to look_ and focus on discriminating foregrounds with the RoI adjusting mechanism, which is described below.
**Adjusting RoI.** A first, quick glance at an image with human eyes is usually insufficient to fully understand its contents. Our eyes need several adjustments to bring the foreground into focus before we can recognize objects. This is also the case for SparseFormer. The RoI of a latent token in SparseFormer can be iteratively adjusted together with the update of its corresponding token embedding. In a stage, a token RoI \(\mathbf{b}=(x,y,w,h)\) is adjusted to \((x^{\prime},y^{\prime},w^{\prime},h^{\prime})\) in the following way:
\[x^{\prime}=x+t_{x}\cdot w,\quad\ y^{\prime}=y+t_{y}\cdot h, \tag{7}\] \[w^{\prime}=w\cdot\exp(t_{w}),\ h^{\prime}=h\cdot\exp(t_{h}), \tag{8}\]
where \((t_{x},t_{y},t_{w},t_{h})\) are normalized adjustment deltas for an RoI, a parameterization adopted from object detectors [45]. The adjustment deltas are produced by a linear layer, which takes the token embedding as input in a tokenwise manner:
\[\{t_{x},t_{y},t_{w},t_{h}\}=\mathrm{Linear}(\mathbf{t}). \tag{9}\]
With the RoI adjusting and sufficient training, SparseFormer can focus on foregrounds after several stages. It is worth noting that, unlike object detectors, we do not supervise SparseFormer with any localizing signals. The RoI adjustment is optimized end-to-end by back-propagating gradients from the sampling points through bilinear interpolation. Though bilinear interpolation may incur noisy gradients due to the local and limited sampling points, our experiments show that ensembling the gradients of these points still provides a meaningful optimization direction for RoI adjustment.
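A minimal sketch of the RoI adjustment of Eqs. (7)–(9), assuming RoIs stored as normalized \((x,y,w,h)\) tensors; `delta_head` stands in for the linear layer mentioned in the text.

```python
import torch
import torch.nn as nn

def adjust_roi(roi: torch.Tensor, t: torch.Tensor,
               delta_head: nn.Linear) -> torch.Tensor:
    """Eqs. (7)-(9): token-wise RoI adjustment. roi: (N, 4) as (x, y, w, h)."""
    tx, ty, tw, th = delta_head(t).unbind(dim=-1)  # delta_head = nn.Linear(d, 4)
    x, y, w, h = roi.unbind(dim=-1)
    x = x + tx * w                  # shift the center by a fraction of the size
    y = y + ty * h
    w = w * torch.exp(tw)           # exponential scaling keeps w and h positive
    h = h * torch.exp(th)
    return torch.stack([x, y, w, h], dim=-1)
```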
**Focusing Transformer.** In practice, we perform token RoI adjustment first, generate sparse sampling points, and then use bilinear interpolation to obtain sampled features. We then apply adaptive decoding to these sampled features and add the decoded output back to the token embedding vector. Taken together, the RoI adjusting, sparse feature sampling, and decoding can be regarded as an _adaptive_ and _sparse_ alternative to "cross attention". Together with self-attention over latent tokens and a feed-forward network (FFN), we can stack this sampling and decoding as a repeating Transformer stage in the latent space to perform iterative refinement of the latent token embeddings \(\{\mathbf{t}_{i}\}_{N}\) and RoIs \(\{\mathbf{b}_{i}\}_{N}\). We name this repeating Transformer _the focusing Transformer_. The focusing Transformer in our design is lightweight, and parameters are shared between repeating stages for efficiently focusing on foregrounds. These latent tokens are then fed into a standard large Transformer encoder, termed _the cortex Transformer_.
**Cortex Transformer.** The cortex Transformer follows a standard Transformer encoder architecture, except that its first stage also uses the "cross attention" of the focusing Transformer. As the name implies, the cortex Transformer plays a role analogous to the cerebral cortex in the brain, processing visual signals passed on by the focusing Transformer much as the cortex processes signals from the eyes. Parameters of different cortex Transformer stages are independent.
### Overall SparseFormer Architecture
The overall architecture of SparseFormer is depicted in Figure 2. The input image features are the same and shared across all sampling stages. The final classification is done by averaging the embeddings \(\{\mathbf{t}_{i}\}_{N}\) over latent tokens and applying a linear classifier.
**Early convolution.** As discussed, gradients with respect to sampling points through bilinear interpolation can be very noisy. The situation is even worse for raw RGB inputs, since the nearest four RGB values on the grid are usually too noisy to estimate local gradients. Thus, it is necessary to introduce an early convolution to obtain more interpolable feature maps and achieve better training stability and performance, as with the early convolutions used for vision transformers [66]. In our experiments, the early convolution is designed to be as lightweight as possible.
**Sparsity of the architecture.** It is worth noting that the number of latent tokens in SparseFormer is limited and independent of the input resolution. The sparse feature sampling procedure extracts features from image feature maps in a non-traversing manner. Therefore, the computational complexity and memory footprints of transformers in the latent space are _independent_ of the input size. So it is reasonable to call our method a sparse visual architecture. Moreover, the SparseFormer latent space capacity also demonstrates sparsity. The largest capacity of the latent space, \(N\cdot d_{c}\), is \(81\cdot 768\) according to Table 1, still smaller than the input image size \(3\cdot 224^{2}\). This distinguishes SparseFormer from Perceivers [25, 24], whose latent capacity is typically \(512\cdot 1024\), exceeding the input image size.
It should also be noted that our method is quite different from post-training token sparsifying techniques [30, 13, 5, 44, 68]: SparseFormer, as a new visual architecture, learns to represent an image with sparse tokens from scratch. In contrast, token sparsifying techniques are usually applied to pre-trained vision transformers. In fact, token sparsification can also be further applied to the latent tokens in SparseFormer, but this is beyond the scope of this paper.
**Extension to video classification.** Video classification is similar to image classification but requires more intensive computation due to multiple frames. Fortunately, the SparseFormer
architecture can also be extended to video classification with minor additional effort. Given a video feature \(\mathbf{V}\in\mathbb{R}^{C\times T\times H\times W}\), the only new problem compared to \(\mathbf{I}\in\mathbb{R}^{C\times H\times W}\) is the extra temporal axis. We extend the token RoI with \((t,l)\), where \(t\) is the center temporal coordinate and \(l\) is the temporal length of the tube, making the RoI a tube. In the sparse feature sampling procedure, an extra linear layer produces temporal offsets, and we transform them into 3D sampling points \(\{(\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i})\}_{P}\). Bilinear interpolation is replaced by trilinear interpolation for the 4D input data. Likewise, the RoI adjusting is extended to the temporal dimension. Other operators, such as early convolution, adaptive feature decoding, self-attention, and the FFN, remain untouched. For larger capacity on videos, we inflate tokens along the temporal axis by \(n_{t}\) times and initialize their \((t,l)\) to cover all frames, where \(n_{t}\) is much smaller than the frame count \(T\) (_e.g._, \(8\) versus \(32\)).
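The sketch below illustrates the video extension: turning per-token offsets into 3D sampling points inside a spatio-temporal tube RoI. The spatial parameterization (offsets scaled by RoI width/height) mirrors the image case described earlier in the paper and is an assumption here, as are all names.

```python
import torch

def tube_sampling_points(roi_xywh, roi_tl, offsets_xy, offsets_t):
    """roi_xywh: (N, 4) spatial RoIs; roi_tl: (N, 2) temporal (center t, length l);
    offsets_xy: (N, P, 2) and offsets_t: (N, P) come from linear layers."""
    x, y, w, h = roi_xywh.unbind(-1)
    tc, tl = roi_tl.unbind(-1)
    px = x[:, None] + offsets_xy[..., 0] * w[:, None]   # scale by tube width
    py = y[:, None] + offsets_xy[..., 1] * h[:, None]   # scale by tube height
    pz = tc[:, None] + offsets_t * tl[:, None]          # scale by tube length
    return torch.stack([px, py, pz], dim=-1)  # (N, P, 3), for trilinear interp.
```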
## 4 Experiments
We conduct experiments on the canonical ImageNet classification [9] and Kinetics [7] video classification benchmarks to investigate our proposed SparseFormer. We also report our preliminary trials of SparseFormer on downstream tasks, semantic segmentation and object detection, in the supplementary.
**Model configurations.** We use ResNet-like early convolutional layers (a \(7\times 7\) stride-\(2\) convolution, a ReLU, and a \(3\times 3\) stride-\(2\) max pooling) to extract initial \(96\)-d image features. We design several variants of SparseFormer with computational costs from 2G to \(\sim\)8G FLOPs, as shown in Table 1. We mainly scale up the number of latent tokens \(N\), the dimensions of the focusing and cortex Transformers \(d_{f}\) and \(d_{c}\), and the number of cortex Transformer stages \(L_{c}\). For all variants, we keep the number of focusing Transformer repeats, \(L_{f}\), at \(4\). We bridge the focusing Transformer and the cortex Transformer with a linear layer that increases the token dimension. Although the token number is scaled up, it is still smaller than that of conventional vision transformers. The centers of the latent token RoIs and their sampling points are initialized to a grid. The width and height of these RoIs are initialized to half the image. For the sake of unity in building blocks, the first cortex Transformer stage also performs RoI adjusting, feature sampling, and decoding. We do not inject any positional information into latent tokens. For more details, please refer to the supplementary.
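For reference, the early-convolution stem and the Table 1 variants can be written down directly; the padding values below are assumptions, since the text specifies only kernel sizes and strides.

```python
import torch.nn as nn

# ResNet-like early convolution: 3 x 224 x 224 -> 96 x 56 x 56 image features
early_conv = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=7, stride=2, padding=3),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

# SparseFormer variants from Table 1 (L_f = 4 focusing repeats for all)
variants = {
    "tiny":  dict(N=49, d_f=256, d_c=512, L_c=8,  L_f=4),   # 2.0G FLOPs
    "small": dict(N=64, d_f=320, d_c=640, L_c=8,  L_f=4),   # 3.8G FLOPs
    "base":  dict(N=81, d_f=384, d_c=768, L_c=10, L_f=4),   # 7.8G FLOPs
}
```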
**Training recipes.** For ImageNet-1K classification [9], we train the proposed SparseFormer according to the recipe in [32], which includes a training budget of 300 epochs, the AdamW optimizer [36] with an initial learning rate of 0.001, a weight decay of 0.05, and a set of augmentation and regularization strategies. The input resolution is fixed to \(224^{2}\). We add EMA [40] to stabilize the training. The stochastic depth (_i.e._, drop path) [21] rate is set to 0.2, 0.3, and 0.4 for SparseFormer-T, -S, and -B, respectively.
To pre-train on ImageNet-21K [9], we use a subset suggested by [47] due to the availability of the winter 2021 release. We follow the training recipe in [32] and use a similar recipe to 1K classification but with 60 epochs, an initial learning rate \(2\times 10^{-3}\), weight decay 0.05, and drop path 0.1. After pre-training, we fine-tune models with a recipe of 30 epochs, an initial learning rate \(2\times 10^{-4}\) with cosine decay and weight decay \(10^{-8}\).
We adopt the ImageNet pre-trained models to initialize parameters for training on Kinetics-400. Since our architecture tolerates large input sizes, the number of input frames is set to \(T=32\). Formally, 32 frames are sampled from 128 consecutive frames with a stride of 4. We mildly inflate the initial latent tokens by \(n_{t}=8\) times in the temporal axis to cover all input frames. We use AdamW [36] for optimization with 32 GPUs, following the training protocol in [12]. We train the model for 50 epochs with 5 linear warm-up epochs. The mini-batch size is 8 video samples per GPU. The learning rate is set to \(5\times 10^{-4}\), and we adopt a cosine learning rate schedule [35]. For evaluation, we apply a 12-view testing scheme (3 spatial crops and 4 temporal clips) as in previous work [34].
\begin{table}
\begin{tabular}{c|c c c c|c c} variant & \#tokens \(N\) & foc. dim \(d_{f}\) & cortex dim \(d_{c}\) & cortex stages \(L_{c}\) & FLOPs & \#params \\ \hline tiny (T) & 49 & 256 & 512 & 8 & 2.0G & 32M \\ small (S) & 64 & 320 & 640 & 8 & 3.8G & 48M \\ base (B) & 81 & 384 & 768 & 10 & 7.8G & 81M \\ \end{tabular}
\end{table}
Table 1: **Configurations** of SparseFormer variants. FLOPs is calculated with the input image size \(224^{2}\).
\begin{table}
\begin{tabular}{l|c c c c} & top-1 & FLOPs & \#params & throughput (img/s) \\ \hline ResNet-50 [62] & 80.4 & 4.1G & 26M & 1179 \\ ResNet-101 [18] & 81.5 & 7.9G & 45M & 691 \\ RegNet-S6 [42] & 81.7 & 8.0G & 39M & 592 \\ EfficientNet-B3\({}^{\circ}\) [53] & 81.6 & 1.8G & 12M & 745 \\ \hline DeiT-S [54] & 79.8 & 4.6G & 22M & 983 \\ DeiT-B [54] & 81.8 & 17.5G & 86M & 306 \\ Swin-T [32] & 81.3 & 4.5G & 29M & 726 \\ Swin-S [32] & 83.0 & 8.7G & 50M & 437 \\ Perceiver [25] & 78.0 & 707G & 45M & 17 \\ Perceiver IO [24] & 82.1 & 369G & 49M & 30 \\ \hline SparseFormer-T & 81.0 & 2.0G & 32M & 1270 \\ SparseFormer-S & 82.0 & 3.8G & 48M & 898 \\ SparseFormer-B & 82.6 & 7.8G & 81M & 520 \\ \end{tabular}
\end{table}
Table 2: Comparison of different networks on ImageNet-1K classification. The input resolution is \(224^{2}\), except for EfficientNet-B3\({}^{\circ}\) with a resolution of \(300^{2}\). The throughput is measured on a single V100 GPU, following [32].
### Main Results
**ImageNet-1K classification.** We first benchmark SparseFormer on ImageNet-1K classification and compare it to other well-established methods in Table 2. SparseFormer-T achieves 81.0 top-1 accuracy, on par with the well-curated dense transformer Swin-T [32], with less than half of its FLOPs (2.0G versus 4.5G) and 74% higher throughput (1270 versus 726). The small and base variants, SparseFormer-S and -B, also maintain a good balance between performance and actual throughput compared to highly-optimized CNNs and transformers. We can see that Perceiver architectures [25, 24], which also adopt a latent transformer as we do, incur extremely large FLOPs and have impractical inference speed due to a large number of tokens (_i.e_., \(512\)) and dense cross-attention traversal.
**Scaling up SparseFormer.** We scale up the base SparseFormer variant in Table 4. We first adopt ImageNet-21K pre-training, which brings a 1.0 top-1 accuracy improvement. Then we investigate SparseFormer with large-resolution fine-tuning. Larger input resolutions benefit SparseFormer (0.5\(\uparrow\) for \(384^{2}\)) at only 5% extra FLOPs. Moreover, we try a more aggressive way to scale up the model by re-initializing a larger number of tokens (_i.e_., embeddings and RoIs) _in fine-tuning_, and find that it yields better results. We leave further scaling up to future work.
**Kinetics-400 classification.** We also investigate the extension of SparseFormer to the video classification task. Results on the Kinetics-400 dataset are reported in Table 5. Our VideoSparseFormer-T achieves the performance of well-established video CNNs (I3D or SlowFast) with a much lower computational burden. Surprisingly, our VideoSparseFormer-S pre-trained on ImageNet-1K even surpasses the Transformer-based architectures pre-trained on ImageNet-21K, like TimeSFormer [4] and ViViT [1]. Furthermore, our VideoSparseFormer-S pre-trained on ImageNet-21K can improve the performance to 79.8 with only 74 GFLOPs.
### Ablation Study
In this section, we ablate key designs in the proposed SparseFormer. Limited by computational resources, we resort to SparseFormer-T on ImageNet-1K classification.
**The number of latent tokens.** The number of latent tokens \(N\) is a key hyper-parameter of SparseFormer, as it controls the latent space capacity. Table 3a shows the performance with different numbers of latent tokens. The performance of SparseFormer is largely determined by the number of tokens \(N\), and so is the computational cost. We can see that when increasing \(N\) to 81, SparseFormer-T reaches the performance of SparseFormer-S. As there are no dense units in the latent space, the number of tokens plays a crucial role in the information flow through the network. We still favor fewer tokens in the design of SparseFormer for efficiency.
Table 4: **Scaling up of SparseFormer-B. Except for the 1K entry, all entries first follow the same pre-training on ImageNet-21K (\(224^{2}\) input, \(81\) tokens) and then individual fine-tuning on ImageNet-1K.**
Table 3: **Ablation study on SparseFormer. The default choice for SparseFormer is colored gray.**
Table 5: **Comparison with well-established video classification methods on Kinetics-400. GFLOPs are reported in the format of a single view \(\times\) the number of views. “N/A” indicates the numbers are not available.**
**Focusing stages.** Given a limited number of latent tokens, SparseFormer attempts to focus them on foregrounds by adjusting their RoIs and sampling the corresponding features to make a visual recognition. Table 3b investigates the repeats of the parameter-shared focusing Transformer. The 'nil' entry stands for fixing token RoIs to specific areas and making them unlearnable. Sampling points of tokens in the 'nil' row are also set to fixed spatial positions. The RoI adjusting mechanism and its repeats are vital to SparseFormer's performance.
**Sparsity of sampling points.** The other important factor for the sparsity of the proposed method is the number of sampling points in the feature sampling. Table 3c shows the ablation on this. Compared to increasing the number of latent tokens (_e.g._, \(49\to 64\), 30% up, 81.4 accuracy), more sampling points are not an economical route to better performance: reaching 81.3 accuracy requires 77% more (\(36\to 64\)) sampling points. To maintain a tradeoff between sparsity and performance, we choose 36 sampling points as our default.
**Image features and how to decode sampled features.** Table 3d investigates the input image features to be sampled into the latent space in SparseFormer. As discussed before, input image features can be in the raw RGB format, but we find this hard to train1. We also ablate with a ViT-like embedding layer [11], namely patchifying while keeping the spatial structure, and find it worse than the ResNet-like early convolutional image features. Table 3e ablates how to decode sampled features for a token. The static mixing uses static weights, which are not conditioned on the token embedding, to mix the sampled features. We find that the adaptive mixing decoding design is better.
Footnote 1: Attempts with a preliminary design of SparseFormer on raw RGB input reach only about 60 top-1 accuracy.
**Inflation of latent tokens on video classification.** We also investigate the inflation rate of tokens on videos. Intuitively, video data with multiple frames need more latent tokens than a static image to model. The results in Table 6 confirm this. Note that the input video has 32 frames, but a token inflation rate of 8 is already sufficient for the favorable performance of VideoSparseFormer. In contrast, dense CNNs or Transformers usually require roughly \(\#\)frames times the computational cost of their image counterparts if no temporal reduction is adopted. This also validates the sparsity of the proposed SparseFormer method on videos.
### Visualizations
As discussed in Section 3, we argue that SparseFormer, with the RoI adjusting and sparse feature sampling, can focus on foregrounds by reasoning about where to look. To show this, we visualize token sampling points across different sampling stages in Figure 3. We apply kernel density estimation (KDE) [48] spatially over the sampling points with top-hat kernels to obtain the sampling density map. We find that SparseFormer initially looks at the image in a relatively uniform way and gradually focuses on discriminative details of foregrounds. It is worth noting that SparseFormer is supervised with only classification signals, and it can roughly learn _where discriminative foregrounds are_ from this _weak supervision_.
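A sketch of how such a density map can be produced with scikit-learn's top-hat KDE; the bandwidth value is an assumption, as the text does not specify one.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def sampling_density_map(points_xy: np.ndarray, H: int, W: int,
                         bandwidth: float = 8.0) -> np.ndarray:
    """KDE with a top-hat kernel over (x, y) sampling points, on the pixel grid."""
    kde = KernelDensity(kernel="tophat", bandwidth=bandwidth).fit(points_xy)
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    density = np.exp(kde.score_samples(grid)).reshape(H, W)  # log-density -> density
    return density / density.max()  # normalized map, ready to overlay on the image
```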
## 5 Conclusion
In this paper, we have presented a neural architecture, SparseFormer, that performs visual recognition with a limited number of tokens together with a Transformer in the latent space. To imitate human eye behavior, we design SparseFormer to focus these sparse latent tokens on discriminative foregrounds and make a recognition sparsely. As an initial step toward sparse visual architectures, SparseFormer consistently yields promising results on challenging image and video classification benchmarks with a good performance-throughput tradeoff. We hope our work can provide an alternative perspective and inspire further research on sparse visual understanding.
\begin{table}
\begin{tabular}{c|c c} inflation & top-1 & GFLOPs \\ \hline
1 & 69.5 & 7\(\times\)4\(\times\)3 \\
4 & 74.7 & 13\(\times\)4\(\times\)3 \\
8 & 77.9 & 22\(\times\)4\(\times\)3 \\
16 & 78.2 & 32\(\times\)4\(\times\)3 \\ \end{tabular}
\end{table}
Table 6: **Different inflation rates of VideoSparseFormer-T on Kinetics-400.**
Figure 3: **Visualizations of sampling points and their sampling density maps across sampling stages in SparseFormer-S. Stages 1-4 refer to the feature sampling in the focusing Transformer, and Stage 5 refers to the cortex Transformer. Best viewed with zoom-in.**
## Appendix
### SparseFormer itself as Object Detector
Since the SparseFormer architecture produces an embedding and an RoI together for each token given an image, it is natural to ask whether SparseFormer _per se_ can perform the object detection task. The answer is _yes_. In other words, we can train a SparseFormer model to detect objects _without making any architectural changes_ by simply adding a final classifier and an RoI refining layer on top of it.
Specifically, we follow the training strategy of DETR [6] to train a SparseFormer-S for object detection. We adopt the ImageNet-1K pre-trained SparseFormer-S from the main paper. We first inflate the number of latent tokens to \(400\) by re-initializing token embeddings from the normal distribution and the centers of token RoIs from the uniform distribution on \([0,1]\). The RoI height and width remain \(0.5\times 0.5\). We use a fixed set of \(100\) latent tokens to detect objects. The other tokens, which are not used for detection, serve to enrich the semantics in the latent space. We keep DETR's matching criterion and loss function unchanged and train for \(300\) epochs. The final classifier is a single FC layer and the RoI refining layer is a 2-layer FC, also following DETR. The final refining of RoIs is performed in the same way as the RoI adjustment in the main paper. The result is shown in Table 7.
Although the performance of SparseFormer is currently inferior to DETR, it is important to note that this is a very preliminary result, and we do not add any additional attention encoders or decoders to SparseFormer for object detection. In fact, SparseFormer can be considered a decoder-only architecture for object detection if we treat the early convolution part as the embedding layer.
### SparseFormer Performing Per-Pixel Task
SparseFormer learns to represent an image with a limited number of tokens in the latent space and outputs token embeddings with their corresponding RoIs. It is appealing to investigate whether SparseFormer can perform per-pixel tasks, like semantic segmentation. SparseFormer itself cannot perform dense tasks, since it outputs a discrete token set. However, we can _restore_ a dense structured feature map from these discrete tokens with a vanilla cross-attention operator and build a final classifier upon the dense feature map.
Specifically, to perform semantic segmentation, we use a location-aware cross-attention operator to map latent token embeddings back to a structured feature map whose height and width are one fourth of the input image's (namely, stride \(4\), \(H/4\) and \(W/4\)). The location-aware cross-attention is the vanilla cross-attention with a geometric prior added as biases to the attention matrix:
\[\mathrm{Attn}(Q_{ds},K_{lt},V_{lt})=\mathrm{Softmax}(Q_{ds}K_{lt}^{T}/\sqrt{d} +B)V_{lt},\]
where \(Q_{ds}\in\mathbb{R}^{N_{ds}\times d}\) is the query matrix for the dense map (\(N_{ds}=H/4*W/4\)), \(K_{lt},V_{lt}\in\mathbb{R}^{N\times d}\) are the key and value matrices for the latent tokens, and the bias term is
\[B_{i,j}=-(\frac{x_{ds,i}-x_{lt,j}}{w_{lt,j}})^{2}-(\frac{y_{ds,i}-y_{lt,j}}{h _{lt,j}})^{2}\]
where \((x_{lt},y_{lt},w_{lt},h_{lt})\) is the RoI descriptor of a latent token and \((x_{ds},y_{ds})\) are the \(x\) and \(y\) coordinates of a unit on the dense feature map. In our current design, the dense map that cross-attends the latent tokens is the early-convolved feature map, which has the same height and width \(H/4\) and \(W/4\). We put two \(3\times 3\) convolution layers and a following classifier on the restored feature map, following common practice.
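A compact sketch of this location-aware cross-attention (single head, no batch axis for brevity); tensor names follow the equations above and everything else is an illustrative assumption.

```python
import torch

def location_aware_cross_attention(q_ds, k_lt, v_lt, xy_ds, rois_lt):
    """q_ds: (N_ds, d) dense queries; k_lt, v_lt: (N, d) latent keys/values;
    xy_ds: (N_ds, 2) dense-unit coordinates; rois_lt: (N, 4) as (x, y, w, h)."""
    d = q_ds.shape[-1]
    x, y, w, h = rois_lt.unbind(-1)
    # geometric prior B: penalize dense units far from a token's RoI center
    bias = -(((xy_ds[:, None, 0] - x[None, :]) / w[None, :]) ** 2
             + ((xy_ds[:, None, 1] - y[None, :]) / h[None, :]) ** 2)
    attn = torch.softmax(q_ds @ k_lt.T / d ** 0.5 + bias, dim=-1)  # (N_ds, N)
    return attn @ v_lt  # (N_ds, d) restored dense features
```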
We also inflate the number of latent tokens for semantic segmentation, as we do in the object detector, for better performance. The performance of SparseFormer-T with 400 latent tokens is close to the well-established Swin [32] with UperNet [65], but with merely \(1/8\) of Swin-T's GFLOPs. This validates that our proposed SparseFormer can perform per-pixel tasks and is suitable for handling high-resolution input data with limited latent tokens.
### Model Initializations in Detail
We initialize all weights of linear layers in SparseFormer, unless otherwise specified below, following a truncated normal distribution with a mean of \(0\), a standard deviation of \(0.02\), and a truncation threshold of \(2\). Biases of these linear layers, if they exist, are initialized to zeros.
Sampling points for every token are initialized in a grid-like shape (\(6\times 6\) for \(36\) sampling points by default) by zeroing the weights of the linear layer that generates offsets and setting
\begin{table}
\begin{tabular}{c|c|c c c c c c} detector & GFLOPs & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline DETR & 86 & 42.0 & 62.4 & 44.2 & 20.5 & 45.8 & 61.1 \\ DETR-DC5 & 187 & 43.3 & 63.1 & 45.9 & 22.5 & 47.3 & 61.1 \\ \hline SparseFormer-S & 27 & 26.4 & 43.8 & 26.6 & 8.3 & 26.0 & 45.0 \\ \end{tabular}
\end{table}
Table 7: Detection performance of SparseFormer-S on MS COCO [31] val set.
\begin{table}
\begin{tabular}{c|c|c c} segmentor & GFLOPs & mIoU & mAcc \\ \hline Swin-T [32] + UperNet [65] & 236 & 44.4 & 56.0 \\ \hline SF-T w/ 49 tokens & 33 & 36.1 & 46.0 \\ SF-T w/ 256 tokens & 39 & 42.9 & 53.7 \\ SF-T w/ 400 tokens & 43 & 43.5 & 54.7 \\ \end{tabular}
\end{table}
Table 8: Semantic segmentation performance of SparseFormer-T on Ade20K [71] validation set. The GFLOPs are computed with \(512\times 512\) input resolution.
its bias using meshgrid. Likewise, we initialize the centers of the initial token RoIs (as parameters of the model) in the same grid-like way (_e.g._, \(7\times 7\) for \(49\) tokens in the SF-Tiny variant). The token height and width are set to half of the image's height and width, expressed as \(0.5\times 0.5\). We also try other initializations for the token height and width in Table 9. For training stability, we also initialize the adaptive decoding in SparseFormer following [15] with an initial Xavier decoding weight [16]. This initialization makes the adaptive decoding behave like an unconditional convolution (weights not dependent on token embeddings) at the beginning of the training procedure.
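The zero-weight/meshgrid-bias initialization can be sketched as below. The grid value ranges are assumptions; the text only states that sampling points and RoI centers start on grids and that the offset weights are zeroed.

```python
import torch
import torch.nn as nn

def grid_init(offset_head: nn.Linear, roi_centers: torch.Tensor,
              n_points: int = 36, n_tokens: int = 49) -> None:
    g = int(n_points ** 0.5)  # 6x6 grid of sampling points
    ys, xs = torch.meshgrid(torch.linspace(-0.5, 0.5, g),
                            torch.linspace(-0.5, 0.5, g), indexing="ij")
    nn.init.zeros_(offset_head.weight)  # offsets start token-independent...
    offset_head.bias.data = torch.stack([xs, ys], dim=-1).reshape(-1)  # ...on a grid
    gt = int(n_tokens ** 0.5)  # 7x7 grid of RoI centers for the tiny variant
    ys, xs = torch.meshgrid(torch.linspace(0.1, 0.9, gt),
                            torch.linspace(0.1, 0.9, gt), indexing="ij")
    roi_centers.data = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    # RoI width and height are set elsewhere to 0.5 x 0.5 ('half' initialization)
```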
Regarding the alternative ways to initialize the token height and width, we find no significant difference between the 'half' and 'cell' initializations. We prefer the 'half' initialization, as tokens can see more initially. However, setting all token RoIs to the whole image, the 'whole' initialization, lags behind the other initializations. We suspect that the model is unable to differentiate between tokens, causing training instability, because all tokens share identical RoIs and sampling points.
### Visualizations
**More visualizations on RoIs and sampling points.** In order to confirm the general ability of SparseFormer to focus on foregrounds, we present additional visualizations in Figures 4 and 5 with ImageNet-1K [9] validation set inputs. Note that these samples are not cherry-picked. We observe that SparseFormer progressively directs its attention towards the foreground, beginning from roughly high-contrast areas and moving towards more discriminative areas. The focal areas of SparseFormer adapt to variations in the image and mainly concentrate on discriminative foregrounds when the input changes. This also validates the semantic adaptability of SparseFormer to different images.
**Visualizations on specific latent tokens.** We also provide visualizations of specific latent tokens across stages to take a closer look at how the token RoI behaves at the token level. We choose 5 tokens per image that respond with the highest values to the ground truth category. To achieve this, we remove the token embedding average pooling and place the classifier layer on individual tokens. The visualizations are shown in Figure 6. We can observe that the token RoIs progressively move towards the foreground and adjust their aspect ratios at a mild pace from stage to stage.
**Visualizations on disturbed input images.** We also show visualizations on disturbed input images in Figure 7, where images are either randomly erased or heavily padded with zero values or reflections. We can see that although SparseFormer initially views the image in an almost uniform way, it learns to avoid sampling in uninformative areas in subsequent stages. This illustrates the robustness and adaptability of SparseFormer when dealing with perturbed input images.
Figure 4: **More visualizations of token RoIs, their sampling points, and density across sampling stages in SparseFormer-S (64 tokens). RoIs and sampling points of different tokens are colored with different colors. Better view with zoom-in.**
\begin{table}
\begin{tabular}{c|c} width and height initialization & top-1 \\ \hline half, \(0.5\times 0.5\) & 81.0 \\ cell, \(1/\sqrt{N}\times 1/\sqrt{N}\) & 81.0 \\ whole, \(1.0\times 1.0\) & 80.2 \\ \end{tabular}
\end{table}
Table 9: Alternative ways to initialize the token height and width for SparseFormer-T. \(N\) is the number of latent tokens for the ‘cell’ initialization. The ‘cell’ initialization tiles RoIs without overlapping over the image. The ‘whole’ initialization is with all token RoIs centered at the image center. |
2305.03441 | Multi S-graphs: A Collaborative Semantic SLAM architecture | Collaborative Simultaneous Localization and Mapping (CSLAM) is a critical
capability for enabling multiple robots to operate in complex environments.
Most CSLAM techniques rely on the transmission of low-level features for visual
and LiDAR-based approaches, which are used for pose graph optimization.
However, these low-level features can lead to incorrect loop closures,
negatively impacting map generation. Recent approaches have proposed the use of
high-level semantic information in the form of Hierarchical Semantic Graphs to
improve the loop closure procedures and overall precision of SLAM algorithms.
In this work, we present Multi S-Graphs, an S-graphs [1] based distributed
CSLAM algorithm that utilizes high-level semantic information for cooperative
map generation while minimizing the amount of information exchanged between
robots. Experimental results demonstrate the promising performance of the
proposed algorithm in map generation tasks. | Miguel Fernandez-Cortizas, Hriday Bavle, Jose Luis Sanchez-Lopez, Pascual Campoy, Holger Voos | 2023-05-05T11:36:39Z | http://arxiv.org/abs/2305.03441v1 | # _Multi S-graphs_: A Collaborative Semantic SLAM architecture.
###### Abstract
Collaborative Simultaneous Localization and Mapping (CSLAM) is a critical capability for enabling multiple robots to operate in complex environments. Most CSLAM techniques rely on the transmission of low-level features for visual and LiDAR-based approaches, which are used for pose graph optimization. However, these low-level features can lead to incorrect loop closures, negatively impacting map generation. Recent approaches have proposed the use of high-level semantic information in the form of Hierarchical Semantic Graphs to improve the loop closure procedures and overall precision of SLAM algorithms. In this work, we present _Multi S-Graphs_, an _S-graphs_[1] based distributed CSLAM algorithm that utilizes high-level semantic information for cooperative map generation while minimizing the amount of information exchanged between robots. Experimental results demonstrate the promising performance of the proposed algorithm in map generation tasks.
## I Introduction
Collaborative Simultaneous Localization and Mapping (CSLAM) is a fundamental capability that enables multiple robots to operate in complex environments in a coordinated manner.
Most CSLAM techniques, such as [2][3][4], are heavily based on the transmission of low-level features, such as keyframe descriptors, for both visual and LiDAR-based approaches. These low-level features constitute the core of the majority of Pose Graph Optimization (PGO) based SLAM methods, which rely on them for the creation and optimization of each pose graph. Using these low-level features to align and extend the pose graphs created by each robot often leads to incorrect loop closures; some works like [2] or [5] therefore focus on robustifying their loop closure algorithms to avoid incorrect loop closures that could ruin the overall map generation. The main problem with these methods stems from the fact that the system has no awareness of what each low-level feature means, or of whether it makes sense to create a loop closure between nodes or not.
Lately, some SLAM approaches like _Hydra_[6] or _S-Graphs+_[1] address this lack of awareness by employing Hierarchical Semantic Graphs during the generation of the pose graphs, including high-level semantic information about the architectural components (walls, rooms, floors) in their "mental model", which can later be used to improve the loop closure procedures and the overall precision of the SLAM algorithms.
However, to the best of our knowledge, these high-level semantic representations have not been used to improve the performance of multi-robot SLAM algorithms, which could take advantage of this semantic knowledge to reduce the amount of information that has to be transmitted between agents and to robustify loop closures, pursuing the best overall mapping and localization quality.
In this work, we present _Multi S-Graphs_, a LiDAR-based distributed CSLAM algorithm that relies on high-level semantic information to generate a complete map of a building cooperatively, exchanging a minimal amount of information between robots.
The main contributions presented in this work are as follows:
1. A novel distributed multi-robot SLAM architecture that relies on high-level semantic features for communicating information between agents.
2. A hybrid descriptor that combines the fine-grained information of a pointcloud with semantic knowledge.
3. A real-time CSLAM algorithm robust to multiple robot initialization, considering the multiple kidnapped robot problem.
## II Related Work
Although the algorithm presented here is a LiDAR-based multi-robot SLAM pipeline, we also review visual-based algorithms to further illustrate how multi-robot approaches are accomplished within the field.
Currently, the vast majority of multi-robot SLAM methods rely on Pose Graph Optimization (PGO) approaches, in which the agents exchange information of the same type that each graph uses for generating its internal loop closures.
In LiDAR-based approaches, Zhong et al. [3] and Huang et al. [4] proposed frameworks based on Scan Context descriptors [7]. The work in [3] also presents a P2P communication protocol for exchanging the descriptors of each keyframe and uses Binarized Scan Contexts. In both works, each robot runs its own PGO pipeline.
Within visual-based approaches, Deustch et al. [8] proposed a framework that relies on a BoW of the keyframes obtained with an RGB-D camera. Lajoie et al. [2][9] proposed a distributed CSLAM system based on NetVLAD descriptors. Kimera-Multi [10] also uses BoW and needs a _Robust Distributed Initialization_ to initialize all robot poses in a shared (global) coordinate frame.
Finally, Berneiter et al. [11] presented a centralized CSLAM method based on spectral graph waves, which consists of analyzing the SE(3) Pose graph of each robot and trying to find coincidences and discrepancies in the graph structure of each robot compared to the global graph. This algorithm does not rely on a specific sensor, but just on the pose graph generated.
## III Collaborative S-Graphs
### _Nomenclature_
In this work, we present a distributed approach for multi-robot semantic SLAM. We consider robots (agents) that interact in a 1-to-N fashion, meaning that each robot interacts independently with as many robots as possible. Each robot is denoted as agent \(A_{i}\). A schema of the architecture is shown in Fig. 1.
Each agent will run its own _S-graphs_ pipeline. _S-Graphs_ are four-layered optimizable hierarchical graphs built online using 3D LiDAR measurements. The full details of the _S-Graphs_ we use in this work can be found in [1]. In brief, their four layers can be summarized as follows:
* Keyframes Layer. It consists of robot poses factored as SE(3) nodes in the agent map frame \(A_{i}\) with pairwise odometry measurements constraining them.
* Walls Layer. It consists of the planar wall surfaces extracted from the 3D LiDAR measurements and factored using minimal plane parameterization. The planes observed by their respective keyframes are factored using pose-plane constraints.
* Rooms Layer: It consists of two-wall rooms or four-wall rooms, each constraining either two or four detected wall surfaces, respectively.
* Floors Layer: It consists of a floor node positioned in the center of the current floor.
From _S-graphs_, we will only consider the following vertices: rooms \(R_{i,k}\), planes \(P_{i,k}\), and keyframes \(K_{i,k}\), where the index \(i\) denotes the agent that contains this vertex in its own graph, and \(k\) is the index of the vertex.
Each vertex can be translated into different agents' coordinate frames. We denote \({}^{A_{j}}V_{i,k}\) as the \(k\)-th vertex \(V\) of robot \(i\) expressed in the reference frame of agent \(j\).
### _Room descriptors_
In order to avoid errors when aligning the robot positions in very symmetric situations, such as a corridor with multiple similar rooms along its sides, we cannot rely only on the structural information stored in the top layers of the _S-graphs_; lower-level information may be needed to break the symmetry and decide whether two rooms are the same or not.
Unlike other LiDAR-based SLAM methods, _S-graphs_ does not take continuous snapshots of the pointcloud measurements; these measurements are very sparse, so classical feature-based pointcloud matching is not the most convenient approach. In order to take advantage of the semantic information that each room contains, we decided to generate a hybrid descriptor that combines the fine-grained information of a pointcloud with high-level semantic knowledge, the _Room Descriptor_.
To generate these descriptors, we use a Scan Context [7] approach, an egocentric, yaw-invariant descriptor. This descriptor has achieved satisfactory results in multiple LiDAR odometry and SLAM works because of its simplicity and fast generation. However, one of its drawbacks is its sensitivity to translation.
Here, we take advantage of the semantic information in the room, by generating a scan context from the centre of
Fig. 1: _Multi S-graph_ architecture schema viewed from \(Agent_{i}\) perspective.
each room, avoiding translation errors. To generate the Room Descriptor, we need a _Room Keyframe_, which is built by combining all point clouds obtained by the robot from within a room. Each Room Keyframe \(Rk_{i}\) can be expressed as:
\[Rk_{i}=\bigcup_{j}\{^{R_{i}}K_{j}\}\quad;\quad\forall j\mid K_{j}\in R_{i} \tag{1}\]
where \({}^{R_{i}}K_{j}\) represents the pointcloud associated with the keyframe \(K_{j}\) in the \(R_{i}\) frame (a frame located in the center of the room \(i\)).
To obtain the Room Descriptor \(Rd_{i}\) from a Room Keyframe \(Rk_{i}\), the \(Rk_{i}\) is first downsampled with a voxel size of 0.1 \(m\) to homogenize the number of points of each Room Keyframe, independently of the number of keyframes associated with each room. Finally, the scan context descriptor of each Room Keyframe \(Rk_{i}\) is computed to create the Room Descriptor \(Rd_{i}\):
\[Rd_{i}=SC(\phi(Rk_{i})) \tag{2}\]
where \(\phi(Rk_{i})\) represents the downsampling of the keyframe, and \(SC(\vec{X}):\mathbb{R}^{n\times 3}\to\mathbb{R}^{n_{s}\times n_{r}}\) is the Scan Context computation from a pointcloud. An example of this room descriptor is shown in Fig. 2.
This descriptor makes the difference in the alignment and subsequent optimization steps.
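A sketch of the Room Descriptor pipeline of Eqs. (1)–(2), using a simplified Scan Context (max point height per polar sector/ring bin, as in [7]); the bin counts and maximum range are assumed values, and the voxel downsampling is a crude stand-in for a proper filter.

```python
import numpy as np

def room_descriptor(room_keyframe: np.ndarray, n_s: int = 60, n_r: int = 20,
                    voxel: float = 0.1, r_max: float = 10.0) -> np.ndarray:
    """Points are the union of keyframe clouds, already expressed in the
    room-center frame R_i; returns an (n_r, n_s) Scan Context matrix."""
    # crude voxel downsampling phi(.): keep one point per occupied 0.1 m voxel
    _, idx = np.unique((room_keyframe // voxel).astype(int), axis=0,
                       return_index=True)
    pts = room_keyframe[idx]
    rng = np.hypot(pts[:, 0], pts[:, 1])          # planar range from room center
    ang = np.arctan2(pts[:, 1], pts[:, 0])        # azimuth around room center
    s = np.clip(((ang + np.pi) / (2 * np.pi) * n_s).astype(int), 0, n_s - 1)
    r = np.clip((rng / r_max * n_r).astype(int), 0, n_r - 1)
    desc = np.zeros((n_r, n_s))
    np.maximum.at(desc, (r, s), pts[:, 2])        # max height per bin, as in [7]
    return desc
```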
### _Robots alignment_
Since we start from the problem of multiple kidnapped robots, no initial estimate of the relative positions of the robots is provided. If we tried to align the complete pointclouds obtained by the multiple robots, we would face the global registration problem, which, combined with the noise of each pointcloud and the lack of prior information about a possible transformation, leads to unsuitable alignments.
In order to generate good alignment candidates, we leverage the Room Descriptors to compute a global alignment of each robot's coordinate system, which is crucial for the subsequent graph sharing and collective optimization.
The module in charge of finding this relative transformation between the robots is the _Graph Broker_.
In order to compute this transformation, we perform a two-step process (see the sketch after this list):
1. Descriptor matching: The broker receives and stores the room descriptors of the rest of the agents, trying to find a suitable match.
2. Fine alignment: Whenever a match is found between the robot's and another agent's room keyframes, the broker tries to obtain an improved transform from the room keyframes using a VGICP [12] registration algorithm. The validity of the relative transform is determined by the alignment distance and a matching threshold \(d_{t}\). If suitable, the rest of the graph information can be transformed into the local robot frame for optimization.
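The sketch below summarizes the broker's two steps. The shift-invariant descriptor distance is a simplified variant of the Scan Context distance of [7] (best whole-matrix cosine distance over circular column shifts), and `vgicp_register` is a hypothetical wrapper around a VGICP [12] implementation returning a transform and an alignment error; using the same threshold \(d_{t}\) for both steps is a simplification.

```python
import numpy as np

def sc_distance(d1: np.ndarray, d2: np.ndarray) -> float:
    """Yaw-invariant descriptor distance: best cosine distance over column shifts."""
    best = 1.0
    for shift in range(d2.shape[1]):
        d2s = np.roll(d2, shift, axis=1)
        num = (d1 * d2s).sum()
        den = np.linalg.norm(d1) * np.linalg.norm(d2s) + 1e-9
        best = min(best, 1.0 - num / den)
    return best

def try_align(own_desc, other_descs, own_rk, other_rks, d_t: float = 0.3):
    dists = [sc_distance(own_desc, d) for d in other_descs]
    j = int(np.argmin(dists))
    if dists[j] > d_t:
        return None                                  # step 1: no similar room
    T, err = vgicp_register(own_rk, other_rks[j])    # step 2: fine registration
    return T if err < d_t else None                  # keep only good alignments
```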
### _Multi-robot mapping_
Whenever a transformation between robots is found, the top layers of the _S-graphs_ can be shared and incorporated into the other robot's graph.
In this approach, there are 2 types of graph vertices that are exchanged:
* Room vertices: Each room vertex includes the \(SE(3)\) transformation of the Room center in the agent frame \({}^{A_{i}}R_{i,k}\).
* Plane vertices: Each plane vertex includes the \(\vec{n}\) normal to the plane and the distance \(d\) from this plane to the agent frame \(A_{i}\)
These vertices are joined with edges that relate the planes that compose each room.
The optimization pipeline consists of 3 steps that repeat:
#### III-D1 Vertex transform

After the transformation between agents \({}^{A_{i}}T_{A_{j}}\) is found, the vertices that came from agent \(j\) can be transformed and added to the graph of agent \(i\).
The room transforms are:

\[{}^{A_{i}}R_{j,k}={}^{A_{i}}T_{A_{j}}\,{}^{A_{j}}R_{j,k}\quad;\quad{}^{A_{i}}T_{A_{j}}\in SE(3) \tag{3}\]
Considering each plane as follows:
\[{}^{A_{i}}P_{i,k}=\begin{bmatrix}^{A_{i}}\mathbf{n}_{k}\\ {}^{A_{i}}d_{k}\end{bmatrix} \tag{4}\]
where \({}^{A_{i}}\mathbf{n}_{k}\) is the normal vector to the plane in the \(i\)-agent map frame, and \(d_{k}\) is the distance between this plane and the \(i\)-agent origin of coordinates.
The plane transforms are:
\[{}^{A_{i}}P_{j,k}=\begin{bmatrix}{}^{A_{i}}\mathbf{n}_{k}\\ {}^{A_{i}}d_{k}\end{bmatrix}=\begin{bmatrix}{}^{A_{i}}\mathbf{R}_{A_{j}}&\mathbf{0}\\ -{}^{A_{i}}\mathbf{t}_{A_{j}}^{\top}\,{}^{A_{i}}\mathbf{R}_{A_{j}}&1\end{bmatrix}\begin{bmatrix}{}^{A_{j}}\mathbf{n}_{k}\\ {}^{A_{j}}d_{k}\end{bmatrix} \tag{5}\]
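In code, Eqs. (3) and (5) amount to the following, assuming \(4\times 4\) homogeneous SE(3) matrices and the \(\mathbf{n}^{\top}\mathbf{x}+d=0\) plane convention (the convention itself is our assumption).

```python
import numpy as np

def transform_room(T_ij: np.ndarray, room_j: np.ndarray) -> np.ndarray:
    """Eq. (3): map a room pose from agent j's frame into agent i's frame."""
    return T_ij @ room_j  # both are 4x4 homogeneous SE(3) matrices

def transform_plane(T_ij: np.ndarray, n_j: np.ndarray, d_j: float):
    """Eq. (5): transform a plane (n, d) with T_ij = [R t; 0 1]."""
    R, t = T_ij[:3, :3], T_ij[:3, 3]
    n_i = R @ n_j                 # rotate the plane normal
    d_i = d_j - t @ n_i           # shift the plane offset by the translation
    return n_i, d_i
```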
#### III-D2 Data association
Once the external vertices are transformed into the corresponding agent frame, a data association process is performed. In this step, similarities between vertices are searched for, regardless of whether they are internal or external vertices. If two vertices are similar, an association is made and a new factor is created between them. Further details on the data association criteria can be found in [1].
#### III-D3 Graph Optimization
After this data association, the rest of the optimization process is similar to the one used in _S-graphs_[1].
Fig. 2: Room Descriptor (right) obtained from a Room Keyframe (left).
## IV Experimental Results
In our experiments, we generate a map of a building floor collaboratively with two robots. Each robot starts at a different location that is unknown to the rest.
During the experiment, we divided a floor into two parts to be explored; the first robot covers the rooms on the right-hand side of the floor and the second one covers the rooms on the left-hand side. A central room is covered by both robots so that they share a common room, which enables the alignment of the robot frames. The data of the experiment were collected using a Boston Dynamics Spot carrying a Velodyne VLP-16 in a real construction site.
As shown in Fig. 3, both robots are capable of integrating the information collected by the other robot into their own graphs, and each robot optimizes its own graph taking into account the information provided by its counterpart. Table I compares the times needed to map one area between _S-graphs_ with one robot and our proposal with two robots.
## V Conclusions and Future Work
In this work, a distributed multi-robot SLAM algorithm is presented, leveraging the semantic features extracted by the _S-graphs_ SLAM algorithm in order to filter and reduce the amount of data that has to be transmitted between robots. This algorithm considers the kidnapped robot problem for all of its robots, and is able to align the maps of the different robots by taking advantage of the Room Keyframe descriptor, which combines semantic information with low-level features. We have tested this algorithm on a map generation task, achieving promising results.
In this work, each robot optimizes its own graph with the information obtained from the others, but the optimization that each one computes is not fed back to the rest of the agents. In order to achieve the best results, this optimization should be transmitted to the rest of the agents so as to achieve a global graph optimization. Moreover, a thorough experimental evaluation in different simulated and real environments remains to be done to measure the performance of the proposed algorithm.
|
2308.13876 | Class Binarization to NeuroEvolution for Multiclass Classification | Multiclass classification is a fundamental and challenging task in machine
learning. The existing techniques of multiclass classification can be
categorized as (i) decomposition into binary (ii) extension from binary and
(iii) hierarchical classification. Decomposing multiclass classification into a
set of binary classifications that can be efficiently solved by using binary
classifiers, called class binarization, which is a popular technique for
multiclass classification. Neuroevolution, a general and powerful technique for
evolving the structure and weights of neural networks, has been successfully
applied to binary classification. In this paper, we apply class binarization
techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting
Topologies (NEAT), that is used to generate neural networks for multiclass
classification. We propose a new method that applies Error-Correcting Output
Codes (ECOC) to design the class binarization strategies on the neuroevolution
for multiclass classification. The ECOC strategies are compared with the class
binarization strategies of One-vs-One and One-vs-All on three well-known
datasets Digit, Satellite, and Ecoli. We analyse their performance from four
aspects of multiclass classification degradation, accuracy, evolutionary
efficiency, and robustness. The results show that the NEAT with ECOC performs
high accuracy with low variance. Specifically, it shows significant benefits in
a flexible number of binary classifiers and strong robustness. | Gongjin Lan, Zhenyu Gao, Lingyao Tong, Ting Liu | 2023-08-26T13:26:13Z | http://arxiv.org/abs/2308.13876v1 | # Class Binarization to NeuroEvolution for Multiclass Classification
###### Abstract
Multiclass classification is a fundamental and challenging task in machine learning. The existing techniques of multiclass classification can be categorized as (i) decomposition into binary, (ii) extension from binary, and (iii) hierarchical classification. Decomposing multiclass classification into a set of binary classifications that can be efficiently solved by using binary classifiers, called class binarization, is a popular technique for multiclass classification. Neuroevolution, a general and powerful technique for evolving the structure and weights of neural networks, has been successfully applied to binary classification. In this paper, we apply class binarization techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting Topologies (NEAT), that is used to generate neural networks for multiclass classification. We propose a new method that applies Error-Correcting Output Codes (ECOC) to design the class binarization strategies on the neuroevolution for multiclass classification. The ECOC strategies are compared with the class binarization strategies of One-vs-One and One-vs-All on three well-known datasets, _Digit_, _Satellite_, and _Ecoli_. We analyse their performance from four aspects of multiclass classification degradation, accuracy, evolutionary efficiency, and robustness. The results show that NEAT with ECOC achieves high accuracy with low variance. Specifically, it shows significant benefits in a flexible number of binary classifiers and strong robustness.
Multiclass classification, Binary classification, Error Correcting Output Codes, NEAT, One-vs-One, One-vs-All.
## I Introduction
The classification tasks can be divided into binary (two-class) classification and multiclass classification. Multiclass classification is a crucial branch of machine learning and has been applied in a wide variety of applications, such as medicine, speech recognition, and computer vision. The existing multiclass classification techniques can be basically divided into three categories: decomposition into binary, extension from binary, and hierarchical classification [1]. Although some classifiers such as Neural Networks (NNs) can classify multiple classes directly as a monolithic multiclass classifier, many state-of-the-art classifiers are inherently designed for binary classification. Currently, a popular technique for multiclass classification is to decompose the multiclass problem into binary classifications [2], called class binarization, which is an efficient way of solving the classification. The class binarization approaches for multiclass classification have many advantages. First, developing binary classifiers is generally much easier than developing multiclass classifiers [3]. Second, many classifiers such as Support Vector Machine (SVM) and C4.5 are inherently proposed for binary classification with outstanding performance [4, 5].
The binary classifiers (e.g., NNs and SVM) have been successfully applied to the decomposition of multiclass classification. Neural networks are generally designed by researchers manually. Using algorithms to automatically generate efficient neural networks is another popular approach for designing neural networks. Neuroevolution is a popular and powerful technique for evolving the structure and weights of neural networks automatically. Although neuroevolution approaches have been successfully applied to evolve efficient neural networks for binary classification, it generally struggles to generate neural networks for high accuracy in complex tasks such as multiclass classification [6]. In this work, we therefore investigate class binarization techniques in neuroevolution for multiclass classification.
NeuroEvolution of Augmenting Topologies (NEAT) is a popular neuroevolution algorithm that applies evolutionary algorithms (EAs) to generate desired neural networks by evolving both weights and structures [7]. NEAT-based approaches have been successfully applied to a broad range of machine learning tasks such as binary classification [8, 9], regression [10], and robotics [11]. However, it is notorious that neural networks evolved by NEAT-based approaches generally suffer severe multiclass classification degradation [6, 12]. The performance of neural networks evolved by NEAT degrades rapidly as the number of classes increases [6, 9]. To solve this issue, we apply the class binarization technique of Error-Correcting Output Codes (ECOC) to decompose multiclass classification into multiple binary classifications that NEAT-based approaches have been successfully applied to.
In general, there are three well-known types of class binarization approaches: One-vs-One (OvO), One-vs-All (OvA), and ECOC [2] (see subsection III-B). Theoretically, these three approaches work perfectly for multiclass classification when binary classifier predictions are 100% correct. However, realistic binary classifiers inevitably make wrong predictions, and these class binarization approaches therefore perform differently for multiclass classification. Although the class
binarization techniques of OvO and OvA have been applied to NEAT-based multiclass classification [6], applying ECOC to NEAT for multiclass classification, denoted as ECOC-NEAT, is a novel method.
In this work, we mainly concentrate on two research questions: 1) how does ECOC-NEAT perform for multiclass classification? 2) how do the size and quality of ECOC impact the performance of ECOC-NEAT for multiclass classification? To answer these two research questions, this study investigates 1) the performance of OvO-NEAT, OvA-NEAT, ECOC-NEAT, and the standard (original) NEAT for multiclass classification, and 2) the performance of ECOC-NEAT with different numbers of classifiers and different ECOCs. We analyse their performance from four aspects: multiclass degradation, accuracy, training efficiency, and robustness.
To draw convincing conclusions, we choose three popular datasets (_Digit_, _Satellite_, and _Ecoli_) that are commonly used to evaluate methods in multiclass classification. The main findings are summarized in two points.
1. ECOC-NEAT offers various benefits compared to the standard NEAT and the NEAT with other class binarization techniques for multiclass classification. * ECOC-NEAT performs comparable high accuracy as OvO-NEAT. * ECOC-NEAT outperforms OvO-NEAT and OvA-NEAT in terms of robustness. * ECOC-NEAT performs significant benefits in a flexible number of base classifiers.
2. The size and quality of ECOC greatly influence the performance of ECOC-NEAT. * Larger size ECOCs usually contribute to better performance for a given multiclass classification. * High quality (optimized) ECOCs perform significantly better than normal ECOCs.
The rest of this paper is organized as follows. In section II, we provide an overview of the state-of-the-art studies of class binarization for multiclass classification. We present the methodology of NEAT and class binarization in section III. Datasets and experimental setup are addressed in section IV. We present the results in section V from four aspects: multiclass classification degradation, breadth evaluation, evolution efficiency, and robustness. Finally, we discuss this work in-depth and outlook the future work in section VI, followed by the conclusions in section VII.
## II Related work
OvO, OvA, and ECOC are three well-known class binarization techniques for multiclass classification. Although these three class binarization techniques have been successfully applied in many applications, there is a lack of study that applies them (particularly ECOC) to neuroevolution for multiclass classification.
In [13], OvA is applied to the diagnosis of concurrent defects with binary classifiers of SVM and C4.5 decision trees. Adnan and Islam [14] applied OvA in the context of Random Forest. Allwein et al. proposed a general method for combining binary classifiers, in which the ECOC method is applied as a unifying approach with code matrices [15]. These studies applied the three class binarization techniques to traditional classifiers for multiclass classification.
In the early studies of binary classification in neural networks and neuroevolution, Liu and Yao [16] proposed a new cooperative ensemble learning system for designing neural network ensembles, in which a problem is decomposed into smaller and specialized ones, and then each subproblem is solved by an individual neural network. Abbass et al. [17] and Garcia-Pedrajas et al. [18] presented evolution-based methods to design neural network ensembles. Lin and Damminda proposed a new algorithm of learning-NEAT that combines class binarization techniques and backpropagation for multiclass classification [8].
In the recent study [6], the class binarization techniques of OvO and OvA are applied to decompose multiclass classification into a set of binary classifications for solving the multiclass classification degradation of NEAT, in which the binary classifiers are individual NEAT-evolved neural networks. Two ensemble approaches, OvO-NEAT and OvA-NEAT, are developed to achieve both higher accuracy and higher efficiency than the standard NEAT. Although the class binarization techniques of OvO and OvA have been applied to NEAT for multiclass classification, there is a lack of study that investigates the well-known technique of ECOC in NEAT for multiclass classification.
## III Methodology
In this section, we describe the neuroevolution algorithm of NEAT and the class binarization techniques of OvO, OvA, and ECOC.
### _NeuroEvolution of Augmenting Topologies_
NEAT is a widely used neuroevolution algorithm that generates neural networks by evolving both weights and structure [7, 19]. NEAT evolves neural networks with flexible topology, starting from an elementary topology where all input nodes are connected to all output nodes, and adding nodes and connections via the operations of recombination and mutation, which leads to an augmented topology. In this work, NEAT is also allowed to delete nodes as well as connections. NEAT searches for optimal neural networks through weight space and topological space simultaneously. There is no need for an initial or pre-defined fixed topology that relies on the experience of researchers. Recombination and mutation gradually evolve the topology of a neural network into an effective one.
An example of evolving neural networks by NEAT for multiclass classification is illustrated in Fig. 1. NEAT aims to generate an optimal neural network (i.e., one with the highest fitness) as the winning multiclass classifier. In particular, NEAT generates a binary classifier when the number of classes is two, in which case it is referred to as binary-NEAT (B-NEAT), as shown in the left part of Fig. 2. The number of nodes in the input layer equals the feature dimension (\(\mathcal{D}\)), and the number of output nodes is the number of classes (\(k\)). We apply a softmax operation in the final layer to output probabilities of each
class for multiclass classification. The class with the highest probability is predicted as the result.
NEAT is essentially a variant of evolutionary algorithms; therefore, the fitness function is crucial to guide the evolution toward the desired neural networks. In this work, we evaluate the performance of evolved neural networks with the prediction accuracy, that is, the percentage of correct predictions. Denoting the number of correct predictions as \(\mathcal{N}_{c}\) and the total number of predictions as \(\mathcal{N}_{t}\), the fitness is \(f=\mathcal{N}_{c}/\mathcal{N}_{t}\), as sketched below.
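As an illustrative sketch (not necessarily the authors' exact implementation), the fitness assignment can be expressed with the neat-python API as follows; `X_train` and `y_train` are hypothetical arrays of feature vectors and integer class labels:

```
# A minimal sketch of the fitness evaluation f = N_c / N_t described above,
# using the neat-python library; X_train and y_train are assumed inputs.
import neat
import numpy as np

def eval_genomes(genomes, config):
    """Assign each genome the fitness f = N_c / N_t (prediction accuracy)."""
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        correct = 0
        for x, y in zip(X_train, y_train):
            outputs = net.activate(x)            # k raw outputs
            # argmax of the raw outputs equals argmax of their softmax
            correct += int(np.argmax(outputs) == y)
        genome.fitness = correct / len(y_train)
```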
Although NEAT can directly evolve neural networks for multiclass classification, it suffers from the notorious multiclass classification degradation [8]. We use NEAT as the baseline method for multiclass classification in this study, i.e., the standard NEAT.
### _Class Binarization_
#### Ii-B1 One-vs-One
The class binarization technique of OvO (also called All-vs-All) converts a \(k\)-class classification into \(\binom{k}{2}\) binary classifications, each constructed by using class \(i\)\((i=1,...,k-1)\) as the positive examples and class \(j\)\((j=i+1,...,k)\) as the negative examples [1]. That is, each class is compared with every other class separately. Existing studies [15, 20] show that OvO generally performs better than OvA approaches.
NEAT evolves neural networks as binary classifiers for each binary classification. An example of evolving binary (base) classifiers by NEAT is shown in the left part of Fig. 2. A voting strategy is usually used to fuse these binary classifications for multiclass classification: each binary classifier votes for one class, and the class with the most votes is predicted as the result (see the sketch below). The OvO technique and the base classifiers evolved by NEAT are combined for multiclass classification, i.e., OvO-NEAT. Although NEAT is effective at generating binary classifiers, the OvO-NEAT technique requires building a large number, \(\binom{k}{2}\), of base classifiers.
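To make the voting procedure concrete, the following is a minimal sketch of OvO training and prediction; `train_fn` is a hypothetical placeholder standing in for evolving a NEAT binary classifier, and each trained classifier is assumed to return the probability of its positive class:

```
# A sketch of the OvO decomposition and majority voting described above.
from itertools import combinations
import numpy as np

def train_ovo(train_fn, X, y, k):
    """Train k(k-1)/2 binary classifiers, one per class pair (i, j)."""
    classifiers = {}
    for i, j in combinations(range(k), 2):
        mask = (y == i) | (y == j)
        labels = (y[mask] == i).astype(int)      # class i positive, class j negative
        classifiers[(i, j)] = train_fn(X[mask], labels)
    return classifiers

def predict_ovo(classifiers, x, k):
    """Each pairwise classifier votes for one class; the majority wins."""
    votes = np.zeros(k)
    for (i, j), f in classifiers.items():
        votes[i if f(x) >= 0.5 else j] += 1
    return int(np.argmax(votes))
```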
#### Ii-B2 One-vs-All
The OvA technique (also called One-vs-Rest or One-against-All) converts a \(k\)-class classification into \(k\) binary classifications. These binary classifications are constructed by using class \(i\) as the positive examples and the remaining classes \(j\)\((j=1,...,k,\ j\neq i)\) as the negative examples. Each binary classifier is used to distinguish class \(i\) from all the other \(k-1\) classes. When testing an unknown example, the class with the maximum prediction is considered the winner [1]. Compared to OvO, OvA provides comparable performance but requires fewer (\(k\)) classifiers.
#### Ii-B3 Error-Correcting Output Codes
ECOC is a class binarization method for multiclass classification inspired by error-correcting code transmission techniques from communication theory [21]. It encodes \(\mathcal{N}\) binary classifiers to predict \(k\) classes. Each class is given an \(\mathcal{N}\)-length codeword according to an ECOC matrix \(\mathbb{M}\), and each codeword in \(\mathbb{M}\) is mapped to a certain class. An example of an ECOC for \(k=4\) classes and \(\mathcal{N}=7\)-bit codewords is shown in Table I. Each column is used to train a binary classifier. When testing an unseen example, the codeword predicted by the \(\mathcal{N}\) classifiers is matched to the \(k\) codewords in \(\mathbb{M}\). In this work, we adopt the Hamming distance to match the predicted codeword to the ECOC codewords; the class with the minimum Hamming distance is considered the predicted class.
Unlike OvO and OvA, which convert a multiclass classification into a fixed number of binary classifications, ECOC allows each class to be encoded with a flexible number of binary classifications and allows extra models to act as overdetermined predictions, which can result in better predictive performance [22]. Each row of the ECOC matrix must be a unique codeword, and columns must be neither identical nor complementary. In ECOC, the number of codewords (rows) is the number of classes, and the size of the ECOC refers to the number of base classifiers in this work. A larger ECOC provides more bits to correct errors, but too many classifiers cause redundancy, which costs considerable computation in training and classification.
For \(k\) classes, the minimum size of an ECOC is \(\lceil\log_{2}k\rceil\). For example, 10 classes require a minimum size of 4 bits, which is sufficient to represent each class with a unique codeword. We call an ECOC with the minimum size \(\mathcal{N}=\lceil\log_{2}k\rceil\) a minimal ECOC. The maximum size of an ECOC is \(2^{k-1}-1\) for \(k\) classes; an ECOC with the maximum size is generally called an exhaustive ECOC [21]. The upper and lower bounds of the ECOC size can be expressed as:
\[\lceil\log_{2}k\rceil\leq\mathcal{N}\leq 2^{k-1}-1,\ \mathcal{N}\in\mathbb{Z}^{+} \tag{1}\]
where \(\mathbb{Z}^{+}\) is the set of positive integers. Besides OvO, OvA, minimal ECOC, and exhaustive ECOC, the mid-length ECOC is another representative class binarization technique, with an intermediate code length of \(\mathcal{N}=\lceil 10\log_{2}(k)\rceil\)[15], as illustrated below.
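For concreteness, the bounds of Eq. (1) and the mid-length size can be computed as in the following sketch (the values for \(k=10\) match the Digit dataset used later):

```
# A quick computation of the ECOC sizes discussed above, shown as a sketch.
import math

def ecoc_sizes(k):
    minimal = math.ceil(math.log2(k))           # minimal ECOC
    mid_length = math.ceil(10 * math.log2(k))   # mid-length ECOC [15]
    exhaustive = 2 ** (k - 1) - 1               # exhaustive ECOC
    return minimal, mid_length, exhaustive

print(ecoc_sizes(10))  # (4, 34, 511) for the 10-class Digit dataset
```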
Fig. 1: Illustration of evolving neural networks by NEAT for multiclass classification
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{classes} & \multicolumn{7}{c}{classifiers} \\ \cline{2-8} & \(f_{1}\) & \(f_{2}\) & \(f_{3}\) & \(f_{4}\) & \(f_{5}\) & \(f_{6}\) & \(f_{7}\) \\ \hline \(c_{1}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \(c_{2}\) & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ \(c_{3}\) & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ \(c_{4}\) & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ \hline \hline \end{tabular}
\end{table} TABLE I: An example of an ECOC for \(k=4\) classes with \(\mathcal{N}=7\)-bit codewords.
Fig. 3 shows how the number of base classifiers grows with the number of classes for these class binarization techniques. OvO requires a polynomial number of base classifiers (\(O(k^{2})\)), whereas OvA needs only a linear number of classifiers (\(O(k)\)). For minimal ECOC and mid-length ECOC, \(O(\log(k))\) binary classifiers are required. The number of base classifiers used in exhaustive ECOC is exponential (\(O(2^{k})\)).
The exhaustive ECOC is generally not applied to multiclass classifications with a large number of classes because it requires too many binary classifiers. A mid-length ECOC can be constructed by choosing codewords from the exhaustive ECOC that satisfy the row and column separation conditions: when the number of binary classifiers is \(\mathcal{N}\), \(\mathcal{N}\) columns are randomly chosen from the exhaustive code to construct the random code matrix, as sketched below. For example, if \(k=4\) and \(\mathcal{N}=3\), we can choose \(f_{1}\), \(f_{2}\), and \(f_{3}\) from the exhaustive ECOC (Table I) to construct a mid-length ECOC. By contrast, we cannot choose \(f_{5}\), \(f_{6}\), and \(f_{7}\), because in that case the codeword of \(c_{1}\) would be exactly the same as the codeword of \(c_{2}\), so that classes \(c_{1}\) and \(c_{2}\) could not be distinguished.
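The following sketch illustrates this construction under the stated row and column conditions; `exhaustive_code`, `is_valid`, and `random_mid_length_ecoc` are illustrative names, and a 0/1 exhaustive code matrix of shape \(k\times(2^{k-1}-1)\) is assumed given:

```
# A sketch of constructing a random mid-length ECOC by sampling columns
# from the exhaustive code while enforcing the row/column conditions.
import numpy as np

def is_valid(matrix):
    rows = {tuple(r) for r in matrix}
    if len(rows) < matrix.shape[0]:              # rows must be unique codewords
        return False
    cols = [tuple(c) for c in matrix.T]
    for a in range(len(cols)):
        for b in range(a + 1, len(cols)):
            complement = tuple(1 - v for v in cols[b])
            # columns must be neither identical nor complementary
            if cols[a] == cols[b] or cols[a] == complement:
                return False
    return True

def random_mid_length_ecoc(exhaustive_code, n_cols, rng=None):
    rng = rng or np.random.default_rng()
    while True:                                  # rejection sampling
        idx = rng.choice(exhaustive_code.shape[1], size=n_cols, replace=False)
        candidate = exhaustive_code[:, idx]
        if is_valid(candidate):
            return candidate
```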
In general, an optimized ECOC performs better than a normal ECOC of the same size [23]. In this work, we investigate whether an optimized minimal ECOC outperforms a normal minimal ECOC (see subsection V-B). NEAT evolves neural networks that constitute the set of binary classifiers, and the Hamming distance is used to determine the final prediction. The pseudo-code of ECOC-NEAT is shown in Algorithm 1.
```
Data: An ECOC matrix \(\mathbb{M}\) with \(\mathcal{N}\) columns and, for each base classifier \(j\in[1,\mathcal{N}]\), the corresponding positive dataset \(\mathcal{S}_{j}^{+}\) and negative dataset \(\mathcal{S}_{j}^{-}\); test dataset \(\mathcal{X}\); initialize the binary classifier set \(\mathbb{F}(f_{1},f_{2},...,f_{\mathcal{N}})\).
Output: \(\mathcal{N}\) binary classifiers; prediction \(\mathcal{Y}\) for each test sample \(x\) in \(\mathcal{X}\).
// Generating base classifiers by NEAT
foreach \(j\in[1,\mathcal{N}]\) do
    while \(i<=\mathcal{G}/\mathcal{N}\) (\(\mathcal{G}\) is the total number of generations) do
        binary-NEAT evolves neural networks \(\mathbb{N}_{i}\) for predicting the data from \(\mathcal{S}_{j}^{+}\) and \(\mathcal{S}_{j}^{-}\).
    \(f_{j}=\operatorname*{argmax}(\mathbb{N}_{i})\).
    Update \(\mathbb{F}(f_{1},f_{2},...,f_{\mathcal{N}})\).
// Multiclass classification
foreach \(x\in\mathcal{X}\) do
    binary classification of the \(\mathcal{N}\) base classifiers on test sample \(x\):
    \(\mathbb{F}(x)\leftarrow\{f_{1}(x),f_{2}(x),\cdots,f_{\mathcal{N}}(x)\}\);
    // multiclass prediction by Hamming distance
    \(\mathcal{Y}\leftarrow\operatorname*{argmin}_{r}\Delta(\mathbb{M}_{r},\mathbb{F}(x)),\ r\in[1,k]\);
```
**Algorithm 1** ECOC-NEAT for multiclass classification
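As a complement to Algorithm 1, the prediction phase can be sketched as follows; `classifiers` is assumed to be the list of evolved base classifiers, each returning 0 or 1:

```
# A sketch of the prediction phase of Algorithm 1: the N base classifiers
# produce a codeword for x, which is matched to the rows of the ECOC
# matrix M by minimum Hamming distance.
import numpy as np

def ecoc_predict(classifiers, M, x):
    codeword = np.array([f(x) for f in classifiers])   # F(x)
    hamming = np.sum(M != codeword, axis=1)            # distance to each row M_r
    return int(np.argmin(hamming))                     # class with minimum distance
```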
## IV Experiments
In this section, we introduce the datasets, hyperparameter configurations, implementation, and the measurements.
### _Datasets_
In this work, we choose three well-known datasets: _Digit_ from the scikit-learn package [24], and _Satellite_ and _Ecoli_ from the machine learning repository of the University of California, Irvine (UCI) [25]. These three high-quality datasets are prevalent and widely used in multiclass classification tasks. Their properties are summarized in Table II.
Fig. 3: Number of classifiers over the number of classes for class binarization techniques.
Fig. 2: ECOC-NEAT for multiclass classification. The left part shows an evolved base (binary) classifier by NEAT. The right part shows the ECOC-NEAT with base classifiers.
### _Experimental Setup_
This work compares the newly proposed ECOC-NEAT with the standard NEAT, OvO-NEAT, and OvA-NEAT. The hyper-parameter configuration of NEAT is summarized in Table III and is the same for evolving binary classifiers on the three datasets. The input-layer dimension of the evolved binary classifiers equals the feature dimension of the dataset (the last column in Table II). The output dimension in NEAT is set to 2 for evolving binary classifiers, whereas in the standard NEAT the output dimension equals the number of classes \(k\) for multiclass classification.
We set the number of generations to \(\mathcal{G}=3000\) for each evolution process of the standard NEAT. For a fair comparison, we apply the same total number of generations (\(\mathcal{G}=3000\)) to evolve the binary classifiers for the class binarization techniques. Specifically, each base classifier is generated by an evolution of \(\mathcal{G}/\mathcal{N}\) generations in NEAT when a class binarization technique uses \(\mathcal{N}\) classifiers.
We implement the standard and binary NEAT based on the open-source NEAT-Python 1. The experiments are run on a computer with a dual 8-core 2.4 GHz CPU (Intel Haswell E5-2630-v3) and 64 GB memory.
Footnote 1: [https://github.com/CodeRecclaimers/neat-python](https://github.com/CodeRecclaimers/neat-python)
## V Results
We present the results from the following four aspects: multiclass classification degradation, comprehensive comparison, evolutionary efficiency, and robustness.
### _Multiclass Classification Degradation_
The accuracy of multiclass classification generally decreases as the number of classes increases because the task becomes more difficult. We test the multiclass classification degradation of NEAT (including the standard NEAT and NEAT with class binarization) on the _Digit_ dataset, in which the number of classes varies from two to ten. For example, the two-class and three-class classifications predict the digits "0, 1" and "0, 1, 2", respectively.
#### V-A1 Multiclass Classification Degradation of the Standard NEAT
The standard NEAT is used to evolve neural networks for classifications from two to ten classes. The experiments are repeated ten times on the _Digit_ dataset. The convergence processes of the standard NEAT are shown in Fig. 4, where we present the training accuracy over generations during the evolution of neural networks for 2-10 classes.
The results clearly show that the accuracy decreases dramatically as the number of classes increases. The classifications of two and three classes quickly converge to a high accuracy of more than 95% with narrow confidence intervals, which indicates that their evolution processes are steady. However, the accuracy converges to catastrophic values for classifications with many classes. In particular, the 10-class classification (yellow line) converges slowly to an accuracy of less than 50%. In summary, the results show that NEAT performs well for classifications with a few classes (particularly binary classification), but its performance degrades significantly as the number of classes increases.
#### V-A2 Multiclass Classification Degradation of NEAT with Class Binarization
We investigate the degradation of the standard NEAT, OvO-NEAT, OvA-NEAT, and three different sizes of ECOC-NEAT (minimal ECOC-NEAT, mid-length ECOC-NEAT, and exhaustive ECOC-NEAT) for multiclass classification. Fig. 5 presents the performance of these methods for multiclass classifications with the number of classes varying from three to ten.
The results show that the accuracy of NEAT with class binarization techniques, like that of the standard NEAT, decreases as the number of classes increases, but the decrease is considerably milder than for the standard NEAT. In particular, exhaustive ECOC-NEAT, OvO-NEAT, and mid-length ECOC-NEAT remain remarkably robust as the number of classes increases. Moreover, they exhibit higher accuracy and less variance than the standard NEAT. The mid-length ECOC-NEAT with a
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & Training samples & Test samples & Classes \((k)\) & Dimensions of feature \\ \hline Digit & 1,617 & 180 & 10 & 64 \\ Satellite & 4,435 & 2,000 & 6 & 36 \\ Ecoli & 336 & 10-fold & 8 & 7 \\ \hline \hline \end{tabular}
\end{table} TABLE II: The properties of three popular datasets of _Digit_, _Satellite_ and _Ecoli_.
Fig. 4: The convergence processes of NEAT for the multiclass classification from two to ten classes. The shadows show 95% confidence intervals.
\begin{table}
\begin{tabular}{l l l l} \hline \hline parameters & value & parameters & value \\ \hline pop\_size & 200 & weight\_mutate\_rate & 0.8 \\ elitism & 2 & activation\_mutate\_rate & 0.3 \\ initial\_connection & 0.1 & conn\_delete\_prob & 0.1 \\ conn\_add\_prob & 0.8 & node\_delete\_prob & 0.1 \\ node\_add\_prob & 0.7 & bias\_mutate\_rate & 0.7 \\ survival\_threshold & 0.2 & max\_fitness\_threshold & 1.0 \\ max\_stagnation & 15 & compatibility\_threshold & 2.5 \\ elite\_species & 3 & compatibility\_weight\_coefficient & 0.6 \\ feed\_forward & true & compatibility\_disjoint\_coefficient & 1.0 \\ \hline \hline \end{tabular}
\end{table} TABLE III: The parameter configurations of NEAT.
moderate number of base classifiers provides competitive performance compared to OvO-NEAT and the exhaustive ECOC-NEAT, which require a large number of base classifiers. The exhaustive ECOC-NEAT outperforms the mid-length ECOC-NEAT, which in turn outperforms the minimal ECOC-NEAT. We conclude that ECOC-NEAT methods with large ECOCs (i.e., a large number of base classifiers) generally tend to perform better than those with small ECOCs. Intriguingly, the minimal ECOC-NEAT with only a few base learners still performs significantly better than the standard NEAT for multiclass classification.
### _Comprehensive Comparison_
We investigate the standard NEAT, OvO-NEAT, OvA-NEAT, and the proposed ECOC-NEAT methods with different codes, including the minimal, mid-length, and exhaustive codes, on the three datasets. Specifically, we apply mid-length ECOC-NEAT with different sizes to investigate the relationship between the size of ECOC-NEAT and the resulting accuracy. The performance of these methods is shown in Table IV, where we present 1) testing accuracy (accuracy on the test set), 2) variance of the testing accuracy over ten repetitions, 3) training accuracy on the training set, 4) average training accuracy of each base classifier, and 5) average training time per generation.
Fig. 5: Testing accuracy over number of classes for the multiclass classification methods.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Dataset & & Method & Number of classifiers & Testing accuracy & Variance & Training accuracy & \(\overline{\mathcal{A}_{b}}\) & Average training time/Generation (s) \\ \hline \multirow{9}{*}{Digit} & Existing & Standard NEAT & 1 & 0.449 & 9.56\(\times 10^{-4}\) & 0.484 & 0.484 & 13.74 \\ & & OvO-NEAT & 45 & 0.866 & 4.94\(\times 10^{-4}\) & **0.953** & **0.989** & **0.99** \\ & & OvA-NEAT & 10 & 0.740 & 10.46\(\times 10^{-4}\) & 0.820 & 0.976 & 8.40 \\ \cline{2-9} & & Minimal ECOC-NEAT & 4 & 0.535 & 72.01\(\times 10^{-4}\) & 0.614 & 0.865 & 10.49 \\ & & 10-bit ECOC-NEAT & 10 & 0.651 & 15.78\(\times 10^{-4}\) & 0.724 & 0.860 & 8.49 \\ & & 45-bit ECOC-NEAT & 45 & 0.819 & 9.04\(\times 10^{-4}\) & 0.876 & 0.837 & 5.62 \\ & & 100-bit ECOC-NEAT & 100 & 0.845 & 6.77\(\times 10^{-4}\) & 0.894 & 0.812 & 4.96 \\ & & 250-bit ECOC-NEAT & 250 & 0.876 & 2.76\(\times 10^{-4}\) & 0.908 & 0.793 & 4.60 \\ & & Exhaustive ECOC-NEAT & 511 & **0.899** & **0.95\(\times 10^{-4}\)** & 0.909 & 0.783 & 4.53 \\ \hline \multirow{10}{*}{Ecoli.} & Existing & Standard NEAT & 1 & 0.754 & 0.99\(\times 10^{-4}\) & 0.774 & 0.774 & 5.09 \\ & & OvO-NEAT & 28 & 0.842 & 0.79\(\times 10^{-4}\) & **0.914** & **0.989** & **0.12** \\ & & OvA-NEAT & 8 & 0.787 & 1.53\(\times 10^{-4}\) & 0.848 & 0.979 & 0.75 \\ \cline{2-9} & & Minimal ECOC-NEAT & 3 & 0.765 & 2.28\(\times 10^{-4}\) & 0.816 & 0.922 & 1.11 \\ & & 8-bit ECOC-NEAT & 8 & 0.790 & 3.24\(\times 10^{-4}\) & 0.844 & 0.926 & 0.94 \\ & & 15-bit ECOC-NEAT & 15 & 0.828 & 3.90\(\times 10^{-4}\) & 0.870 & 0.922 & 0.84 \\ & & 28-bit ECOC-NEAT & 28 & **0.849** & 0.79\(\times 10^{-4}\) & 0.881 & 0.917 & 0.73 \\ & & 40-bit ECOC-NEAT & 40 & 0.848 & **0.46\(\times 10^{-4}\)** & 0.885 & 0.914 & 0.68 \\ & & 60-bit ECOC-NEAT & 60 & 0.848 & 2.16\(\times 10^{-4}\) & 0.885 & 0.910 & 0.62 \\ & & Exhaustive ECOC-NEAT & 127 & 0.837 & 0.88\(\times 10^{-4}\) & 0.873 & 0.900 & 0.55 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Comparison of different methods on the three datasets of _Digit_ (10 classes), _Satellite_ (6 classes), and _Ecoli_ (8 classes). Each method is run ten times and the results are averaged. The total number of generations for each method is identical (\(\mathcal{G}=3000\)). \(\mathcal{N}\)-bit ECOC-NEAT denotes a mid-length ECOC-NEAT with \(\mathcal{N}\) base classifiers. \(\overline{\mathcal{A}_{b}}\) represents the average training accuracy of the binary classifiers.
The results show that NEAT with class binarization techniques significantly outperforms the standard NEAT in terms of accuracy. ECOC-NEAT, even the minimal ECOC-NEAT, achieves higher accuracy than the standard NEAT on the three datasets. The exhaustive ECOC-NEAT, with the largest number of base classifiers, exhibits the smallest variance, indicating strong robustness. Conversely, the minimal ECOC-NEAT with a few binary classifiers exhibits large variance, indicating fluctuating performance.
The average training accuracy of each base classifier shows the performance of each evolved binary classifier on the training dataset. The binary classifiers in OvO-NEAT achieve the best average training accuracy because OvO decomposes the multiclass classification into simple binary classification tasks. The evolved binary classifiers in the ECOC-NEAT methods achieve lower average accuracy than those of OvO-NEAT and OvA-NEAT because the binary classifications in ECOC-NEAT are generally challenging and each classifier is assigned only a few \((\mathcal{G}/\mathcal{N})\) generations to evolve. Nevertheless, the ECOC-NEAT methods still achieve high accuracy for multiclass classification due to the error-correcting quality of the ECOC ensemble.
The NEAT variants in these methods take different computation times to evolve binary classifiers. The standard NEAT takes much more computation time per generation to evolve classifiers than NEAT with class binarization techniques.
### _Size of ECOC-NEAT_
The size of an ECOC has a significant influence on its performance for multiclass classification [26]. To further observe this influence, we visualize the testing accuracy and variance over the size of the ECOC (the results in Table IV) in Fig. 6. The visualization shows that the testing accuracy increases as the number of base classifiers increases, and that small ECOC-NEAT exhibits fluctuating testing accuracy. A similar observation can be made from the results on the _Satellite_ and _Ecoli_ datasets (Table IV).
### _Quality of ECOC-NEAT_
Besides the size of an ECOC, its quality is another crucial factor for the performance of ECOC-NEAT. The minimal ECOC-NEAT with a few base classifiers is generally sensitive to the quality of the ECOC; thus, we concentrate on the quality of the minimal ECOC-NEAT.
#### Iv-D1 On the Satellite Dataset
An ECOC-NEAT whose binary classifiers have high training accuracy generally achieves high testing accuracy for multiclass classification. The binary classification tasks in an ECOC-NEAT generally vary in difficulty. The exhaustive ECOC for the _Satellite_ dataset with \(k=6\) classes has 31 columns (see Table IV). We run an exhaustive ECOC-NEAT to evolve the 31 binary classifiers on the _Satellite_ dataset for three repetitions. The training accuracy of these 31 binary classifiers is shown in the bar chart of Fig. 7. The results show that the binary classifiers in the exhaustive ECOC-NEAT achieve significantly different accuracies, from around 70% to 98%.
For the _Satellite_ dataset with \(k=6\) classes, the minimal ECOC needs a minimum of a 3-bit codeword (three columns). We randomly choose three columns from the 31 columns of the exhaustive ECOC to construct minimal ECOCs. For an exhaustive ECOC with 31 columns, there are \(\binom{31}{3}=4495\) combinations, and 420 of these 4495 combinations are valid minimal ECOCs that satisfy both the row and column conditions. We run all 420 minimal ECOC-NEAT on the _Satellite_ dataset. Fig. 8 (a) shows the distribution of the average training accuracy of the binary classifiers (denoted \(\overline{\mathcal{A}_{b}}\)) in these 420 minimal ECOCs. The 420 minimal ECOCs differ in quality, with average training accuracies of the binary classifiers ranging from around \(70\%\) to \(90\%\). We divide these 420 minimal ECOCs into three performance levels of low, middle, and high accuracy with ratios of 10%, 80%, and 10%, respectively. The 10% of minimal ECOCs with high accuracy are the optimized minimal ECOCs. The results indicate that different ECOCs achieve significantly different accuracy and that the quality of the ECOC is crucial for the high accuracy of the binary classifiers.
Moreover, we randomly choose minimal ECOCs with low, middle, and high accuracy, respectively. Each minimal ECOC-NEAT evolves three binary classifiers (the three columns of the minimal ECOC) with an evolution of 3000 generations in total for multiclass classification; the results are shown in Table V. The results indicate that the average training accuracy of
Fig. 6: Testing accuracy over ECOC-NEAT size on _Digit_.
Fig. 7: Training accuracy of the 31 binary classifiers in an exhaustive ECOC-NEAT on the _Satellite_ dataset for three repetitions.
the binary classifiers significantly impacts the testing accuracy. The optimized minimal ECOC-NEAT achieves a testing accuracy of \(0.7735\), which is much higher than that of the low-accuracy minimal ECOC-NEAT and the standard NEAT (\(0.6377\) in Table IV) for 6-class classification on the _Satellite_ dataset. Conversely, the low-accuracy minimal ECOC-NEAT achieves a testing accuracy similar to that of the standard NEAT.
Finally, we randomly choose 6 ECOCs each from the high-, middle-, and low-accuracy ECOCs, i.e., 18 ECOCs in total, to observe the relationship between their training/testing error and the average training error of the binary classifiers (\(1-\overline{\mathcal{A}_{b}}\)), as shown in Fig. 9 (a). Lines are fitted to the data and indicate that the training/testing error is linear in the average training error of the binary classifiers. The optimized minimal ECOCs produce the results shown as the bottom-left points, with low training/testing error and low average training error of the binary classifiers.
#### Iv-D2 On the Digit Dataset
For the _Digit_ dataset with 10 classes, an exhaustive ECOC and a minimal ECOC consist of 511 and four base classifiers, respectively (as shown in Table IV). An exhaustive ECOC with 511 columns can be used to construct \(\binom{511}{4}=2,807,768,705\) possible 4-bit minimal ECOCs (4 columns), which is infeasible to investigate exhaustively. In this work, we randomly choose 10,000 minimal ECOCs to investigate the performance of various minimal ECOCs on the _Digit_ dataset. The distribution of the average training accuracy of the binary classifiers is shown in Fig. 8 (b); interestingly, it resembles a normal distribution. We divide these minimal ECOCs into three performance levels of low, middle, and high accuracy with ratios of 10%, 80%, and 10%, respectively. With the standard NEAT budget of 3000 generations, each of the 511 binary classifiers is generated by an evolution of \(\lceil 3000/511\rceil\approx 6\) generations. Theoretically and empirically, the average training accuracy of the binary classifiers can be improved with an evolution longer than 6 generations, thus leading to higher accuracy for multiclass classification on the _Digit_ dataset.
We randomly choose minimal ECOCs from the low-, middle-, and high-accuracy groups (in Fig. 8 (b)), respectively. These minimal ECOC-NEAT evolve their binary classifiers with an evolution of \(3000/4=750\) generations each. The results of these minimal ECOC-NEAT on the _Digit_ dataset are shown in Table V. The high-accuracy 4-bit ECOC-NEAT achieves a remarkable testing accuracy that is comparable with the 10-bit mid-length ECOC-NEAT (a testing accuracy of 0.6506, see Table IV) while saving 60% of the classifiers (from 10 to 4). The low-accuracy ECOC-NEAT still achieves a low testing accuracy of 0.4832, which is only slightly superior to the standard NEAT.
We randomly choose 9 minimal ECOCs each from the low-, middle-, and high-accuracy groups, i.e., 27 ECOCs in total, to investigate the relationship between their training/testing error and the average training error of the binary classifiers (\(1-\overline{\mathcal{A}_{b}}\)), as shown in Fig. 9 (b). Lines are fitted to the data and indicate that the training/testing error is linear in the average training error of the binary classifiers. These 27 minimal ECOC-NEAT generate their binary classifiers by an evolution of \(3000/4=750\) generations, and thus the binary classifiers achieve a higher average training accuracy (1 - average training error of the binary classifiers) than the results in Fig. 8 (b).
#### Iv-D3 On the Ecoli Dataset
For the _Ecoli_ dataset with 8 classes, we similarly construct minimal ECOCs (3-bit) by choosing columns from the exhaustive ECOC; the distribution of the average training accuracy of the binary classifiers is shown in Fig. 8 (c). We categorize these minimal ECOC-NEAT into three levels of high, middle, and low average training accuracy of the binary classifiers.
Moreover, we randomly choose a minimal ECOC from the low-, middle-, and high-accuracy groups, respectively, and run each minimal ECOC-NEAT to evolve its binary classifiers with an evolution of 1000 (3000/3) generations. The results of the low-, middle-, and high-accuracy (optimized) minimal ECOC-NEAT on the _Ecoli_ dataset are shown in Table V. The high-accuracy 3-bit minimal ECOC-NEAT achieves a testing accuracy close to that of the 15-bit mid-length ECOC-NEAT. The low-accuracy ECOC-NEAT achieves a low testing accuracy of 0.6782, which is even lower than that of the standard NEAT.
In addition, we randomly choose 7 minimal ECOCs each from the low-, middle-, and high-accuracy (optimized) minimal ECOCs (i.e., 21 ECOCs in total) to validate the relationship between the quality of the ECOCs and their training/testing error, as shown in Fig. 9 (c). The fitted lines again indicate a linear relation between the quality of the ECOCs and their training/testing error.
To summarize, a high-quality ECOC generally yields high testing accuracy. It is therefore crucial to design a high-quality ECOC for multiclass classification with neuroevolution approaches.
### _Evolutionary Efficiency_
We observe the convergence of the training accuracy and the average training accuracy of the binary classifiers during evolution. We randomly choose an optimized minimal ECOC-NEAT from Table V and a 10-bit ECOC-NEAT from Table IV, and run them for 10 repetitions on the _Digit_ dataset. The minimal ECOC-NEAT and the 10-bit mid-length ECOC-NEAT generate their binary classifiers with evolutions of 750 and 300 generations, respectively. The results are shown in Fig. 10.
The results show that the training accuracy follows a convergence process very similar to that of the average training accuracy of the binary classifiers. Both increase dramatically in the beginning and gradually converge to a stable value over generations. The high-accuracy 4-bit minimal ECOC-NEAT achieves a training accuracy of approximately 72%, which is even higher than the 10-bit mid-length ECOC-NEAT with a training accuracy of 71%.
Moreover, we compare the training accuracy of the standard NEAT and NEAT with class binarization techniques during evolution, as shown in Fig. 11. The number of generations for each evolution of ECOC-NEAT is \(\mathcal{G}/\mathcal{N}\), which differs across the various ECOC-NEAT. To compare the various ECOC-NEAT on the same scale, we apply proportional scaling to match an identical x-axis. For example, the 10-bit mid-length ECOC-NEAT with an evolution of 300 generations for each binary classifier in Fig. 10 is scaled 10 times in Fig. 11.
The results show that NEAT with class binarization techniques performs significantly better in terms of accuracy than the standard NEAT for multiclass classification. OvO-NEAT, exhaustive ECOC-NEAT, and mid-length ECOC-NEAT (including the 250-bit, 100-bit, and 45-bit ECOC-NEAT) achieve remarkable training accuracy. NEAT with a large ECOC (e.g., exhaustive ECOC-NEAT, OvO-NEAT) generally performs better than NEAT with a small ECOC (e.g., 4-bit ECOC-NEAT). Compared to the normal 4-bit ECOC-NEAT with a training accuracy of approximately 60%, the optimized 4-
Fig. 10: The training accuracy and average training accuracy of binary classifiers of 4-bit optimized minimal ECOC-NEAT and 10-bit mid-length ECOC-NEAT on the _Digit_ dataset. The lines and shadow represent the mean and 95% confidence intervals for 10 repetitions.
Fig. 8: Distribution of all minimal ECOC-NEAT in terms of the average training accuracy of the binary classifiers on the three datasets of _Satellite_, _Digit_, and _Ecoli_. The frequency on the right vertical axis represents the number of ECOCs.
Fig. 9: Training/testing error and average training error of the binary classifiers. Lines are fitted to the data, and \(R^{2}\) is the goodness of fit.
bit ECOC-NEAT achieves an efficient multiclass classification with a training accuracy of approximately 72%. Moreover, the optimized 4-bit ECOC-NEAT follows an evolution process (the purple line) very similar to that of the 10-bit ECOC-NEAT (the brown line). The results demonstrate that the size and quality of the ECOC are crucial for the multiclass classification performance of ECOC-NEAT.
### _Robustness_
Robustness is an important measure in the evaluation of multiclass classification. ECOC-NEAT usually exhibits a remarkable ability to correct errors in multiclass classification; Verma and Swami applied ECOC to improve the adversarial robustness of deep neural networks [27]. Although OvO-NEAT performs outstandingly for multiclass classification, its robustness against errors is insufficient compared to ECOC-NEAT. In this work, we apply accuracy-rejection curves to analyze the robustness of NEAT with class binarization techniques. Fig. 12 shows the accuracy-rejection curves of OvO-NEAT and the ECOC-NEAT variants.
The large ECOCs perform better than the small ECOCs regardless of whether the rejection rate is low or high. Large ECOC-NEAT always outperforms OvO-NEAT, which means it has consistently stronger robustness against errors than OvO-NEAT. Comparing the small 10-bit ECOC-NEAT with OvO-NEAT, there is an intersection between the two curves; the curves of the large ECOCs intersect the curve of OvO-NEAT at small rejection rates. From the rejection rate of the intersection onwards, ECOC-NEAT outperforms OvO-NEAT. For example, at rejection rates greater than 80%, even the 10-bit ECOC-NEAT outperforms OvO-NEAT, meaning that the 10-bit ECOC-NEAT gives highly convincing predictions for the retained 20% of the test samples (with a testing accuracy of 95%). Briefly, ECOC-NEAT has strong robustness against errors, especially with long codes, whereas the robustness of OvO-NEAT appears weak.
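For reference, one plausible way to compute such a curve is sketched below, assuming a per-sample confidence score (e.g., the negative minimum Hamming distance for ECOC-NEAT, or the vote margin for OvO-NEAT); this is an illustration rather than the exact procedure behind Fig. 12:

```
# A sketch of computing an accuracy-rejection curve from per-sample
# confidences and correctness indicators.
import numpy as np

def accuracy_rejection_curve(confidence, correct, rejection_rates):
    """For each rejection rate r, reject the r fraction of least confident
    predictions and report accuracy on the retained samples."""
    order = np.argsort(confidence)              # least confident first
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    accuracies = []
    for r in rejection_rates:
        kept = correct[int(r * n):]             # drop the r*n least confident samples
        accuracies.append(kept.mean() if kept.size else float("nan"))
    return accuracies
```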
ECOC-NEAT exhibits strong robustness because its base classifiers complement each other when the number of base classifiers decreases. In this work, we investigate the robustness of ECOC-NEAT and OvO-NEAT as their number of base classifiers decreases. The results are shown in Fig. 13, where the size of the ECOC and OvO ensembles decreases from 45 base classifiers to one. We randomly choose base classifiers from the 45-bit ECOC-NEAT and OvO-NEAT to construct ensembles of various sizes, with ten repetitions. The results show that the testing accuracy of OvO-NEAT declines almost linearly as the number of base classifiers decreases, whereas the accuracy of ECOC-NEAT decreases only slightly. In particular, the accuracy of ECOC-NEAT hardly decreases when slightly fewer base classifiers are used, e.g., 40-bit ECOC-NEAT. The ECOC-NEAT with 22 base classifiers, i.e., roughly half of 45, still obtains a testing accuracy of approximately \(70\%\), dropping by only \(12\%\) from the testing accuracy of the 45-bit ECOC-NEAT (\(82\%\)). In contrast, OvO-NEAT with 22 base classifiers achieves a testing accuracy of \(45\%\), dropping by \(41\%\) from the testing accuracy of the 45-bit OvO-NEAT (\(86\%\)). This finding illustrates that ECOC-NEAT is more robust than OvO-NEAT when fewer base classifiers are ensembled for multiclass classification.
ECOC-NEAT thus provides the benefits of high accuracy, low variance, and strong robustness [28, 29], and an optimized minimal ECOC significantly outperforms a normally constructed ECOC [23].
In summary, we recommend OvO-NEAT and ECOC-NEAT with a large number of binary classifiers (e.g., mid-length ECOC-NEAT, or exhaustive ECOC-NEAT for a moderate number of classes) for tasks where a considerable number of generations is allowed. For tasks where only limited generations are allowed, we recommend optimized ECOC-NEAT with a small number of binary classifiers.
## VI Discussions and Future Work
### _Discussions_
In this section, we analyze the classification performance of these methods on different classes and the network complexity of the base classifiers.
#### Vi-A1 Behavior Analysis
We observe the classification performance on each class of these methods by analyzing the results of the standard NEAT and NEAT with class binarization techniques on the _Digit_ dataset 2. We apply the widely used metrics of precision, recall, and F1-score to evaluate the per-class classification of these methods. Moreover, we adopt popular averaging methods for precision, recall, and F1-score, resulting in a set of different average scores (macro-averaging, weighted-averaging, micro-averaging); see [30] for details of these averaging methods, and the sketch below for how they can be computed. We conduct the experiments for ten repetitions and average the results. The heatmaps of the precision, recall, and F1-score of these methods are visualized in Fig. 14, Fig. 15, and Fig. 16, respectively.
Footnote 2: It is not necessary to analyze the results on all three datasets.
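As a sketch, these metrics can be computed with scikit-learn as follows, where `y_true` and `y_pred` are hypothetical label arrays for the test set:

```
# A sketch of computing the per-class and averaged metrics with scikit-learn.
from sklearn.metrics import precision_recall_fscore_support

# Rows 10-12 of the heatmaps: micro-, weighted-, and macro-averaging.
for avg in ("micro", "weighted", "macro"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg)
    print(f"{avg}: precision={p:.4f}, recall={r:.4f}, f1={f1:.4f}")

# Rows 0-9 of the heatmaps: per-class scores (average=None returns arrays).
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=None)
```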
The classification precision on each class of the _Digit_ dataset, from "0" to "9", is shown in the heatmap of Fig. 14. The results show that the difficulty of classification differs across digits. Specifically, the digit "0" is predicted by all methods with a high accuracy of more than 90%, whereas all methods achieve low testing accuracy on the digits "3" and "8". The other digits are classified with diverse but largely satisfactory accuracies. The larger ECOC-NEAT generally achieves higher precision than the smaller ECOC-NEAT; for example, the micro-averaging precision increases from 0.5350 for the 4-bit ECOC-NEAT to 0.8189 for the 45-bit ECOC-NEAT. All ECOC-NEAT, including the small 4-bit ECOC-NEAT, outperform the standard NEAT. The precision of the standard NEAT once again verifies its low performance for multiclass classification. Exceptionally, the standard NEAT predicts the digit "0" with decent accuracy, which confirms that the digit "0" is distinctly predictable.
Fig. 15 shows the recall heatmap of the different methods for classifying the digit classes "0" to "9". The recall heatmap shows results consistent with the precision heatmap; for example, the recall of the digit classes "3" and "8" is usually low for all methods.
The F1-score is the harmonic mean of precision and recall and evaluates model performance comprehensively, conveying a balance between precision and recall. The F1-scores of the different methods on the _Digit_ dataset are shown in Fig. 16. The F1-score heatmap shows results consistent with the precision and recall heatmaps.
It is worth noticing that OvO-NEAT achieves a high precision on the digit "8" but a low precision on the digit "3" in Fig. 14. By contrast, its recall on the digit "8" is lower compared to the digit "3" in Fig. 15. We suppose that there are recognition errors between these two categories, and therefore compare the labels predicted by OvO-NEAT with the real labels to verify this hypothesis, as shown in Fig. 17. The results show that OvO-NEAT often incorrectly predicts the digit "3" as "8"; specifically, 44 samples of the digit "3" are incorrectly predicted as the digit "8". This explains why these methods achieve low testing accuracy on the digits "3" and "8". Intuitively, the digits "3" and "8" have similar shapes and are sometimes incorrectly recognized even by humans.
In summary, the three heatmaps of precision, recall, and F1-score reveal consistent conclusions: 1) NEAT with class binarization techniques, particularly ECOC-NEAT and
Fig. 14: Precision heatmap of these methods on the _Digit_ dataset. Rows from 0 to 9 are precision on the digit class from “0” to “9”. Rows 10, 11, 12 present micro-averaging precision, weighted-averaging precision, and macro-averaging precision, respectively. Columns represent various methods.
Fig. 15: Recall heatmap of different methods on the _Digit_ dataset. Rows 0 to 9 present the recall of digit class “0” to “9”. Rows 10 to 12 present micro-averaging recall, weighted-averaging recall, and macro-averaging recall, respectively. Columns represent different methods.
OvO-NEAT, outperforms the standard NEAT for multiclass classification, 2) large ECOC-NEAT generally achieves high precision, recall, and F1-score, and 3) the NEAT-based techniques (including the standard NEAT, OvO-NEAT, and ECOC-NEAT) perform diversely across classes, while large ECOC-NEAT is robust across the different classes.
#### Vi-A2 Network Complexity
Network complexity offers insight into the mechanisms of NEAT with class binarization techniques for multiclass classification. We investigate how the number of nodes and connections influences classification performance. Table VI shows the network complexity of the classifiers generated by the different NEAT-based methods for different numbers of classes on the _Digit_ dataset. These experiments are repeated ten times. The network complexity on the _Satellite_ and _Ecoli_ datasets is presented in Table VII and Table VIII. We report the average total number of nodes and connections over all base classifiers across ten repetitions, and the average number of nodes and connections of each base classifier (the values in brackets). For example, the exhaustive ECOC-NEAT generates three base classifiers with an average total of 107 nodes and 286 connections for 3 classes, and an average of 36 nodes and 95 connections per base classifier, over ten repetitions.
As the number of classes increases, it is reasonable to generate more complex neural networks with more nodes and connections to capture more complicated patterns. However, the results show that the standard NEAT struggles to generate neural networks with augmented nodes and connections as the number of classes increases, which largely causes its dramatic multiclass classification degradation. We hypothesize that the standard NEAT tends to eliminate evolved neural networks with more nodes and connections during evolution. In contrast, NEAT with class binarization techniques tends to generate neural networks with more nodes and connections as the number of classes increases, enabling remarkable multiclass classification. Although ECOC-NEAT often generates base classifiers with fewer and fewer nodes and connections as the number of classes increases, the increasing number of binary classifiers leads to an increasing total number of nodes and connections, which contributes to the remarkable multiclass classification performance. For example, a base classifier evolved by the exhaustive ECOC-NEAT for 3 classes has an average of 36 nodes and 95 connections, but that for 10 classes has an average of only 17 nodes and 17 connections; however, the total number of nodes and connections increases from 107 and 286 to 8711 and 8688, respectively, for the 10-class classification.
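As an illustration, the counts reported above can be read from a neat-python genome, which stores nodes and connections as dictionaries; counting only enabled connections is our assumption about the reported figures:

```
# A sketch of measuring network complexity of an evolved genome.
def genome_complexity(genome):
    n_nodes = len(genome.nodes)
    n_connections = sum(1 for conn in genome.connections.values() if conn.enabled)
    return n_nodes, n_connections
```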
### _Future Work_
Although this work investigates different class binarization techniques, multiple open issues remain that may provide new insights into ECOC-NEAT. First, ECOC-NEAT needs to train many binary classifiers, which generally takes considerable training time. Second, the Hamming distance used for matching the predicted codeword to the ECOC codewords is a basic matching strategy that can be improved. Third, the ECOC itself can be improved with different code designs. We would like to improve the performance of ECOC-NEAT by:
* using sparse codes (i.e., \(\mathbb{M}\in\{1,-1,0\}\)) instead of dense codes (i.e., \(\mathbb{M}\in\{1,-1\}\)), which are beneficial for efficient training [15].
* using other decoding strategies, such as loss-based decoding, instead of the Hamming distance to match the codewords of the ECOC. Loss-based decoding generally contributes to good performance because of the "confidence" information [15].
* applying low-density parity-check codes to design optimized ECOCs.
## VII Conclusion
This work investigates class binarization techniques in neuroevolution and proposes the ECOC-NEAT method, which applies ECOC to the neuroevolution algorithm NEAT for multiclass classification. We investigate 1) the performance of NEAT with different class binarization techniques for multiclass classification in terms of multiclass degradation, accuracy, training efficiency, and robustness on three popular datasets, and 2) the performance of ECOC-NEAT with different sizes and
Fig. 16: F1-score Heatmap of different multiclass classification methods. Rows 0 to 9 present the F1-score of digit class from “0” to “9”. Rows 10 to 12 present micro-averaging, weighted-averaging, and macro-averaging F1-score, respectively. Columns represent different methods.
Fig. 17: The heatmap of predicted label by OvO-NEAT and real label on the _Digit_ dataset.
qualities of ECOC. The results show that ECOC-NEAT offers various benefits compared to the standard NEAT and NEAT with other class binarization techniques for multiclass classification. Large ECOCs and optimized ECOCs generally contribute to better multiclass classification performance. ECOC-NEAT shows significant benefits in its flexible number of binary classifiers and strong robustness. In the future, ECOC-NEAT can be extended to other applications such as image classification and computer vision; moreover, ECOC can be applied to other neuroevolution algorithms for multiclass classification.
## Code and Data Availability
The code and data for this work are available at [https://github.com/lafengxiaoyu/NEAT-ensembles](https://github.com/lafengxiaoyu/NEAT-ensembles)
## CRediT authorship contribution statement
**Gongjin Lan:** Conceptualization, Methodology, Validation, Visualization, Investigation, Writing - original draft, Writing - review & editing. **Zhenyu Gao:** Conceptualization, Methodology, Coding and Validation, Visualization, Investigation, Writing - original draft, Writing - review & editing. **Lingyao Tong:** Writing - review & editing. **Ting Liu:** Writing - review & editing.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgment
This work is partially supported by the Guangdong Natural Science Funds for Young Scholar (No: 2021A1515110641).
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & & \multicolumn{8}{c}{Number of classes} \\ \cline{3-10} & & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \multirow{4}{*}{Standard NEAT} & \# Classifiers & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ & Generations & 3000(3000) & 3000(3000) & 3000(3000) & 3000(3000) & 3000(3000) & 3000(3000) & 3000(3000) & 3000(3000) \\ & Nodes & 43(43) & 68(68) & 61(61) & 55(55) & 60(60) & 63(63) & 68(68) & 72(72) \\ & Connections & 150(150) & 308(308) & 211(211) & 172(172) & 133(133) & 121(121) & 132(132) & 177(177) \\ \hline \multirow{4}{*}{OvO-NEAT} & \# Classifiers & 3 & 6 & 10 & 15 & 21 & 28 & 36 & 45 \\ & Generations & 3000(1000) & 3000(500) & 3000(300) & 3000(200) & 3003(143) & 2996(107) & 2988(83) & 3015(67) \\ & Nodes & 81(27) & 159(27) & 249(25) & 396(26) & 506(24) & 657(23) & 857(24) & 1039(23) \\ & Connections & 205(68) & 282(47) & 505(51) & 658(44) & 744(35) & 916(33) & 1177(33) & 1387(31) \\ \hline \multirow{4}{*}{OvA-NEAT} & \# Classifiers & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ & Generations & 3000(1000) & 3000(750) & 3000(600) & 3000(500) & 3003(429) & 3000(375) & 2997(333) & 3000(300) \\ & Nodes & 109(36) & 144(36) & 186(37) & 221(37) & 260(37) & 303(38) & 325(36) & 355(36) \\ & Connections & 341(114) & 464(116) & 661(132) & 649(108) & 598(85) & 967(121) & 836(93) & 931(93) \\ \hline \multirow{4}{*}{Minimal ECOC-NEAT} & \# Classifiers & 2 & 2 & 3 & 3 & 3 & 3 & 4 & 4 \\ & Generations & 3000(1500) & 3000(1500) & 3000(1000) & 3000(1000) & 3000(1000) & 3000(1000) & 3000(750) & 3000(750) \\ & Nodes & 77(39) & 98(49) & 132(44) & 142(47) & 139(46) & 120(40) & 166(42) & 174(44) \\ & Connections & 207(104) & 492(246) & 475(158) & 479(160) & 471(157) & 493(164) & 518(130) & 540(135) \\ \hline \multirow{4}{*}{Mid-length ECOC-NEAT} & \# Classifiers & 3 & 7 & 15 & 26 & 29 & 30 & 32 & 34 \\ & Generations & 3000(1000) & 3003(429) & 3000(200) & 2990(115) & 2987(103) & 3000(100) & 3008(94) & 2992(88) \\ & Nodes & 107(36) & 266(38) & 477(32) & 739(28) & 796(27) & 831(28) & 848(27) & 881(26) \\ & Connections & 286(95) & 783(112) & 1012(67) & 1300(50) & 1450(50) & 1347(45) & 1353(42) & 1416(42) \\ \hline \multirow{4}{*}{Exhaustive ECOC-NEAT} & \# Classifiers & 3 & 7 & 15 & 31 & 63 & 127 & 255 & 511 \\ & Generations & 3000(1000) & 3003(429) & 3000(200) & 3007(97) & 3024(48) & 3048(24) & 3060(12) & 3066(6) \\ & Nodes & 107(36) & 266(38) & 477(32) & 836(27) & 1446(23) & 2988(24) & 4740(19) & 8711(17) \\ & Connections & 286(95) & 783(112) & 1012(67) & 1317(42) & 1903(30) & 3060(24) & 5060(20) & 8688(17) \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Network complexity of the classifiers generated by different NEAT-based methods for different numbers of classes on the _Digit_ dataset. The value and the value in brackets are the average total number over all base classifiers and the average number per base classifier over ten repetitions, respectively.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & Generations & \#Classifiers & Nodes & Connections \\ \hline Standard NEAT & 3000 (3000) & 1 & 29 (29) & 262 (262) \\ OvO-NEAT & 2996 (107) & 28 & 170 (6) & 254 (9) \\ OvA-NEAT & 3000 (375) & 8 & 89 (11) & 323 (40) \\ Minimal ECOC & 3000 (1000) & 3 & 44 (15) & 272 (91) \\ 8-bit ECOC & 3000 (375) & 8 & 106 (13) & 457 (57) \\
15-bit ECOC & 3025 (200) & 15 & 183 (12) & 634 (42) \\
28-bit ECOC & 2996 (107) & 28 & 309 (11) & 880 (31) \\
40-bit ECOC & 3000 (75) & 40 & 407 (10) & 1007 (25) \\
60-bit ECOC & 3000 (50) & 60 & 556 (9) & 1202 (20) \\ Exhaustive ECOC & 3048 (24) & **127** & **964**\((8)\) & **1321**\((10)\) \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Network complexity of generated classifiers by different NEAT-based methods on the _Ecoli_. dataset.
\begin{table} \end{table} TABLE VII: Network complexity of generated classifiers by different NEAT-based methods on the _Satellite_ dataset.
2301.10874 | Recursive deep learning framework for forecasting the decadal world
economic outlook | The gross domestic product (GDP) is the most widely used indicator in
macroeconomics and the main tool for measuring a country's economic output. Due
to the diversity and complexity of the world economy, a wide range of models
have been used, but there are challenges in making decadal GDP forecasts given
unexpected changes such as emergence of catastrophic world events including
pandemics and wars. Deep learning models are well suited for modelling temporal
sequences and time series forecasting. In this paper, we develop a deep
learning framework to forecast the GDP growth rate of the world economy over a
decade. We use the Penn World Table as the data source featuring 13 countries
prior to the COVID-19 pandemic, such as Australia, China, India, and the United
States. We present a recursive deep learning framework to predict the GDP
growth rate in the next ten years. We test prominent deep learning models and
compare their results with traditional econometric models for selected
developed and developing countries. Our decadal forecasts reveal that most
of the developed countries would experience economic growth slowdown,
stagnation and even recession within five years (2020-2024). Furthermore, our
model forecasts show that only China, France, and India would experience stable
GDP growth. | Tianyi Wang, Rodney Beard, John Hawkins, Rohitash Chandra | 2023-01-25T23:47:34Z | http://arxiv.org/abs/2301.10874v2 | # Recursive deep learning framework for forecasting the decadal world economic outlook
###### Abstract
Gross domestic product (GDP) is the most widely used indicator in macroeconomics and the main tool for measuring a country's economic ouput. Due to the diversity and complexity of the world economy, a wide range of models have been used, but there are challenges in making decadal GDP forecasts given unexpected changes such as pandemics and wars. Deep learning models are well suited for modeling temporal sequences have been applied for time series forecasting. In this paper, we develop a deep learning framework to forecast the GDP growth rate of the world economy over a decade. We use Penn World Table as the source of our data, taking data from 1980 to 2019, across 13 countries, such as Australia, China, India, the United States and so on. We test multiple deep learning models, LSTM, BD-LSTM, ED-LSTM and CNN, and compared their results with the traditional time series model (ARIMA,VAR). Our results indicate that ED-LSTM is the best performing model. We present a recursive deep learning framework to predict the GDP growth rate in the next ten years. We predict that most countries will experience economic growth slowdown, stagnation or even recession within five years; only China, France and India are predicted to experience stable, or increasing, GDP growth.
keywords: Deep learning, GDP, growth rate, time series forecasting, world economy outlook +
Footnote †: journal:
## 1 Introduction
Economists and policy makers rely on a wide range of macroeconomic indicators to guide decisions and social policies that impact the economies of the world [57]. Among macroeconomic indicators, gross domestic product (GDP) is the most widely used and well-known. It is a measure of the market value of all the final goods and services produced in a specific time period in a country [16]. The GDP is seen as a powerful statistical indicator of national development and progress [67] and is often used as a means to measure the economic health of a country. The GDP is also linked with the employment rate [3] and gives an indication of trade and investment opportunities [9]. Expectations are a key element of macroeconomic theories; they refer to the forecasts or views that decision makers hold about future prices or other economic inputs [34; 17]. Macroeconomic forecasts can also be expected to influence how individuals anticipate the economy will evolve. It is precisely because GDP is closely related to many other macroeconomic indicators, such as stock prices and the unemployment rate [43], that it is regarded as the "core" metric for measuring the development and growth of a country. The forecasting of GDP is critical for economic policy since inaccurate forecasts can lead to errors in decision-making [74]. Furthermore, forecasts routinely fail to capture large changes [35], which are likely to be the most critical for effective policy responses. Some international economic and financial organizations publish forecast reports for government, corporate, or even personal reference; for example, the International Monetary Fund (IMF) publishes reports every few months to provide its views on the world's economic development in the coming months or years, and some of these reports [54] include views on GDP, which influence countries' domestic or foreign policies [111]. For a recent overview of macroeconomic forecasting, see [41].
GDP forecasting often requires the use of exogenous variables; in the machine learning context, these would be referred to as features, such as oil and stock prices, which can be collected more frequently than quarterly GDP data [87]. GDP forecasting has necessitated the use of domain-specific modelling techniques that merge data across timescales, including _bridge equations_ and _dynamic factor models_ [78; 38; 42; 88]. These models are generally linear equations composed of lagged and differenced exogenous and auto-regressive variables [82]. They fit within the broader family of traditional time series models defined by the auto-regressive integrated moving average (ARIMA) [11] and vector auto-regression (VAR) methodologies [103]. The weakness of these approaches is their restriction to linear relationships, which has been a necessary compromise in the absence of large data sets.
ever, as these approaches have consistently failed to identify crucial changes in the economic climate [35; 88] and it remains an open research question to determine the best forecasting methodology for a given economic purpose. Machine learning methods with their emphasis on capturing non-linearity and emphasizing predictive performance have drawn the attention of econometricians [6; 7].
Deep learning provides a set of machine learning methods that have emerged in recent decades and have been shown to overcome previous limitations of statistical and econometric models [113; 27]. These techniques have been widely used in many fields, achieving widespread adoption in image recognition, intelligent recommendation, autonomous driving, and more [65]. Recurrent neural networks (RNNs) are deep learning methods designed to process arbitrarily long sequences of data [32]. They have been used for natural language processing [20; 108; 21], bioinformatics [49], and time series forecasting [18]. The LSTM model is an advanced RNN designed to identify long-range dependencies in sequences [50]. The bidirectional LSTM (BD-LSTM) was developed to exploit information across an entire sequence [47] to improve language processing models; however, it can also be used for time series forecasting [98; 18] by processing the sequence in both directions. The encoder-decoder LSTM (ED-LSTM) is another variant of the LSTM with an architecture that learns to encode a variable-length sequence into a fixed-length vector representation and to decode a given fixed-length vector back into a sequence [24]; it is widely used in pattern recognition scenarios [92] and also in time series forecasting [18]. Unlike the LSTM, which is a variant of the RNN, convolutional neural networks (CNNs) [4] are deep learning methods that are most prominent in image processing, but have shown very promising results in time series prediction [18], such as stock price prediction [23]. We note that CNNs have not been extensively applied in macroeconomic forecasting [75], although they have a lot of potential.
Machine learning models have been very promising in forecasting competitions, typically through the involvement of sophisticated ensemble learning methods [77; 44]. In addition, econometricians have identified important roles for machine learning models in GDP forecasting [10; 46], with deep learning models forecasting the GDP growth rate using a multivariate approach [97; 114; 105]. Zhang et al. [114] combined LSTM and hidden Markov models (HMMs) and used the consumer price index (CPI) to forecast China's GDP before the COVID-19 pandemic. Based on import and export data, Sokolov-Mladenovic et al. [105] forecasted the GDP of European Union countries using simple neural networks and extreme learning machine models. Mourougane [83] made a monthly forecast of Canada's GDP based on hard indicators related to quantifiable quantities (industrial production, unemployment, etc.) and soft indicators, which reflect less tangible community characteristics and values [94] (confidence index, exchange rate, etc.). Longo et al. [71] used an ensemble method based on the LSTM to forecast the GDP of the United States and provided insights about the contribution of different features (economic indicators) during COVID-19. The economic indicators typically used for GDP forecasting include 141 variables taken from the FRED (US Federal Reserve) database; see also [43].
Time series forecasting employs two major strategies, direct and recursive [26; 63; 79]; although the direct strategy can be used for multi-step time series prediction, the recursive strategy is more useful when the prediction horizon spans a longer time. Hence, it is more appropriate to use a mix of direct and recursive strategies in decadal forecasting. Moreover, there is a gap in decadal forecasting methods using deep learning with multivariate analysis of economic indicators: we would need to forecast the economic indicators themselves using a recursive deep learning strategy in order to forecast the decadal GDP growth rate. We find that most papers focus on developed countries for GDP forecasting and only a few forecast for developing countries. This could be because developed countries have better access to complete data on economic indicators, which is helpful for forecasting. It is a challenge to develop deep learning models that achieve good forecasting performance given limited and missing data on economic indicators. In the presence of novel deep learning models, there is growing interest within the machine learning literature as to which model is most appropriate for GDP forecasting when compared to traditional econometric models such as ARIMA and VAR.
In this paper, we use deep learning models that include LSTM-based models and CNNs for GDP growth rate forecasting of major world economies. We focus on large economies around the world, including developing and developed countries, and use data from the past few decades to forecast the future decade. We present a recursive deep learning framework where the direct strategy is used for model development and the recursive strategy is used for decadal forecasting. We also investigate which data partitioning strategy is best for model training and development. We further compare the performance of deep learning models with traditional time series forecasting models (ARIMA and VAR) for different countries. Our data includes periods of smooth development, rapid development, and financial crisis in the training set, in order to better prepare for the test set and forecast data. We first use the direct strategy to evaluate the respective deep learning models and then use the best model for the recursive strategy, where we estimate the economic indicators first in order to forecast the decadal GDP growth rate. Our data features GDP along with economic indicators prior to 2019, and we forecast a decadal world economy outlook for 2020 - 2030.
The rest of the paper is organised as follows. Section 2 provides a review of the literature and Section 3 presents the methodology, where we evaluate deep learning models and present a recursive deep learning framework for the world economic outlook. In Section 4, we present and discuss the results. Section 5 summarizes the findings and discusses promising future developments. Section 6 concludes the paper.
## 2 Literature Review
### Econometric models for time series forecasting
ARIMA and VAR are two of the more common statistical models in time series analysis, applied for forecasting in macroeconomics. ARIMA models have been used to predict Singapore's quarterly GDP based on monthly external trade [1]. A modified ARIMA model was used to predict the Irish CPI by introducing an objective penalty function method focusing on the out-of-sample forecast error rather than 'goodness of fit' [80]. Sims [103] introduced the VAR model in economics in 1980 for macroeconomic modelling and forecasting in order to deal with endogeneity issues. Freeman and Williams [40] used a VAR model to analyze indicator variables that relate to policy and the economy. They compared their model to structural equation models [51] and found that VAR is better at capturing policy endogeneity. A decade later, Robertson et al. [96] relied on a VAR model to predict United States GDP using six economic indicators and found that imposing imprecise prior constraints on VAR can lead to more accurate predictions. Abonazel et al. [2] used an ARIMA model to predict Egyptian GDP over the decade following 2019, and Salisu et al. [99] analysed how oil price uncertainty shocks affect the GDP of 33 countries, and the influence between the countries, using a global VAR. Iorio et al. [55] compared the future unemployment rate and GDP growth rate of France and Germany based on a VAR model. ARIMA and VAR models remain widely applied in a range of econometrics and finance applications [52; 29].
Muller et al. [84] proposed support vector regression (SVR) for time series and demonstrated improved performance on benchmark time series problems. Lau et al. [64] implemented SVR as a local predictor to forecast the sunspot time series, with better results than radial basis function and least-squares local predictors in relatively long-term prediction. In the financial time series forecasting domain, Lu et al. [73] combined independent component analysis with the SVR model and further improved the accuracy. A wide range of applications of time series prediction using SVR has been presented [100; 5].
Researchers in the past have combined traditional models with deep learning models to improve performance. Tseng et al. [110] modeled the production value of Taiwan's machinery industry using seasonal time series data of soft drink factories by combining seasonal ARIMA with a multi-layer perceptron [69]; this combination achieved better results than the standalone models. Choi and Hyeong Kyu [25] presented an ARIMA-LSTM model for stock price correlation coefficient prediction, where the ARIMA processed the linear correlation in the data and passed the residual to the LSTM model. Ouhame et al. [90] proposed a VAR-LSTM hybrid model where the VAR considered the linear information in the data and the LSTM model then catered for the non-linear data.
### Deep learning models for time series forecasting
Deep learning methods learn from raw data to understand structures and patterns for prediction [45; 93]. Deep learning has been applied across multiple domains such as speech recognition, visual object recognition and genomics [58; 14; 33], including time series forecasting [18].
Among deep learning models, the LSTM is often used for time series forecasting and has a wide range of applications, including weather, macroeconomics, stock prices, and the number of COVID-19 cases. In 2020, Karevan et al. [61] used an LSTM model to predict the weather for cities in Belgium and the Netherlands, and converted it into a transductive LSTM by changing the cost function of the LSTM, which changes the way it weights the data and improved the prediction accuracy in their experiments. Smalter et al. [104] used an LSTM model to forecast macroeconomic indicators and extended the model into the encoder-decoder (ED-LSTM) architecture. They used these models to predict the unemployment rate and compared them with a VAR model; the results showed that the LSTM and its derivative ED-LSTM are better than the traditional models. Siami-Namini et al. [102] used stock market data from 1985 to 2018 to compare ARIMA, LSTM and BD-LSTM. They found that compared to ARIMA, the accuracy of LSTM improved by 30%. In the case of BD-LSTM, they found that the training speed was slower than the LSTM model, as the BD-LSTM needs additional batches in order to reach equilibrium. They hypothesized that this shows that the BD-LSTM extracts additional features of the data that unidirectional LSTMs cannot. Chandra et al. [18] proposed an LSTM framework for predicting daily new COVID-19 cases in India. By comparing LSTM and BD-LSTM, they found that the univariate random-split ED-LSTM performed best in their experiments, and used it to make forecasts two months ahead.
The CNN model can also be used for time series problems. Selvin et al. [101] implemented a CNN to predict the stock prices of Infosys, TCS and Cipla. When compared to the RNN and LSTM, the CNN was the best model for predicting the stock price, since the CNN model only relies on current information and has an advantage when dealing with data that is not too cyclical. Piao et al. [95] predicted housing prices using a CNN model, and also used a CNN for variable selection. Livieris et al. [70] proposed the CNN-LSTM model to predict the prices and price trend of gold markets. The CNN-LSTM is a traditional LSTM model with multiple additional convolutional layers, which effectively improves the prediction accuracy of the model.
## 3 Methodology
### Data
The Group of Seven (G7) is an inter-governmental political forum consisting of seven developed countries: Canada, France, Germany, Italy, Japan, the United Kingdom and the United States. These countries exert significant influence over global geo-political affairs that impact the world economy [112; 60]. BRICS refers to the five major emerging economies and developing countries: Brazil, Russia, India, China and South Africa. The BRICS countries occupy 27% of the world's land area, 43% of its population and 24% of its GDP [8; 85]. These developing countries occupy an indispensable position in the future development of the world [89].
We select thirteen countries by combining Australia with the G7 and BRICS. This set includes developed and developing countries and covers a significant proportion of the global economy. We use the Penn World Table version 10.0 (PWT 10.0)1 [37] to extract a time series dataset of economic indicators for each of these countries. The PWT 10.0 database contains information on relative levels of income, output, input and productivity, covering 183 countries between 1950 and 2019. We extract data for the period from 1980 to 2019, to maximise the number of countries with complete coverage. The exception is Russia, for which there is only data for the period 1991-2019.
Footnote 1: PWT 10.0, [https://www.rug.nl/ggdc/productivity/pwt/](https://www.rug.nl/ggdc/productivity/pwt/)
We chose a subset of the available variables to use in our models for predicting the GDP growth rate. This included GDP, net population, CPI, employment rate, share of labour compensation in GDP at current national prices, exchange rate, gross domestic product per capita (CGDPo), and price levels, covering the perspectives of macroeconomics (production, expenditure, and trade), finance, and human resources, supplemented by a price index that can indicate the presence of inflation, to help us better predict economic development and economic crises [114; 105; 83; 71]. We transform the data into year-on-year ratios in order to predict the GDP growth rate.
The year-on-year ratio \(R\) formula is:
\[R_{n}=\frac{Data_{n}-Data_{n-1}}{Data_{n-1}} \tag{1}\]
where \(n\) denotes the \(n\)-th year.
Once all of the variables are transformed into year-on-year ratios, we apply a min-max scaling operation to remove the inter-variable scale differences and ensure all values reside between zero and one.
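As an illustration of these two preprocessing steps, the following minimal Python sketch applies the year-on-year transformation of Equation (1) followed by min-max scaling; the toy `gdp_levels` array is purely illustrative and not taken from the PWT data.

```python
import numpy as np

def year_on_year(series):
    """Year-on-year ratio of Equation (1): (x_n - x_{n-1}) / x_{n-1}."""
    series = np.asarray(series, dtype=float)
    return (series[1:] - series[:-1]) / series[:-1]

def min_max_scale(x):
    """Scale values into [0, 1]; assumes x is not constant."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

gdp_levels = np.array([1.00, 1.03, 1.08, 1.05, 1.10])  # toy levels, not PWT data
print(min_max_scale(year_on_year(gdp_levels)))
```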
#### 3.1.1 Data Processing
For the traditional time series models (ARIMA and VAR), we simply divide the data into training and test sets by time and proportion. For the deep learning models, in order to develop multivariate and multi-step prediction models, we need to reconstruct the original data.
Suppose our original data structure is
\[[x_{1},x_{2},\dots,x_{N};y_{1},y_{2},\dots,y_{N}]\]
where \(N\) is the length of the data. Next, we introduce the definition of the time window \(T\). The reconstructed data \(D\) consists of multiple data windows

\[[T_{1},T_{2},\dots,T_{N-m-n+1}]\]

Each data window can be regarded as the data used for one model training instance. We first need to determine the input step size \(m\) and the output step size \(n\). In a time window, we have the \(m\)-step input \(X\) representing the exogenous series in \(T\) and the \(n\)-step output \(Y\) as the target series of \(T\); the first time window is then as follows.
\[\begin{split} X_{1}&=[x_{1},x_{2},\dots,x_{m}]\\ Y_{1}&=[y_{m+1},y_{m+2},\dots,y_{m+n}]\end{split} \tag{2}\]
Next, using the same method, we can reconstruct the remaining data until \(x_{N}\) is included in the last time window, giving the general formula below.

\[\begin{split} X_{i}&=[x_{i},x_{i+1},\dots,x_{i+m-1}]\\ Y_{i}&=[y_{i+m},y_{i+m+1},\dots,y_{i+m+n-1}]\end{split} \tag{3}\]
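The following sketch implements this window reconstruction for toy univariate series; `make_windows` is an illustrative helper, and in our setting `x` would hold the exogenous indicators and `y` the GDP growth rate.

```python
import numpy as np

def make_windows(x, y, m, n):
    """Slice series into m-step inputs X and n-step targets Y (Equations 2-3)."""
    X, Y = [], []
    for i in range(len(x) - m - n + 1):
        X.append(x[i:i + m])           # [x_i, ..., x_{i+m-1}]
        Y.append(y[i + m:i + m + n])   # [y_{i+m}, ..., y_{i+m+n-1}]
    return np.array(X), np.array(Y)

x = np.arange(10, dtype=float)  # toy exogenous series
y = np.arange(10, dtype=float)  # toy target series
X, Y = make_windows(x, y, m=4, n=3)
print(X.shape, Y.shape)  # (4, 4) (4, 3)
```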
### Econometric models
#### 3.2.1 ARIMA model
The ARIMA model has been prominent for nearly half a century in time series forecasting since being introduced by Box and Jenkins [11]. It is a combination of three components: the auto-regressive model (\(AR\)), integration (\(I\)) and the moving average model (\(MA\)). ARIMA models have three parameters to represent these three parts, written as \(ARIMA(p,d,q)\). \(AR(p)\) represents the past values used for prediction; \(p\) can be determined from the PACF (partial auto-correlation function). \(I(d)\) is the number of differencing operations needed to make the data stationary; we use the ADF (augmented Dickey-Fuller) test to help us find \(d\). \(MA(q)\) expresses the current data in terms of the errors of the past \(q\) values, and \(q\) is found by analysing the ACF (auto-correlation function).
The ARIMA model is given by
\[\begin{split} w_{t}&=\Delta^{d}y_{t}\\ w_{t}&=\phi_{1}w_{t-1}+\dots+\phi_{p}w_{t-p}+ \varepsilon_{t}-\theta_{1}\varepsilon_{t-1}-\dots-\theta_{q}\varepsilon_{t-q} \end{split} \tag{4}\]
where \(\phi\) and \(\theta\) are the coefficients of values \(w\) and errors \(\varepsilon\).
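As a minimal sketch of this procedure using the `statsmodels` library (the synthetic `growth` series and the order (1, 0, 1) are illustrative placeholders; in practice the order is chosen via the PACF, ADF test and ACF as described above):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
growth = rng.normal(0.02, 0.01, size=40)  # toy yearly growth rates

model = ARIMA(growth, order=(1, 0, 1))    # (p, d, q) chosen via PACF/ADF/ACF
fit = model.fit()
print(fit.forecast(steps=3))              # three-step-ahead forecast
```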
#### 3.2.2 VAR model
The VAR model introduced by Sims [103] has been used to explain the relationship with exogenous variables (fa
tures) in the data. The model combines the lagged endogenous variables with the exogenous variables to explain the endogenous variable (target) which extends the regression model to multivariate time series.
\[Y_{t}=BX_{t}+A_{1}Y_{t-1}+\ldots+A_{p}Y_{t-p}+u_{t} \tag{5}\]
where \(X\) is the exogenous variable, \(Y\) is the endogenous variable, \(u\) is the error term, \(A\) and \(B\) are the coefficients.
The Akaike Information Criterion (AIC) [12] is typically used in the VAR model for determining hyperparameters such as the best lag:

\[AIC=-2\ln(L)+2k \tag{6}\]

where \(L\) is the maximum likelihood and \(k\) is the number of parameters.
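A minimal `statsmodels` sketch of AIC-based lag selection for a VAR follows; the random `data` frame stands in for the year-on-year indicator series and is purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(40, 3)),
                    columns=["gdp_growth", "cpi", "employment"])  # toy indicators

model = VAR(data)
order = model.select_order(maxlags=4)     # AIC/BIC/HQIC for each candidate lag
lag = order.aic if order.aic > 0 else 1   # lag minimising AIC
fit = model.fit(lag)
fc = fit.forecast(data.values[-lag:], steps=3)  # three-step-ahead forecast
```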
### Deep learning models
#### 3.3.1 LSTM model
A recurrent neural network (RNN) is a prominent deep learning model that can be used to model time series data [24]. RNNs feature a recurrent (context) layer \(h\) that provides memory, preserving state information for the output \(y\) while processing a data sequence \(x=(x_{1},...,x_{t})\). The hidden layer \(h_{t}\) stores the previous hidden state, which is combined through the weight \(w_{hh}\) with the input \(x_{t}\) at the next time step to produce the output \(y_{t}\), as shown in Figure 2 and given by

\[\begin{split} y_{t}&=f_{1}\left(h_{t}w_{hy}+b_{y}\right)\\ h_{t}&=f_{2}\left(x_{t}w_{xh}+h_{t-1}w_{hh}+b_{h}\right)\end{split} \tag{7}\]

where \(w\) denotes the weights in the different layers, \(f_{1}\) and \(f_{2}\) are the activation functions, and \(b\) is the bias.
Conventional RNNs faced limitations in training due to vanishing gradients in long sequences; hence, Hochreiter and Schmidhuber [50] designed the LSTM network, which introduces memory cells and gates that enable better performance in processing long-term dependencies in sequences. Figure 3 presents the LSTM network, where the recurrent context layer is leveraged by memory cells.
The BD-LSTM is a variant with two types of LSTM cells that processes the input sequence in two directions: from the start to the end, and from the end backwards. This bi-directionality allows the model to build up two separate hidden-state representations of the input sequence. The structure is shown in Figure 4.
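A minimal Keras sketch of such a bidirectional architecture is given below; the window sizes `m` and `n`, the feature count and the 50 hidden units are illustrative assumptions rather than our exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

m, n, n_features = 5, 3, 8   # assumed window sizes and feature count

bd_lstm = keras.Sequential([
    keras.Input(shape=(m, n_features)),
    layers.Bidirectional(layers.LSTM(50)),  # forward and backward passes
    layers.Dense(n),                        # n-step-ahead prediction
])
bd_lstm.compile(optimizer="adam", loss="mse")
```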
The ED-LSTM is another variant of the LSTM, with an encoder-decoder architecture proposed by Sutskever et al. [109] that features two new types of cells to process the data. The encoder processes the input series \(x_{1},...,x_{t}\) of length \(T\): there are \(T\) LSTM cells that recurse \(T\) times until the summary of the whole series is obtained and transferred into the cell state \(c_{T}\), which is also the initial input to the decoder system (i.e., \(c_{T}=c_{0}^{\prime}\)). This helps the LSTM cells in the decoder system to output \(y_{1},...,y_{t^{\prime}}\), and at each update the previous output is used as the input at the current step. The ED-LSTM structure is shown in Figure 5.
Essentially, this architecture estimates the conditional probability of the output sequence \(y_{1},\ldots,y_{t^{\prime}}\) given the input sequence \(x_{1},\ldots,x_{t}\), i.e., \(p\left(y_{1},\ldots,y_{t^{\prime}}\mid x_{1},\ldots,x_{t}\right)\).
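The following Keras sketch shows one common way to realise this encoder-decoder architecture for multi-step forecasting, using `RepeatVector` to hand the encoder summary to each decoder step; the layer sizes are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

m, n, n_features = 5, 3, 8   # assumed window sizes and feature count

ed_lstm = keras.Sequential([
    keras.Input(shape=(m, n_features)),
    layers.LSTM(50),                         # encoder: summarise the input window
    layers.RepeatVector(n),                  # pass the summary to each decoder step
    layers.LSTM(50, return_sequences=True),  # decoder: unroll n output steps
    layers.TimeDistributed(layers.Dense(1)), # one prediction per decoder step
])
ed_lstm.compile(optimizer="adam", loss="mse")
```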
#### 3.3.2 CNN
Inspired by research on the visual cortex of the mammalian brain, Yann LeCun introduced the CNN [66]. A CNN consists of an input layer, convolutional layers, pooling layers, fully connected layers and an output layer. The convolutional and pooling layers can be placed multiple times as a combination before the output layer, depending on the requirements of the application. These components have been used to design multiple CNN architectures such as VGG-Net [76] and Alex-Net [22] for complex computer vision tasks. The convolutional layer extracts features of the input through the convolution operation, which are then filtered by the pooling layer. Finally, information is passed to the fully connected layer, which is essentially a simple neural network. The CNN has been prominent for computer vision and image processing tasks.
Figure 1: The process of using VAR.
Figure 2: RNN structure showing information from input \(x\) to output \(y\) via the state and hidden layers \(h\).
In recent years, the CNN has also been applied for time series prediction [101; 95; 18], and hence we use it as a model for comparison.
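A minimal Keras sketch of a one-dimensional CNN for multi-step time series forecasting is shown below; the filter and layer sizes are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

m, n, n_features = 5, 3, 8   # assumed window sizes and feature count

cnn = keras.Sequential([
    keras.Input(shape=(m, n_features)),
    layers.Conv1D(filters=64, kernel_size=2, activation="relu"),  # feature extraction
    layers.MaxPooling1D(pool_size=2),                             # pooling/filtering
    layers.Flatten(),
    layers.Dense(50, activation="relu"),                          # fully connected
    layers.Dense(n),                                              # n-step output
])
cnn.compile(optimizer="adam", loss="mse")
```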
### Direct and recursive deep learning framework for decadal world economy outlook
We apply two strategies for forecasting, direct and recursive, both of which are illustrated in Figure 6. The direct strategy is used for model development and evaluation, and the recursive strategy is used for decadal forecasting.
In the direct strategy, we use a fixed window of historical economic indicators from the PWT database (as discussed in Section 3.1). We process the data before evaluating the respective econometric and deep learning models given in Sections 3.2 and 3.3, respectively. There are four steps in the data processing (transformation into year-on-year ratios, min-max scaling, data reconstruction, and data shuffling): we convert the data to year-on-year ratios and normalise them to remove the effects of variable ranges; we reconstruct the data to fit the deep learning models; and we shuffle the windows to create training and test sets while preserving temporal dependence within each short window, i.e., 4 years in the past to forecast 3 years in the future. After the data has been processed, we train the models and then compare the accuracy and average ranking of the models on the test set to assess how good each model is and which data partitioning strategy is most suitable for model training and development. We further compare the performance of the deep learning models on the test set with the econometric models (ARIMA and VAR) and select the model with the best forecasting accuracy as the model for the recursive strategy.
In the recursive strategy shown in Figure 6, we use the optimal model and data partitioning evaluated in the direct strategy. First we train the model; to ensure consistency, we use the same training set as in the direct strategy. The second step is to use the data to make forecasts. Unlike the direct strategy, where the GDP growth rate is the target, here the targets are the features, while the independent variables used remain the same. Our model takes multivariate input and produces one output (many-to-one), so we can only predict one feature at a time, and we predict all features in Steps 2.3 and 2.4. When these two steps are finished, we determine in Step 2.5 whether the current feature series is long enough to predict the decadal GDP growth rate. If not, we return to the beginning of Step 2.3 and continue predicting new feature values; if so, we make the final prediction, i.e., the decadal GDP growth rate.
We note that in the direct strategy, the model only predicts the GDP growth rate, whereas in the recursive strategy, the model first predicts the features (economic indicators), which are then used to predict the decadal GDP growth rate. The advantage of the recursive strategy is that we can optimise the length of the time step required by the deep learning model within a reasonable range, without having the longer prediction window dictate the data partitioning.
Figure 4: BD-LSTM structure with forward direction LSTM cells \(F\)_cells_ and backward direction LSTM cells \(B\)_cells_.
Figure 5: ED-LSTM structure: the Encoder module is used to summarize the sequence information, and the Decoder module is used to generate the output.
Figure 3: LSTM network with the LSTM memory cell for handling long-term dependencies.
We can use the optimal short-range forecast model to pursue prediction accuracy, and then use the recursive strategy to achieve our long-term forecast goal.
#### 3.4.1 Training and test data
We take the 1980-2010 data as the training set, with shuffling. The test set is the remainder of the data, between 2011 and 2019, also shuffled. In the case of Russia, the training set begins in 1991 but still ends in 2010; its test set covers the same period as the other countries, as shown in Table 1. Before importing the data into the deep learning models, we transform the training and test sets into a structure suitable for time series forecasting. Here we use a structure of 5 inputs and 3 outputs and shuffle it. The shuffled data set considers time windows in the reconstructed data, as given in Equations (2) and (3); the order and length inside each time window do not change (e.g., the past 4 years of input to the model to predict the next 3 years).
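A minimal sketch of this shuffled window split follows; it assumes `X` and `Y` come from the window reconstruction of Section 3.1.1 and that `years` (an illustrative name) records the final input year of each window.

```python
import numpy as np

def shuffled_split(X, Y, years, cutoff=2010, seed=0):
    """Windows ending on or before the cutoff year form the (shuffled) train set;
    the order inside each window is preserved."""
    train_mask = years <= cutoff
    rng = np.random.default_rng(seed)
    train_idx = rng.permutation(np.where(train_mask)[0])
    test_idx = np.where(~train_mask)[0]
    return X[train_idx], Y[train_idx], X[test_idx], Y[test_idx]
```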
## 4 Results
### Technical set-up
Our experiments first use the direct strategy to evaluate the models, using the data of the 13 countries in the Penn World Table to compare the results and select the best model according to prediction accuracy. Then, in the second part, we use this best-performing model to forecast the GDP growth rate of these 13 countries for the next ten years, where we need the recursive strategy to predict the features (economic indicators) in order to predict the decadal GDP growth rate.
Table 2 shows the details of the topology of the respective deep learning models in terms of input, hidden and output layers. We run 30 experiments [18] of 500 epochs each, with a batch size of 64 on the training data for the deep learning models, and rectified linear unit (ReLU) activations in the hidden and output layers. We use the adaptive moment estimation (Adam) optimiser [62] for training the deep learning models (LSTM, BD-LSTM, ED-LSTM and CNN).
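The experimental loop can be sketched as follows, assuming a `build_model()` factory (an illustrative name) that returns a freshly initialised Keras model compiled with Adam and mean squared error loss.

```python
import numpy as np

def run_experiments(build_model, X_train, y_train, X_test, y_test, runs=30):
    """30 independent runs with random weight initialisation, as in the paper."""
    rmses = []
    for _ in range(runs):
        model = build_model()  # fresh model, compiled with Adam and MSE loss
        model.fit(X_train, y_train, epochs=500, batch_size=64, verbose=0)
        pred = model.predict(X_test, verbose=0)
        rmses.append(np.sqrt(np.mean((y_test - pred) ** 2)))
    return np.array(rmses)
```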
In the VAR model, after the data pass the stationarity and co-integration tests, we use the AIC as the criterion to choose the best lag; the three parameters of ARIMA, AR(p), I(d) and MA(q), are determined as described in Section 3.2.1. Table 3 gives the details of the hyper-parameters of ARIMA and VAR.
We use the root mean squared error (RMSE) as the main prediction accuracy measure:
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}} \tag{9}\]
where \(N\) represents the number of samples, \(y_{i}\) is the real value and \(\hat{y}_{i}\) is the predicted value.
We report the mean of RMSE, and 95% confidence interval, for 30 independent experiment runs (with random initialisation of weights and biases) in each country from the train/test datasets. We also report the RMSE at each prediction step given by year (prediction horizon) for the neural network models. We present results in two parts, according to whether it is a developed or developing country.
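For reference, a small sketch of how the mean RMSE and the 95% confidence interval over the independent runs can be computed (assuming a normal approximation); the input values are toy numbers.

```python
import numpy as np

def mean_and_ci95(rmses):
    """Mean RMSE and 95% confidence half-width over independent runs."""
    rmses = np.asarray(rmses, dtype=float)
    half_width = 1.96 * rmses.std(ddof=1) / np.sqrt(len(rmses))
    return rmses.mean(), half_width

mean, hw = mean_and_ci95([0.21, 0.19, 0.22, 0.20])  # toy RMSE values
print(f"{mean:.4f} +/- {hw:.4f}")
```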
Figures 8 to 15 show the step-by-step prediction performance for the 8 developed countries: Australia, Canada, France, Germany, Italy, Japan, the United Kingdom and the United States. Figures 17 to 21 present the performance for the 5 developing countries: Brazil, China, India, Russia and South Africa.
\begin{table}
\begin{tabular}{c c c} \hline \hline & Train Data set & Test Data set \\ \hline Russia & 1991-2010 & 2011-2019 \\ \hline Others & 1980-2010 & 2011-2019 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The periods of the train and test data sets.
\begin{table}
\begin{tabular}{c c c} \hline \hline Country & ARIMA \((p,d,q)\) & VAR lags \\ \hline Japan & (4, 2, 3) & 3 \\ \hline Germany & (2, 0, 2) & 1 \\ \hline United States & (2, 0, 2) & 1 \\ \hline United Kingdom & (5, 0, 3) & 1 \\ \hline Canada & (1, 2, 3) & 1 \\ \hline France & (0, 1, 1) & 1 \\ \hline Italy & (0, 1, 1) & 1 \\ \hline Australia & (0, 2, 2) & 4 \\ \hline China & (1, 0, 0) & 1 \\ \hline India & (0, 3, 3) & 1 \\ \hline South Africa & (2, 0, 3) & 1 \\ \hline Brazil & (4, 2, 2) & 1 \\ \hline Russia & (0, 1, 1) & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Details of the parameters of the traditional time series models.
Figure 6: Deep learning-based framework to predict the decadal GDP growth rate using the direct and the recursive strategies. The model only predicts the GDP growth rate in the direct strategy; in the recursive strategy, the model first predicts the features (economic indicators), which are then used to predict the decadal GDP growth rate.
### Results: Shuffled Data
Shuffling the data allows the model to avoid over-fitting compared to the original training set. In developed countries, the difference between the two data sets is only reflected in the RMSE of the training set, with little difference on the test set; in developing countries, however, the benefit of shuffled data is pronounced, effectively avoiding over-fitting. This contrast can be seen in Figure 7.
As we can see from Figure 7, compared with China, the gap between the raw data and shuffled data in Japan is relatively small. This may be because the economic trend of developed countries is more stable than that of developing countries; even if the model fits the first half of the training set better than the second half, the performance of the model on the test set is preserved. However, in developing countries, due to greater variability in their year-on-year economic performance, if the model is trained in the order of years, it is likely to over-fit the older period, greatly affecting its performance on the test set.
### Results: Developed Countries
Developed countries have a more stable economic environment due to their sound financial and social security systems. Inflation, the employment rate and other major economic indicators do not vary substantially year on year. As for GDP, the economic structure of developed countries is generally stable and changes gradually, so GDP does not show large fluctuations. As the most important group of countries in the world economy, they rarely encounter economic difficulties, except for financial crises [31] and pandemics such as COVID-19 [56]. Due to globalization, financial crises in developed countries are cyclical and structural, and hence more predictable.
The results of all models for Australia are shown in Figure 8, where in Panel (a) we see that the deep learning models produce lower RMSE than the traditional models on the training data, but this performance does not transfer to the test data. The traditional time series models, by comparison, provide more consistent performance across the two sets. The results of the ED-LSTM model appear consistently superior to all other models, while the prediction capabilities of CNN and ARIMA are approximately equivalent, and BD-LSTM is only slightly behind the former two models. The LSTM model ranks fifth and has the largest variance, meaning that it is the most unstable. Next, we examine the step-by-step RMSE in Figure 8 Panel (b): ED-LSTM performs consistently well, and CNN exhibits stable performance. Of the other two models, the LSTM is only slightly stronger than the BD-LSTM in the second step, while it lags significantly behind the others in the other steps, resulting in the worst ranking among the deep learning models.
We present the results for Canada in Figure 9. The VAR model provides the best performance overall on the test
Figure 7: Results showing RMSE for China (developing) and Japan (developed) between shuffled and unshuffled train set.
set, exceeding its own training set results. The other four models have relatively similar results, with the ED-LSTM exhibiting slightly stronger performance. CNN ranks second among the deep learning models. The RMSE of the simple LSTM is the worst of these models, and its variance is extremely large, indicating that the model is extremely volatile. Panel (b) in Figure 9 demonstrates that ED-LSTM, BD-LSTM and CNN are approximately equivalent at the first prediction step, but as the number of steps increases, the ED-LSTM demonstrates improved performance. The simple LSTM is consistently the worst model at all steps. Comparing the ED-LSTM prediction curves with the VAR curves, Panels (c) and (d) in Figure 9, we see that the VAR model performs well on the test set, but does not appear to capture as much of the year-on-year variability as the ED-LSTM model.
Figure 10 presents the results for forecasting French GDP, where we find that the traditional time series model ARIMA performs worse than all other models, while the VAR model occupies the first position. Among the four deep learning models, CNN has a lower mean RMSE but higher variance, making the ED-LSTM a more reliable choice. When we look at performance across the individual steps, we see that it is only in the third and final set of predictions that the performance of the models is heavily differentiated, where the RMSE of CNN and ED-LSTM is less than half that of the other two models.
Figure 11 presents the prediction performance for Germany. Panel (a) shows that the CNN and ED-LSTM models outperform the traditional time series models. We find that CNN ranks first, while ED-LSTM and VAR are in the second and third positions, respectively. When we consider the RMSE at each step of the deep learning predictions, we find a similar outcome to the results for France: the four models only show a gap in the third step. The RMSE for all models other than LSTM at the third step is much lower than the values in the first two steps. The ED-LSTM and BD-LSTM are similar, but the BD-LSTM appears more stable.
We present the results for Italy in Figure 12, where the CNN and ED-LSTM models provide the best performance on both training and test data. In addition, they appear more stable than the other deep learning models. Figure 12 Panel (b) indicates that the ED-LSTM model has stable and strong performance across all three steps.
We present the results for Japan in Figure 13, where the CNN and ED-LSTM models provide superior prediction performance overall. The CNN model has slightly lower error on the test data, and the other models have significantly higher error on both training and test data. Looking at Figure 13(b), we observe that as the number of steps increases, the CNN and ED-LSTM models maintain consistent performance (their RMSE remains stable at approximately 0.2) while the other deep learning models deteriorate.
We present the results for the United Kingdom (UK) in Figure 14. Panel (a) shows that, aside from the LSTM, the deep learning models perform better than the traditional time series models on the test data. The ED-LSTM appears to provide the best mean performance; however, its upper variance bound exceeds the mean performance of the BD-LSTM model. In Figure 14 Panel (b), we observe that the prediction accuracy of LSTM is consistently poor, while the other three deep learning models maintain a similar trend, with a slight increase in the second step and a decline in the third step. In the third step, the prediction accuracy of BD-LSTM surpasses that of the ED-LSTM, which ranked first in the previous two steps.
We present the results for the United States in Figure 15, where Panel (a) indicates that the two traditional time series models rank first and last: the VAR model has the best performance among these 6 models, while ARIMA performs worst. Among the deep learning models, the CNN and ED-LSTM perform better than the BD-LSTM and LSTM, although the BD-LSTM has the least variance. Examining the 3-step RMSE, although the four models perform similarly in the first two steps, they do not follow the same
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.1699 & 0.3088 & 0.1537\(\pm\)0.0048 & 0.1393\(\pm\)0.0056 & 0.0748\(\pm\)0.0035 & 0.1207\(\pm\)0.0.006 \\ Test & 0.1919 & 0.3032 & 0.241\(\pm\)0.0187 & 0.2022\(\pm\)0.0019 & 0.1359\(\pm\)0.0048 & 0.1901\(\pm\)0.0049 \\ \hline Step1 & & & 0.2721\(\pm\)0.0065 & 0.1994\(\pm\)0.0013 & 0.1251\(\pm\)0.0026 & 0.2054\(\pm\)0.0012 \\ Step2 & & & 0.2112\(\pm\)0.0022 & 0.2247\(\pm\)0.0028 & 0.1481\(\pm\)0.0051 & 0.1755\(\pm\)0.0016 \\ Step3 & & & 0.2355\(\pm\)0.0036 & 0.1798\(\pm\)0.0007 & 0.1321\(\pm\)0.0023 & 0.188\(\pm\)0.0017 \\ \hline \end{tabular}
\end{table}
Table 4: Australia reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.1873 & 0.2260 & 0.1658\(\pm\)0.0081 & 0.1553\(\pm\)0.0057 & 0.1642\(\pm\)0.0040 & 0.1671\(\pm\)0.009 \\ Test & 0.2630 & 0.1277 & 0.4422\(\pm\)0.06 & 0.2624\(\pm\)0.012 & 0.2345\(\pm\)0.0037 & 0.2507\(\pm\)0.0065 \\ \hline Step1 & & & 0.4733\(\pm\)0.0188 & 0.2884\(\pm\)0.0024 & 0.2938\(\pm\)0.0013 & 0.2937\(\pm\)0.0027 \\ Step2 & & & 0.3562\(\pm\)0.0188 & 0.2481\(\pm\)0.0002 & 0.2319\(\pm\)0.0018 & 0.2381\(\pm\)0.0004 \\ Step3 & & & 0.4817\(\pm\)0.0345 & 0.2485\(\pm\)0.0049 & 0.1573\(\pm\)0.0026 & 0.2133\(\pm\)0.0011 \\ \hline \end{tabular}
\end{table}
Table 5: Canada reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.2034 & 0.2138 & 0.1144\(\pm\)0.0087 & 0.0954\(\pm\)0.0102 & 0.1035\(\pm\)0.0055 & 0.0979\(\pm\)0.0.0042 \\ Test & 0.2723 & 0.1793 & 0.242\(\pm\)0.0051 & 0.2477\(\pm\)0.0098 & 0.1901\(\pm\)0.0049 & 0.1738\(\pm\)0.0124 \\ \hline Step1 & & & 0.2433\(\pm\)0.0062 & 0.1686\(\pm\)0.0013 & 0.2055\(\pm\)0.0022 & 0.1679\(\pm\)0.0004 \\ Step2 & & & 0.2325\(\pm\)0.0032 & 0.2824\(\pm\)0.0015 & 0.2224\(\pm\)0.0052 & 0.2149\(\pm\)0.0017 \\ Step3 & & & 0.2483\(\pm\)0.0098 & 0.2748\(\pm\)0.0067 & 0.1273\(\pm\)0.0010 & 0.1442\(\pm\)0.0012 \\ \hline \hline \end{tabular}
\end{table}
Table 6: France reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.0847 & 0.1620 & 0.1843\(\pm\)0.0056 & 0.1277\(\pm\)0.0047 & 0.1281\(\pm\)0.0032 & 0.1607\(\pm\)0.0.032 \\ Test & 0.2987 & 0.2644 & 0.3215\(\pm\)0.016 & 0.2719\(\pm\)0.0048 & 0.2495\(\pm\)0.0042 & 0.2342\(\pm\)0.0050 \\ \hline Step1 & & & 0.2825\(\pm\)0.0018 & 0.2771\(\pm\)0.003 & 0.2437\(\pm\)0.0008 & 0.2709\(\pm\)0.0018 \\ Step2 & & & 0.3344\(\pm\)0.0112 & 0.3306\(\pm\)0.0017 & 0.2941\(\pm\)0.003 & 0.2741\(\pm\)0.0036 \\ Step3 & & & 0.3413\(\pm\)0.0187 & 0.1891\(\pm\)0.0027 & 0.1983\(\pm\)0.015 & 0.1266\(\pm\)0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Germany reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.2554 & 0.1466 & 0.1561\(\pm\)0.0066 & 0.1375\(\pm\)0.0075 & 0.1312\(\pm\)0.0031 & 0.1125\(\pm\)0.00.0021 \\ Test & 0.3014 & 0.3626 & 0.3483\(\pm\)0.0253 & 0.5448\(\pm\)0.0087 & 0.2151\(\pm\)0.0009 & 0.2086\(\pm\)0.0022 \\ \hline Step1 & & & 0.2516\(\pm\)0.0009 & 0.2592\(\pm\)0.0022 & 0.2308\(\pm\)0.0019 & 0.1778\(\pm\)0.0006 \\ Step2 & & & 0.3540\(\pm\)0.0017 & 0.3966\(\pm\)0.0122 & 0.2221\(\pm\)0.021 & 0.2374\(\pm\)0.0018 \\ Step3 & & & 0.4143\(\pm\)0.0272 & 0.8145\(\pm\)0.0382 & 0.1899\(\pm\)0.0043 & 0.206\(\pm\)0.006 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Japan reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.2069 & 0.3095 & 0.1506\(\pm\)0.0055 & 0.1245\(\pm\)0.0081 & 0.1098\(\pm\)0.0037 & 0.1191\(\pm\)0.0.045 \\ Test & 0.2544 & 0.5767 & 0.2656\(\pm\)0.0223 & 0.2531\(\pm\)0.0080 & 0.1956\(\pm\)0.0017 & 0.2348\(\pm\)0.0025 \\ \hline Step1 & & & 0.2863\(\pm\)0.0023 & 0.2538\(\pm\)0.0006 & 0.1938\(\pm\)0.0023 & 0.1988\(\pm\)0.0006 \\ Step2 & & & 0.2705\(\pm\)0.0054 & 0.291\(\pm\)0.002 & 0.2001\(\pm\)0.031 & 0.21\(\pm\)0.0107 \\ Step3 & & & 0.2371\(\pm\)0.0023 & 0.1988\(\pm\)0.0024 & 0.1919\(\pm\)0.0052 & 0.2856\(\pm\)0.033 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Italy reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.2554 & 0.1466 & 0.1561\(\pm\)0.0066 & 0.1375\(\pm\)0.0075 & 0.1312\(\pm\)0.0031 & 0.1125\(\pm\)0.00.0021 \\ Test & 0.3014 & 0.3626 & 0.3483\(\pm\)0.0253 & 0.5448\(\pm\)0.0087 & 0.2151\(\pm\)0.0009 & 0.2086\(\pm\)0.0022 \\ \hline Step1 & & & 0.2516\(\pm\)0.0009 & 0.2592\(\pm\)0.0022 & 0.2308\(\pm\)0.0019 & 0.1778\(\pm\)0.0006 \\ Step2 & & & 0.3540\(\pm\)0.0017 & 0.3966\(\pm\)0.0122 & 0.2221\(\pm\)0.021 & 0.2374\(\pm\)0.0018 \\ Step3 & & & 0.4143\(\pm\)0.0272 & 0.8145\(\pm\)0.0382 & 0.1899\(\pm\)0.0043 & 0.206\(\pm\)0.006 \\ \hline \hline \end{tabular}
\end{table}
Table 10: United Kingdom reporting RMSE mean and 95 % confidence interval (\(\pm\)).
trend: as the number of steps increases, the prediction accuracy of LSTM and BD-LSTM gradually degrades. The ED-LSTM's performance is consistently stable, while the CNN improves as the forecast horizon increases. We further compare Panels (c) and (d) of Figure 15 and find that, as in the case of Canada, the performance of VAR on the test set is similar to that of the best deep learning model, and it is more inclined to predict the general trend.
Figure 16 is a summary of the prediction results for developed countries. We observe that CNN and ED-LSTM are far better than the other models. BD-LSTM and the two traditional models have similar results, while LSTM's results are clearly worse than the other models. Examining each step separately, the deep learning models show opposite trends in pairs: as the number of steps increases, CNN and ED-LSTM perform better and better, demonstrating their ability to achieve long-term prediction, while LSTM and BD-LSTM show the opposite behaviour, with the increase in the number of steps directly leading to an increase in RMSE, indicating that their multi-step predictive ability is relatively weak.
Figure 8: Australia time series: performance evaluation of respective methods (RMSE mean and 95% confidence interval as error bar)
Figure 9: Canada time series: performance evaluation of respective methods (RMSE mean and 95% confidence interval as error bar).
### Results: Developing Countries
This section presents the forecast results for developing countries. These countries are characterised by the fact that they have not yet established sound financial and social security systems and are still exploring their path of national development [53], resulting in irregularities in their economic cycles [13]. At times there are no stable trends and a high probability of abnormal rises or falls, and the transparency and credibility of the data are often questioned, making forecasting more difficult [39]. In addition, the lack of historical data for Russia, due to the collapse of the Soviet Union in 1991, has undoubtedly added to the difficulty of model fitting and forecasting.
We present the results for Brazil in Figure 17, where in Panel (a) we observe that only the ED-LSTM is better than the traditional time series statistical models. LSTM remains the worst performing deep learning model. Examining the step-by-step predictions, we see that the RMSE of ED-LSTM increases somewhat in the second step, but it is still the best at each step.
We present the results for China in Figure 18, where we observe that, except for LSTM, the three other deep learning models are significantly better than the traditional time series models. The RMSE of LSTM is only
Figure 11: Germany time series: performance evaluation of respective methods (RMSE mean and 95% confidence interval as error bar).
Figure 10: France time series: performance evaluation of respective methods (RMSE mean and 95% confidence interval as error bar)
slightly larger than that of ARIMA, and its prediction results are also much better than VAR's. Among the top three deep learning models, ED-LSTM ranks first, but its stability is worse than the other two; CNN and BD-LSTM rank second and third. In Figure 18 Panel (b), only the ED-LSTM remains stable at each step. The CNN performs on par with the ED-LSTM in the first two steps, but its prediction results in the third step are worse than all other models.
We present the results for India in Figure 19, where we observe that the mean RMSE values of all the deep learning models are in a similar range, outperforming the ARIMA but worse than the VAR. In Panel (b), BD-LSTM and ED-LSTM do not fluctuate much at each step, while the prediction accuracy of LSTM and CNN gradually deteriorates as the number of steps increases. Comparing the prediction curves of CNN and VAR, Panels (c) and (d), we can see that although the mean RMSE of VAR is superior, the deep learning model appears to have better captured the structure of the trend.
We present the results for Russia in Figure 20, where the CNN and ED-LSTM models outperform all others, with the ED-LSTM exhibiting a slight advantage. In Figure 20 Panel (b), we observe that the prediction accuracy of LSTM and BD-LSTM declines as the number of steps increases, while the ED-LSTM remains stable. Although the RMSE of the CNN in the first and third steps is slightly smaller than that of ED-LSTM, in the second step its error is much higher, making it an unstable choice.
We further present the results for South Africa in Figure 21. We see a similar trend to the results for India, where VAR is the best performing model in terms of RMSE, outperforming all the deep learning models. The ARIMA model, by contrast, ranks last, and among the deep learning models we find that CNN and ED-LSTM maintain consistently good performance, ranking first and second among the deep learning models. Figure 21 Panel (b) shows that it is in the third prediction step that the CNN and ED-LSTM models exhibit superior performance. The comparison of the prediction curves of VAR and CNN is also very similar to that for India: the best CNN model captures the time series trend better than the VAR model.
As in the previous section, Figure 22 shows that CNN and ED-LSTM provide better overall performance than the traditional time series models. Looking at the individual prediction steps, ED-LSTM and CNN significantly outperform the other two models at every step, and the former ranks first at every step. It is worth noting that for India and South Africa, the VAR performed best according to the average RMSE, but judging from the prediction curves, its performance was not as good as that of the optimal deep learning models.
### Recursive deep learning framework: results
According to the results of the evaluation, whether for developed or developing countries, or in the overall average ranking (Table 18), ED-LSTM ranks first, so we use ED-LSTM to predict the next decade's GDP growth rate. We use the Penn World Table data and extend it recursively to forecast. We start the recursive strategy by forecasting the features in the data set until the feature series are long enough for us to forecast the GDP growth rate for the next decade. We selected the prediction RMSE for a number of countries' features in the recursive process to demonstrate the reliability of the predictions.
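A simplified sketch of this recursive procedure is given below; `feature_models`, `gdp_model` and `history` are illustrative names for the per-indicator models, the GDP growth rate model and the indicator matrix, and for brevity only the first output step of each prediction is used.

```python
import numpy as np

def recursive_forecast(history, feature_models, gdp_model, horizon=10, m=5):
    """Extend each indicator year by year (Steps 2.3-2.4), then read off the
    predicted GDP growth rate (Step 2.5) until the horizon is covered."""
    history = history.copy()                     # shape: (years, n_features)
    gdp_path = []
    for _ in range(horizon):
        window = history[-m:][np.newaxis, ...]   # last m years as model input
        next_row = np.array([fm.predict(window, verbose=0).ravel()[0]
                             for fm in feature_models])
        history = np.vstack([history, next_row]) # grow the feature series
        gdp_path.append(gdp_model.predict(window, verbose=0).ravel()[0])
    return np.array(gdp_path)
```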
As can be seen from Table 17, the prediction results for the features perform well, and the RMSE of the test set is on par with that of the training set. Although we used the GDP growth rate as the target in the direct strategy to train and select the optimal model, the good results in Table 17 show that the ED-LSTM still performs well even when the target is replaced with other features in the data set, which demonstrates the feasibility of our recursive strategy, i.e., predicting the features first and then the GDP growth rate.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.1808 & 0.2394 & 0.1825\(\pm\)0.0066 & 0.0937\(\pm\)0.0032 & 0.0567\(\pm\)0.0018 & 0.1846\(\pm\)0.0.004 \\ Test & 0.2536 & 0.3656 & 0.2879\(\pm\)0.0024 & 0.1792\(\pm\)0.0045 & 0.1262\(\pm\)0.0073 & 0.1662\(\pm\)0.0028 \\ \hline Step1 & & & 0.3783\(\pm\)0.038 & 0.1701\(\pm\)0.0023 & 0.1123\(\pm\)0.006 & 0.118\(\pm\)0.0032 \\ Step2 & & & 0.2977\(\pm\)0.0045 & 0.2146\(\pm\)0.0006 & 0.1198\(\pm\)0.0024 & 0.1177\(\pm\)0.0028 \\ Step3 & & & 0.1294\(\pm\)0.0003 & 0.146\(\pm\)0.0017 & 0.1442\(\pm\)0.0012 & 0.2343\(\pm\)0.0054 \\ \hline \hline \end{tabular}
\end{table}
Table 13: China reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.1507 & 0.2169 & 0.1689\(\pm\)0.0064 & 0.1747\(\pm\)0.0055 & 0.1169\(\pm\)0.0061 & 0.1434\(\pm\)0.0.004 \\ Test & 0.3097 & 0.2549 & 0.3645\(\pm\)0.082 & 0.3344\(\pm\)0.0014 & 0.2203\(\pm\)0.0011 & 0.3303\(\pm\)0.0028 \\ \hline Step1 & & & 0.2988\(\pm\)0.005 & 0.3402\(\pm\)0.005 & 0.1979\(\pm\)0.005 & 0.2901\(\pm\)0.002 \\ Step2 & & & 0.4137\(\pm\)0.0106 & 0.3367\(\pm\)0.004 & 0.2647\(\pm\)0.004 & 0.412\(\pm\)0.006 \\ Step3 & & & 0.371\(\pm\)0.0142 & 0.3259\(\pm\)0.006 & 0.1894\(\pm\)0.006 & 0.2694\(\pm\)0.0131 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Brazil reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{c c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.1487 & 0.2093 & 0.1725\(\pm\)0.0068 & 0.0895\(\pm\)0.0053 & 0.1018\(\pm\)0.0041 & 0.1512\(\pm\)0.0.0022 \\ Test & 0.2569 & 0.1456 & 0.2031\(\pm\)0.0173 & 0.2317\(\pm\)0.0211 & 0.1901\(\pm\)0.002 & 0.1831\(\pm\)0.0023 \\ \hline Step1 & & & 0.1638\(\pm\)0.0037 & 0.209\(\pm\)0.0023 & 0.1876\(\pm\)0.0015 & 0.2324\(\pm\)0.021 \\ Step2 & & & 0.2046\(\pm\)0.0027 & 0.2195\(\pm\)0.003 & 0.2179\(\pm\)0.0029 & 0.1932\(\pm\)0.0014 \\ Step3 & & & 0.2335\(\pm\)0.07 & 0.263\(\pm\)0.005 & 0.1595\(\pm\)0.0052 & 0.0952\(\pm\)0.0045 \\ \hline \hline \end{tabular}
\end{table}
Table 16: South Africa reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{c c c c c c} \hline & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Train & 0.1348 & 0.3371 & 0.1599\(\pm\)0.0072 & 0.1535\(\pm\)0.0004 & 0.1437\(\pm\)0.002 & 0.1704\(\pm\)0.0049 \\ Test & 0.3029 & 0.3227 & 0.2892\(\pm\)0.0095 & 0.3211\(\pm\)0.0016 & 0.2265\(\pm\)0.0055 & 0.2453\(\pm\)0.0031 \\ \hline Step1 & & & 0.2543\(\pm\)0.0023 & 0.2896\(\pm\)0.002 & 0.2218\(\pm\)0.0038 & 0.211\(\pm\)0.0036 \\ Step2 & & & 0.2945\(\pm\)0.0036 & 0.3269\(\pm\)0.079 & 0.2293\(\pm\)0.0032 & 0.3009\(\pm\)0.0053 \\ Step3 & & & 0.315\(\pm\)0.0095 & 0.3436\(\pm\)0.0118 & 0.2274\(\pm\)0.0073 & 0.2115\(\pm\)0.0105 \\ \hline \hline \end{tabular}
\end{table}
Table 15: Russia reporting RMSE mean and 95 % confidence interval (\(\pm\)).
\begin{table}
\begin{tabular}{c c c c c c} \hline Country & Feature & train-RMSE & test-RMSE \\ \hline \hline Australia & net population growth & 0.0432 & 0.0769 \\ & CPI & 0.0528 & 0.1128 \\ & employment rate & 0.1595 & 0.1759 \\ Brazil & csh-c & 0.1765 & 0.1254 \\ & csh-i & 0.2128 & 0.1728 \\ & csh-g & 0.1784 & 0.1292 \\ \hline \hline \end{tabular}
\end{table}
Table 17: RMSE for feature prediction in the recursive strategy for selected countries (csh is the share in CGDPo: c represents household consumption, i gross capital formation, and g government consumption).
Figures 23 to 35 show the real values from 1980 to 2019 and the predictions from 2020 to 2031; the predictions include the mean of the forecast over 30 experiment runs and the 95% confidence interval of the results, shown as a blue line with red shading. We see that the 95% confidence intervals in the plots are narrow, demonstrating that the results of these 30 experiments did not exhibit large variation.
### Comparison with institutional forecasts
In this section, the results predicted in the previous section are compared with those of a number of other studies. We use the Lowy Institute data and compare its forecasts with our results from the previous section: Table 19 compares our forecast of total GDP with a forecast based on the Lowy data for the Asian countries and the United States [72].
We can see that, aside from China and India, our forecasts are similar to the Lowy Institute forecast [72]. The Lowy Institute forecast for China's GDP is lower than ours, while for India it is the opposite. We recognise that changing circumstances in China mean that it is unlikely to continue to grow at the same rate, while India may benefit from a more stable birthrate and a shift
Figure 12: Italy time series: performance evaluation of respective methods (RMSE mean and 95% confidence interval as error bar)
Figure 13: Japan time series: performance evaluation of respective methods (RMSE mean and 95% confidence interval as error bar)
of Chinese industry. These changing geopolitical circumstances are issues that we need to consider in our future work.
## 5 Discussion
In our experiments, we first evaluated the models using the direct strategy. We selected the data of 13 countries in the Penn World Table from 1980 to 2019. These countries belong to the developed country group G7 (plus Australia) and the developing country group BRICS, respectively. In the comparison results, whether for developed or developing countries, ED-LSTM provided the overall best results, with the CNN and VAR models also performing strongly (Table 18). The ARIMA consistently lagged behind the performance of the VAR, illustrating the importance of the exogenous variables in this task.
According to the results of the model evaluation, we choose ED-LSTM as the final prediction model in the recursive strategy. In the results (Figures 23 to 35), we observe that the forecast results of most countries appear reasonable. Among the forecasts for developed countries, except for France, the other countries, Australia, Canada, Germany, Italy, Japan, the United Kingdom and the United States, will experience an economic growth slowdown or even recession in the first five years of the next decade, which is consistent with the cyclical nature of economic crises [107], before gradually returning to normal or improving in the following five years. Among developing countries, our model suggests that Brazil, Russia and South Africa will be impacted by the economic crisis in the future, and their economic development will stagnate in the near term. However, China and India, the two fastest-growing developing countries, will maintain a growth rate higher than 4% (Figures 26 and 29). In recent reports, the Centre for Economics and Business Research has forecast that India will become a 10-trillion-dollar economy by 2035 [30], and Morgan Stanley has forecast that Indian GDP will reach 11 trillion by 2032 [106]. Due to the impacts of COVID-19 and the Russia-Ukraine war, such forecasts are being rapidly adjusted by the respective organisations, including the International Monetary Fund and the World Bank.
Our experiments also have some limitations. Due to the time coverage of the data set, we did not include data after 2020; the data collected since the COVID-19 pandemic has not been included in the modeling process. The world has experienced significant turbulence in recent years due to COVID-19 and secondary causes, and such unpredictable contingencies and the attendant political policies present challenges to modelling [91].
Another problem is that it is difficult for the ED-LSTM model to account for changes in the speed of development once the economy grows to a certain scale. We know that a country's economy cannot grow stably and
\begin{table}
\begin{tabular}{c c c c c c} \hline Country & ARIMA & VAR & LSTM & BD-LSTM & ED-LSTM & CNN \\ \hline \hline Australia & 3 & 6 & 5 & 4 & 1 & 2 \\ Brazil & 3 & 2 & 6 & 5 & 1 & 4 \\ Canada & 5 & 1 & 6 & 4 & 2 & 3 \\ China & 4 & 6 & 5 & 3 & 1 & 2 \\ France & 6 & 2 & 4 & 5 & 3 & 1 \\ Germany & 5 & 3 & 6 & 4 & 2 & 1 \\ India & 6 & 1 & 5 & 3 & 4 & 2 \\ Italy & 3 & 6 & 5 & 4 & 1 & 2 \\ Japan & 3 & 5 & 4 & 6 & 2 & 1 \\ Russia & 4 & 6 & 3 & 5 & 1 & 2 \\ South Africa & 6 & 1 & 5 & 4 & 3 & 2 \\ United Kingdom & 5 & 4 & 6 & 2 & 1 & 3 \\ United States & 6 & 1 & 5 & 4 & 2 & 3 \\ \hline Avg-Rank & 4.53 & 3.38 & 5 & 4.07 & 1.84 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 18: Ranking of different models for respective data-sets (country) according to the RMSE.
\begin{table}
\begin{tabular}{c c c c} \hline Country & Forecast & Lowy & Diff \\ \hline \hline Australia & 1.875 & 1.819 & -0.56 \\ China & 30.934 & 24.28 & -6.7 \\ India & 4.884 & 5.812 & 0.93 \\ Japan & 5.514 & 5.327 & -0.193 \\ Russia & 1.524 & 1.724 & 0.2 \\ United States & 24.889 & 24.226 & -0.663 \\ \hline \hline \end{tabular}
\end{table}
Table 19: The forecast result and the Lowy Institute forecast for 2030 (2017 constant PPP).
infinitely; it is always limited by its own national conditions, consistent with the law of natural growth [81]. China is an example: in our forecast, the total GDP of China reaches an implausibly large value after sustained steady growth. In reality, owing to its already huge economic volume, China's GDP growth rate has slowed in recent years due to the housing crisis [59], trade disputes with the USA [36], and COVID-19 [48]. Chinese policy has also shifted from high-speed growth to high-quality growth [28].
Our future work will include several additional considerations. Firstly, we would incorporate robust uncertainty quantification into model construction; to this end, we would introduce Bayesian deep learning models [86; 19] to better handle emerging extreme situations that affect the economy, such as COVID-19 or a financial crisis. In addition, we need to consider more comprehensively the relationship between economic volume and economic growth rate [68; 15], possibly taking into account land area, total population, and the differences between developing and developed countries, to help us model better. Secondly, the Penn World Table has since been updated to include data after 2020; since the economy in these years has been affected by COVID-19, we would need to re-evaluate the models, retrain them and produce new forecasts. Thirdly, we plan to use interactive maps to display the data, making the results more readable and easier to understand. We can also compare the accuracy on the test set with the predictions of international institutions, such as the IMF and the World Bank, to confirm our relative accuracy.
## 6 Conclusion
We proposed a deep learning framework for predicting the decadal GDP growth rate, comprising two strategies: a direct strategy and a recursive strategy. For the direct strategy, our experiments used 13 countries and nearly 40 years of data from the macroeconomic data set Penn World Table. We shuffled the data during preprocessing, which effectively mitigated overfitting of the models. By comparing the performance of the traditional time series models ARIMA and VAR with the deep learning models CNN, LSTM, BD-LSTM and ED-LSTM, we found that ED-LSTM was the best-performing model in this experiment. Based on these comparison results, and combined with the proposed recursive strategy, the ED-LSTM model was used to predict the decadal GDP growth rate of the 13 countries.
Our results show that over the next ten years, most countries will experience a GDP growth rate slowdown or even negative growth in the first five years, consistent with a global economic crisis affecting many countries, with the affected countries returning to normal in the second five years. Only China, France and India are predicted to maintain stable GDP growth rates over the next decade. Our future work is to introduce post-COVID-19 economic data and epidemic uncertainty into the modelling to better simulate and predict economic outcomes in times of geopolitical uncertainty.
## Code and Data
Code and data are available at [https://github.com/sydney-machine-learning/deeplearning-decadalworldeconomy](https://github.com/sydney-machine-learning/deeplearning-decadalworldeconomy)
|
2302.09875 | Backstepping Temporal Difference Learning | Off-policy learning ability is an important feature of reinforcement learning
(RL) for practical applications. However, even one of the most elementary RL
algorithms, temporal-difference (TD) learning, is known to suffer from the
divergence issue when the off-policy scheme is used together with linear
function approximation. To overcome the divergent behavior, several off-policy
TD-learning algorithms, including gradient-TD learning (GTD), and TD-learning
with correction (TDC), have been developed until now. In this work, we provide
a unified view of such algorithms from a purely control-theoretic perspective,
and propose a new convergent algorithm. Our method relies on the backstepping
technique, which is widely used in nonlinear control theory. Finally,
convergence of the proposed algorithm is experimentally verified in
environments where the standard TD-learning is known to be unstable. | Han-Dong Lim, Donghwan Lee | 2023-02-20T10:06:49Z | http://arxiv.org/abs/2302.09875v2 | # Backstepping Temporal Difference Learning
###### Abstract
Off-policy learning ability is an important feature of reinforcement learning (RL) for practical applications. However, even one of the most elementary RL algorithms, temporal-difference (TD) learning, is known to suffer from the divergence issue when the off-policy scheme is used together with linear function approximation. To overcome the divergent behavior, several off-policy TD-learning algorithms, including gradient-TD learning (GTD) and TD-learning with correction (TDC), have been developed to date. In this work, we provide a unified view of such algorithms from a purely control-theoretic perspective, and propose a new convergent algorithm. Our method relies on the backstepping technique, which is widely used in nonlinear control theory. Finally, convergence of the proposed algorithm is experimentally verified in environments where the standard TD-learning is known to be unstable.
## 1 Introduction
Since Mnih et al. (2015), which demonstrated that deep reinforcement learning (RL) outperforms humans in several video games (Atari 2600 games), significant advances have been made in RL theory and algorithms. For instance, Van Hasselt et al. (2016); Lan et al. (2020); Chen et al. (2021) proposed variants of the so-called deep Q-network (Mnih et al., 2015) that achieve higher scores in Atari games than the original deep Q-network. An improved deep RL agent was developed in Badia et al. (2020) that performs better than the average human score across 57 Atari games. Beyond video games, Schrittwieser et al. (2020) have shown that an RL agent can teach itself chess, Go, and Shogi. Furthermore, RL has shown great success in real-world applications, e.g., robotics (Kober et al., 2013), healthcare (Gottesman et al., 2019), and recommendation systems (Chen et al., 2019).
Despite the practical success of deep RL, there is still a gap between theory and practice. One of the most notorious phenomena is the deadly triad (Sutton & Barto, 2018): the divergence of algorithms when function approximation, off-policy learning, and bootstrapping are used together. One of the most fundamental algorithms, the so-called temporal-difference (TD) learning (Sutton, 1988), is known to diverge under the deadly triad, and several works have tried to fix this issue over the past decades. In particular, the seminal works of Sutton et al. (2008, 2009) introduced the so-called GTD, gradient-TD2 (GTD2), and TDC, which are off-policy and have been proved to converge with linear function approximation. More recently, Ghiassian et al. (2020) suggested a regularized version of TDC called TD learning with regularized correction (TDRC) and showed its favorable features under off-policy settings. Moreover, Lee et al. (2021) developed several variants of GTD based on a primal-dual formulation.
On the other hand, backstepping control (Khalil, 2015) is a popular method for designing stable controllers for nonlinear systems with special structures. The design technique offers a wide range of stable controllers and has proved robust under various settings. It has been used in various fields including quadrotor helicopters (Madani & Benallegue, 2006), mobile robots (Fierro & Lewis, 1997), and ship control (Fossen & Strand, 1999). Using the backstepping control technique, in this paper we develop a new convergent off-policy TD-learning algorithm that operates on a single time scale.
In particular, the goal of this paper is to introduce a new unifying framework to design off-policy TD-learning algorithms under linear function approximation. The main contributions are summarized as follows:
* We propose a systemic way to generate off-policy TD-learning algorithms including GTD2 and TDC from control theoretic perspective.
* Using our framework, we derive a new TD-learning algorithm, which we call backstepping TD (BTD).
* We experimentally verify its convergence and performance under various settings including where off-policy TD has known to be unstable.
In particular, most of the previous works on off-policy TD-learning algorithms (e.g., GTD2 and TDC) are derived from optimization perspectives, starting with an objective function; convergence is then established by proving stability of the corresponding O.D.E. models. In this paper, we follow the reversed steps, and reveal that an off-policy TD-learning algorithm (called backstepping TD) can be derived based on control-theoretic motivations. In particular, we first develop stable O.D.E. models using the backstepping technique, and then recover the corresponding off-policy TD-learning algorithms. The new analysis reveals connections between off-policy TD-learning and notions in control theory, and provides additional insights on off-policy TD-learning through simple concepts in control theory. The sound theoretical foundation established in this paper can potentially motivate further analysis and the development of new algorithms.
Finally, we briefly summarize TD-learning algorithms that guarantee convergence under linear function approximation. GTD (Sutton et al., 2008), GTD2 and TDC (Sutton et al., 2009) were developed to approximate the gradient of the mean squared projected Bellman error. Later, GTD and GTD2 were discovered to solve a minimax optimization problem (Macua et al., 2014; Liu et al., 2020). This saddle-point viewpoint of GTD has led to many interesting results, including Du et al. (2017); Dai et al. (2018); Lee et al. (2021). TDRC (Ghiassian et al., 2020) adds a regularization-like term to one side of the parameter update and tries to balance the performance of TD with the stability of TDC. TDC++ (Ghiassian et al., 2020) adds a regularization term to both sides of the parameter update. Even though TDRC shows good performance, it requires an additional condition on the parameters to ensure convergence, whereas TDC++ does not.
## 2 Preliminaries
### Nonlinear system theory
Nonlinear system theory will play an important role throughout this paper. Here, we briefly review basics of nonlinear systems. Let us consider the continuous-time nonlinear system
\[\dot{x}_{t}=f(x_{t},u_{t}),\quad x_{0}\in\mathbb{R}^{n}, \tag{1}\]
where \(x_{0}\in\mathbb{R}^{n}\) is the initial state, \(t\in\mathbb{R},t\geq 0\) is the time, \(x_{t}\in\mathbb{R}^{n}\) is the state, \(u_{t}\in\mathbb{R}^{n}\) is the control input, and \(f:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a nonlinear mapping. An important concept in dealing with nonlinear systems is the equilibrium point. Considering the state-feedback law \(u_{t}=\mu(x_{t})\), the system can be written as \(\dot{x}_{t}=f(x_{t},u_{t})=f(x_{t},\mu(x_{t}))=:f(x_{t})\), and a point \(x=x^{e}\) in the state-space is said to be an equilibrium point of (1) if it has the property that whenever the state of the system starts at \(x^{e}\), it will remain at \(x^{e}\) (Khalil, 2015). For \(\dot{x}_{t}=f(x_{t})\), the equilibrium points are the real roots of the equation \(f(x)=0\). The equilibrium point \(x^{e}\) is said to be globally asymptotically stable if for any initial state \(x_{0}\in\mathbb{R}^{n},x_{t}\to x^{e}\) as \(t\rightarrow\infty\).
An important control design problem is to construct a state-feedback law \(u_{t}=\mu(x_{t})\) such that the origin becomes the globally asymptotically stable equilibrium point of (1). To design a state-feedback law to meet such a goal, control Lyapunov function plays a central role, which is defined in the following definition.
**Definition 2.1** (Control Lyapunov function (Sontag, 2013)).: _A positive definite function \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is called a control Lyapunov function (CLF) if for all \(x\neq 0\), there exists a corresponding control input \(u\in\mathbb{R}^{m}\) that satisfies the inequality, \(\nabla_{x}V(x)^{\top}f(x,u)<0\) for all \(x\neq 0\)._
Once such a CLF is found, it guarantees that there exists a control law that stabilizes the system. Moreover, the corresponding state-feedback control law can be extracted from the CLF, e.g., \(\mu(x)=\arg\min_{u}\nabla_{x}V(x)^{\top}f(x,u)\), provided that the minimum exists and is unique. The concept of the control Lyapunov function will be used in the derivations of our main results. For the autonomous
system \(\dot{x}_{t}=f(x_{t})\) and a Lyapunov function \(V:\mathbb{R}^{n}\to\mathbb{R}\), the Lie derivative is defined as \(\mathcal{L}_{f}V(x):=\nabla_{x}V(x)^{\top}f(x)\), so that \(\dot{V}(x_{t})=\mathcal{L}_{f}V(x_{t})\) along the solution.
### Stochastic approximation and O.D.E. approach
Many reinforcement learning algorithms, including Q-learning (Watkins & Dayan, 1992) and TD-learning (Sutton, 1988), can be viewed as stochastic approximation schemes (Robbins & Monro, 1951) described by
\[x_{k+1}=x_{k}+\alpha_{k}(f(x_{k})+\epsilon_{k}), \tag{2}\]
where \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a nonlinear mapping, and \(\epsilon_{k}\) is an i.i.d. noise. Borkar and Meyn theorem (Borkar & Meyn, 2000) is a well-known method to bridge the asymptotic convergence of stochastic approximation and the stability of its corresponding O.D.E. model, which can be expressed as
\[\dot{x}_{t}=f(x_{t}),\quad x_{0}\in\mathbb{R}^{n}, \tag{3}\]
where \(x_{0}\in\mathbb{R}^{n}\) is initial state, and \(t\in\mathbb{R}\), \(t\geq 0\) is the time.
Borkar and Meyn theorem (Borkar & Meyn, 2000) states that under the conditions in Assumption 7.1 in the Appendix, global asymptotic stability of the O.D.E. (3) leads to asymptotic convergence of the stochastic approximation update (2), which is formally stated in the following lemma.
**Lemma 2.1** (Borkar and Meyn theorem (Borkar & Meyn, 2000)).: _Suppose that Assumption 7.1 in the Appendix holds, and consider the stochastic approximation in (2). Then, for any initial \(x_{0}\in\mathbb{R}^{n}\), \(\sup_{k\geq 0}||x_{k}||<\infty\) with probability one. In addition, \(x_{k}\to x^{e}\) as \(k\to\infty\) with probability one, where \(x^{e}\) is the unique equilibrium point of the O.D.E. in (3)._
The main idea of Borkar and Meyn theorem is as follows: iterations of a stochastic recursive algorithm follow the solution of its corresponding O.D.E. in the limit when the step-size satisfies the so-called Robbins-Monro condition (Robbins & Monro, 1951) in (33) in the Appendix. Therefore, by proving asymptotic stability of the O.D.E., we can induce convergence of the original algorithm. In this paper, we will use an O.D.E. model of TD-learning, which is expressed as a linear time-invariant system.
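To illustrate the theorem, the following minimal sketch (an arbitrary linear example of ours, not from the paper) runs the stochastic approximation (2) with Robbins-Monro step-sizes and compares the iterate with the equilibrium of the O.D.E. (3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices: a Hurwitz matrix A (eigenvalues with negative real
# part) and an offset b, so that f(x) = A x + b has a stable O.D.E. model.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
b = np.array([1.0, 1.0])
x_eq = -np.linalg.solve(A, b)            # equilibrium of x' = A x + b

x = np.zeros(2)
for k in range(1, 200001):
    alpha = 1.0 / k                      # Robbins-Monro: sum = inf, sum of squares < inf
    noise = rng.normal(scale=0.5, size=2)    # i.i.d. zero-mean noise eps_k
    x = x + alpha * (A @ x + b + noise)      # stochastic approximation update (2)

print("iterate:", x, " equilibrium:", x_eq)  # x_k -> x_eq with probability one
```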
### Backstepping control
This section provides the concept of backstepping control (Kokotovic, 1992; Khalil, 2015), which will be the main tool in this paper for deriving TD-learning algorithms. The backstepping technique is a popular tool for generating a CLF (control Lyapunov function) for nonlinear systems with specific structures. In particular, let us start with the following general nonlinear system:
\[\dot{y}_{t} =f(y_{t})+g(y_{t})x_{t} \tag{4}\] \[\dot{x}_{t} =u_{t},\]
where \(y_{t}\in\mathbb{R}^{m},x_{t}\in\mathbb{R}^{m}\) are the states, \(u_{t}\in\mathbb{R}^{m}\) is the input, and \(f:\mathbb{R}^{m}\to\mathbb{R}^{m}\) and \(g:\mathbb{R}^{m}\to\mathbb{R}\) are continuous functions. The first system is a nonlinear system with a particular affine structure, and the second system is simply an integrator. It can be seen as a cascade interconnection of two systems, where the second system's state is injected into the input of the first system. The backstepping control technique gives us a systematic way to generate a CLF for such particular nonlinear systems, provided that the first system admits a CLF independently. To this end, we suppose that the first system admits a CLF. Through the backstepping approach, designing a stable control law for the above system can be summarized in the following steps:
1. Consider \(x_{t}\) in (4) as a virtual input \(\tilde{x}(y_{t})\) (a state-feedback controller), and consider the following system: \(\dot{y}_{t}=f(y_{t})+g(y_{t})\tilde{x}(y_{t})\). Design \(\tilde{x}(y_{t})\) such that the above system admits a CLF \(V\), i.e., it admits a positive definite and radially unbounded function \(V\) such that its time derivative is negative definite, i.e., \(\dot{V}(y_{t})<0,\forall y_{t}\neq 0\).
2. Denote the error between the virtual state-feedback controller \(\tilde{x}(y_{t})\) and the state variable \(x_{t}\) as \(z_{t}:=x_{t}-\tilde{x}(y_{t})\). Now, rewrite the original O.D.E. in (4) in the new variables \((y_{t},z_{t})\): \(\frac{d}{dt}\begin{bmatrix}y_{t}\\ z_{t}\end{bmatrix}=\begin{bmatrix}f(y_{t})+g(y_{t})\tilde{x}(y_{t})+g(y_{t})z_ {t}\\ u_{t}-\dot{\tilde{x}}(y_{t})\end{bmatrix}\)
3. Design the control input \(u_{t}\) such that the above system is stable. One popular choice is to consider the CLF \(V_{c}(y_{t},z_{t}):=V(y_{t})+||z_{t}||^{2}/2\), where \(V(y_{t})\) is defined in Step 1. Then choose \(u_{t}\) such that the time derivative of \(V_{c}(y_{t},z_{t})\) is negative definite.
A simple example of designing a stabilizing control law via the backstepping technique is given in Appendix Section 7.3.
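As a concrete illustration (our own scalar example, not the one in Appendix Section 7.3), the following sketch applies the three steps above to the cascade \(\dot{y}_{t}=y_{t}^{2}+x_{t}\), \(\dot{x}_{t}=u_{t}\):

```python
import numpy as np

# Backstepping for the scalar cascade  y' = y**2 + x,  x' = u  (f(y)=y**2, g(y)=1).
# Step 1: virtual control xt(y) = -y**2 - y stabilizes y' = -y with CLF V(y) = y**2/2.
# Step 2: error variable z = x - xt(y).
# Step 3: with V_c = V + z**2/2, the input below gives dV_c/dt = -y**2 - z**2 < 0.

def u(y, x):
    z = x + y**2 + y                  # z = x - xt(y)
    dxt_dy = -2.0 * y - 1.0           # derivative of the virtual control
    ydot = y**2 + x
    return dxt_dy * ydot - y - z      # cancels the cross terms in dV_c/dt

y, x, dt = 1.0, 0.5, 1e-3
for _ in range(20000):                # forward-Euler simulation of the closed loop
    ydot = y**2 + x
    xdot = u(y, x)
    y, x = y + dt * ydot, x + dt * xdot

print(y, x)                           # both states decay toward the origin
```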
### Markov Decision Process
In this paper, we consider a Markov decision process (MDP) characterized by the tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\gamma,r)\), where \(\mathcal{S}:=\{1,2,\ldots,|\mathcal{S}|\}\) stands for the finite state space, \(|\mathcal{S}|\) denotes the size of \(\mathcal{S}\), \(\mathcal{A}:=\{1,2,\ldots,|\mathcal{A}|\}\) denotes the finite action space, \(|\mathcal{A}|\) is the size of \(\mathcal{A}\), \(\gamma\in(0,1)\) is the discount factor, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition probability, and \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function. In particular, if an agent at state \(s\in\mathcal{S}\) takes action \(a\in\mathcal{A}\), then the current state transits to the next state \(s^{\prime}\in\mathcal{S}\) with probability \(\mathcal{P}(s,a,s^{\prime})\), and the agent receives reward \(r(s,a,s^{\prime})\). Each element of the state-to-state transition matrix under policy \(\pi\), denoted by \(P^{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\), is \([P^{\pi}]_{ij}:=\sum\limits_{a\in\mathcal{A}}\pi(a|i)\mathcal{P}(i,a,j),\quad 1 \leq i,j\leq|\mathcal{S}|,\) where \([P^{\pi}]_{ij}\) corresponds to the \(i\)-th row and \(j\)-th column element of the matrix \(P^{\pi}\). Moreover, the stationary state distribution induced by policy \(\mu\) is denoted as \(d^{\mu}:\mathcal{S}\rightarrow[0,1]\), i.e., \(d^{\mu\top}P^{\mu}=d^{\mu\top}\). With the above setup, we define the following matrix notations:
\[D^{\mu}:=\begin{bmatrix}d^{\mu}(1)&&\\ &\ddots&\\ &&d^{\mu}(|\mathcal{S}|)\end{bmatrix}\in\mathbb{R}^{|\mathcal{S}|\times| \mathcal{S}|},\quad R^{\pi}=\begin{bmatrix}\mathbb{E}_{a\sim\pi}[r(s,a,s^{ \prime})|s=1]\\ \mathbb{E}_{a\sim\pi}[r(s,a,s^{\prime})|s=2]\\ \vdots\\ \mathbb{E}_{a\sim\pi}[r(s,a,s^{\prime})|s=|\mathcal{S}|]\end{bmatrix}\in \mathbb{R}^{|\mathcal{S}|},\]
where \(D^{\mu}\) is a diagonal matrix of the state distribution induced by the behavior policy \(\mu\), and each element of \(R^{\pi}\) is the expected reward under policy \(\pi\) at the corresponding state. The policy evaluation problem aims to approximate the value function at state \(s\in\mathcal{S}\), \(v^{\pi}(s):=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}r(S_{k},A_{k},S_{k+1} )\big{|}\,S_{0}=s,\pi\right]\), where the trajectory is generated under policy \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\). In this paper, we consider linear function approximation of the value function \(v^{\pi}(s)\). In particular, we parameterize the value function \(v^{\pi}(s)\) with \(\phi^{\top}(s)\xi\), where \(\phi:\mathcal{S}\rightarrow\mathbb{R}^{n}\) is a pre-selected feature vector with \(\phi(s):=[\phi_{1}(s)\quad\cdots\quad\phi_{n}(s)]\), \(\phi_{1},\ldots,\phi_{n}:\mathcal{S}\rightarrow\mathbb{R}\) are feature functions, and \(\xi\in\mathbb{R}^{n}\) is the learning parameter. The goal of the policy evaluation problem is then to approximate the value function \(v^{\pi}(s)\) using this linear parameterization, i.e., \(\phi^{\top}(s)\xi\approx v^{\pi}(s)\). Moreover, using the matrix notation \(\Phi:=[\phi(1),\phi(2),\cdots,\phi(|\mathcal{S}|)]^{\top}\in\mathbb{R}^{| \mathcal{S}|\times n}\), called the feature matrix, the linear parameterization can be written in the vector form \(\Phi\xi\). We also assume throughout the paper that \(\Phi\) is a full column-rank matrix, which is a standard assumption (Sutton et al., 2008, 2009; Ghiassian et al., 2020; Lee et al., 2021).
### Temporal difference learning
This section provides a brief background on TD-learning (Sutton, 1988). Suppose that we have access to stochastic samples of the state \(s_{k}\) from the stationary state distribution induced by the behavior policy \(\mu\), i.e., \(s_{k}\sim d^{\mu}(\cdot)\), and that the action is chosen under the behavior policy \(\mu\), i.e., \(a_{k}\sim\mu(\cdot|s_{k})\). Then, we observe the next state \(s^{\prime}_{k}\sim\mathcal{P}(s_{k},a_{k},\cdot)\) and receive the reward \(r_{k}:=r(s_{k},a_{k},s^{\prime}_{k})\). Using the simplified notations for the feature vectors \(\phi_{k}:=\phi(s_{k}),\quad\phi^{\prime}_{k}=\phi(s^{\prime}_{k})\), the TD-learning update at time step \(k\) with linear function approximation can be expressed as \(\xi_{k+1}=\xi_{k}+\alpha_{k}\rho_{k}\delta_{k}(\xi_{k})\phi_{k},\) where \(\alpha_{k}>0\) is the step-size, \(\delta_{k}(\xi_{k}):=r_{k}+\gamma\phi^{\prime\top}_{k}\xi_{k}-\phi^{\top}_{k} \xi_{k}\) is called the temporal difference or temporal difference error (TD-error), and \(\rho_{k}:=\rho(s_{k},a_{k})=\frac{\pi(a_{k}|s_{k})}{\mu(a_{k}|s_{k})}\) is called the importance sampling ratio (Precup et al., 2001). The importance sampling ratio re-weights the TD-error to handle the mismatch between the behavior policy \(\mu\) and the target policy \(\pi\). It is known that TD-learning with linear function approximation and an off-policy learning scheme does not guarantee convergence in general. The above stochastic approximation aims to find the fixed point of the following projected Bellman equation, which, after some manipulations, is expressed as:
\[\Phi^{\top}D^{\mu}\Phi\xi^{*}-\gamma\Phi^{\top}D^{\mu}P^{\pi}\Phi\xi^{*}=\Phi^{ \top}D^{\mu}R^{\pi}. \tag{5}\]
To simplify the expressions, let us introduce some additional notation:
\[A :=\mathbb{E}_{s\sim d^{\mu}(s),s^{\prime}\sim P^{\pi}(s^{\prime}|s)}[ \phi(s)(\phi(s)-\gamma\phi(s^{\prime}))^{\top}]=\Phi^{\top}D^{\mu}\Phi-\gamma \Phi^{\top}D^{\mu}P^{\pi}\Phi\in\mathbb{R}^{n\times n},\] \[b :=\mathbb{E}_{s\sim d^{\mu}(s),a\sim\pi(a|s),s^{\prime}\sim P(s^{ \prime}|s,a)}[r(s,a,s^{\prime})\phi(s)]=\Phi^{\top}D^{\mu}R^{\pi}\in\mathbb{R}^ {n\times 1}.\]
Even though an arbitrary distribution can be used, for simplicity we assume the stationary distribution of \(\mu\). Now, we can rewrite (5) compactly as
\[A\xi^{*}=b. \tag{6}\]
The corresponding O.D.E. for TD-learning can be written as \(\dot{\xi}_{t}=-A\xi_{t}+b,\xi_{0}\in\mathbb{R}^{n}\). Using the coordinate transform \(x_{k}:=\xi_{k}-\xi^{*}\), we get the O.D.E. \(\dot{x}_{t}=-Ax_{t},x_{0}\in\mathbb{R}^{n}\), whose origin is a globally asymptotically stable equilibrium point if \(\rho(s,a)=\frac{\pi(a|s)}{\mu(a|s)}=1\) for all \((s,a)\in\mathcal{S}\times\mathcal{A}\). Throughout the paper we will use the vector \(x_{k}:=\xi_{k}-\xi^{*}\) to represent the coordinate transform of \(\xi_{k}\) to the origin, and will use \(\xi_{t}\) and \(x_{t}\) to denote the corresponding continuous-time counterparts of \(\xi_{k}\) and \(x_{k}\), respectively.
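The update above is easy to simulate; the following minimal sketch (a randomly generated MDP, with all quantities synthetic) runs off-policy TD(0) with linear function approximation and importance sampling, sampling along a trajectory under \(\mu\) rather than i.i.d. from \(d^{\mu}\), as is usual in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, n, gamma = 5, 2, 3, 0.9

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
r = rng.normal(size=(nS, nA, nS))               # rewards r(s, a, s')
Phi = rng.normal(size=(nS, n))                  # feature matrix (full column rank a.s.)
pi = rng.dirichlet(np.ones(nA), size=nS)        # target policy
mu = rng.dirichlet(np.ones(nA), size=nS)        # behavior policy

xi = np.zeros(n)
s = 0
for k in range(1, 500001):
    a = rng.choice(nA, p=mu[s])
    s2 = rng.choice(nS, p=P[s, a])
    rho = pi[s, a] / mu[s, a]                                  # importance sampling ratio
    delta = r[s, a, s2] + gamma * Phi[s2] @ xi - Phi[s] @ xi   # TD error
    xi = xi + (10.0 / (k + 1000)) * rho * delta * Phi[s]       # TD(0) update
    s = s2

print(xi)   # may diverge off-policy; approaches the fixed point of (6) when mu = pi
```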
### Gradient temporal difference learning
To fix the instability issue of off-policy TD-learning under linear function approximation, Sutton et al. (2008) and Sutton et al. (2009) introduced various stable off-policy TD-learning algorithms, called GTD (gradient TD-learning), GTD2, and TDC (temporal difference correction). The idea behind these algorithms is to minimize the mean squared projected Bellman error (MSPBE) \(\min_{\xi\in\mathbb{R}^{n}}\frac{1}{2}||\Phi^{\top}D^{\mu}(R^{\pi}+\gamma P^{ \pi}\Phi\xi-\Phi\xi)||^{2}_{(\Phi^{\top}D^{\mu}\Phi)^{-1}}\), where \(||x||_{D}:=\sqrt{x^{\top}Dx}\), and the global minimizer of the MSPBE corresponds to the solution of (6). The core idea of these algorithms is to introduce an additional variable \(\lambda_{k}\in\mathbb{R}^{n}\) so as to approximate a stochastic gradient descent method for the MSPBE objective. In particular, the GTD2 update can be written as
\[\lambda_{k+1}=\lambda_{k}+\alpha_{k}(-\phi_{k}^{\top}\lambda_{k}+\rho_{k} \delta_{k}(\xi_{k}))\phi_{k},\quad\xi_{k+1}=\xi_{k}+\alpha_{k}(\phi_{k}^{\top} \lambda_{k}\phi_{k}-\rho_{k}\gamma\phi_{k}^{\top}\lambda_{k}\phi_{k}^{\prime}).\]
We use \(\lambda_{t}\) to denote the continuous-time counterpart of \(\lambda_{k}\). Since the fixed point for \(\lambda_{k}\) is zero, it does not require a coordinate transformation. GTD2 is a single time-scale algorithm because it uses a single step-size \(\alpha_{k}\). The corresponding O.D.E. is expressed as \(\dot{\lambda}_{t}=-C\lambda_{t}-Ax_{t},\dot{x}_{t}=A^{\top}\lambda_{t}\), where \(C:=\mathbb{E}_{s\sim d^{\mu}(s)}[\phi(s)\phi^{\top}(s)]=\Phi^{\top}D^{\mu}\Phi \in\mathbb{R}^{n\times n}\). Similarly, the TDC update can be written as
\[\lambda_{k+1} =\lambda_{k}+\alpha_{k}(-\phi_{k}^{\top}\lambda_{k}+\rho_{k} \delta_{k}(\xi_{k}))\phi_{k} \tag{7}\] \[\xi_{k+1} =\xi_{k}+\beta_{k}(-\rho_{k}\gamma\phi_{k}^{\top}\lambda_{k}\phi_ {k}^{\prime}+\rho_{k}\delta_{k}(\xi_{k})\phi_{k}), \tag{8}\]
where the step-sizes \(\alpha_{k}\) and \(\beta_{k}\) satisfy \(\beta_{k}/\alpha_{k}\to 0\) as \(k\to\infty\) and the Robbins and Monro step-size condition (Robbins and Monro, 1951) in (33) in the Appendix. It is a two time-scale algorithm because it uses two step-sizes, \(\alpha_{k}\) and \(\beta_{k}\).
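For concreteness, the displayed updates translate directly into code; the following minimal sketch (function names and the NumPy style are ours, not from the paper) implements one GTD2 step and one TDC step for a single observed transition:

```python
import numpy as np

def gtd2_step(xi, lam, phi, phi2, r, rho, gamma, alpha):
    """One GTD2 update (single time-scale, shared step size alpha)."""
    delta = r + gamma * phi2 @ xi - phi @ xi            # TD error
    lam_new = lam + alpha * (-(phi @ lam) + rho * delta) * phi
    xi_new = xi + alpha * ((phi @ lam) * phi - rho * gamma * (phi @ lam) * phi2)
    return xi_new, lam_new

def tdc_step(xi, lam, phi, phi2, r, rho, gamma, alpha, beta):
    """One TDC update, equations (7)-(8), with two time-scale step sizes."""
    delta = r + gamma * phi2 @ xi - phi @ xi
    lam_new = lam + alpha * (-(phi @ lam) + rho * delta) * phi          # fast part (7)
    xi_new = xi + beta * (-rho * gamma * (phi @ lam) * phi2             # slow part (8)
                          + rho * delta * phi)
    return xi_new, lam_new
```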
## 3 Designing TD-learning through backstepping
We briefly explain the motivation for our algorithmic development. The Borkar and Meyn theorem (Borkar & Meyn, 2000) in Lemma 2.1 is a typical tool to prove convergence of Q-learning (Borkar & Meyn, 2000; Lee & He, 2019) and TD-learning (Sutton et al., 2009; Lee et al., 2021). Most of the previous works on off-policy TD-learning algorithms (e.g., GTD2 and TDC) first start with an objective function, and then derive GTD algorithms based on optimization perspectives; the convergence is then proved using the corresponding O.D.E. models and the stability theory of linear time-invariant systems. A natural question is: can we derive off-policy TD-learning algorithms following the reversed steps? In other words, can we first develop a stable O.D.E. model using tools in control theory, and then recover the corresponding off-policy TD-learning algorithms? In this paper, we reveal that a class of off-policy TD-learning algorithms can be derived based on purely control theoretic motivations following such a reversed process. By doing so, this work provides additional insights on off-policy TD-learning algorithms and lays a sound theoretical foundation for further developments of new algorithms.
Designing stabilizing control laws for continuous-time nonlinear systems has been studied successfully over the past decades (Khalil, 2015). One such technique, the so-called backstepping, is a popular controller design method in the nonlinear control literature (Khalil, 2015). With the help of the backstepping method (Khalil, 2015), we design stabilizing control laws for continuous-time systems, then derive the corresponding off-policy TD-learning algorithms, which are shown to be convergent via the Borkar and Meyn theorem (Borkar & Meyn, 2000) in Lemma 2.1. The brief procedure is explained in the following steps: Step 1) choose an appropriate continuous-time dynamic model such that (a) we can recover the TD fixed point \(\xi^{*}\) in (6) via its equilibrium point, and (b) the corresponding stochastic approximation algorithm is implementable using only transitions of the MDP and accessible data; Step 2) using the backstepping method, design a control input to stabilize the dynamic model chosen in Step 1).
### Backstepping TD
Now, we introduce a new off-policy TD-learning algorithm, which we call Backstepping TD (BTD). First, we will develop a stabilizing control law for the following continuous-time system:
\[\dot{\lambda}_{t} =(-C+\eta A)\lambda_{t}-Ax_{t} \tag{9}\] \[\dot{x}_{t} =u_{t} \tag{10}\]
The idea stems from finding a control system to which we can easily apply the backstepping technique. In detail, the backstepping technique can be applied to two interconnected systems where one subsystem, namely (4), can be stabilized with \(x_{t}\) in (4) as a control input. Therefore, our first aim is to find such a system. To this end, we can try a natural choice of O.D.E. to solve the TD problem, i.e., \(\dot{\lambda}_{t}=A\lambda_{t}\), which is however unstable in the off-policy case. Therefore, we develop a modified O.D.E. \(\dot{\lambda}_{t}=(-C+\eta A)\lambda_{t}-Ax_{t}\), where \(x_{t}\) is the control input, the negative definite matrix \(-C\) is introduced to stabilize the system, and \(\eta>0\) is introduced to provide additional degrees of freedom in the design. The constructed system can be stabilized through the state-feedback controller \(x_{t}=\eta\lambda_{t}\) and admits the simple control Lyapunov function \(V(\lambda)=||\lambda||^{2}/2\). Moreover, \(A\) should be included in the right-hand side in order to implement the corresponding algorithm without knowing the solution, because \(x_{k}=\xi_{k}-\xi^{*}\) and \(\xi^{*}\) should be removed using \(A\xi^{*}=b\) in the final step. Simply setting \(x_{t}=\eta\lambda_{t}\) cancels out \(A\) on the right-hand side, and the O.D.E. becomes \(\dot{\lambda}_{t}=-C\lambda_{t}\); therefore, as mentioned before, we can apply the backstepping technique by adding an additional dynamic controller. As the next step, the backstepping technique is applied, and one needs to observe what the final form of the control system would be. In summary, if we compose \(f(\lambda_{t})\) from a combination of \(A\) and \(-C\) (not necessarily \(-C\); it may be \(-I\)), it is a reasonable candidate for applying the backstepping technique. Cancelling \(A\) with the virtual input only leaves \(-C\), which guarantees stability by its negative definiteness. Therefore, (9) and (10) are a reasonable candidate for the dynamics to which we can apply the backstepping technique. In particular, our aim is to design an appropriate control input \(u_{t}\) for the above system such that the origin is the unique asymptotically stable equilibrium point, i.e., \((\lambda_{t},x_{t})\to 0\) as \(t\to\infty\) for any \((\lambda_{0},x_{0})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\). The overall procedure is depicted in Figure 1 in the Appendix, and we show how to choose the control input \(u_{t}\) in the following lemma.
**Lemma 3.1**.: _Consider the O.D.E. in (9) and (10). If we choose the control input \(u_{t}:=(A^{\top}+\eta^{2}A-\eta C)\lambda_{t}-\eta Ax_{t}\), then the above O.D.E. has globally asymptotically stable origin, i.e., \((\lambda_{t},x_{t})\to(0,0)\) as \(t\to\infty\) for any \((\lambda_{0},x_{0})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\)._
Proof sketch.: The proof follows the steps given in the backstepping scheme in Section 3. First, substituting \(x_{t}\) in (9) with a virtual controller \(\tilde{x}(\lambda_{t})\), we will design a control law \(\tilde{x}(\lambda_{t})\) that stabilizes the following new virtual system:
\[\dot{\lambda}_{t}=(-C+\eta A)\lambda_{t}-A\tilde{x}(\lambda_{t}). \tag{11}\]
One natural choice of the virtual controller is \(\tilde{x}(\lambda_{t})=\eta\lambda_{t}\). Plugging it into (11) leads to \(\dot{\lambda}_{t}=-C\lambda_{t}\), and we can verify the global asymptotic stability of the above system with the following Lyapunov function:
\[V(\lambda_{t}):=\frac{||\lambda_{t}||_{2}^{2}}{2}. \tag{12}\]
We now consider the original O.D.E. in (9) and (10). Applying simple algebraic manipulations yield \(\dot{\lambda}_{t}=-C\lambda_{t}-A(x_{t}-\eta\lambda_{t}),\quad\dot{x}_{t}=u_{t}\). The error between \(x_{t}\) and the virtual controller \(\tilde{x}(\lambda_{t})\) can be expressed as new variable \(z_{t}\), which is \(z_{t}:=x_{t}-\tilde{x}(\lambda_{t})=x_{t}-\eta\lambda_{t}\). Rewriting the O.D.E. in (9) and (10) with \((\lambda_{t},z_{t})\) coordinates, we have
\[\dot{\lambda}_{t} =-C\lambda_{t}-Az_{t} \tag{13}\] \[\dot{z}_{t} =u_{t}+\eta C\lambda_{t}+\eta Az_{t}.\]
To prove the global asymptotic stability of the above system, consider the function \(V_{c}(\lambda_{t},z_{t}):=V(\lambda_{t})+\frac{1}{2}||z_{t}||_{2}^{2}\), where \(V(\lambda_{t})\) is defined in (12). Taking \(u_{t}=A^{\top}\lambda_{t}-\eta C\lambda_{t}-\eta Az_{t}\), we can apply LaSalle's invariance principle in Lemma 7.1. The full proof is in Appendix Section 7.4.1.
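For intuition, the key computation behind this choice of \(u_{t}\) (our own one-line check, not reproduced from the appendix) is that, along (13),

\[\dot{V}_{c}(\lambda_{t},z_{t})=\lambda_{t}^{\top}(-C\lambda_{t}-Az_{t})+z_{t}^{\top}(u_{t}+\eta C\lambda_{t}+\eta Az_{t})=-\lambda_{t}^{\top}C\lambda_{t}-\lambda_{t}^{\top}Az_{t}+z_{t}^{\top}A^{\top}\lambda_{t}=-\lambda_{t}^{\top}C\lambda_{t}\leq 0,\]

where the cross terms cancel because the scalar \(z_{t}^{\top}A^{\top}\lambda_{t}\) equals its transpose \(\lambda_{t}^{\top}Az_{t}\). Since \(\dot{V}_{c}\) is only negative semidefinite in \((\lambda_{t},z_{t})\), LaSalle's invariance principle, rather than a strict Lyapunov argument, is what yields asymptotic stability.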
Using the relation \(z_{t}:=x_{t}-\eta\lambda_{t}\), the control input in the original coordinates \((\lambda_{t},x_{t})\) can be written as \(u_{t}:=A^{\top}\lambda_{t}-\eta C\lambda_{t}-\eta Az_{t}=(A^{\top}+\eta^{2}A- \eta C)\lambda_{t}-\eta Ax_{t}\). Plugging this input into the original open-loop system in (9) and (10), the closed-loop system in the original coordinates \((\lambda_{t},x_{t})\) can be written as
\[\dot{\lambda}_{t} =(-C+\eta A)\lambda_{t}-Ax_{t} \tag{14}\] \[\dot{x}_{t} =(A^{\top}+\eta^{2}A-\eta C)\lambda_{t}-\eta Ax_{t}, \tag{15}\]
whose origin is also globally asymptotically stable according to Lemma 3.1. Recovering back from \(x_{t}\) to \(\xi_{t}\), we have \(\frac{d}{dt}\begin{bmatrix}\lambda_{t}\\ \xi_{t}\end{bmatrix}=\begin{bmatrix}-C+\eta A&-A\\ A^{\top}+\eta^{2}A-\eta C&-\eta A\end{bmatrix}\begin{bmatrix}\lambda_{t}\\ \xi_{t}\end{bmatrix}+\begin{bmatrix}b\\ \eta b\end{bmatrix}\). The corresponding stochastic approximation of the O.D.E. in Theorem 3.1 becomes
\[\lambda_{k+1} =\lambda_{k}+\alpha_{k}(((-1+\eta)\phi_{k}^{\top}-\eta\rho_{k} \gamma\phi_{k}^{\prime\top})\lambda_{k}+\rho_{k}\delta_{k}(\xi_{k}))\phi_{k} \tag{16}\] \[\xi_{k+1} =\xi_{k}+\alpha_{k}(((-\eta+\eta^{2})\phi_{k}^{\top}-\eta^{2}\rho _{k}\gamma\phi_{k}^{\prime\top})\lambda_{k}\phi_{k}+\eta\rho_{k}\delta_{k}(\xi _{k})\phi_{k}+(\phi_{k}^{\top}\lambda_{k}\phi_{k}-\rho_{k}\gamma\phi_{k}^{ \top}\lambda_{k}\phi_{k}^{\prime})). \tag{17}\]
The equilibrium point of the above O.D.E. is \((0,\xi^{*})\). Hence, we only need to transform the coordinate of \(\xi_{t}\) to \(x_{t}=\xi_{t}-\xi^{*}\), which results in the O.D.E. in (14) and (15). With the above result, we are now ready to prove the convergence of Algorithm 1. The proof simply follows from the Borkar and Meyn theorem in Lemma 2.1, of which the details can be found in Sutton et al. (2009).
**Theorem 3.1**.: _Under the step size condition (33), with Algorithm 1 in Appendix, \(\xi_{k}\to\xi^{*}\) as \(k\to\infty\) with probability one, where \(\xi^{*}\) is the fixed point of (6)._
Proof.: The proof is done by checking Assumption 7.1 in Appendix.
**Remark 3.1**.: _Theorem 3.1 does not require any condition on \(\eta\). Therefore, we can set \(\eta=0\), which results in GTD2 developed in Sutton et al. (2009)._
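As a sanity check, the update (16)-(17) is straightforward to implement; the sketch below (helper name ours, not from the paper) performs one BTD step per transition, and setting `eta = 0` reduces it to the GTD2 step, consistent with Remark 3.1:

```python
import numpy as np

def btd_step(xi, lam, phi, phi2, r, rho, gamma, alpha, eta):
    """One Backstepping TD update, equations (16)-(17)."""
    delta = r + gamma * phi2 @ xi - phi @ xi            # TD error
    # Equation (16): lambda update.
    lam_new = lam + alpha * ((-1 + eta) * (phi @ lam)
                             - eta * rho * gamma * (phi2 @ lam)
                             + rho * delta) * phi
    # Equation (17): xi update; the last term is the GTD2 direction.
    xi_new = xi + alpha * (
        ((-eta + eta**2) * (phi @ lam) - eta**2 * rho * gamma * (phi2 @ lam)) * phi
        + eta * rho * delta * phi
        + ((phi @ lam) * phi - rho * gamma * (phi @ lam) * phi2)
    )
    return xi_new, lam_new
```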
### Recovering single time-scale TDC
In this section, we derive a single time-scale version of TDC (Sutton et al., 2009) through the backstepping design of the previous section. TDC was originally developed as a two time-scale algorithm in Sutton et al. (2009). Even though the two time-scale method provides theoretical guarantees for a larger class of algorithms, the single time-scale scheme is simpler in practice and shows faster convergence empirically. Subsequently, Maei (2011) provided a single time-scale version of TDC by multiplying the faster time-scale part (7) by a large enough constant \(\eta>0\), which leads to
\[\lambda_{k+1} =\lambda_{k}+\beta_{k}\eta(-\phi_{k}^{\top}\lambda_{k}+\rho_{k} \delta_{k}(\xi_{k}))\phi_{k} \tag{18}\] \[\xi_{k+1} =\xi_{k}+\beta_{k}(-\rho_{k}\gamma\phi_{k}^{\top}\lambda_{k}\phi_ {k}^{\prime}+\rho_{k}\delta_{k}(\xi_{k})\phi_{k}), \tag{19}\]
where
\[\eta>\max\left\{0,-\lambda_{\min}\left(C^{-1}(A+A^{\top})/2\right) \right\}. \tag{20}\]
Here, we derive another version of single time-scale TDC by multiplying the slower time-scale part in (8) by a constant, which results in
\[\lambda_{k+1} =\lambda_{k}+\alpha_{k}(-\phi_{k}^{\top}\lambda_{k}+\rho_{k}\delta _{k}(\xi_{k}))\phi_{k} \tag{21}\] \[\xi_{k+1} =\xi_{k}+\alpha_{k}\beta(-\rho_{k}\gamma\phi_{k}^{\top}\lambda_{k }\phi_{k}^{\prime}+\rho_{k}\delta_{k}(\xi_{k})\phi_{k}), \tag{22}\]
where \(\beta\) satisfies
\[0<\beta<-\frac{\lambda_{\min}(C)}{\lambda_{\min}(A)}\quad\text{if}\quad\lambda _{\min}(A)<0,\text{ else}\quad\beta>0. \tag{23}\]
We can derive the above algorithm following similar steps as in Section 3.1. Let us first consider the following dynamic model:
\[\dot{\lambda}_{t} =-C\lambda_{t}-Ax_{t} \tag{24}\] \[\dot{x}_{t} =u_{t} \tag{25}\]
Using the backstepping technique, we can prove that the above system admits the origin as a globally asymptotically stable equilibrium point with the control input \(u_{t}:=\beta\left((A^{\top}-C)\lambda_{t}-Ax_{t}\right)\), which is shown in the following lemma:
**Lemma 3.2**.: _Consider the O.D.E. in (24) and (25). Suppose that we choose the control input \(u_{t}:=\beta\left((A^{\top}-C)\lambda_{t}-Ax_{t}\right)\), and that \(\beta\) satisfies condition (23). Then, the above O.D.E. has a globally asymptotically stable origin, i.e., \((\lambda_{t},x_{t})\rightarrow(0,0)\) as \(t\rightarrow\infty\)._
The proof of Lemma 3.2 is given in Appendix Section 7.4.2. By the Borkar and Meyn theorem in Lemma 2.1, we can readily prove the convergence of Algorithm 2 in the Appendix, which uses the stochastic recursive updates (21) and (22).
**Theorem 3.2**.: _Consider Algorithm 2 in Appendix. Under the step size condition (33), and if \(\beta\) satisfies (23), \(\xi_{k}\rightarrow\xi^{*}\) as \(k\rightarrow\infty\) with probability one, where \(\xi^{*}\) is the fixed point of (6)._
We will call Algorithm 2 TDC-slow, and the single time-scale version of TDC suggested by Maei (2011) TDC-fast. Besides multiplying by a constant that reflects the two time-scale property, there is another way to make TDC a single time-scale algorithm, which we call single time-scale TDC2, while the original version in Maei (2011) will be called single time-scale TDC. The derivation is given in Appendix Section 7.5, and the performance of these versions of TDC is evaluated in Appendix Section 7.9.1. Even though none of the algorithms outperforms the others uniformly, TDC-slow and TDC2 show better performance in general.
### Generalizing TDC++
This section provides generalized versions of TDC++ (Ghiassian et al., 2020), which is a variant of TDC. With an additional regularization term \(-\beta\lambda_{k}\) added to both updates of TDC in (7) and (8), the update is written as follows:
\[\lambda_{k+1} =\lambda_{k}+\alpha_{k}\eta((-\phi_{k}^{\top}\lambda_{k}+\rho_{k} \delta_{k}(\xi_{k}))\phi_{k}-\beta\lambda_{k}) \tag{26}\] \[\xi_{k+1} =\xi_{k}+\alpha_{k}(-\rho_{k}\gamma\phi_{k}^{\top}\lambda_{k}\phi _{k}^{\prime}-\beta\lambda_{k}+\rho_{k}\delta_{k}(\xi_{k})\phi_{k}), \tag{27}\]
where \(\eta>0\) satisfies (20) and \(\beta>0\) is a new parameter. Note that TDC++ can be simply viewed as a variant of TDC obtained by adding the term \(-\beta\lambda_{k}\) to the updates, which can be seen as a regularization term. Therefore, letting \(\beta=0\) yields the original TDC. In this paper, we prove that our controller design leads to the following update:
\[\lambda_{k+1} =\lambda_{k}+\alpha_{k}\eta((-\phi_{k}^{\top}\lambda_{k}+\rho_{k} \delta_{k}(\xi_{k}))\phi_{k}-\beta\lambda_{k}) \tag{28}\] \[\xi_{k+1} =\xi_{k}+\alpha_{k}(-\rho_{k}\gamma\phi_{k}^{\top}\lambda_{k}\phi _{k}^{\prime}+(1-\kappa\eta)\phi_{k}^{\top}\lambda_{k}\phi_{k}-\kappa\beta\eta \lambda_{k}+\rho_{k}\kappa\eta\delta_{k}(\xi_{k})\phi_{k}), \tag{29}\]
where \(\kappa\) and \(\beta\) are new parameters and when \(\kappa=1/\eta\) it becomes TDC++. The difference with the original TDC++ can be seen in their corresponding O.D.E. forms. The corresponding O.D.E. for (26) and (27) (original TDC++) can be expressed as: \(\frac{d}{dt}\begin{bmatrix}\lambda_{t}\\ x_{t}\end{bmatrix}=\begin{bmatrix}-\eta(C+\beta I)&-\eta A\\ A^{\top}-C-\beta I&-A\end{bmatrix}\begin{bmatrix}\lambda_{t}\\ x_{t}\end{bmatrix}\). Meanwhile, the O.D.E. corresponding to (28) and (29) (new TDC++) becomes
\(\frac{d}{dt}\begin{bmatrix}\lambda_{t}\\ x_{t}\end{bmatrix}=\begin{bmatrix}-\eta(C+\beta I)&-\eta A\\ A^{\top}-\kappa\eta(C+\beta I)&-\kappa\eta A\end{bmatrix}\begin{bmatrix}\lambda_{t} \\ x_{t}\end{bmatrix}\). We experiment under different values of \(\kappa\) and \(\eta\) to examine the behavior of the new TDC++. The results show that, in general, a smaller \(\kappa\) leads to better performance. The results are given in Appendix Section 7.9.
**Lemma 3.3**.: _Consider the following O.D.E.:_
\[\dot{\lambda}_{t} =-\eta(C+\beta I)\lambda_{t}-\eta Ax_{t} \tag{30}\] \[\dot{x}_{t} =u_{t}. \tag{31}\]
_Suppose that we choose the control input \(u_{t}:=(A^{\top}-\kappa\eta(C+\beta I))\lambda_{t}-\kappa\eta Ax_{t}\). Assume that \(\eta>0\), and that \(\beta\) and \(\kappa\) satisfy the following condition: \(\beta+\kappa\lambda_{\min}(A)>\lambda_{\min}(C)\). Then, the above O.D.E. has a globally asymptotically stable origin, i.e., \((\lambda_{t},x_{t})\rightarrow(0,0)\) as \(t\rightarrow\infty\)._
The proof is given in Appendix Section 7.4.3. With Lemma 2.1, we can prove the convergence of stochastic update with (28) and (29) whose pseudo code is given in Algorithm 5 in Appendix.
**Theorem 3.3**.: _Consider Algorithm 5 in Appendix. Under the step-size condition (33) and if \(\eta\) satisfies (20), then \(\xi_{k}\rightarrow\xi^{*}\) as \(k\rightarrow\infty\) with probability one, where \(\xi^{*}\) is the TD fixed point in (6)._
**Remark 3.2**.: _We can replace the regularization term with nonlinear terms satisfying certain conditions. The details are given in Appendix Section 7.6._
## 4 Experiments
We verify the performance and convergence of the proposed BTD on standard benchmarks for evaluating off-policy TD-learning algorithms, including the Baird environment (Baird, 1995), RandomWalk (Sutton et al., 2009) with different features, and the Boyan chain (Boyan, 2002). The details of the environments are given in Appendix Section 7.7. From the experiments, we examine how BTD behaves under different coefficients \(\eta\in\{-0.5,-0.25,0,0.25,0.5\}\). We measure the Root Mean-Squared Projected Bellman Error (RMSPBE) as the performance metric, and all results are averaged over 100 runs. From Table 1, \(\eta=0.5\) shows the best performance except on Baird, where \(\eta=0\), corresponding to GTD2, performs best. There are two aspects to the role of \(\eta\). First of all, it can be thought of as a parameter that mitigates the effect of instability coming from the matrix \(A\) in (9). For example, a smaller \(\eta\) can stabilize the system. However, as a trade-off, if \(\eta\) is too small, then the update rate might be too small as well, and as a result the overall convergence can be slower. Furthermore, \(\eta\) also controls the effect of \(-C\) in (13) in the BTD update rules, where \(-C\) corresponds to \((-\eta+\eta^{2})\phi_{k}^{\top}\lambda_{k}\phi_{k}\) in (17). Note that the role of \(\eta\) in the final BTD update rule in (17) shows a different perspective compared to that in (9). In particular, \(\eta=1/2\) maximizes the effect of \(-C\) in (17), and from Table 1 it leads to reasonably good performance in most domains. Another natural choice is to multiply \(\eta\) with \(-C\) instead of \(A\). However, in that case we would need to introduce an additional constraint \(\eta>0\), whereas in the current BTD, convergence is guaranteed for all \(\eta\in\mathbb{R}\). Finally, we note that simply multiplying \(-C\) by a large positive constant does not lead to good results in general, because it may increase the variance and destabilize the algorithm. Overall results are given in Appendix Section 7.8.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Env\(\eta\) & -0.5 & -0.25 & 0 & 0.25 & 0.5 \\ \hline \hline Boyan & \(1.51\pm 0.66\) & \(1.481\pm 0.656\) & \(1.452\pm 0.647\) & \(1.428\pm 0.64\) & \(1.408\pm 0.635\) \\ \hline Dependent & \(0.11\pm 0.19\) & \(0.097\pm 0.163\) & \(0.086\pm 0.142\) & \(0.079\pm 0.128\) & \(0.076\pm 0.122\) \\ \hline Inverted & \(0.21\pm 0.25\) & \(0.173\pm 0.218\) & \(0.151\pm 0.193\) & \(0.139\pm 0.177\) & \(0.136\pm 0.172\) \\ \hline Tabular & \(0.17\pm 0.28\) & \(0.147\pm 0.238\) & \(0.133\pm 0.208\) & \(0.124\pm 0.191\) & \(0.122\pm 0.188\) \\ \hline Baird & \(0.1\pm 0.64\) & \(0.09\pm 0.629\) & \(0.085\pm 0.625\) & \(0.087\pm 0.628\) & \(0.092\pm 0.637\) \\ \hline \end{tabular}
\end{table}
Table 1: Backstepping TD, step-size = \(0.01\)
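For reproducibility, the RMSPBE reported above can be computed in closed form from the matrices of Section 2.5; a minimal sketch (assuming \(A\), \(b\) and \(C\) are known or estimated from samples; the function name is ours) is:

```python
import numpy as np

def rmspbe(xi, A, b, C):
    """Root mean-squared projected Bellman error at parameter xi.

    With A, b, C as in Section 2.5, the projected Bellman residual equals
    b - A xi, measured in the C^{-1}-weighted norm, so the TD fixed point
    A xi* = b attains zero error.
    """
    residual = b - A @ xi
    return float(np.sqrt(residual @ np.linalg.solve(C, residual)))
```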
## 5 Conclusion
In this work, we have proposed a new framework for designing off-policy TD-learning algorithms from a control-theoretic viewpoint. Future research directions include extending the framework to the nonlinear function approximation setting.
## 6 Acknowledgements
This work was supported by the National Research Foundation under Grant NRF-2021R1F1A1061613, the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00469), and the BK21 FOUR program of the Ministry of Education (Republic of Korea). (Corresponding author: Donghwan Lee.)
|
2307.05649 | Bayesian Poisson Regression and Tensor Train Decomposition Model for
Learning Mortality Pattern Changes during COVID-19 Pandemic | COVID-19 has led to excess deaths around the world, however it remains
unclear how the mortality of other causes of death has changed during the
pandemic. Aiming at understanding the wider impact of COVID-19 on other death
causes, we study an Italian data set that consists of monthly mortality counts of
different causes from January 2015 to December 2020. Due to the high
dimensional nature of the data, we develop a model which combines conventional
Poisson regression with tensor train decomposition to explore the lower
dimensional residual structure of the data. We take a Bayesian approach, impose
priors on model parameters. Posterior inference is performed using an efficient
Metropolis-Hastings within Gibbs algorithm. The validity of our approach is
tested in simulation studies. Our method not only identifies differential
effects of interventions on cause specific mortality rates through the Poisson
regression component, but also offers informative interpretations of the
relationship between COVID-19 and other causes of death as well as latent
classes that underline demographic characteristics, temporal patterns and
causes of death respectively. | Wei Zhang, Antonietta Mira, Ernst C. Wit | 2023-07-11T13:25:16Z | http://arxiv.org/abs/2307.05649v1 | Bayesian Poisson Regression and Tensor Train Decomposition Model for Learning Mortality Pattern Changes during COVID-19 Pandemic
###### Abstract
COVID-19 has led to excess deaths around the world; however, it remains unclear how the mortality of other causes of death has changed during the pandemic. Aiming at understanding the wider impact of COVID-19 on other death causes, we study an Italian data set that consists of monthly mortality counts of different causes from January 2015 to December 2020. Due to the high dimensional nature of the data, we develop a model which combines conventional Poisson regression with tensor train decomposition to explore the lower dimensional residual structure of the data. We take a Bayesian approach and impose priors on the model parameters. Posterior inference is performed using an efficient Metropolis-Hastings within Gibbs algorithm. The validity of our approach is tested in simulation studies. Our method not only identifies differential effects of interventions on cause specific mortality rates through the Poisson regression component, but also offers informative interpretations of the relationship between COVID-19 and other causes of death, as well as latent classes that underlie demographic characteristics, temporal patterns and causes of death, respectively.
**Keywords:** COVID-19, mortality, tensor decomposition, Bayesian inference
## 1 Introduction
Following its outbreak, COVID-19 has led to far-reaching consequences for various aspects of the world (Gormsen and Koijen, 2020; Cheval et al., 2020; Sarkodie and Owusu, 2021; Kuzemko et al., 2020; Bol et al., 2021). Focusing on its impacts on health and health systems, extensive studies have investigated topics such as health inequality as a result of racial and socio-economic status (Bambra et al., 2020; Abedi et al., 2021),
adaptation of the health care system in terms of testing, contact tracing and vaccination campaigns (Kretzschmar et al., 2020; Peretti-Watel et al., 2020). Excess mortality due to the pandemic is also under scrutiny, as it provides an overall picture of the impacts the pandemic has on human health through various channels, such as government lockdown interventions, disruptions to non-COVID care and so on. The mortality pattern shift compared to the pre-COVID era depends on the potential joint effects of all these factors (Karlinsky and Kobak, 2021; Wang et al., 2022; Msemburi et al., 2023). Even though excess mortality is sufficient to grasp the general view, it is also of great importance to examine cause specific mortality changes in the face of the pandemic, so that strategies to mitigate similar impacts in the future can be more targeted. For instance, the pandemic may have indirectly led to increases in causes of death including heart disease, diabetes and Alzheimer disease, as observed by Shiels et al. (2022). As for non-natural causes of death, Dmetrichuk et al. (2022) found that accidental drug-related fatalities increased substantially while homicide and suicide rates rose only moderately, nor did motor vehicle collision fatality rates greatly decrease during all stages of the lockdown in Ontario. However, evidence also suggests that suicide rates increased during the pandemic (Mitchell and Li, 2021; Pell et al., 2020), whereas deaths related to traffic accidents decreased significantly according to Calderon-Anyosa and Kaufman (2021) and Sutherland et al. (2020). Unfortunately, it is usually challenging to collect cause specific mortality data based on death certificates in a consistent manner (Gill and DeJoseph, 2020; Gundlapalli et al., 2021). We analyze the Italian monthly death counts from 2015 to 2020, categorized according to the International Classification of Diseases 10th Revision (ICD-10); see Section 5 for a more detailed data description.
When count data are assumed to follow Poisson distributions, the Poisson regression model is a good starting point (Frome, 1983). In practice, we can exploit other properties of the data and develop more sophisticated modeling tools in addition to the Poisson regression, so that we learn more from the data. This is particularly important when the dimension of the observations is large, when we are not able to observe or collect all relevant covariates, or when we suspect more complicated relationships between covariates and the outcome variable. The Italian mortality data can be rearranged as a multi-way array, or tensor, which facilitates extracting extra information hidden in the data thanks to the well studied theoretical properties of tensors. In fact, tensors have been shown to be a powerful tool in many disciplines such as political science, biology, economics and so on (Hoff, 2015; Zhou et al., 2015; Cai et al., 2022); we therefore utilize those properties and combine the Poisson regression with the tensor perspective in this applied work. Our primary interests lie in understanding the effects of covariates, especially government lockdown policies during the pandemic, on the mortality rates of various causes of death through the Poisson regression specification, as well as uncovering further information in the data by inferring latent spaces via the tensor construction. Inferences are made in a Bayesian framework where we impose trivial priors on the model parameters and employ a Metropolis within Gibbs sampler to draw posterior samples. The rest of the paper is organized as follows. In Section 2, we formulate the model and elucidate how to obtain dimension reduction via a tensor train decomposition. In Section 3, we describe the prior specification and the Markov chain Monte Carlo (MCMC) algorithm for posterior inferences. Results of the simulation studies as well as the real data application are shown in Section 4 and Section 5, respectively. Finally, Section 6
The rest of the paper is organized as follows. In Section 2, we formulate the model and elucidate how to obtain dimension reduction via a tensor train decomposition. In Section 3, we describe the prior specification and the Markov chain Monte Carlo (MCMC) algorithm for posterior inferences. Results of the simulation studies as well as the real data application are shown in Section 4 and Section 5, respectively. Finally, Section 6
provides some concluding remarks and future work.
## 2 Bayesian Poisson Regression and Tensor Train Decomposition model for count data
When high-dimensional data can be organized as tensors, researchers have developed numerous decomposition techniques to achieve dimension reduction and exploit the inherent structure embedded in the data. In this paper, we introduce the tensor train decomposition, which has both theoretical and practical advantages (Oseledets, 2011; Cichocki et al., 2016). In general, an \(M\)-dimensional tensor \(\mathcal{A}\) of size \(Q_{1}\times Q_{2}\times\cdots\times Q_{M}\) is said to admit a train decomposition if the entries \(a_{q_{1},q_{2},\ldots,q_{M}}\) of \(\mathcal{A}\) can be expressed as the sum of \(R_{1}R_{2}\cdots R_{M-1}\) terms such that
\[a_{q_{1},q_{2},\ldots,q_{M}}=\sum_{r_{1}=1}^{R_{1}}\sum_{r_{2}=1}^{R_{2}}\cdots \sum_{r_{M-1}=1}^{R_{M-1}}g_{q_{1},r_{1}}^{(1)}g_{q_{2},r_{1},r_{2}}^{(2)}\cdots g _{q_{M},r_{M-1}}^{(M)}.\]
We call \(g_{,\cdot}^{(1)},g_{,\cdot,\cdot}^{(2)},\cdots,g_{,\cdot}^{(M)}\) the tensor train cores and \(R_{1},R_{2},\ldots,R_{M-1}\) the tensor train ranks. The order of dimensions in the tensor matters, as the decomposition is performed sequentially from the first dimension \(g_{q_{1},r_{1}}^{(1)}\) to the last dimension \(g_{q_{M},r_{M-1}}^{(M)}\) by construction, and tensor train cores of a given dimension always depend on the cores of the previous dimension. Therefore it is important to arrange the data in such a tensor structure that the train decomposition is meaningful; see Section 5, where we describe and analyze the Italian monthly cause-specific mortality data. The tensor train decomposition has the theoretical advantage that it encompasses specific tensor decompositions such as the canonical polyadic (CP) decomposition and the Tucker decomposition, while remaining one of the most stable and simple approaches to summarize high-dimensional data by a limited number of latent variables, hence enabling straightforward interpretation of the results in application. We value these merits of the tensor train decomposition and employ it in our proposed model, described as follows.
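To make the notation concrete, the following minimal NumPy sketch (with arbitrary illustrative sizes) builds a three-way tensor, the \(M=3\) case relevant below, from its tensor train cores:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, R = (4, 5, 6), (2, 3)              # tensor sizes Q1, Q2, Q3 and TT ranks R1, R2

G1 = rng.random((Q[0], R[0]))         # core g^(1): Q1 x R1
G2 = rng.random((Q[1], R[0], R[1]))   # core g^(2): Q2 x R1 x R2
G3 = rng.random((Q[2], R[1]))         # core g^(3): Q3 x R2

# a_{q1,q2,q3} = sum_{r1,r2} g1[q1,r1] * g2[q2,r1,r2] * g3[q3,r2]
A = np.einsum('ia,jab,kb->ijk', G1, G2, G3)
print(A.shape)                        # (4, 5, 6), encoded by far fewer parameters
```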
Suppose that we observe count data that can be arranged as a three-way discrete-valued tensor \(Y_{i,t,k}\) of dimension \(N\times T\times K\), with \(i=1,\ldots,N\), \(t=1,\ldots,T\), \(k=1,\ldots,K\). Additionally, we have information on covariates \(\mathbf{x}_{i,t,k}\in\mathbb{R}^{P}\) and offsets \(u_{i,t,k}\). The classical Poisson regression model assumes that
\[Y_{i,t,k}\sim\text{Pois}\left(u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot \boldsymbol{\beta}\right)\right). \tag{1}\]
For the linear Poisson regression model in (1), it is straightforward to infer the relationship between the covariates and the dependent variable. In practice it is unwise to fit the data with a fully saturated model. A fully saturated model can certainly account for all possible interactions between observed covariates in linear form; however, it requires estimating the same number of parameters as the data dimension, which creates extra computational burden and hinders any meaningful interpretation of the results when the dimension becomes large. Including only a limited subset of covariates and their interactions is more feasible; however, the regression can then potentially fail to account for residual variation in the observed counts \(Y_{i,t,k}\). It may also be at risk of bias induced by unobserved confounding variables. To address these issues, we propose to combine
the Poisson regression framework with the tensor train decomposition technique to form a new Bayesian Poisson Regression Tensor Train Decomposition (BPRTTD) model, so that we are able to extract more information from the data. The model extends the Poisson regression model with an extra rate parameter \(\lambda^{*}_{i,t,k}\)
\[Y_{i,t,k}\sim\text{Pois}\left(u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{ \beta}\right)\lambda^{*}_{i,t,k}\right). \tag{2}\]
We assume that the rate \(\lambda^{*}_{i,t,k}\) can be expressed according to tensor train decomposition such that
\[\lambda^{*}_{i,t,k} =\sum_{h_{1}=1}^{H_{1}}\lambda^{(1)}_{i,h_{1}}\sum_{h_{2}=1}^{H_{2} }\lambda^{(2)}_{t,h_{1},h_{2}}\lambda^{(3)}_{k,h_{2}}\] \[=\mathbf{\lambda}^{(1)^{\prime}}_{i}\Lambda^{(2)}_{t}\mathbf{\lambda}^{(3 )}_{k},\]
where \(\mathbf{\lambda}^{(1)}_{i}=(\lambda^{(1)}_{i,1},\ldots,\lambda^{(1)}_{i,H_{1}})^{ \prime}\in\mathbb{R}^{H_{1}}_{+},\mathbf{\lambda}^{(3)}_{k}=(\lambda^{(3)}_{k,1},\ldots,\lambda^{(3)}_{k,H_{2}})\in\mathbb{R}^{H_{2}}_{+}\) and
\[\Lambda^{(2)}_{t}=\begin{pmatrix}\lambda^{(2)}_{t,1,1}&\lambda^{(2)}_{t,1,2}& \ldots&\lambda^{(2)}_{t,1,H_{2}}\\ \lambda^{(2)}_{t,2,1}&\lambda^{(2)}_{t,2,2}&\ldots&\lambda^{(2)}_{t,2,H_{2}} \\ \vdots&\vdots&\ldots&\vdots\\ \lambda^{(2)}_{t,H_{1},1}&\lambda^{(2)}_{t,H_{1},2}&\ldots&\lambda^{(2)}_{t,H _{1},H_{2}}\\ \end{pmatrix}.\]
Here the collections of matrices \(\{\mathbf{\lambda}^{(1)}_{i}\}_{i=1,\ldots,N},\{\Lambda^{(2)}_{t}\}_{t=1,\ldots,T}\) and \(\{\mathbf{\lambda}^{(3)}_{k}\}_{k=1,\ldots,K}\) are the tensor train cores. \(H_{1}\) and \(H_{2}\) are the tensor train ranks and they control the model complexity. When \(H_{1}\) and \(H_{2}\) are small relative to \(N,T\) and \(K\), this is a parsimonious representation of the rate tensor \(\{\lambda^{*}_{i,t,k}\}_{i=1,\ldots,N,t=1,\ldots,T,k=1,\ldots,K}\). Initially, the tensor has \(N\cdot T\cdot K\) free parameters, whereas the number reduces to \(N\cdot H_{1}+T\cdot H_{1}\cdot H_{2}+K\cdot H_{2}\) after using the tensor train representation.
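As a concrete illustration of equation (2) and of the parameter saving, the sketch below builds the rate tensor \(\lambda^{*}_{i,t,k}\) from hypothetical cores and simulates counts; all sizes and parameter values are arbitrary assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch of the BPRTTD rate in equation (2);
# sizes and values are assumptions, not quantities from the paper.
N, T, K, P = 10, 12, 5, 3
H1, H2 = 2, 2

rng = np.random.default_rng(1)
lam1 = rng.gamma(1.0, 1.0, size=(N, H1))        # {lambda^(1)_i}
lam2 = rng.gamma(1.0, 1.0, size=(T, H1, H2))    # {Lambda^(2)_t}
lam3 = rng.gamma(1.0, 1.0, size=(K, H2))        # {lambda^(3)_k}

# lambda*_{i,t,k} = lambda^(1)_i' Lambda^(2)_t lambda^(3)_k
lam_star = np.einsum('ia,tab,kb->itk', lam1, lam2, lam3)

beta = rng.normal(0.0, 0.5, size=P)
x = rng.normal(size=(N, T, K, P))               # covariates x_{i,t,k}
u = np.ones((N, T, K))                          # offsets u_{i,t,k}

rate = u * np.exp(x @ beta) * lam_star          # Poisson rate in (2)
Y = rng.poisson(rate)                           # simulated counts
```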
When the data are Poisson counts and are treated as tensors, Schein et al. (2015) and Schein et al. (2016) applied the CP decomposition and the Tucker decomposition to enforce dimension reduction and obtain reliable statistical inferences. More recently, the tensor train decomposition has gained popularity. For instance, Mehrizi et al. (2021) proposed a content request prediction algorithm that employs the tensor train decomposition. Motivated by the existing literature, our method combines the classical Poisson regression model and the tensor train decomposition to fully utilize the information in the data. Furthermore, since we are oriented more toward explanatory analysis than toward the predictive performance of the approach, we carefully specify the priors and choose the set of prior hyperparameters to avoid the identifiability issues inherent to general latent factor models.
## 3 Prior Specification and Posterior Inference
Due to the complex nature of the model space, we adopt a Bayesian approach to make inferences. Bayesian methods also provide the necessary uncertainty quantification. We impose gamma priors on \(\{\mathbf{\lambda}^{(1)}_{i}\}_{i=1,\ldots,N},\{\Lambda^{(2)}_{t}\}_{t=1,\ldots,T}\) and \(\{\mathbf{\lambda}^{(3)}_{k}\}_{k=1,\ldots,K}\) to exploit the conjugate property of the Poisson parameters; that is
\[\lambda_{i,h_{1}}^{(1)}\sim\text{Ga}(\alpha_{a},\alpha_{b}),\ i=1, \ldots,N,\ h_{1}=1,\ldots,H_{1},\] \[\lambda_{t,h_{1},h_{2}}^{(2)}\sim\text{Ga}(\beta_{a},\beta_{b}),\ t=1, \ldots,T,\ h_{1}=1,\ldots,H_{1},\ h_{2}=1,\ldots,H_{2},\] \[\lambda_{k,h_{2}}^{(3)}\sim\text{Ga}(\epsilon_{a},\epsilon_{b}), \ k=1,\ldots,K,\ h_{2}=1,\ldots,H_{2}.\]
Posterior inference on these parameters can be obtained by using a Gibbs sampling algorithm, conditioning on the most recent values of the other parameters. As for the Poisson regression coefficients \(\boldsymbol{\beta}\), we follow the literature and assume zero-mean normal priors such that
\[\beta_{p}\sim\mathcal{N}(0,\sigma^{2}),\ p=1,\ldots,P.\]
This completes the prior specification for the BPRTTD model. Figure 1 illustrates the hierarchical graphical representation of the model together with the imposed priors.
Since normal priors on \(\boldsymbol{\beta}\) are not conjugate, we sample \(\boldsymbol{\beta}\) in an adaptive Metropolis-Hastings step that learns the posterior correlation between the multivariate parameters (Roberts and Rosenthal, 2009). We outline the MCMC algorithm below.
### Metropolis within Gibbs sampler
Figure 1: A directed graph summarizing the prior specification of the BPRTTD model. Square boxes are pre-fixed constant hyperparameters; circles are parameters of inferential interest and the colored circles are observed quantities.

We employ a Gibbs sampler for \(\lambda_{i,h_{1}},\lambda_{t,h_{1},h_{2}}\) and \(\lambda_{k,h_{2}}\) given the Poisson regression coefficients \(\boldsymbol{\beta}\). The Gibbs sampling algorithm augments the state space with latent variables \(Y_{i,t,k}^{h_{1},h_{2}}\) such that
\[Y_{i,t,k}^{h_{1},h_{2}}\sim\text{Pois}\left(u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k} \cdot\mathbf{\beta}\right)\lambda_{i,h_{1}}^{(1)}\lambda_{t,h_{1},h_{2}}^{(2)} \lambda_{k,h_{2}}^{(3)}\right). \tag{3}\]
Utilizing the closure under addition property of Poisson random variables, (3) implies that
\[Y_{i,t,k}=\sum_{h_{1}=1}^{H_{1}}\sum_{h_{2}=1}^{H_{2}}Y_{i,t,k}^{h_{1},h_{2}}.\]
To draw \(Y_{i,t,k}^{h_{1},h_{2}}\) conditional on \(Y_{i,t,k}\) and \(\lambda_{i,h_{1}}^{(1)},\lambda_{t,h_{1},h_{2}}^{(2)},\lambda_{k,h_{2}}^{(3)}\), it suffices to note the relationship between the Poisson random variable and the Multinomial random variable, i.e.
\[\left(Y_{i,t,k}^{1,1},Y_{i,t,k}^{1,2},\ldots,Y_{i,t,k}^{H_{1},H_{2}}\right) \sim\text{Multi}\left(Y_{i,t,k},\left(\pi_{i,t,k}^{1,1},\pi_{i,t,k}^{1,2},\ldots,\pi_{i,t,k}^{H_{1},H_{2}}\right)\right)\]
with \(\pi_{i,t,k}^{h_{1},h_{2}}=\lambda_{i,h_{1}}^{(1)}\lambda_{t,h_{1},h_{2}}^{(2) }\lambda_{k,h_{2}}^{(3)}/\sum_{h_{1}=1}^{H_{1}}\sum_{h_{2}=1}^{H_{2}}\lambda_{ i,h_{1}}^{(1)}\lambda_{t,h_{1},h_{2}}^{(2)}\lambda_{k,h_{2}}^{(3)}\). Other useful latent quantities for the Gibbs sampler that follows are
\[Y_{i,\cdot,\cdot}^{h_{1},\cdot} =\sum_{t=1}^{T}\sum_{k=1}^{K}\sum_{h_{2}=1}^{H_{2}}Y_{i,t,k}^{h_{1},h_{2}}\sim\text{Pois}\left(\lambda_{i,h_{1}}^{(1)}\sum_{t=1}^{T}\sum_{k=1}^{K}u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{\beta}\right)\sum_{h_{2}=1}^{H_{2}}\lambda_{t,h_{1},h_{2}}^{(2)}\lambda_{k,h_{2}}^{(3)}\right),\] \[Y_{\cdot,t,\cdot}^{h_{1},h_{2}} =\sum_{i=1}^{N}\sum_{k=1}^{K}Y_{i,t,k}^{h_{1},h_{2}}\sim\text{Pois}\left(\lambda_{t,h_{1},h_{2}}^{(2)}\sum_{i=1}^{N}\sum_{k=1}^{K}u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{\beta}\right)\lambda_{i,h_{1}}^{(1)}\lambda_{k,h_{2}}^{(3)}\right),\] \[Y_{\cdot,\cdot,k}^{\cdot,h_{2}} =\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{h_{1}=1}^{H_{1}}Y_{i,t,k}^{h_{1},h_{2}}\sim\text{Pois}\left(\lambda_{k,h_{2}}^{(3)}\sum_{i=1}^{N}\sum_{t=1}^{T}u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{\beta}\right)\sum_{h_{1}=1}^{H_{1}}\lambda_{i,h_{1}}^{(1)}\lambda_{t,h_{1},h_{2}}^{(2)}\right).\]
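The augmentation step can be sketched as follows; the function and variable names are ours, for illustration only, and reuse the quantities simulated in the previous sketch.

```python
import numpy as np

# Sketch of the data augmentation: allocate each observed count
# Y_{i,t,k} over the H1*H2 latent components with multinomial
# probabilities proportional to lambda^(1) lambda^(2) lambda^(3).
def augment_counts(Y, lam1, lam2, lam3, rng):
    N, T, K = Y.shape
    H1, H2 = lam1.shape[1], lam3.shape[1]
    Y_aug = np.zeros((N, T, K, H1, H2), dtype=int)
    for i in range(N):
        for t in range(T):
            for k in range(K):
                # pi^{h1,h2}_{i,t,k} up to normalization
                w = lam1[i][:, None] * lam2[t] * lam3[k][None, :]
                p = (w / w.sum()).ravel()
                Y_aug[i, t, k] = rng.multinomial(Y[i, t, k], p).reshape(H1, H2)
    return Y_aug
```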
With these three auxiliary variables, it is easy to derive the full conditional distributions. To update \(\lambda_{i,h_{1}}\), we draw samples from
\[\lambda_{i,h_{1}}\mid\cdot\sim\text{Ga}\left(\alpha_{a}+Y_{i,\cdot,\cdot}^{h_{1},\cdot},\ \alpha_{b}+\sum_{t=1}^{T}\sum_{k=1}^{K}u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{\beta}\right)\sum_{h_{2}=1}^{H_{2}}\lambda_{t,h_{1},h_{2}}^{(2)}\lambda_{k,h_{2}}^{(3)}\right).\]
Similarly for \(\lambda_{t,h_{1},h_{2}}\) and \(\lambda_{k,h_{2}}\), the full conditional distributions are
\[\lambda_{t,h_{1},h_{2}}\mid\cdot\sim\text{Ga}\left(\beta_{a}+Y_{\cdot,t,\cdot}^{h_{1},h_{2}},\ \beta_{b}+\sum_{i=1}^{N}\sum_{k=1}^{K}u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{\beta}\right)\lambda_{i,h_{1}}^{(1)}\lambda_{k,h_{2}}^{(3)}\right),\] \[\lambda_{k,h_{2}}\mid\cdot\sim\text{Ga}\left(\epsilon_{a}+Y_{\cdot,\cdot,k}^{\cdot,h_{2}},\ \epsilon_{b}+\sum_{i=1}^{N}\sum_{t=1}^{T}u_{i,t,k}\exp\left(\mathbf{x}_{i,t,k}\cdot\mathbf{\beta}\right)\sum_{h_{1}=1}^{H_{1}}\lambda_{i,h_{1}}^{(1)}\lambda_{t,h_{1},h_{2}}^{(2)}\right).\]
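A minimal sketch of the resulting Gibbs update for the first-layer cores \(\lambda^{(1)}_{i,h_{1}}\) is given below (the updates for \(\lambda^{(2)}\) and \(\lambda^{(3)}\) are analogous); variable names are assumptions for illustration, and mu denotes \(u_{i,t,k}\exp(\mathbf{x}_{i,t,k}\cdot\boldsymbol{\beta})\).

```python
import numpy as np

# One Gibbs update for lambda^(1)_{i,h1}, using the augmented counts
# Y_aug from the previous sketch; alpha_a, alpha_b are the prior
# shape and rate of the gamma prior.
def update_lam1(Y_aug, lam2, lam3, mu, alpha_a, alpha_b, rng):
    # Y^{h1,.}_{i,.,.}: sum over t, k and h2 of the augmented counts
    Y_ih1 = Y_aug.sum(axis=(1, 2, 4))                    # (N, H1)
    # rate term: sum_{t,k} mu_{i,t,k} * sum_{h2} lambda^(2) lambda^(3)
    s = np.einsum('tab,kb->tka', lam2, lam3)             # (T, K, H1)
    rate = alpha_b + np.einsum('itk,tka->ia', mu, s)     # (N, H1)
    # numpy's gamma is parameterized by shape and scale = 1/rate
    return rng.gamma(alpha_a + Y_ih1, 1.0 / rate)
```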
After updating \(\lambda_{i,h_{1}},\lambda_{t,h_{1},h_{2}}\) and \(\lambda_{k,h_{2}}\) in each iteration, \(\mathbf{\beta}\) is sampled in a Metropolis-Hastings step with the iteration-\(n\) proposal distribution
\[Q_{n}\left(\mathbf{\beta},\cdot\right)=(1-p)\mathcal{N}\left(\mathbf{\beta},(2.38)^{2} \Sigma_{n}/d\right)+p\mathcal{N}\left(\mathbf{\beta},(0.1)^{2}\Sigma/d\right),\]
where \(p\) is a small constant between 0 and 1, \(\Sigma_{n}\) is the empirical estimate of the covariance matrix of the target posterior distribution based on the run so far, and \(d\) is the dimension of \(\mathbf{\beta}\). \(\Sigma\) is a fixed covariance matrix, which we take to be the GLM estimate of the Poisson regression covariance matrix for efficiency.
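A sketch of this adaptive proposal is given below; the acceptance step (not shown) uses the Poisson log-likelihood of (2) plus the normal prior on \(\boldsymbol{\beta}\). The function name and the handling of the early iterations, before \(\Sigma_{n}\) is available, are our assumptions.

```python
import numpy as np

# A minimal sketch of the adaptive proposal Q_n of Roberts & Rosenthal
# (2009). Sigma_n is the running empirical covariance of the beta draws
# so far (None until enough samples exist); Sigma0 is the fixed
# GLM-based covariance matrix.
def propose_beta(beta, Sigma_n, Sigma0, rng, p=0.05):
    d = beta.size
    if Sigma_n is None or rng.random() < p:
        # safeguard component of the mixture
        return rng.multivariate_normal(beta, (0.1 ** 2 / d) * Sigma0)
    return rng.multivariate_normal(beta, (2.38 ** 2 / d) * Sigma_n)
```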
## 4 Simulation Studies
We conduct two simulation studies to validate the BPRTTD model and the posterior sampling algorithm. In the first simulation study, we artificially simulate true parameters and use these parameters to generate the Poisson observations. Results are reported in the Appendices. However, the dimension of the simulated data is much smaller than what we encounter in the real data application (see Section 5 for more detailed data description). The reason for this choice is that we are able to repeat the simulations multiple times. Another limitation is that the true parameters \(\mathbf{\beta}\) is sampled from a arbitrary normal distribution, and \(\mathbf{\lambda}_{i}^{(1)},i=1,\ldots,N,\Lambda_{t}^{(2)},t=1,\ldots,T,\mathbf{\lambda }_{k}^{(3)},k=1,\ldots,K\) are simulated from a gamma distribution with certain artificial shape and rate, which may not really reflect the typical real data scenario. To address these drawbacks, we design the second simulation study where true parameters are estimated from the real data under the BPRTTD model specification (see Section 5 for steps regarding posterior inferences). After obtaining the estimates, which we treat as true parameters, we simulate offset from a gamma distribution with shape equal to \(10^{6}\) and rate equal to 1. The Poisson observations are then sampled according to (2). We apply our approach to the simulated data and verify whether we are able to recover the true parameters in this high-dimensional and more realistic scenario. We report the summary statistics of absolute percentage error (APE) between the true parameters and their posterior mean estimates in Table 1. At least 75% of parameters \(\mathbf{\beta},\lambda_{i,h_{1}}^{(1)},\lambda_{t,h_{1},h_{2}}^{(2)}\) and \(\lambda_{k,h_{2}}^{(3)}\) in the BPRTTD model are recovered within 40% deviation from the truth using our approach.
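For reference, the APE summaries in Table 1 can be computed along the lines of the following sketch, where theta_true and theta_hat are placeholder arrays, not quantities from the paper.

```python
import numpy as np

# Absolute percentage error (APE) between true parameters and their
# posterior mean estimates, summarized as in Table 1.
def ape_summary(theta_true, theta_hat):
    ape = np.abs(theta_hat - theta_true) / np.abs(theta_true)
    qs = np.quantile(ape, [0.0, 0.25, 0.5, 0.75, 1.0])
    return {'min': qs[0], '1st Qu.': qs[1], 'median': qs[2],
            'mean': ape.mean(), '3rd Qu.': qs[3], 'max': qs[4]}
```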
## 5 Drivers of Causes of Death in Italy from 2015 to 2020
With the aim of understanding the shifting mortality patterns of COVID-19 as well as other causes of death prior to and during the pandemic outbreak, we apply our method to Italian official mortality data that records provisional monthly death counts based on
\begin{table}
\begin{tabular}{c c c c c c c} \hline & Min. & 1st Qu. & Median & Mean & 3rd Qu. & Max. \\ \hline \(|\hat{\mathbf{\beta}}-\mathbf{\beta}|/|\mathbf{\beta}|\) & 0.0001 & 0.0281 & 0.0621 & 0.4092 & 0.2152 & 17.5434 \\ \hline \(|\hat{\lambda}_{i,h_{1}}^{(1)}-\lambda_{i,h_{1}}^{(1)}|/|\lambda_{i,h_{1}}^{(1)}|\) & 0.0003 & 0.0765 & 0.1765 & 0.4100 & 0.3681 & 36.9590 \\ \hline \(|\hat{\lambda}_{t,h_{1},h_{2}}^{(2)}-\lambda_{t,h_{1},h_{2}}^{(2)}|/|\lambda_{t,h_{1},h_{2}}^{(2)}|\) & 0.0000 & 0.0864 & 0.1896 & 0.3057 & 0.3818 & 4.1056 \\ \hline \(|\hat{\lambda}_{k,h_{2}}^{(3)}-\lambda_{k,h_{2}}^{(3)}|/|\lambda_{k,h_{2}}^{(3)}|\) & 0.0005 & 0.0342 & 0.0764 & 0.1891 & 0.1978 & 3.1954 \\ \hline \end{tabular}
\end{table}
Table 1: Summary statistics of APE between true parameters and the estimated posterior means in the second simulation study.
the analysis of the declarations of the \(K=18\) causes of death compiled by doctors for all deaths in Italy from January 2015 until December 2020, i.e., \(T=72\) monthly death counts. Table 2 shows the 18 causes of death under investigation. Furthermore, the death counts are aggregated in \(N=420\) levels formed by 10 age groups, 2 genders and 21 Italian regions. In summary, we observe \(Y_{i,t,k}\) for \(i=1,\ldots,N,t=1,\ldots,T,k=1,\ldots,K\), in total 544,320 observations arranged in an \(N\times T\times K\) multiway array. A more comprehensive description of the mortality data can be found at [https://www.istat.it/it/archivio/240401](https://www.istat.it/it/archivio/240401).
Along with the death counts \(Y_{i,t,k}\), we also obtain covariates \(\mathbf{x}_{i,t,k}\). One important variable is the Italian Stringency Index (ISI) presented by Conteduca and Borin (2022) in the same spirit as the Oxford Stringency Index (OSI) introduced by Hale et al. (2021). The data set measures non-pharmaceutical interventions adopted by Italian authorities to tackle the COVID-19 pandemic at both the national and regional levels. Regional-level stringency indices are desirable since mortality counts are collected by region. We look into interactions between the ISI and the various causes of death, as it has been suggested in the literature that the pandemic can potentially result in differential consequences for other mortality causes. The other two groups of covariates that we include are interactions between age groups and causes of death as well as interactions between age groups and gender. It is well documented that age and gender are important risk factors for many causes of death. Females and males also demonstrate varying mortality patterns at different ages. These interaction terms in total result in 208-dimensional covariates \(\mathbf{x}_{i,t,k}\) in the model. Lastly, the offsets \(u_{i,t,k}\) we include are the number of days in each month, the reported monthly aggregated cases in each region for the COVID-19 death category, and the population of each region for all other causes of death. Specifically for external causes of trauma and poisoning, we consider another offset that reflects the mobility level. The index we adopt is the Google
\begin{table}
\begin{tabular}{l l} \hline \hline
1. & COVID-19 \\ \hline
2. & Some infectious and parasitic diseases \\ \hline
3. & Tumors \\ \hline
4. & Diseases of the blood and hematopoietic organs and \\ & some disorders of the immune system \\ \hline
5. & Endocrine, nutritional and metabolic diseases \\ \hline
6. & Psychic and behavioral disorders \\ \hline
7. & Diseases of the nervous system and sense organs \\ \hline
8. & Diseases of the circulatory system \\ \hline
9. & Diseases of the respiratory system \\ \hline
10. & Diseases of the digestive system \\ \hline
11. & Diseases of the skin and subcutaneous tissue \\ \hline
12. & Diseases of the musculoskeletal system and connective tissue \\ \hline
13. & Diseases of the genitourinary system \\ \hline
14. & Complications of pregnancy, childbirth and the puerperium \\ \hline
15. & Some morbid conditions that originate in the perinatal period \\ \hline
16. & Congenital malformations and chromosomal anomalies \\ \hline
17. & Symptoms, signs, abnormal results and ill-defined causes \\ \hline
18. & External causes of trauma and poisoning \\ \hline \hline \end{tabular}
\end{table}
Table 2: Causes of death according to the ICD-10.
COVID-19 Community Mobility Reports (Google LLC "Google COVID-19 Community Mobility Reports". [https://www.google.com/covid19/mobility/](https://www.google.com/covid19/mobility/)). By adding the mobility offset into the Poisson rate, we model the change in the mortality rate of external causes of death per fixed mobility unit. The remaining Poisson rate \(\lambda^{*}_{i,t,k}\), unaccounted for by the regression component, is assumed to have a latent structure with \(H_{1}=6\) and \(H_{2}=6\). The choice of these two values was tested over varying combinations of \(H_{1}\) and \(H_{2}\) over the grid defined by \(H_{1}=5,6,7,8\) and \(H_{2}=5,6,7,8\), and we use \(H_{1}=6\) and \(H_{2}=6\) to achieve a balance between reasonable model fitting and model complexity. The gamma priors on \(\lambda^{(1)}_{i,h_{1}},\lambda^{(2)}_{t,h_{1},h_{2}},\lambda^{(3)}_{k,h_{2}}\) have hyperparameters \(\alpha_{b}=20,\alpha_{a}=\sqrt{1/(H_{1}H_{2})}\,\alpha_{b},\beta_{b}=20,\beta_{a}=\sqrt{1/(H_{1}H_{2})}\,\beta_{b},\epsilon_{a}=200,\epsilon_{b}=200\). For the Poisson regression coefficients \(\boldsymbol{\beta}\), we impose centered normal priors with variance 2. We run the MCMC for 40,000 iterations.
### Improvement of the BPRTTD model over the Poisson regression
First, we highlight what the additional tensor decomposition component contributes to fitting the Poisson rate by showing in Figure 2 how our method complements the GLM estimates in recovering the observed variations in the death counts \(Y_{i,t,k}\). In these selected trajectories, we can see that the tensor decomposition component adjusts the naive GLM estimates to better follow the observed trajectories. For instance, the GLM predicted death counts of males residing in Lombardia who died of tumors between ages 80 and 84 consistently fall below the observed ones; this is not surprising since the GLM tends to fit the average of all observations, whereas Lombardia, as the most populated region in Italy, has in general larger death counts. Our method successfully closes the gap between the data and the GLM estimates by amplifying the Poisson rates, as shown in Figure 2(a). In cases such as Figure 2(b), where the GLM estimates overpredict, \(\lambda^{*}_{i,t,k}\) plays the role of downsizing the Poisson rate. Through the tensor decomposition assumption, such adjustments are done in a parsimonious manner. Recall that the saturated model requires in total 544,320 parameters, whereas now, apart from the 208 coefficients, we add only \(N\times H_{1}+T\times H_{1}\times H_{2}+K\times H_{2}=5,220\) more parameters to achieve a large improvement in terms of model fitting. This advantage can also be seen when we calculate and compare the log-likelihoods of the simple Poisson regression versus our BPRTTD model, which are -862910.4 and -731919.9, respectively. Even though our approach provides a closer approximation to the observations, it is still robust to outliers or abnormal records, as the model specification leverages information from the other data by introducing common shared latent classes. We demonstrate in Figure 2(c) such a scenario, where the female mortality counts in age group 0-49 in Lazio in August 2016 show a sudden spike deviating from the normal pattern. The red BPRTTD line is not sensitive to such an outlier.
### Interpretation of the Poisson regression component
We now present an explanatory analysis of the Italian mortality data. We are primarily interested in discovering how other causes of death are affected by the government intervention policies. Three types of responses are inferred (positive, negative and no effect), based on whether the 95% credible intervals of each coefficient are above 0, below 0
or contain 0. Mortality counts are positively associated with the ISI in the following death categories: 4. Diseases of the blood and hematopoietic organs and some disorders of the immune system, 5. Endocrine, nutritional and metabolic diseases, 6. Psychic and behavioral disorders, 7. Diseases of the nervous system and sense organs, 9. Diseases of the respiratory system, 12. Diseases of the musculoskeletal system and connective tissue, 13. Diseases of the genitourinary system, 17. Symptoms, signs, abnormal results and ill-defined causes, 18. External causes of trauma and poisoning. The positive relationship between psychic and behavioral disorders and the ISI, shown in Figure 3(a), is well documented in the literature, affecting psychiatric patients as well as the healthy population (Hao et al., 2020; Every-Palmer et al., 2020; Pieh et al., 2021; Rossi et al., 2020). However, while most studies report increasing levels of anxiety, acute stress disorders and so on, we offer new evidence that these actually translate into elevated mortality rates from psychic and behavioral disorders in the end. During the pandemic, individuals with psychiatric and behavioral disorders may face additional challenges due to disruptions in routine care, limited access to mental health services, increased stressors and social isolation. These factors can potentially contribute to adverse outcomes and exacerbate existing conditions. Another positive relationship we would like to comment on is between mortality due to respiratory system diseases and the ISI in Figure 3(b). Although a wide range of studies suggest that people with certain lung diseases appear to have an increased risk at the height of the epidemic, and that these risk factors are important clinical predictors of severe COVID-19 that enable risk stratification and optimized resource allocation (Lippi and Henry, 2020; Aveyard et al., 2021), we discover that, conversely, the mortality rate of respiratory disease rises during COVID-19 lockdowns despite the common observation that respiratory disease incidence declined due to public precautionary measures (Hsieh et al., 2020; Huh et al., 2021; Britton et al., 2020). Several factors can jointly explain the positive relationship. For instance, lockdown measures can disrupt the routine care and monitoring of respiratory conditions; as a result, the lack of timely interventions and preventive measures can contribute to a higher risk of mortality. Misclassification can also explain the increased mortality. In the early pandemic, diagnosing the cause of death accurately can be complex, especially when healthcare systems are under strain; limited testing capacity or availability of COVID-19 tests may also have led to deaths being attributed to respiratory
diseases without confirming the presence of COVID-19. As for the mortality rate of external causes of trauma and poisoning in Figure 3(c), it may seem contradictory that this also trended up as more intense lockdown measures were enforced. However, since we include the mobility index in the offset, which disentangles the negative effect of lockdown on population mobility from the total effect, we conclude that government intervention policies actually drive up mortality per mobility unit, due to reasons such as delayed or reduced access to healthcare.

Figure 2: Death counts of selected demographic groups and causes of death from January 2015 to December 2020. The black line is the observed trajectory \(Y_{i,t,k},t=1,\ldots,T\) for fixed \(i\) and \(k\); red and blue lines are BPRTTD fitted values and GLM fitted values, respectively. Shaded areas correspond to 95% credible intervals for the BPRTTD predictions and 95% confidence intervals for the GLM predictions.
Negative correlations appear in 2. Some infectious and parasitic diseases, see Figure 3(d), 3. Tumors, see Figure 3(e), 8. Diseases of the circulatory system, 10. Diseases of the digestive system, 14. Complications of pregnancy, childbirth and the puerperium, 15. Some morbid conditions that originate in the perinatal period, 16. Congenital malformations and chromosomal anomalies. It has been observed that infectious and parasitic diseases caused less mortality when government interventions were more strict (Dadras et al., 2021). Measures such as lockdowns, travel restrictions, and social distancing can help limit the spread of infectious diseases. By reducing contact between individuals, these measures can interrupt the transmission of infectious agents, thereby decreasing the overall incidence of infections and subsequent mortality. As for the decrease in the tumor mortality rate, one possible explanation is the harvesting effect, also known as mortality displacement (Schwartz, 2000; Kepp et al., 2022). The harvesting effect refers to the phenomenon that individuals who are already vulnerable, in this case tumor patients, experience accelerated deaths during the COVID-19 lockdown intervention, leading to a temporary decline in tumor mortality rates. However, this decline is expected to be followed by a period of increased mortality, as those who would have died during the intervention succumb in the subsequent period. Figure 3(f) shows the only category, 11. Diseases of the skin and subcutaneous tissue, that exhibits no statistically significant relationship with the ISI. From Figure 3, we can also infer the effect of gender and age on the hazard rates. In general, the older population is associated with higher mortality for almost all causes of death, and men are more likely to die than women in the same age group. The exception is tumors, where men from certain younger age groups present higher mortality rates compared to women from older age groups. It is also counter-intuitive to observe that the mortality rate due to external causes of trauma and poisoning is positively related to age. Even though it is confirmed in the data that the absolute death counts do go down with age, after taking into account the population size of each age group, the mortality rates per unit of population indeed increase with age, indicating that external causes of trauma and poisoning become more threatening as people get older. For detailed coefficient estimates, please refer to the Appendices.
### Interpretation of the latent parameters
Three blocks of latent parameters are introduced in the BPRTTD model and they are arranged in a dependent structure; that is, each latent class \(\lambda^{(1)}_{i,h_{1}},h_{1}=1,\ldots,H_{1}\) is characterized by different \(\lambda^{(2)}_{t,h_{1},h_{2}}\), and furthermore \(h_{2}\)-specific \(\lambda^{(3)}_{k,h_{2}},h_{2}=1,\ldots,H_{2}\). Therefore we approach the interpretation of latent parameters in an orderly manner. We start with the first block of latent parameters \(\lambda^{(1)}_{i,h_{1}}\) that allocate demographic groups defined by Italian regions, gender and age groups into \(H_{1}\) latent classes. Table 8 in the Appendices shows the posterior mean estimates of \(\lambda^{(1)}_{i,h_{1}}\) and we highlight in red values above the mean
\(\alpha_{a}/\alpha_{b}\) of the Gamma prior distribution. Note that since we already have a Poisson regression component that accounts for global linear relationships between covariates and death rates, what we see in the estimates of \(\lambda_{i,h_{1}}^{(1)}\) indicates differential local effects of higher-order interactions between covariates on mortality rates unexplained by the linear regression. It is clear that although the latent classes labeled by \(h_{1}=1\) and \(h_{1}=4\) represent majorly female and male mortality patterns respectively, they appear to be geographically dependent. For instance, almost all female age groups, except for older females (age group 85+) from southern Italy (Molise, Campania, Apulia, Basilicata, Calabria, Sicily), show elevated weights in latent class \(h_{1}=1\) in Table 8(a), whereas the same older female southern Italian population shares similar mortality patterns with almost all male groups, excluding those in northern Italy (Piemonte, Valle d'Aosta, Lombardia, Veneto, Friuli-Venezia Giulia, Emilia-Romagna), as shown in Table 8(d). Latent class \(h_{1}=6\), on the other hand, suggests that old males and young females share something in common in their causes of death over time, captured by \(\lambda_{t,6,h_{2}}^{(2)}\) and \(\lambda_{k,h_{2}}^{(3)}\). The remaining three latent classes indexed by \(h_{1}=2,3,5\) are less related to age and gender but show more geographical dependence. So before we move on to the analysis of the latent parameters \(\lambda_{t,h_{1},h_{2}}^{(2)}\) and \(\lambda_{k,h_{2}}^{(3)}\), we make another attempt to decipher the local joint effect of regions, age and gender. To do this, we first rearrange the posterior mean estimates of \(\lambda_{i,h_{1}}^{(1)},i=1,\ldots,N,h_{1}=1,\ldots,H_{1}\) into a new matrix of dimension \(21\times(2\times 10\times H_{1})\), where \(21\) is the number of Italian regions, and \(2\) and \(10\) are the numbers of gender and age groups. Then we treat the \(21\) Italian regions as observations and gender, age groups as well as the \(H_{1}\) latent classes as features, and apply the partitioning around medoids (PAM) algorithm to classify the Italian regions based on these features (a sketch of this clustering step is given below). The optimal number of clusters is \(4\) according to the elbow method. The clustering algorithm confirms previous observations. Figure 4(a) shows that northern Italy plus Toscana, Umbria, Marche is classified in a different group from southern Italy, plus Lazio and excluding Campania, Calabria, Sicily. Although the two clusters have similar weights in latent class \(h_{1}=6\), the
differences mainly exist in latent class \(h_{1}=1\) for the female population in the northern Italy cluster and \(h_{1}=4\) for older females in the southern Italy cluster. The PAM algorithm also separates the conventional classification of southern Italy further into two groups that exhibit homogeneous behavior when looking at latent classes \(h_{1}=4\) and \(h_{1}=6\), but differ quite substantially in latent class \(h_{2}\), see Figure 4(b). Lastly, Liguria is singled out to form its own cluster because latent class \(h_{1}=2\) plays a rather significant role in defining the mortality pattern over time in the region.

Figure 3: Selected predicted values of the BPRTTD model based on regression coefficients. Horizontal axes are the ISI from 0 to 100 and vertical axes are hazard rates.
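The clustering step described above can be sketched as follows; the scikit-learn-extra implementation of PAM is our choice of tool (the software actually used is not specified here), the input file name is hypothetical, and the reshape assumes that the rows of the posterior-mean matrix are ordered with region as the slowest-varying index.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids

# Sketch of the region clustering: lam1_hat holds posterior means of
# lambda^(1)_{i,h1}, shape (420, H1), with i = (region, gender, age)
# ordered region-major (an assumption for this illustration).
H1 = 6
lam1_hat = np.load('lam1_posterior_means.npy')        # hypothetical file
features = lam1_hat.reshape(21, 2 * 10 * H1)          # regions x features

# Elbow method: inspect the within-cluster cost over candidate sizes.
inertia = [KMedoids(n_clusters=n, random_state=0).fit(features).inertia_
           for n in range(2, 9)]
labels = KMedoids(n_clusters=4, random_state=0).fit_predict(features)
```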
We proceed to analyze together the second-layer latent parameters \(\lambda^{(2)}_{t,h_{1},h_{2}}\) and the third-layer latent parameters \(\lambda^{(3)}_{k,h_{2}}\), as they jointly identify the corresponding latent classes labeled by \(h_{1}\). \(\lambda^{(2)}_{t,h_{1},h_{2}}\) is the block of parameters associated with the time indices, so we display the posterior mean estimates of \(\lambda^{(2)}_{t,h_{1},h_{2}}\) as trajectories evolving over time in Figure 5; on the other hand, \(\lambda^{(3)}_{k,h_{2}}\) utilizes the \(H_{2}\) latent structures to summarize the 18 causes of death, as shown in Figure 6. We begin with latent class \(h_{1}=1\), shown in Figure 5(a), which is significant for almost all female age groups except for the older population in southern Italy. Two trajectories are more relevant in this class, and they are characterized by the mortality rates \(\lambda^{(3)}_{k,5}\) in Figure 6(e) and \(\lambda^{(3)}_{k,6}\) in Figure 6(f). \(\lambda^{(3)}_{k,5}\) mostly captures COVID-19 mortality, and the trajectory \(\lambda^{(2)}_{t,1,5}\) in Figure 5(a) indicates a sudden weight spike of this particular latent class \(h_{2}=5\) around June 2020. This is when the pandemic situation eased between the first wave and the second wave, so that daily new cases were almost in the single digits; the time lag between contracting COVID-19 in the previous wave and dying of COVID-19 potentially results in the spike that we observe. We will see later another type of weight spike with respect to latent class \(h_{2}=5\). \(\lambda^{(2)}_{t,1,5}\) is also active from January 2015 to July 2016. However, since during this period the new-cases offset in the BPRTTD model is exactly 0, the dominating factor is no longer COVID-19 death, but possibly the other cause in \(\lambda^{(3)}_{k,5}\) higher than the prior mean, namely some infectious and parasitic diseases.
Figure 4: PAM classification of Italian regions based on \(\lambda^{(1)}_{i,h_{1}}\). Horizontal axes in (b) denote latent classes.
Trajectory \(\lambda^{(2)}_{t,1,6}\) behaves oppositely to \(\lambda^{(2)}_{t,1,5}\); that is, it is squeezed out when the latter is high and rebounds when the latter is low. When we inspect \(\lambda^{(3)}_{k,6}\) in Figure 6(f), the following causes of death have elevated weights: 2. Some infectious and parasitic diseases, 6. Psychic and behavioral disorders, 7. Diseases of the nervous system and sense organs, 10. Diseases of the digestive system, 11. Diseases of the skin and subcutaneous tissue and 12. Diseases of the musculoskeletal system and connective tissue. In the Poisson regression component, we observe a positive global main effect of the COVID lockdown measures on the mortality rate of psychic and behavioral disorders; however, the squeezing phenomenon does not contradict our previous arguments. In fact, since we are discussing latent class \(h_{1}=1\), which is crucial to the female population except for older women in southern Italy, it actually suggests a local compensation effect specific to this demographic group.
We have commented beforehand that latent class \(h_{1}=2\) is unique to three southern Italian regions, Campania, Calabria and Sicily, and now we see that the determining trajectory \(\lambda^{(2)}_{t,2,1}\) in Figure 5(b) has high estimated rates in the causes of death 5. Endocrine, nutritional and metabolic diseases as well as 17. Symptoms, signs, abnormal results and ill-defined causes in Figure 6(a). It shows strong seasonality, with peaks in both winter and summer. Endocrine, nutritional and metabolic diseases have been documented to be related to winter holidays (Phillips et al., 2010) and heat exposure (Zhao et al., 2019). On the other hand, the seasonality of symptoms, signs, abnormal results and ill-defined causes in these three regions may consist of misclassified deaths related to seasonal illnesses. Latent class \(h_{1}=3\) is almost exclusively explanatory for females older than 85 years old in northern Italy and some male age groups in the south. The class portrays a pattern where the COVID-19 mortality rate goes through two spikes, in June 2020 and October 2020, in Figure 6(c). As stated in the previous paragraph, the spike at the end of the first wave is possibly due to the lag between contracting COVID-19 and dying of it; the spike in mortality rate in October anticipates the onset of the second COVID-19 wave. This can be the outcome of many factors; for instance, even though Italy had gone through the first wave, in the face
of the second wave, testing and reporting of COVID-19 cases were still insufficient, leading to an underestimate of the real case numbers. The health system was also not thoroughly prepared to combat the much more intense comeback of COVID-19 in the coming fall and winter. We distinguish two types of displacement between the case peak and the mortality peak. The first type is usually seen after a previous COVID-19 wave and is due to the time lag between contracting COVID-19 and eventual death, whereas the second type predicts the incoming COVID-19 hit, which is particularly true in 2020, when society and the health system were seriously underprepared to tackle the pandemic. Additionally, recall that this represents local effects for females older than 85 years old in northern Italy and certain male age groups in the south, suggesting that underpreparedness was particularly detrimental to those people. We also notice that the trajectories of all other causes of death are crowded out by \(\lambda_{t,3,5}^{(2)}\) in 2020, offering evidence for the hypothesis that a potential harvesting effect exists. The next latent class, \(h_{1}=4\), underlies the mortality composition of the young male Italian population in the north and all male and female populations in the south. The essential feature of this class is the downward trend of trajectory \(\lambda_{t,4,3}^{(2)}\) displayed in Figure 5(d). A closer look at Figure 6(c) reveals that 5. Endocrine, nutritional and metabolic diseases, 8. Diseases of the circulatory system and 18. External causes of trauma and poisoning are the three causes that define the mortality structure in \(\lambda_{k,3}^{(3)}\). The trajectories indicate that these three mortality causes tend to be seasonal. Although we have briefly commented on the seasonality of mortality due to endocrine, nutritional and metabolic diseases observed in Campania, Calabria and Sicily, we elaborate on the fact that the seasonality is distinct for the young male Italian population in the north and all male and female populations in the south except for the three regions just mentioned. The spikes are generally less drastic in the second demographic group. For instance, when a heatwave hit Campania, Calabria and Sicily in the summer of 2017, causing a noticeable increase in the number of people dying of endocrine, nutritional and metabolic diseases, the situation was less severe in the north. Another observation worth pointing out is that endocrine, nutritional and
metabolic diseases are more lethal for the older female population, as indicated in Table 8(b) and Table 8(d) in the Appendices. Diseases of the circulatory system are causes of death whose seasonality has been widely studied as well, and our findings concur with previous findings in the literature (Fares, 2013; Stewart et al., 2017). Lastly, the seasonality of external causes of trauma and poisoning may largely be attributed to increased traffic accidents in the winter and outdoor activities in the summer.

Figure 6: Bar plots of \(\lambda_{k,h_{2}}^{(3)}\) of the 18 causes of death (horizontal axes) for each latent class \(h_{2}=1,\ldots,H_{2}\). Black horizontal lines stand for the Gamma prior mean \(\epsilon_{a}/\epsilon_{b}\) on \(\lambda_{k,h_{2}}^{(3)}\).
Latent class \(h_{1}=5\) in Figure 5(e), which is primarily significant for both males and females in northern Italy, has two major attributes. One is that trajectory \(\lambda^{(2)}_{t,5,2}\), representing the mortality rate of 9. Diseases of the respiratory system, shows an abnormal spike around March and April 2020, when the health system was overwhelmed in northern Italy and many COVID-19 deaths were misclassified. A similar argument was made when we interpreted the coefficients of the Poisson regression component of the BPRTTD model. The other attribute that characterizes the latent class is \(\lambda^{(2)}_{t,5,5}\), with its two peaks, first in February and then in July 2020. Both types of displacement of the COVID-19 mortality rate appear. Almost all males between the ages of 50 and 89 and females between 70 and 94 in northern Italy, except for Veneto and Friuli-Venezia Giulia, experience the second type of displacement and are subject to an elevated mortality rate of dying of COVID-19 at the beginning of the first wave (February and March 2020). On the contrary, the second type occurs to the older female population in northern Italy and certain male age groups in the south only at the beginning of the second wave, as previously illustrated. We close the analysis by commenting on latent class \(h_{1}=6\), shown in Figure 5(f), which features a constant trend of \(\lambda^{(2)}_{t,6,4}\), defined mostly by tumors and respiratory diseases, as shown in Figure 6(d). Another relevant trajectory, \(\lambda^{(2)}_{t,6,2}\), captures the expected seasonality of respiratory disease deaths. This is the mortality structure shared by the older male population and the female population under 69 across almost all Italian regions.
## 6 Summary and Future Work
In this paper, we propose to model Poisson count data using the BPRTTD model. The model comprises two parts; the first part is the Poisson regression model. In the second part, the data are organized as a tensor and we apply the tensor train decomposition to estimate the latent parameter space for explanatory purposes. The Bayesian inference framework is validated in two simulation studies and then applied to the Italian monthly cause-specific mortality data from January 2015 to December 2020. The regression component leverages the information in the covariates, and we are able to identify causes of death that are positively related, negatively related, or unrelated to government interventions during the COVID-19 pandemic. We also discover the joint effects of age, gender and causes of death on the mortality rate via the tensor decomposition component, which compensates for what the Poisson regression fails to account for. It enables a further stratification of demographic profiles, characterized jointly by geographical location, gender and age, based on their unique dynamic mortality structures over the time span. Regional classifications are made and the results coincide with conventional conceptions. COVID-19-related consequences are also revealed in the latent parameters. Several causes of death, including infectious and parasitic diseases and psychic and behavioral disorders, compete with COVID-19 mortality among specific demographic groups.
In the BPRTTD model, we have not fully exploited the spatial-temporal information in the data. For instance, instead of applying clustering algorithms to the posterior estimates, one can introduce reasonable distance measures and utilize the geographic locations encoded in \(\lambda_{i,h_{1}}^{(1)}\) when specifying the model. \(\lambda_{t,h_{1},h_{2}}^{(2)}\) can also be modeled in a time series framework so that the temporal dependence can be inferred. Another possible future direction concerns the choice of the tensor train ranks, which plays an important role in controlling the complexity of the BPRTTD model. The model selection can be accomplished by calculating marginal likelihoods over a pre-specified grid defined by the tensor train ranks. Due to the increased computational burden this solution would require, we leave its exploration to future work.
2303.08842 | Multiwavelength monitoring of the nucleus in PBC J2333.9-2343: the giant
radio galaxy with a blazar-like core | PBC J2333.9-2343 is a giant radio galaxy at z = 0.047 with a bright central
core associated to a blazar nucleus. If the nuclear blazar jet is a new phase
of the jet activity, then the small orientation angle suggests a dramatic change
of the jet direction. We present observations obtained between September 2018
and January 2019 (cadence larger than three days) with Effelsberg, SMARTS-1.3m,
ZTF, ATLAS, Swift, and Fermi-LAT, and between April-July 2019 (daily cadence)
with SMARTS-1.3m and ATLAS. Large (>2x) flux increases are observed on
timescales shorter than a month, which are interpreted as flaring events. The
cross correlation between the SMARTS-1.3m monitoring in the NIR and optical
shows that these data do not show significant time lag within the measured
errors. A comparison of the optical variability properties between non-blazar
and blazar AGN shows that PBC J2333.9-2343 has properties more comparable to
the latter. The SED of the nucleus shows two peaks, that were fitted with a one
zone leptonic model. Our data and modelling show that the high energy peak is
dominated by External Compton from the dusty torus with mild contribution from
Inverse Compton from the jet. The derived jet angle of 3 degrees is also
typical of a blazar. Therefore, we confirm the presence of a blazar-like core
in the center of this giant radio galaxy, likely a Flat Spectrum Radio Quasar
with peculiar properties. | L. Hernández-García, F. Panessa, G. Bruni, L. Bassani, P. Arévalo, V. M. Patiño-Alvarez, A. Tramacere, P. Lira, P. Sánchez-Sáez, F. E. Bauer, V. Chavushyan, R. Carraro, F. Förster, A. M. Muñoz Arancibia, P. Ubertini | 2023-03-15T18:00:03Z | http://arxiv.org/abs/2303.08842v1 | Multiwavelength monitoring of the nucleus in PBC J2333.9-2343: the giant radio galaxy with a blazar-like core
###### Abstract
PBC J2333.9-2343 is a giant radio galaxy at z = 0.047 with a bright central core associated to a blazar nucleus. If the nuclear blazar jet is a new phase of the jet activity, then the small orientation angle suggests a dramatic change of the jet direction. We present observations obtained between September 2018 and January 2019 (cadence larger than three days) with Effelsberg, SMARTS-1.3m, ZTF, ATLAS, _Swift_, and Fermi-LAT, and between April-July 2019 (daily cadence) with SMARTS-1.3m and ATLAS. Large (\(>\)2\(\times\)) flux increases are observed on timescales shorter than a month, which are interpreted as flaring events. The cross correlation between the SMARTS-1.3m monitoring in the NIR and optical shows that these data do not present a significant time lag within the measured errors. A comparison of the optical variability properties between non-blazar and blazar AGN shows that PBC J2333.9-2343 has properties more comparable to the latter. The SED of the nucleus shows two peaks, which were fitted with a one zone leptonic model. Our data and modelling show that the high energy peak is dominated by External Compton from the dusty torus with a mild contribution from Inverse Compton from the jet. The derived jet angle of 3 degrees is also typical of a blazar. Therefore, we confirm the presence of a blazar-like core in the center of this giant radio galaxy, likely a Flat Spectrum Radio Quasar with peculiar properties.
keywords: galaxies - individual: PBC J2333.9-2343, galaxies - nuclei, galaxies - active
## 1 Introduction
Active Galactic Nuclei (AGN) are thought to be powered by supermassive black holes at the centers of galaxies, which are fed by matter falling in from the accretion disk (Rees, 1984). Among nearby AGN, only \(\sim\)10% show biconical relativistic jets (Panessa et al., 2016; Padovani et al., 2017). Jets are produced by charged particles accelerated and collimated relativistically in a strong magnetic field. Jets imprint their mark at radio frequencies through synchrotron radiation, and these AGN are classified as radio galaxies. When one of the jets is oriented close to the line of sight of the observer, its emission is relativistically Doppler boosted, and can dominate over all
of the other sources of radiation (Blandford & Rees, 1978; Giommi et al., 2012a). The simplistic Unified Model (UM) of AGN tries to explain the different types of AGN through orientation effects, with radio galaxies being those where the pair of jets is observed in the plane of the sky, and blazars representing sources where one of the jets is pointing along the line of sight to the observer (Urry & Padovani, 1995).
It is commonly believed that the AGN phenomenon is a phase in the life cycle of a galaxy, where the nuclear activity can be reactivated 10-100 times during its lifetime, with typical timescales for these phases of about \(10^{5}\)-\(10^{6}\) years (Hickox et al., 2014; Schawinski et al., 2015; Shen, 2021). The total growth time of a supermassive black hole has been estimated to be \(10^{7}\)-\(10^{9}\) yr (Marconi et al., 2004; Konar et al., 2006).
One example of the reactivation of AGN activity is from Giant Radio Galaxies (GRGs, Ishwara-Chandra & Saikia, 1999; Lara et al., 2001), which are characterized by showing linear sizes larger than 0.7 Mpc from radio-lobe to radio-lobe. Some GRGs show multiple episodes of nuclear activity that can be observed in the same radio image at different spatial scales. The farthest lobes correspond to the oldest ejection, where the most energetic electrons have already cooled, whereas the structures located closer to the nucleus represent younger and active jets. This is usually interpreted as recurrent nuclear activity, where the old jets are relics of past radio activity (Lara et al., 1999, 2002; Saikia & Jamrozy, 2009; Gopal-Krishna et al., 2012; Bruni et al., 2020, and references therein). A particular case are the X-shaped radio galaxies (XRGs, Leahy & Williams, 1984; Leahy & Parma, 1992), which are characterized by exhibiting two misaligned pairs of radio lobes, with the primary lobes created by a pair of powerful radio jets emanating from the central AGN, whereas the origin of the secondary pair is still under debate. Recent works have indeed proposed that not only one scenario can explain their shapes, but that this morphology can be the result of different mechanisms, with one of the options being the reorientation of the jets (see Joshi et al., 2019; Bhukta et al., 2022, and references therein).
PBC J2333.9-2343 is a radio galaxy at redshift z = 0.047 with observational characteristics that point to extreme reinitiated nuclear activity. It was first selected from the INTEGRAL sample because of its different classifications when observed at different wavelengths (Bassani et al., 2016). At Mpc scales, the Very Large Array (VLA) radio map shows two jets with a linear size of 1.2 Mpc (Bassani et al., 2016), whereas at milliarcsecond scales, the Very Long Baseline Array (VLBA) images show an optically thick compact core with an inverted, self-absorbed spectrum and a steep-spectrum, optically thin jet that extends out asymmetrically to over 60 mas, i.e., \(\sim\)53 pc in projection. Based on the observed radio spectral index, the angle between the jet and the observer must be smaller than 40 degrees (Hernandez-Garcia et al., 2017). Furthermore, its infrared WISE colors and association with a radio source would classify it as a BL Lac (D'Abrusco et al., 2014), and the modeling of the spectral energy distribution (SED) constrains the angle between the small-scale jet and the observer to less than 6 degrees (Hernandez-Garcia et al., 2017). The most plausible explanation for resolving the discrepancy between the different classifications given to this AGN is that, in the past, the nuclear activity was restarted and the jets changed direction, so that now a jet is pointing towards us. Indeed, Bruni et al. (2020) presented a deep image with the Giant Metrewave Radio Telescope (GMRT) at 150 MHz showing a lack of emission between the lobes and the core region, reinforcing the scenario in which the jet has changed its direction due to a major event, like a galaxy major merger. This would imply that for the first time we are observing a transformation from a GRG (with two lobes in the plane of the sky) to a blazar (with a jet pointing towards us), a very exceptional case of jet reorientation. Previous observations of PBC J2333.9-2343 showed variations at all observed wavelengths between radio and X-rays, as well as in the broad optical emission lines, with flux and spectral variations of about 60%, but the epochs of observation were so sparse (\(>\)1 year apart) that the typical timescales could not be estimated (Hernandez-Garcia et al., 2018). In order to constrain the timescales of these variations and to shed light on the nature of this source, we organized a multi-frequency monitoring campaign that was carried out during 2018 and 2019, where contemporaneous observations were obtained with _Swift_ (X-rays and ultraviolet), SMARTS-1.3m (infrared and optical photometry), and Effelsberg-100m (radio frequency). This study is complemented by data provided through the alert systems of the Zwicky Transient Facility (ZTF) and the Asteroid Terrestrial-impact Last Alert System (ATLAS), plus detections by the Very Long Baseline Array (VLBA), the Very Large Array Sky Survey (VLASS), and the Rapid ASKAP Continuum Survey (RACS) in the radio, and Fermi-LAT at gamma-rays.
The paper is organized as follows. In Section 2 we present the observations and data reduction. In Section 3 the results from the variability analysis, cross correlation and the SED can be found. A discussion is presented in Section 5. Finally, the main results of this work are summarized in Section 6.
## 2 Observations and data reduction
In this section we present the multiwavelength data that will be used for the analysis. Figures with the light curves for each instrument and Tables containing the data are presented in Appendix A.
### Radio data
#### 2.1.1 Effelsberg-100m monitoring
We performed a multi-frequency monitoring with the Effelsberg-100m radio telescope, from June to December 2018, with a cadence of almost once per month (project ID 15-18). Secondary focus receivers at four different frequencies were used: 4.8, 8.5, 10.5, and 20.4 GHz (see Table 2 for technical details). Observations were performed in cross-scan mode, correcting the pointing on suitable calibrators at a few tens of arcminutes distance from the target. The flux density scale was calibrated on well known sources from Ott et al. (1994). Data reduction was performed with the TOOLBOX2 software (see Footnote 1), extracting the flux density via a single-component Gaussian fit of the cross-scans. The measurement errors were calculated as the sum in quadrature of the cross-scan rms and 10% of the total flux density, with the latter being a conservative estimate of the uncertainty on the absolute flux scale calibration.
Footnote 1: [https://gitlab.mpifr-bonn.mpg.de/effelsberg/toolbox2.git](https://gitlab.mpifr-bonn.mpg.de/effelsberg/toolbox2.git)
Footnote 2: [http://www.aips.nrao.edu](http://www.aips.nrao.edu)
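The error recipe described above amounts to the following short sketch; the function and argument names are illustrative choices of ours.

```python
import numpy as np

# Measurement error as described above: the cross-scan rms and a
# conservative 10% absolute-calibration term, added in quadrature.
def flux_density_error(flux_density, scan_rms, cal_frac=0.10):
    return np.sqrt(scan_rms ** 2 + (cal_frac * flux_density) ** 2)
```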
#### 2.1.2 Very Long Baseline Array
Observations with the Very Long Baseline Array (VLBA) were performed on June 28th, 2018, at 15 GHz and 24 GHz (project ID BB390A). The same setup as in the previous observations presented in Hernandez-Garcia et al. (2017) was adopted. The on-source time was 0.5 hours at 15 GHz and 1.5 hours at 24 GHz. Data were reduced with the Astronomical Image Processing System (AIPS; see Footnote 2)
following standard procedures. Imaging was performed in DIFMAP (see Footnote 3), via several cycles of phase and phase-amplitude self-calibration.
Footnote 3: [https://science.nrao.edu/facilities/vlba/docs/manuals/os2013a/post-processing-software/difmap](https://science.nrao.edu/facilities/vlba/docs/manuals/os2013a/post-processing-software/difmap)
#### 2.1.3 Ancillary data from new radio surveys
Additionally, we considered data from the Very Large Array Sky Survey at 3 GHz (VLASS, Lacy et al., 2020) observed on 30 June 2019, and the Rapid ASKAP Continuum Survey at 0.88 GHz (RACS, McConnell et al., 2020) obtained on 27 March 2020. These data will be used for visualization purposes only (see Section 3.3).
### Infrared and optical photometry with SMARTS
PBC J2333.9-2343 was monitored using the dual optical/NIR photometer ANDICAM, which has pixel scales of 0.37\({}^{\prime\prime}\)/px (optical) and 0.27\({}^{\prime\prime}\)/px (NIR), mounted on the SMARTS-1.3m telescope, located at Cerro Tololo, Chile. We carried out two observing campaigns. The first one used the \(V\) and \(K\) bands, had a four-day average cadence with three exposures per epoch, and was carried out between August 2018 and January 2019, to coincide with the targeted campaigns in the other energy bands. The second campaign used the \(I\) and \(K\) filters and was carried out between April-July 2019 with an average cadence of one day and four exposures per epoch, to refine the measurement of rapid optical and NIR fluctuations.
The \(V\) and \(I\) band images were bias-subtracted and flat-field corrected using the observatory standard pipeline. The reduced images were aligned to a reference frame using the xyxymatch, geomap and geotran iraf tasks. The images were degraded to a common seeing of FWHM=7.5 pixels using the task psfmatch. We then performed aperture photometry on the target and several other sources in the field using a common coordinate file and a fixed small aperture of 8 pixels in diameter, with the task phot.
These fluxes were used to build a relative photometry light curve by dividing the flux of PBC J2333.9-2343 by that of the sum of the fluxes of four other sources in the field:
\[\frac{ADU_{gal}}{\sum ADU_{i}}=\frac{f_{gal}}{\sum f_{i}}\;\Rightarrow\;f_{ gal}=\frac{ADU_{gal}}{\sum ADU_{i}}\cdot\sum f_{i} \tag{1}\]
where ADU are the counts on the detector and \(f_{i}\) are the calibrated fluxes of the stars obtained from the literature as described at the end of this section.
These comparison sources were selected to be bright but well below the saturation limit and that the pair-wise relative light curves between any pair showed a consistent constant flux ratio. Three or four consecutive exposures of 225 s for the \(I\) band and 300 s in the \(V\) band were taken each observing night. We averaged the relative fluxes within a night and considered the root-mean-squared (RMS) scatter of the fluxes as an estimate of the uncertainty in the flux measurement.
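In code, the calibration of equation (1) and the nightly averaging can be sketched as follows; the array names are our assumptions.

```python
import numpy as np

# Relative-photometry calibration of equation (1): adu_gal and adu_comp
# are instrumental counts for the target and the comparison sources in
# one exposure, f_comp their catalog fluxes.
def calibrated_flux(adu_gal, adu_comp, f_comp):
    return adu_gal / np.sum(adu_comp) * np.sum(f_comp)

# Nightly light-curve point: mean of the exposures, RMS scatter as error.
def nightly_point(exposure_fluxes):
    return np.mean(exposure_fluxes), np.std(exposure_fluxes)
```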
In the case of the NIR detector, two co-added exposures of 90 s each were taken at three different pointings, with small offsets relative to the field size, each observing night. The three pointings ensure that the target and two nearby comparison objects appear in each exposure at a different location in the detector, so that consecutive images can be used to subtract the bright sky emission. Naming the exposures A, B and C in chronological order, we computed the difference images A-B, B-A and C-B using the task imarith in iraf to produce three sky-subtracted images. We further degraded these to a common FWHM of 8 pixels with the task gauss in iraf and extracted aperture photometry for the targets and two comparison objects with an aperture diameter of 8 px. As the field of view of the NIR detector is smaller than that of the optical detector, we could only use the nearest two of the four comparison objects used in the optical case. One of them, however, was too dim to produce useful photometric data, so the final relative photometry was done with respect to the brightest of the two comparison objects, which is 20 times brighter than our target. The three relative photometry points per observing night were averaged together, and the RMS scatter between the three points is used as an estimate of the uncertainty.
We then calibrated the relative flux by multiplying it by the sum of the fluxes of the comparison objects, where the reference fluxes in the V and K bands were obtained from the "TESS input catalog v8.0" (Stassun et al., 2019), and for the I band they were obtained from "The USNO-B1.0" catalog (Monet et al., 2003).
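The calibration of Eq. (1) is straightforward to script; below is a minimal per-night sketch in Python (the array values are illustrative, and the actual reduction used the iraf tasks described above):

```python
import numpy as np

def calibrated_flux(adu_target, adu_comp, f_comp_ref):
    """Eq. (1): relative photometry scaled by the summed catalogue
    fluxes of the comparison stars."""
    ratio = adu_target / np.sum(adu_comp, axis=0)   # ADU_gal / sum(ADU_i)
    return ratio * np.sum(f_comp_ref)               # * sum(f_i)

# per-night combination: mean of the 3-4 exposures, RMS scatter as the error
# (adu_target: shape (n_exp,); adu_comp: shape (n_comp, n_exp))
adu_target = np.array([15210.0, 15180.0, 15295.0])
adu_comp = np.array([[85210.0, 85190.0, 85330.0],
                     [64110.0, 64090.0, 64210.0]])
f_comp_ref = np.array([12.1, 8.3])                  # mJy, from the catalogues
nightly = calibrated_flux(adu_target, adu_comp, f_comp_ref)
flux, err = nightly.mean(), nightly.std()
```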
### Optical photometry
#### 2.3.1 Zwicky Transient Facility (ZTF)
The Zwicky Transient Facility (ZTF, Graham et al., 2019; Bellm et al., 2019; Masci et al., 2019) has surveyed the extragalactic Northern Sky every three days in the g, r and i optical filters since 2018. It uses the Palomar Observatory's Samuel Oschin 48" Schmidt telescope, which has a 47 deg\({}^{2}\) field of view (FoV) composed of 16 CCDs and reaches 20.5 mag in the r band in a 30-second exposure. The images are processed by the Infrared Processing and Analysis Center (IPAC) pipeline.
ZTF offers different services, including 1) a public alert system, for real-time, time-domain science, 2) Data Releases (DRs) every two months, including photometry measurements on the science images and 3) the Forced Photometry Service on-demand and per source, including photometry measurements on the reference-subtracted science images.
For an alert to be generated, a source has to show a variation above a 5\(\sigma\) confidence level with respect to a reference image4. We cross-matched the coordinates of PBC J2333.9-2343 with the ZTF alert stream using the Web Interface5 provided by the Automatic Learning for the Rapid Classification of Events (ALeRCE, Forster et al., 2021) broker, and found that this source (ZTF name: ZTF18abwpdny) had alerts during the monitoring that we performed (AT 2018igu, Nordin et al., 2018).
Footnote 4: A reference image is generated by ZTF from the stacking of at least 10 images of the source.
Footnote 5: [http://alerce.online/](http://alerce.online/)
For this work we retrieved data from the ZTF Forced Photometry Service. The measurements obtained from this service are less affected by extranuclear emission than in DRs because they are obtained from the reference-subtracted images, therefore isolating the variable nuclear component. The light curves were cleaned with the following criteria: the processing summary/QA bits for the science image infobitssci=0, we used data only from the CCD with the largest number of data points per filter, and we rejected data with the bad-quality flags explained in the ZTF Public DR6. The following criteria are filter dependent and are related to the photometric zero point (ZP) for the science image (zpmaginpsci, in [mag]) and to its deviation from the average (rms) difference between instrumental magnitudes and PanSTARRS1 calibrators (zpmaginpscirms, in [mag]); we rejected data points that fulfill any of the following:

* Filter g: zpmaginpsci > 26.7 - 0.2secz OR zpmaginpsci < ZP\({}_{thres}\)[quadID] - 0.2secz OR nps1matches < 80 OR zpmaginpscirms > 0.06
* Filter r: zpmaginpsci > 26.65 - 0.1secz OR zpmaginpsci < ZP\({}_{thres}\)[quadID] - 0.1secz OR nps1matches < 120 OR zpmaginpscirms > 0.05
* Filter i: zpmaginpsci > 26.0 - 0.07secz OR zpmaginpsci < ZP\({}_{thres}\)[quadID] - 0.07secz OR nps1matches < 100 OR zpmaginpscirms > 0.06

where secz is the airmass, computed using the EarthLocation function within astropy (Astropy Collaboration et al., 2013, 2018); ZP\({}_{thres}\)[quadID] is the CCD-quadrant-based ZP threshold used to identify bad-quality images7; and nps1matches is the number of PS1 calibrators.

Footnote 7: [http://web.ipac.caltech.edu/staff/fmassci/ztf/zp_thresholds_quadID.txt](http://web.ipac.caltech.edu/staff/fmassci/ztf/zp_thresholds_quadID.txt)
The difference PSF magnitudes were converted into apparent magnitudes following Forster et al. (2021). The resulting magnitudes were colour corrected using the linear colour coefficient from the ZTF calibration and the colours in Pan-STARRS1 (clrcoeff*(g-r) for the g and r filters, and clrcoeff*(r-i) for the i filter). We used data between July-December 2018. We note that data from the second monitoring in 2019 are not included because, although ZTF coverage continued, only a few data points are available and they do not provide additional information.
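As a minimal sketch, the quality cuts and colour correction described above can be applied to a forced-photometry table as follows; the column names follow the text and the ZTF documentation and should be treated as assumptions where the two differ (in particular nps1matches and the quadrant-ID column):

```python
import pandas as pd

CUTS = {  # filter: (zp_floor, airmass_coeff, min_PS1_matches, max_zp_rms)
    "g": (26.70, 0.20, 80, 0.06),
    "r": (26.65, 0.10, 120, 0.05),
    "i": (26.00, 0.07, 100, 0.06),
}

def clean_ztf(df, band, zp_thres):
    """Drop epochs failing the DR quality criteria; zp_thres maps the
    CCD-quadrant ID to its zero-point threshold (IPAC table)."""
    zp0, a, nmin, rms_max = CUTS[band]
    bad = ((df["zpmaginpsci"] > zp0 - a * df["airmass"])
           | (df["zpmaginpsci"] < df["quadid"].map(zp_thres) - a * df["airmass"])
           | (df["nps1matches"] < nmin)
           | (df["zpmaginpscirms"] > rms_max)
           | (df["infobitssci"] != 0))
    return df.loc[~bad].copy()

def color_correct(mag_psf, clrcoeff, ps1_color):
    """m = m_PSF + clrcoeff * colour, with (g-r) for g,r and (r-i) for i."""
    return mag_psf + clrcoeff * ps1_color
```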
#### 2.3.2 Asteroid Terrestrial-impact Last Alert System (ATLAS)
The Asteroid Terrestrial-impact Last Alert System (ATLAS, Tonry et al., 2018) surveys the entire northern sky8 (dec > -50 degrees) every two nights in two optical filters, c (4157.33-6556.44 A) and o (5582.07-8249.18 A). ATLAS began observations in 2015 with one telescope and its two-telescope version has been operational since 2017. Each of these is a 50-centimeter diameter f/2 Wright-Schmidt telescope with a 7.4 deg FOV. At 30 seconds per exposure, they reach a magnitude limit of 19.
Footnote 8: ATLAS started monitoring the southern sky in 2022.
We obtained the data from the ATLAS Forced Photometry Service9 on the science images. In order to remove bad data points, we applied cuts on the flux error and the limiting magnitude as follows. We obtained the cumulative distributions of the flux error and the limiting magnitude and determined the percentiles at which the slope of the distributions changes. This resulted in a cut at the 90th percentile for the flux error (duJy < 62) and at the 5th percentile for the limiting magnitude (mag5sig > 17.2). We also filtered by date to include only data taken during our monitoring campaign (MJD between 58320 and 58700).
Footnote 9: [https://fallingstar-data.com/forcedphot/](https://fallingstar-data.com/forcedphot/)
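A minimal sketch of this percentile-based cleaning (the column names duJy and mag5sig follow the ATLAS forced-photometry output, and the direction of the limiting-magnitude cut is our interpretation of the criterion above):

```python
import numpy as np

def clean_atlas(uJy, duJy, mag5sig, mjd):
    """Percentile-based cleaning of an ATLAS forced-photometry light curve."""
    err_cut = np.percentile(duJy, 90)    # ~62 uJy for these data
    lim_cut = np.percentile(mag5sig, 5)  # ~17.2 mag for these data
    keep = ((duJy < err_cut) & (mag5sig > lim_cut)
            & (mjd >= 58320) & (mjd <= 58700))
    return uJy[keep], duJy[keep], mjd[keep]
```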
### UV data
The Ultraviolet and Optical Telescope (UVOT, Roming et al., 2005) onboard the Neil Gehrels _Swift_ Observatory has six primary photometric filters. During the monitoring only the UVM2 filter (centred at 2246 A) was used, in order to maximize the cadence and S/N of the resulting light curve. These observations were performed weekly between August-November 2018, simultaneously with the X-ray observations.
The uvotsource task within the software HEASoft version 6.26 was used to perform aperture photometry, using a circular aperture of 5 arcsec radius centred on the coordinates of PBC J2333.9-2343. The background was extracted from a source-free circular region of 20 arcsec close to the nucleus.
### X-ray data
We performed weekly monitoring with the _Swift_ X-ray Telescope (XRT, Burrows et al., 2005) onboard the Neil Gehrels _Swift_ Observatory in the Photon Counting mode between August-November 2018, with a total of 17 observations taken simultaneously with the UV observations.
The data reduction was performed following standard routines described by the UK Swift Science Data Centre (UKSSDC) using the software in HEASoft version 6.26. Calibrated event files were produced using the routine xrtpipeline, accounting for bad pixels and effects of vignetting, and exposure maps were also created. Source and background spectra were extracted from circular regions with 30 arcsec and 80 arcsec radius, respectively. The xrtmkarf task was used to create the corresponding ancillary response files. The response matrix files were obtained from the HEASARC CALibration DataBase. The spectra were grouped to have a minimum of 20 counts per bin using the grppha task.
The X-ray data analysis was performed using XSPEC v. 12.10.1. We assumed a Galactic absorption of \(N_{Gal}=1.63\times 10^{20}\) cm\({}^{-2}\)(Dickey & Lockman, 1990). Following the results obtained in Hernandez-Garcia et al. (2017) and Hernandez-Garcia et al. (2018), where _Swift_ data between 2010 and 2017 were analyzed, we fitted the _Swift_/XRT spectra with a single power-law model. All the spectra were fitted simultaneously to search for X-ray spectral and flux variability (Hernandez-Garcia et al., 2013). Modeling all epochs with linked photon index, \(\Gamma\), and normalizations of the power law (i.e. assuming the non-variable case) results in a \(\chi^{2}\) of 2481.7 for 1242 degrees of freedom (dof), i.e. a reduced \(\chi^{2}\) of 2.0. Allowing the normalizations to vary independently results in a satisfactory fit with \(\chi^{2}\)/dof= 1198.08/1121 = 0.99, whereas allowing only \(\Gamma\) to vary results in a reduced \(\chi^{2}\) of 1.74. Varying the normalizations and \(\Gamma\) together did not improve the fit (F-test=0.02). From this spectral fit we obtained the intrinsic X-ray luminosity of the source for each epoch.
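This simultaneous fit can be scripted, for instance, through the PyXspec interface; a minimal sketch (file names are placeholders, and this is an illustration rather than the exact procedure used):

```python
from xspec import AllData, AllModels, Fit, Model

# load the 17 grouped XRT spectra into separate data groups
AllData(" ".join(f"{i}:{i} xrt_epoch{i:02d}.pha" for i in range(1, 18)))

m = Model("phabs*powerlaw")
m.phabs.nH = 1.63e-2          # Galactic N_H in units of 1e22 cm^-2
m.phabs.nH.frozen = True

# XSPEC links all data-group parameters to group 1 by default
# (the non-variable case); free the normalization of each epoch:
for i in range(2, AllData.nGroups + 1):
    AllModels(i).powerlaw.norm.link = ""

Fit.statMethod = "chi"
Fit.perform()
print(Fit.statistic, Fit.dof)  # compare with the chi2/dof quoted in the text
```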
We used _XMM-Newton_ data (from pn and OM) analyzed in Hernandez-Garcia et al. (2017) to compare multiwavelength data of the source in different epochs through its spectral energy distribution (SED, see Section 3.3). We refer the reader to this paper for details on the data reduction and analysis. The only difference between the data presented here and the data published before is that a factor-of-two error in the unit conversion of the X-ray data has now been corrected, to allow a proper comparison.
### Gamma-Ray Data
We used data from the Fermi Large Area Telescope (LAT, Abdo et al., 2009) database. First, we checked that the 95% error ellipse of this source includes only the nuclear region, excluding the lobes as gamma-ray emitters. We calculated the gamma-ray flux integrating data between May 1st 2018 at 00:00 (UT) and September 30th 2019 at 24:00 (UT), in order to match the observation epochs of the data in the other wavebands. We considered data in the energy range 0.1-300 GeV, and analyzed it with the Fermitools version 1.0.2. From the 4FGL catalog (Abdollahi et al., 2020), all sources within 15\({}^{\circ}\) of the location of PBC J2333.9-2343 were included in the model. The spectral parameters for the sources within 5\({}^{\circ}\) were left free, while for the remaining sources only the normalization was left free. We note that we were not able to obtain a light curve because the smaller time bins tested did not have enough signal to noise ratio to consider them statistically significant detections (test statistic, TS \(\geq\) 25).
For the aforementioned epochs, we obtained an integrated flux of \((1140.8\pm 6.2)\times 10^{-11}\) photons s\({}^{-1}\)cm\({}^{-2}\), corresponding to \((599.4\pm 6.5)\times 10^{-14}\) erg s\({}^{-1}\)cm\({}^{-2}\), at a 6.2\(\sigma\) significance level. The data were fitted with a power-law model with spectral index \(\Gamma\) = 2.417\(\pm\)0.002.
In order to compare with the results in Hernandez-Garcia et al. (2017), we performed the same analysis on Fermi-LAT data between January 1st 2015 at 00:00 (UT) and December 31st 2015 at 24:00 (UT). We obtained an integrated flux of \((603.8\pm 3.4)\times 10^{-11}\) photons s\({}^{-1}\)cm\({}^{-2}\), corresponding to \((520.0\pm 5.9)\times 10^{-14}\) erg s\({}^{-1}\)cm\({}^{-2}\), at a 5.1\(\sigma\) significance level, and \(\Gamma\) = 2.149\(\pm\)0.002.
Regarding the use of gamma-ray data to constrain the SED model, we prefer not to use the fluxes obtained above, because they span three orders of magnitude in energy. Therefore, we made three equal logarithmic energy bins for the same epochs detailed above, to try to build a low-resolution gamma-ray spectrum. The three bins span 100 MeV - 1.44 GeV, 1.44 - 20.8 GeV, and 20.8 - 300 GeV.
For the 2018-2019 data, the first energy bin resulted in a TS of 19.2, the second energy bin yielded a TS of 14.9, and the third energy bin resulted in a TS lower than 0, which means that the isotropic and diffuse backgrounds are brighter than the source at those energies. Since none of the three bins reaches the detection threshold (TS \(\geq\) 25), we computed upper limits (with a Bayesian analysis, using FermiPy) for the first and second energy bins; this was not possible for the third one. For the first energy bin, we obtained an upper limit of \(7.8\times 10^{-9}\) ph s\({}^{-1}\) cm\({}^{-2}\)\(\equiv\) 3.2 \(\times\) 10\({}^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). For the second energy bin, we obtained an upper limit of \(2.1\times 10^{-10}\) ph s\({}^{-1}\) cm\({}^{-2}\)\(\equiv\) 1.3 \(\times\) 10\({}^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\).
For the 2015 data, we did not get a significant detection; the TS values obtained for the first, second and third energy bins are 7.5, 20.4 and lower than zero, respectively. This time, we were only able to get an upper limit for the second energy bin: for the third bin it was not possible, and for the first one the analysis did not converge satisfactorily. For the second energy bin, we obtained an upper limit of \(3.7\times 10^{-10}\) ph s\({}^{-1}\) cm\({}^{-2}\)\(\equiv\) 1.6 \(\times\) 10\({}^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). These upper limits will be used in Section 3.3.
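A minimal fermipy sketch of this binned upper-limit computation (the configuration file and source name are placeholders; the bin edges are the three equal logarithmic bins quoted above, in log10(E/MeV)):

```python
import numpy as np
from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()                      # prepare counts cube, exposure, etc.
gta.optimize()
gta.free_sources(distance=5.0)   # free spectral parameters within 5 deg
gta.fit()

# bin edges: 100 MeV, 1.44 GeV, 20.8 GeV, 300 GeV
edges = [2.0, np.log10(1440.0), np.log10(20800.0), np.log10(3.0e5)]
sed = gta.sed('PBC J2333.9-2343', loge_bins=edges)
print(sed['ts'], sed['flux_ul95'])   # 95% upper limits where TS < 25
```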
In order to visually compare our upper limits, we included the results reported in Abdollahi et al. (2020), who presented the Data Release 3 (4FGL-DR3) for Fermi-LAT, including 12 years of survey data. PBC J2333.9-2343 shows detections in three out of the eight energy bands: 0.3-1, 1-3, and 3-10 GeV (see Section 3.3).
## 3 Results
### Variability analysis
In Fig. 1 we plot the multiwavelength light curves for the monitoring in 2018, spanning between MJD=58320 (July 21st, 2018) and 58500 (January 17th, 2019). For simplicity and clarity, we plot only the most representative light curve per instrument (see Appendix A for the light curves in all bands). The selection of the light curves obeys the following criteria: for Effelsberg we selected 20.4 GHz because it is the only frequency showing variations in the radio; from SMARTS-1.3m the K band, which shows the NIR variations; from ZTF and ATLAS the r band and o band, because these have the better cadence; from _Swift_/UVOT there is only one band; and from _Swift_/XRT the 2-10 keV band, because it represents the nuclear source. In Fig. 2 we plot the monitoring performed between MJD=58600-58700 (April 27th-August 5th, 2019) with SMARTS-1.3m, also observed by ATLAS. These observations were taken with a cadence of 1-3 days, allowing a more detailed analysis of the observed variations.
We show the results of the variability analysis of these light curves in Table 1. For each of the observed bands, we list the mean flux and its standard deviation. The \(\chi^{2}\) of the light curve with respect to a constant flux is also presented with the degrees of freedom (d.o.f). In addition, we have estimated the normalized excess variance, \(\sigma^{2}_{NXS}\) and its error \(\Delta\sigma^{2}_{NXS}\), which represents the variability amplitude of the light curves, following the prescriptions in Vaughan et al. (2003). A source is considered to be variable when \(\sigma^{2}_{NXS}>0\) within the errors, i.e. when the intrinsic amplitude of the variability is greater than zero. We also report F\({}_{var}\), i.e., the intrinsic variability amplitude as in Vaughan et al. (2003). Finally we include the percentage of the variations, estimated as the change between the minimum and maximum flux in the light curves.
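For reference, a minimal implementation of these estimators following Vaughan et al. (2003); the \(F_{var}\) uncertainty uses the standard first-order propagation from the \(\sigma^{2}_{NXS}\) error:

```python
import numpy as np

def excess_variance(flux, err):
    """sigma^2_NXS and F_var with uncertainties (Vaughan et al. 2003)."""
    n = flux.size
    mean = flux.mean()
    s2 = flux.var(ddof=1)          # sample variance S^2
    mse = np.mean(err**2)          # mean squared measurement error
    nxs = (s2 - mse) / mean**2     # normalized excess variance
    fvar = np.sqrt(max(nxs, 0.0))  # intrinsic variability amplitude
    # eq. 11 of Vaughan et al. (2003)
    err_nxs = np.sqrt((np.sqrt(2.0 / n) * mse / mean**2)**2
                      + (np.sqrt(mse / n) * 2.0 * fvar / mean)**2)
    err_fvar = err_nxs / (2.0 * fvar) if fvar > 0 else np.nan
    return nxs, err_nxs, fvar, err_fvar
```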
The results obtained from this analysis are the following:
* Flux variations are found at almost all the observed frequencies:
* In the radio, flux densities at different epochs with Effelsberg at 4.8, 8.5, and 10.5 GHz are consistent within errors, thus showing no variability. At 20 GHz, a significant difference between epochs is found, with values varying by a factor of four - between \(\sim\)0.5 Jy and \(\sim\)2 Jy - in only two months, at a 13\(\sigma\) confidence level. The fact that variability is only evident at frequencies \(>\)10 GHz is most probably due to the different radio-emitting regions probed at high frequencies, the latter being linked to the inner part of the jet (closer to the core) and thus more subject to Doppler boosting.
* Variations in the K band flux observed by SMARTS-1.3m, between 7.3 and 12.13 mJy, are detected at a 6\(\sigma\) confidence level in 2018, and at 20\(\sigma\) during the 2019 monitoring.
* Optical variations by a factor of about two are observed in all the g, r, i (ZTF), o, c (ATLAS), and \(V\), \(I\) (SMARTS-1.3m) bands. In all cases these variations are detected at confidence levels larger than 14\(\sigma\).
* The UV _Swift_ observations show changes between 0.96 and 2.36 \(\times\) 10\({}^{-15}\) erg s\({}^{-1}\)cm\({}^{-2}\), i.e., a flux increase by a factor of 2.5. The variations are detected at a 16\(\sigma\) confidence level.
* X-ray variations are detected only at a 2\(\sigma\) confidence level, with flux variations by a factor of 1.4.
* The observed flux variations show a flaring behaviour. Three events occurred during this monitoring and were detected at different frequencies:
* On September 25th, 2018 the ZTF reported a first alert coming from the coordinates of PBC J2333.9-2343. These alerts are produced only for changes above 5\(\sigma\) when compared to a reference image. This first flare had its peak around September 26th, 2018 (MJD=58387) and was detected by _Swift_/UVOT and ZTF (see Fig. 1). The _Swift_/UVOT data points before and after this date, where the largest variation amplitude is detected, were taken on September 19th and October 3rd, 2018, i.e., 16 days, so this represents the maximum duration of this first flare. The left black dashed line in Fig. 1 marks the highest point in _Swift_/UVOT.
Figure 1: Multiwavelength light curve of the monitoring of PBC J2333.9-2343 during 2018. Note that for more clarity, only one light curve per instrument is plotted here. The grey dashed lines represent the peak of the first and second flares as observed by _Swift_/UVOT/UVM2 that occurred at MJD=58387 and 58429.
- The second flare peaked around November 11th, 2018 (MJD=58429) and was detected by _Swift_/UVOT, ZTF, ATLAS, and SMARTS-1.3m. The data points before and after this date observed by _Swift_/UVOT were October 31st and November 14th, 2018, i.e., 15 days. The right black dashed line of Fig. 1 represents the highest point in _Swift_/UVOT during this flare.
- The third flare peaked on June 10th, 2019 (MJD=58644.5). This was detected by SMARTS-1.3m and ATLAS, the only telescopes that were monitoring during that period. The flare occurred between May 25th and June 22nd, 2019 as observed by SMARTS-1.3m, i.e., over 28 days (see Fig. 2).
- Effelsberg also shows variations at 20.4 GHz, but the sampling of the light curve does not allow a detailed study at this frequency. The maximum flux observed at 20.4 GHz was on November 20th, 2018, i.e., between the first and the second flare, so we cannot associate it with either of the flares.
* In order to quantify the variability timescales of the source, we calculated the doubling/halving times, \(t_{d}\) (see e.g. Brown, 2013; Saito et al., 2013; Kapanadze et al., 2018; Abhir et al., 2021), using the SMARTS-1.3m light curves from 2019, which have the highest cadence among our data sets. The doubling/halving time represents how long it takes a variable time series to double (if it increases) or halve (if it decreases) its flux, assuming the variability pattern (in this case, a power-law function) does not change during the time period of interest (see Appendix B for details). The shortest \(t_{d}\) in the I band is 7.7 days, and in the K band is 6.7 days. Using these timescales we computed the size of the emission region following the relation \(R\leq ct_{var}\delta_{D}/(1+z)\)(Abdo et al., 2011). We estimated the Doppler factor \(\delta_{D}=[\Gamma(1-\beta\cos\theta)]^{-1}\)=2.7 using \(\theta\)=3 degrees and \(\Gamma\) and \(\beta\) from Hernandez-Garcia et al. (2017). Assuming \(t_{var}\)=\(t_{d}\), we obtain a region of 5.1\(\times\)10\({}^{16}\) cm in the I band, and of 4.4\(\times\)10\({}^{16}\) cm in the K band.
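As a numerical cross-check of the size estimate, a minimal sketch (the redshift value is taken from the literature on this source and is an assumption here, as it is not quoted in this section):

```python
C_CM_S, DAY_S = 2.998e10, 86400.0
DELTA_D = 2.7      # Doppler factor quoted in the text
Z = 0.0475         # redshift from Hernandez-Garcia et al. (2017); assumed

def region_size(t_var_days):
    """Upper limit on the emitting-region size, R <= c*t_var*delta_D/(1+z)."""
    return C_CM_S * t_var_days * DAY_S * DELTA_D / (1.0 + Z)

for band, t_d in (("I", 7.7), ("K", 6.7)):
    print(f"{band} band: R <= {region_size(t_d):.1e} cm")
# -> ~5.1e16 cm (I) and ~4.4e16 cm (K), matching the values in the text
```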
Figure 2: Multiwavelength light curve of the monitoring of PBC J2333.9-2343 during 2019. Observations are only available with SMARTS-1.3m and ATLAS. The grey dashed lines represent the peak of the third flare at MJD=58644.5.
### Cross-Correlation analysis
The monitoring in 2018 had a roughly weekly cadence, which is too sparse to evaluate the similarity between the light curves. For this reason, we did not perform the cross-correlation analysis on these data. We instead use the 2019 SMARTS-1.3m monitoring, which has a cadence of 1-3 days and simultaneous observations in both bands, to cross-correlate the light curves in the K and I bands.
The cross-correlation function (CCF) was estimated using three methods: the Interpolated Cross-Correlation Function (ICCF, Gaskell & Sparke, 1986), the Discrete Cross-Correlation Function (DCF, Edelson & Krolik, 1988), and the Z-Transformed Discrete Cross-Correlation Function (ZDCF, Alexander, 1997). We applied these methods with the modifications for non-stationary time series following Patino-Alvarez et al. (2018) and Amaya-Almazan et al. (2022). We also calculated significance levels, following Emmanoulopoulos et al. (2013). The cross-correlation function between the K and I bands is shown in Figure 3. We consider a delay significant only when it is above the 99% significance level and is obtained with at least two of the methods. Averaging the lags obtained with the three methods yields a delay of 1.02\(\pm\)1.45 days; i.e., the cross-correlation analysis finds neither a delay nor an advance within the measurement errors, and is compatible with the variations occurring quasi-simultaneously in the two bands.
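For illustration, a simplified (one-directional) version of the ICCF step is sketched below; the full analysis also interpolates in the opposite direction and averages the two, applies the non-stationarity corrections cited above, and estimates significance from simulated light curves:

```python
import numpy as np

def iccf(t1, f1, t2, f2, lags):
    """Interpolated cross-correlation (after Gaskell & Sparke 1986).
    Positive lag means series 2 lags series 1."""
    r = []
    for lag in lags:
        # interpolate series 2 at the shifted times of series 1
        g = np.interp(t1 + lag, t2, f2)   # note: clamps outside the range
        r.append(np.corrcoef(f1, g)[0, 1])
    return np.array(r)

lags = np.arange(-15.0, 15.25, 0.25)      # days
# ccf = iccf(t_K, f_K, t_I, f_I, lags)
# the centroid over points with r > 0.8*r.max() gives the lag estimate
```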
### Spectral energy distribution (SED)
In order to construct the SED, we used data obtained when there was no flaring activity in the different bands, i.e., before MJD = 58370. When more than one data point was available, we computed the mean value. The SED is presented in the right panel of Fig. 4. We included RACS and VLASS data for visualization purposes, but data below 20 GHz were not considered in the fit, because compact regions produce a synchrotron spectrum that is self-absorbed and therefore cannot account for the radio flux at lower frequencies; this flux has to be produced by other, more extended portions of the jet. We also included VLBA data at 15 (for visualization) and 24 GHz.
We fit a single-zone leptonic model to the SED, using the Jets SED modeler and fitting Tool (JetSeT)10(Tramacere et al., 2009, 2011; Tramacere, 2020) for a Synchrotron Self Compton (SSC) + External Compton (EC) scenario. The accretion disk spectrum is modeled as a multi-temperature blackbody as described in Frank et al. (2002), with a luminosity \(L_{\rm Disk}\), a black hole mass \(M_{BH}\) (fixed to the value derived in Hernandez-Garcia et al., 2018), and an accretion efficiency fixed to 0.08. The BLR is assumed to be a clumpy thin spherical shell with an internal radius determined by the phenomenological relation provided by Kaspi et al. (2007),
\begin{table}
\begin{tabular}{l c c c c c c} \hline Band & Mean (mJy) & Stddev (mJy) & \(\chi^{2}\) / d.o.f & \(\sigma_{NXS}^{2}\pm\Delta\sigma_{NXS}^{2}\) & \(F_{var}\pm\Delta F_{var}\) & Change (\%) \\ \hline \multicolumn{7}{c}{Monitoring 2018} \\ \hline
Effelsberg-4.8 & 1156.0 & 43.3 & 0.8 / 6 & \textless{}0 & - & 10.4 \\
Effelsberg-8.5 & 1331.3 & 65.7 & 1.7 / 6 & \textless{}0 & - & 13.0 \\
Effelsberg-10.5 & 1456.3 & 98.0 & 3.0 / 6 & \textless{}0 & - & 16.8 \\
Effelsberg-20.4 & 1122.0 & 774.1 & 247.7/2 & 0.47 \(\pm\)0.08 & 0.68 \(\pm\)0.04 & 77.4 \\
SMARTS-V & 1.2 & 0.1 & 7189.3 / 24 & 0.0079 \(\pm\)0.0004 & 0.0886 \(\pm\)0.0002 & 32.5 \\
SMARTS-K & 9.0 & 0.8 & 4375.2 / 24 & 0.006 \(\pm\)0.001 & 0.0793 \(\pm\)0.0005 & 30.5 \\
ZTF-g & 0.70 & 0.08 & 12045.7 / 45 & 0.0145 \(\pm\)0.0003 & 0.1204 \(\pm\)0.0002 & 45.9 \\
ZTF-r & 1.3 & 0.2 & 23291.6 / 62 & 0.0135 \(\pm\)0.0002 & 0.11615 \(\pm\)0.00008 & 38.9 \\
ZTF-i & 1.8 & 0.1 & 1320.2 / 10 & 0.0067 \(\pm\)0.0003 & 0.0819 \(\pm\)0.0001 & 24.3 \\
ATLAS-o & 2.1 & 0.3 & 20554.3 / 165 & 0.0167 \(\pm\)0.0003 & 0.1293 \(\pm\)0.0001 & 47.6 \\
ATLAS-c & 1.2 & 0.1 & 1050.7 / 33 & 0.0069 \(\pm\)0.0005 & 0.0828 \(\pm\)0.0003 & 27.8 \\
_Swift_-UVM2 & 0.00009 & 0.00003 & 514.9 / 16 & 0.086 \(\pm\)0.006 & 0.296\(\pm\)0.003 & 59.2 \\
_Swift_-(0.5-2 keV) & 0.0018 & 0.0002 & 49.4 / 16 & 0.005 \(\pm\)0.002 & 0.073 \(\pm\)0.001 & 27.6 \\
_Swift_-(2-10 keV) & 0.00071 & 0.00007 & 42.8 / 16 & 0.005 \(\pm\)0.002 & 0.071 \(\pm\)0.001 & 29.2 \\ \hline \multicolumn{7}{c}{Monitoring 2019} \\ \hline
SMARTS-I & 2.6 & 0.3 & 20064.4 / 58 & 0.0132 \(\pm\)0.0003 & 0.1150 \(\pm\) 0.0002 & 37.8 \\
SMARTS-K & 8.9 & 1.0 & 3648.7 / 58 & 0.0132 \(\pm\)0.0007 & 0.1150 \(\pm\) 0.0003 & 39.5 \\
ATLAS-o & 2.1 & 0.3 & 6927.1 / 54 & 0.0169 \(\pm\)0.0005 & 0.1299 \(\pm\) 0.0003 & 37.9 \\
\hline \end{tabular}
\end{table}
Table 1: Results of the variability analysis. For the 2018 (Fig. 1) and 2019 (Fig. 2) monitorings, and for each band, it lists the mean flux and its standard deviation (in mJy), the value of \(\chi^{2}\) and the degrees of freedom, the normalized excess variance, \(\sigma_{NXS}^{2}\), and its error, the intrinsic variability amplitude, \(F_{var}\), and its error, and the percentage of variation.
Figure 3: The cross correlation function obtained by the interpolation method using SMARTS-1.3m data. The noisy horizontal lines represent the 90, 95 and 99% significance, both at correlation (above correlation coefficient above 0) and anti-correlation (below 0).
\(R_{BLR,in}=3\times 10^{17}L_{\rm Disk,46}^{1/2}\) cm. The external radius of the BLR is assumed to be \(1.1R_{\rm BLR,in}\), with a coverage factor \(\tau_{BLR}=0.1\). The dusty torus (DT) radiation is assumed to be described by a spherical uniform radiative field, with a radius \(R_{DT}=2\times 10^{19}L_{\rm Disk,46}^{1/2}\) cm (Cleary et al., 2007), and a reprocessing factor \(\tau_{DT}=0.1\). Both the radius of the DT and the radii of the BLR are implemented in JetSeT as dependent parameters, hence during the fit they are not free but determined by the phenomenological relations described above. The emitting region is assumed to be a single zone with a spherical geometry and radius \(R\), located at a distance \(R_{H}\) from the central black hole of mass \(\log M_{BH}=8.4\). The jet is assumed to be conical at the scale where the emitting region is located, with a half opening angle of \(\phi\approx 5\deg\), and with the emitting region size determined by \(R=R_{H}\tan\phi\). The blob moves through the jet with a bulk Lorentz factor \(\Gamma\), oriented at a viewing angle \(\theta\), with a consequent beaming factor \(\delta=1/(\Gamma(1-\beta_{\Gamma}\cos\theta))\). The relativistic electrons are assumed to follow a broken power-law energy distribution,
\[n(\gamma)=N\left\{\begin{array}{ll}N_{0}\,\gamma^{-p}&\gamma_{min}\leq\gamma\leq\gamma_{b}\\ N_{0}\,\gamma_{b}^{p_{1}-p}\,\gamma^{-p_{1}}&\gamma_{b}<\gamma<\gamma_{max},\end{array}\right. \tag{2}\]
with an index of \(p\) and \(p_{1}\) below and above the break energy \(\gamma_{b}\), respectively. The electron distribution normalization constant, \(N_{0}\), is set in order to have \(\int_{\gamma_{min}}^{\gamma_{max}}n(\gamma)\,d\gamma=N\). Given the small size of the BLR we set the initial position of the emitting region at \(R_{H}\approx 1\) pc. The resulting best fit model is shown in Figure 4. The initial value of \(L_{\rm Disk}\) is determined by JetSeT during the pre-fit stage, and is set to \(L_{\rm Disk}=10^{43}\) erg s\({}^{-1}\). The minimization of the model is performed using the JetSeT ModelMinimizer module plugged into the iminuit python interface (Dembinski et al., 2020). The errors are estimated from the matrix of second derivatives, using the HESSE method. Even though in JetSeT errors can be evaluated more accurately by forcing the MINOS iminuit method, or by using a Monte Carlo Markov Chain, for the current analysis the HESSE method provides a fair estimate. We fit data above 20 GHz, excluding data below the synchrotron self-absorption frequency, and we also add a 10% systematic error to data below \(10^{16}\) Hz to prevent the small errors at the radio-to-UV frequencies from biasing the fit toward the lower frequencies. Since the model fit returned for both states (2015 and 2018) a similar best-fit value of \(L_{\rm Disk}\approx 10^{43}\) erg s\({}^{-1}\) and of \(\theta\approx 3\) deg, we decided to freeze these parameters.
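As a cross-check of Eq. (2), a minimal NumPy implementation of the broken power law with the continuity and normalization conditions stated above (independent of the JetSeT internals; the parameter values below are the 2018-19 entries of Table 2):

```python
import numpy as np

def n_e(gamma, N, p, p1, g_min, g_b, g_max):
    """Broken power law of Eq. (2), continuous at gamma_b, with N0
    chosen so that the integral over [g_min, g_max] equals N."""
    gamma = np.asarray(gamma, dtype=float)
    shape = np.where(gamma <= g_b, gamma**(-p),
                     g_b**(p1 - p) * gamma**(-p1))
    shape = np.where((gamma < g_min) | (gamma > g_max), 0.0, shape)
    gg = np.logspace(np.log10(g_min), np.log10(g_max), 4096)
    norm = np.trapz(np.where(gg <= g_b, gg**(-p),
                             g_b**(p1 - p) * gg**(-p1)), gg)
    return N * shape / norm

dist = n_e(np.logspace(0.0, 3.6, 50), N=8e2, p=2.2, p1=3.7,
           g_min=1.0, g_b=110.0, g_max=4e3)
```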
We used the same model to fit both the 2018 data and the SED presented in Hernandez-Garcia et al. (2017, using data from 2015), which includes data from the VLBA in the radio and _XMM-Newton_ in the UV/optical/X-rays, and we added the gamma-ray upper limits obtained from Fermi-LAT data within the dates mentioned in Section 2.6. We also included data from the 4FGL-DR3 in three energy bands (0.3-1, 1-3, and 3-10 GeV) for comparison with the upper limits in both SEDs. However, these data are for visualization purposes only, because flares can easily exceed stacked-average constraints and we do not know what variability behavior might have gone into the stacked average covering 12 years of data.
Both the 2015 and 2018 SEDs can be explained by an EC-dominated scenario, with seed photons coming from the dusty torus and a mild contribution of the SSC component. The most relevant model parameters are reported in Table 2, and the best fit SEDs are shown in Figure 4. Due to the lack of data in the mm-IR band, where the peak of the synchrotron component occurs, it is not straightforward to provide a firm estimate of the luminosity of this component from the model fit; nevertheless, we notice that the total energetic budget of the jet has not shown a dramatic change. Assuming a baryonic load of one cold proton per lepton, the total luminosity of the jet is \(\approx 4\times 10^{47}\) erg s\({}^{-1}\) in 2015 and \(\approx 2\times 10^{47}\) erg s\({}^{-1}\) in 2018, with a radiative power changing from \(\approx 6\times 10^{42}\) erg s\({}^{-1}\) in 2015 to \(\approx 2\times 10^{42}\) erg s\({}^{-1}\) in 2018. The difference in the radiative output can be explained mainly by the changes in the bulk Lorentz factor and in the electron distribution, the one in 2015 being harder and characterized by a larger value of \(\gamma_{b}\approx 1900\). Nevertheless, a model with spectral changes due mostly to variations of \(\Gamma\) and/or different viewing angles and magnetic field intensities could still be accommodated. In conclusion, we think that the behaviour of the jet, based on the results of the SED modeling, can be related to a mostly quiescent state of the source, with a moderate change in the radiative output and moderate flaring activity in a shock located at a scale between 0.1 and 1 pc.
## 4 Discussion
### Variability of AGN and blazar populations
Variability is a defining property of AGN that manifests on a variety of timescales, ranging from minutes to years depending on the type of source (Netzer, 2013). Among them, some blazars show the most peculiar variability patterns, with flaring activity and high-amplitude variations (in some cases of several orders of magnitude) on timescales as short as a few days, as shown by intensive monitoring campaigns of particular sources (e.g., Patino-Alvarez et al., 2018; MAGIC Collaboration et al., 2018; Fraija et al., 2019; Fernandes et al., 2020; Chavshyan et al., 2020; Zargaryan et al., 2022; Guise et al., 2022; Priya et al., 2022; Abe et al., 2022). However, intensive monitoring is usually performed for interesting sources that show the largest amplitude variations (in particular for blazars), so these studies can be hampered by the few resources available to follow up large samples of sources at different wavebands.
The ZTF survey has a sampling rate of three days with the
\begin{table}
\begin{tabular}{l l l l} \hline \hline Name & Units & Values (2015) & Values (2018-19) \\ \hline
\(\gamma_{min}\)* & & 1 & 1 \\
\(\gamma_{max}\) & & \((4.0\pm 0.2)\times 10^{3}\) & \((4\pm 3)\times 10^{3}\) \\
\(N\) & cm\({}^{-3}\) & \((2.4\pm 0.8)\times 10^{3}\) & \((8\pm 2)\times 10^{2}\) \\
\(\gamma_{b}\) & & \(1917\pm 221\) & \(110\pm 10\) \\
\(p\) & & \(2.3\pm 0.2\) & \(2.2\pm 0.8\) \\
\(p_{1}\) & & \(3.4\pm 0.4\) & \(3.7\pm 0.2\) \\
\(T_{\rm DT}\)* & K & 330 & 330 \\
\(R_{\rm DT}\)\({}^{\dagger}\) & cm & \(6.3\times 10^{17}\) & \(6.3\times 10^{17}\) \\
\(\tau_{\rm DT}\)* & & 0.1 & 0.1 \\
accr. eff.* & & 0.08 & 0.08 \\
\(M_{\rm BH}\)* & \(M_{\sun}\) & \(2.5\times 10^{8}\) & \(2.5\times 10^{8}\) \\
\(R_{\rm BLR,in}\)\({}^{\dagger}\) & cm & \(9.5\times 10^{15}\) & \(9.5\times 10^{15}\) \\
\(R_{\rm BLR,out}\)\({}^{\dagger}\) & cm & \(1.0\times 10^{16}\) & \(1.0\times 10^{16}\) \\
\(L_{\rm Disk}\)* & erg/s & \(10^{43}\) & \(10^{43}\) \\
\(R\)\({}^{\dagger}\) & cm & \(4.8\times 10^{16}\) & \(6.3\times 10^{16}\) \\
\(R_{H}\) & cm & \((4.76\pm 0.09)\times 10^{17}\) & \((6.325\pm 0.006)\times 10^{17}\) \\
\(B\) & G & \(0.20\pm 0.01\) & \(0.42\pm 0.05\) \\
\(\theta\)* & deg & \(3.0\) & \(3.0\) \\
\(\Gamma\) & & \(24\pm 2\) & \(19\pm 5\) \\ \hline
\end{tabular}
\end{table}
Table 2: Parameters of the fit for the SED built with the 2015 data and for the SED built with the 2018-19 data. The parameters marked with * were frozen in the fit; the parameters marked with \({}^{\dagger}\) are dependent parameters (see text for details).
possibility to monitor light curves of large samples of AGN, especially blazars. This allows a comparison of the optical variability properties of these samples with the ones of PBC J2333.9-2343.
The samples used for comparison were taken from the training set used by the ALeRCE light curve classifier (see Sanchez-Saez et al., 2021, for details). The first sample is composed of a total of 4667 non-blazar AGN; these sources were taken from the class "A" of the Million Quasars Catalog (MILLIQUAS, version 6.4c, 2019 December; Flesch, 2015, 2019) and the New Catalog of Type 1 AGNs (Oh et al., 2015). The second sample contains 1267 blazars, taken from the 5th Roma-BZCAT Multi-Frequency Catalog of Blazars (ROMABZCAT; Massaro et al., 2015) and the class "B" sources of the MILLIQUAS11.
Footnote 11: Class “A” are type-1 Seyferts/host-dominated AGN, and class “B” are BL Lac type object in MILLIQUAS.
In particular, we used the ALeRCE light curve classifier repository12 to compute variability features. Ruan et al. (2012) have shown that the Damped Random Walk (DRW, Kelly et al., 2009) parameters are able to differentiate the variability properties of blazar and non-blazar populations of AGN. We used the ZTF alert light curves for this comparison, as well as the complete alert light curve of PBC J2333.9-2343 retrieved from ALeRCE. We measured \(\tau_{DRW}\), the characteristic time for the time series to become roughly uncorrelated, and \(\sigma_{DRW}^{2}\), the squared amplitude of the variations, for the g-filter (the one least affected by the host galaxy contribution) of the ZTF light curves. From \(\sigma_{DRW}^{2}\) we estimated the asymptotic value of the structure function on long timescales as \(\mathrm{SF}_{\infty}=\sqrt{2}\sigma_{DRW}\)(MacLeod et al., 2011). The measurements are shown in Fig. 5. AGN are represented as red circles, blazars as green triangles, and PBC J2333.9-2343 is marked with a blue cross; it falls among the blazar population, suggesting that its optical variability properties are closer to those of blazars. This plot shows results in agreement with Ruan et al. (2012), who reported that blazars have \(\tau_{DRW}\) in between those of normal quasars and other objects (mainly variable stars), as well as larger values of \(\mathrm{SF}_{\infty}\). It is worth remarking that there might exist a degree of misclassification between blazar and non-blazar AGN. Indeed, the confusion matrix in Sanchez-Saez et al. (2021) shows that \(74^{+5}_{-3}\%\) of blazars are well classified using the ALeRCE light curve classifier, while the rest of the sources are classified as AGN or Quasar. For instance, blazars could be classified as AGN during non-flaring activity, explaining why there is some mixing between the classes.
Footnote 12: [https://github.com/alercbroker/lc_classifier](https://github.com/alercbroker/lc_classifier)
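For intuition, a light curve with given (\(\tau_{DRW}\), \(\sigma_{DRW}^{2}\)) can be generated exactly, since the DRW is an Ornstein-Uhlenbeck process; a minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_drw(t, tau, sigma2):
    """Exact sampling of a damped random walk with decorrelation
    timescale tau and asymptotic variance sigma2."""
    x = np.empty(t.size)
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for i in range(1, t.size):
        a = np.exp(-(t[i] - t[i - 1]) / tau)
        x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1.0 - a**2)))
    return x

tau_drw, sigma2_drw = 150.0, 0.04**2       # days, mag^2 (illustrative)
t = np.sort(rng.uniform(0.0, 1000.0, 300))
mag = simulate_drw(t, tau_drw, sigma2_drw)
sf_inf = np.sqrt(2.0 * sigma2_drw)         # SF_inf = sqrt(2)*sigma_DRW
```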
### Constraints on the region responsible for the variations observed in the optical/NIR bands
Optical and NIR variations are normally seen in radio quiet AGN as well as radio loud objects. In the former case they are often attributed to variations in the accretion disc (optical and NIR) and reprocessing of variable emission in the torus (NIR). In this section we compare the upper limits on the delay between I band and K band fluctuations in PBC J2333.9-2343 to the ranges of delays expected in these non-jet scenarios, in order to constrain the origin of the variable optical/NIR emission in this object. In PBC J2333.9-2343 we found a time lag between the K and I band of 1.02\(\pm\)1.45 days, so we can investigate if the lag would be compatible with the K
Figure 4: Left: The observed frame SED of PBC J2333.9-02343 built with data from 2015 (as in Hernández-García et al., 2017), including data from VLBA, _XMM-Newton_ and Fermi-LAT (not included in previous works). Right: The observed frame SED using contemporaneous data from 2018 from the Effelsberg, SMARTS-1.3m, ZTF, ATLAS, and _Swift_ observatories. Additionally we included data from VLASS, RACS (only for visualization), VLBA and Fermi-LAT. The fit was done using the upper limits from Fermi-LAT, but for visualization we also include the 4FGL-DR3 detections.
band emission arising from the torus, the accretion disk or the jet. We use a black hole mass of log\(\mathrm{M}_{BH}\) = 8.4 and the luminosity at 5100 A reported in Hernandez-Garcia et al. (2018) to estimate an Eddington ratio \(R_{\mathrm{Edd}}\) of 3.2\(\times\)10\({}^{-3}\) following Sanchez-Saez et al. (2018).
When the NIR emission is dominated by the torus, the dust responsible for the re-emission in the NIR can only be located at distances of at least light-days to light-weeks, so variations should occur smoothly and with time lags of weeks to months due to the different light-travel times from different sides of the torus (Lira et al., 2015; Sanchez et al., 2017). Indeed, recent studies show a delayed response of the K-band light curve after the V-band light curve of at least 10 days, and in particular for the luminosity of PBC J2333.9-2343 a lag longer than 80 days is expected (Koshida et al., 2014; Minezaki et al., 2019). We can therefore reject the torus as the origin of the variable NIR emission.
The variable optical and NIR emission can also come from the accretion disk, either through intrinsic (i.e. viscosity or thermal) changes in the disk or through reprocessing of a variable illuminating source, for example of X-rays (Netzer, 2013). In the first case, intrinsic variations have difficulty reproducing quasi-simultaneous flux changes in two different bands, unless only a small region of the disk is varying and this region contributes all the _variability_ of both bands even if it does not produce all their _emission_. In any accretion disk where the surface temperature decreases outwards, longer wavelengths are emitted by larger fractions of the disk. Therefore, if only a small central region of the disk is producing the variations, these will modulate a larger fraction of the total I band emission than of the total K band emission of the disk, and the fractional variability of the I band would be larger than that of the K band, which is contrary to what we observe. On the other hand, to estimate the delay expected from reprocessing on the disk, we followed Lira et al. (2015) to estimate the light travel time across a standard accretion disk model as \(\tau=3\times 10^{-10}\lambda^{4/3}R_{Edd}^{1/3}M_{BH}^{2/3}\), with \(\lambda\) in A, \(\tau\) in days, \(R_{Edd}\) in Eddington units and \(M_{BH}\) in solar masses. The expected lag between the K and I bands is 7.5 days, much larger than the measured value of \(1.02\pm 1.45\) days. Thus it is unlikely that the accretion disk is responsible for the variable emission in both the I and K bands.
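A sketch of this estimate is given below; the effective wavelengths are assumptions (the values adopted for the 7.5-day figure are not quoted in the text), so the numerical lag should be taken as indicative only. In any case, it comes out at days to weeks, well above the measured 1.02\(\pm\)1.45 days:

```python
def tau_disk_days(lam_angstrom, r_edd, m_bh_msun):
    """Lira et al. (2015): tau = 3e-10 * lambda^(4/3) * R_Edd^(1/3) * M_BH^(2/3)."""
    return 3e-10 * lam_angstrom**(4.0/3.0) * r_edd**(1.0/3.0) * m_bh_msun**(2.0/3.0)

m_bh, r_edd = 10**8.4, 3.2e-3        # values quoted above
lam_I, lam_K = 7980.0, 21900.0       # assumed effective wavelengths [Angstrom]
lag_KI = tau_disk_days(lam_K, r_edd, m_bh) - tau_disk_days(lam_I, r_edd, m_bh)
print(f"expected K-I disk lag: {lag_KI:.1f} days")
```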
If variations are related to jet emission and the jet is oriented towards the observer, however, the variability timescale is shortened and the radiation is strongly enhanced by relativistic beaming. The characteristic timescales of such variations are of a few days, which may be interpreted as the typical timescale of successive flare events (Kataoka et al., 2001). In fact, if the emission is dominated by the jet at both optical and NIR frequencies, then the variations are rapid, of high amplitude, and simultaneous in both bands (Asmus et al., 2015). For instance, Bonning et al. (2012) presented light curves of a sample of 12 blazars observed by SMARTS-1.3m in the B, V, J and K bands between 2008 and 2010, showing that their CCFs are centered around zero. The monitoring campaign presented here, as well as the analysis of the variability of PBC J2333.9-2343 (see Sect. 3.1), reveals large amplitude variations on short timescales with a flaring behaviour, and the fact that the variations occur simultaneously in the optical and NIR bands agrees well with the blazar nature of PBC J2333.9-2343.
The doubling/halving times were estimated to be 7.7 days (I band) and 6.7 days (K band). We estimated the electron cooling time in the observer frame using the best fit parameters for the SED from 2018-19, taking into account both synchrotron and IC/EC losses in the Thomson regime, which turns out to be \(\sim\)0.3 days, and the escape time, ignoring possible contributions from turbulence, \(R(1+z)/(c\,\delta_{D})\sim\) 1 day. This seems to indicate that the decay could be driven not only by the electron cooling/escape times; other mechanisms may play a role, as for instance a modulation in the injection of the particles (and/or in the beaming pattern), or an area of magnetic reconnection in the flare (see e.g., Zhu et al., 2018; Sahakyan et al., 2020; Pandey & Salin, 2022).
### Multiwavelength emission
Extragalactic emission in the gamma-ray band is a phenomenon related to the presence of relativistic jets (Dermer & Giebels, 2016). PBC J2333.9-2343 is detected by Fermi-LAT and reported in the fourth catalog of AGN detected by the Fermi Gamma-ray Space Telescope Large Area Telescope (4LAC), with an integrated flux of 3.9\(\pm\)0.5\(\times\)10\({}^{-12}\) erg s\({}^{-1}\)cm\({}^{-2}\) in the 0.1-100 GeV band between 2008 and 2016, and a spectral index of 2.42\(\pm\)0.12 (Ajello et al., 2020), in agreement with our results (see Sect. 2.6). Indeed, this source is listed as a remarkable 'other AGN' case because a definite classification was not possible. We note that the fact that gamma-ray emission was detected from this source with Fermi-LAT at a 6.2\(\sigma\) significance level is more likely indicative of a blazar than of a radio galaxy. Furthermore, our results show that this source has varied in the gamma-ray range, the number of photons being almost half in 2015 compared to the 2018-2019 data. This is in agreement with the variability results from the 4FGL-DR3, where a variability index of 30.814 is reported for this source (Abdollahi et al., 2022).
Figure 5: Distribution of the Damp Random Walk (DRW) parameters, \(\tau_{DRW}\) (in days) against \(SF_{\infty}\) (in \(mag_{S}\)), in logarithmic scale, in the g-filter for the ALeRCE training set sample of non-blazar AGN (red circles) and blazars (green triangles). The parameters for PBC J2333.9-2343 are marked as a blue cross.
Blazars characteristically show a two-hump structure (low and high energy), as we see for PBC J2333.9-2343. The low energy hump is well explained by synchrotron emission from ultrarelativistic electrons, whereas the high energy peak is usually interpreted as inverse Compton (IC) emission in the case of a pure leptonic scenario (Blandford & Konigl, 1979), or as dominated by high energy emission of ultra-relativistic protons in the case of the hadronic scenario (Dermer & Schlickeiser, 1993; Bottcher et al., 2013). Radio galaxies can also show a two-hump structure, the way to differentiate between a blazar and a radio galaxy being the jet angle (e.g., Ghisellini et al., 2005). In this work we used a one-zone leptonic model, where the high energy frequencies are well fitted by EC emission and some SSC contribution, with values of the model parameters within the blazar expectations, among them a jet angle of 3 degrees (see Fig. 4 and Table 2).
A previous SED fitting was presented in Hernandez-Garcia et al. (2017) using simultaneous _XMM-Newton_ data. The main difference between the SED fittings is that in Hernandez-Garcia et al. (2017) the SSC and EC are fitted separately, whereas here they are fitted simultaneously. Moreover, the photon contribution to the EC model in Hernandez-Garcia et al. (2017) comes from IC emission with seed photons from the torus, whereas in this work the contribution comes from IC emission with seed photons from the disk, the BLR and the torus.
In order to be able to compare the SEDs built in 2015 and 2018, we fitted both of them with the same model (see Section 3.3). The main result from this analysis is that the jet angle must be very close to the line of sight of the observer. For larger angles the Doppler boosting is smaller, and therefore, in order to properly fit the lower frequency peak of the SED, it becomes necessary either to increase the energy of the electron population or to increase the amount of electrons available (either by increasing the density or the size of the region). However, in both cases the high energy part of the SED model increases significantly, which results in a poor fit to the X-ray data. Therefore, we found it necessary to increase the Doppler boosting (lowering the angle) to obtain a good fit.
The advantage of the current SED is that more data points are available, including data in the gamma-rays, which allows a more robust fit. In particular, the fact that gamma-ray emission is detected and that the SED shows a two-peaked structure favors the blazar-like nature hypothesis, consistent with its variability pattern. It is worth remarking that, according to the SED modeling, the contribution of the disk and the torus in this object is negligible compared to the synchrotron emission, which would be responsible for almost all the optical and NIR emission, while the X-rays and gamma-rays can also be explained by emission from the jet, with a mild contribution of seed photons from the dusty torus.
Blazars are further classified into flat-spectrum radio quasars (FSRQ) and BL Lac (BL Lacertae being the prototype) based on various observational properties. BL Lacs show no (or weak) emission lines in their optical spectra, and the synchrotron peaks in the SED at frequencies \(>10^{14}\) Hz, whereas FSRQ do show emission lines and their synchrotron typically peaks at frequencies lower than \(10^{14}\) Hz (Abdo et al., 2010; Giommi et al., 2012; Padovani et al., 2017). The optical spectrum of PBC J2333.9-2343 shows prominent narrow and broad emission lines (Hernandez-Garcia et al., 2018) and the synchrotron peak in the SED is at \(<10^{14}\) Hz (Fig. 4), therefore we classify the nucleus of this galaxy as a FSRQ.
We would like to stress the peculiarities of this galaxy, with emphasis on the low gamma-ray luminosity ((3.10\(\pm\)0.02)\(\times 10^{43}\) erg \(s^{-1}\)) compared to other blazars (see Fig. 10 in Ajello et al., 2020), more typical of BL Lacs. It has also been suggested that BL Lacs are dominated by SSC whereas FSRQ are dominated by EC, in agreement with the classical FSRQ classification of PBC J2333.9-2343. However, there are examples in the literature of sources that can change from EC to SSC dominated, as for example 3C279 (Patino-Alvarez et al., 2018). In the recent work by Pei et al. (2022) the authors proposed, based on four physical parameters, a potential transition zone between BL Lacs and FSRQs where changing-look blazars may reside, and where PBC J2333.9-2343 should indeed be located. Other examples of sources showing observational characteristics changing between a BL Lac and a FSRQ include B3 1646+499 (Pajdosz-Smerciak et al., 2018) or B2 1420+32 (Mishra et al., 2021). Observing morphologically different AGN types simultaneously has been proposed to be the result of jet axis reorientation (Pajdosz-Smerciak et al., 2022). PBC J2333.9-2343 could also be an extreme case of an XRG, with the new jet pointing towards us and therefore preventing us from observing the X-shaped morphology.
### Radio morphology and long-term variability from the latest radio surveys and archives
Additional evidence in favour of the blazar nature of the core is the lack of clear jet emission connecting the lobes and the core region, presented in Bruni et al. (2020) with the GMRT at 150 MHz. To further confirm this result, we considered images of the source from recently released radio surveys: the VLASS at 3 GHz, and the RACS at 0.88 GHz. The latter has a sensitivity ten times better than the mentioned GMRT observations, and five times better than the NVSS. Moreover, the short baselines of ASKAP allow the recovery of extended structures up to 1 deg, making it suitable for the study of GRGs. The RACS image is presented in Figure 6: at a noise level of 215 \(\mu\)Jy/beam, and a resolution of \(\sim\)15\({}^{\prime\prime}\), no sign of connection between the lobes and the nucleus is visible, confirming previous
Figure 6: Image of PBC J2333.9-2343 from the RACS survey at 0.88 GHz. Contours are 3\(\times\)RMS\(\times\)(-1, 1, 2, 4, 8, 16, 32, 64, 128, 256). The HPBW (14.8\({}^{\prime\prime}\times\)13.5\({}^{\prime\prime}\)) is shown in the lower-left corner.
results. The shortest distance between the core and the first contour of the lobes is \(\sim\)3\({}^{\prime}\) (167 kpc at the redshift of the source), resulting in a missing association and the consequent absence of this source in the recent catalogue of GRGs from RACS (Andernach et al., 2021). The VLASS quick-look image, at an angular resolution of \(\sim\)2.8 arcsec and an RMS of 580 \(\mu\)Jy/beam, detects only the core. This is expected, since the VLA configuration used for the survey (B) only allows the recovery of structures with an angular size up to 1\({}^{\prime}\). However, the absence of a jet even in the regions closer to the core confirms the discontinuity visible at lower frequencies.
Finally, we investigated the long-term radio variability of the core by browsing the NRAO VLA archive survey (NVAS15, Crossley et al., 2007). We could collect images between 1983 and 2001, covering almost 20 years at 4.8 and 15 GHz. At 8.5 GHz, the observations span about 10 years (1990-1999). The highest flux density was recorded at 15 GHz during November 1984 (6.9\(\pm\)0.3 Jy), corresponding to a variability factor of \(\sim\)7 with respect to previous and subsequent epochs. A corresponding, although lower, peak is present at 4.8 GHz (a 200% increase, also in November 1984), confirming the variation. Only mild variations were recorded at these frequencies during later years. At 8.5 GHz, the absolute maximum was recorded in July 1991 (an increase by a factor of 2 with respect to the previous epoch), with milder variations during 1999. As a whole, these archival data confirm the pronounced variability discussed in this work and detected in the radio band with the Effelsberg observations on a shorter time window, as does the comparison with previous works, such as the 8 GHz flux reported by Ojha et al. (2004) and that by Hernandez-Garcia et al. (2017) with data obtained 11 years later.
Footnote 15: [http://www.vla.nrao.edu/astro/nvas/](http://www.vla.nrao.edu/astro/nvas/)
### A re-oriented jet or an intervening blazar in a GRG?
Another possibility to explain the behaviour of this source is to consider the presence of an intervening blazar along our line of sight. Looking at the number of hard X-ray selected blazars, N, as a function of flux, S (14-195 keV), i.e. the logN-logS (see figure 10 from Langejahn et al., 2020), and assuming the _Swift_/XRT error circle of 6 arcsec, the number of blazars similar to PBC J2333.9-2343 expected to fall in it is \(\sim\)6\(\times\)10\({}^{-5}\), indicating that a chance overlap is unlikely.
Generally in these cases there is an overlap of two different optical spectra with some peculiar composition, for instance lines at different redshifts, which should appear as double-peaked narrow emission lines. This is not observed in the spectra of PBC J2333.9-2343, even in data obtained with the Very Large Telescope at a dispersion of 0.3 A/pixel (Hernandez-Garcia et al., in prep.). Then, if there were two sources, we should assume that there is a broad-line radio galaxy with its optical spectrum, plus a blazar with a featureless blue spectrum embedded in the radio galaxy. However, no indication of a composite spectral continuum is found.
If an intervening blazar can be discarded, we can conclude that the GRG and the blazar nucleus in PBC J2333.9-2343 are part of the same galaxy. Then, the most plausible explanation is that the jet has changed its direction, as previously proposed in Hernandez-Garcia et al. (2017), making this an exceptional case of jet reorientation. These kinds of changes have already been proposed to explain XRGs. For example, Dennett-Thorpe et al. (2002) studied two XRGs (3C 223.1 and 3C 403) and explained their morphology in terms of a rapid realignment of the radio jet, considering a binary black hole merger or the acquisition of a smaller galaxy as likely candidates for the cause of the change of the jet axis, as they did not see any indication of merging. Similarly, Machalski et al. (2016) reported the case of 3C 293, which is very likely the result of a post-merger event with the galaxy UGC 8782. In that work they concluded that the jet axis flipped rapidly due to the tidal interaction of its merging process. In the particular case of PBC J2333.9-2343, we do not see the X-shape, but this can be explained because the new jets are, by chance, pointing towards us. The confirmation of the absence of emission between the nucleus and the lobes discussed in Sect. 4.4 strongly supports the idea of a re-oriented jet.
## 5 Conclusions
In this work we presented a contemporaneous multiwavelength monitoring of the nucleus of PBC J2333.9-2343 that covered two periods, between September 2018 and January 2019, and April-July 2019. Variations are found at all observed wavelengths at significances larger than 6\(\sigma\), except in the X-rays, where variations are detected at a 2\(\sigma\) confidence level within the four-month monitoring period. The observed variations occur on timescales shorter than a month and with amplitudes larger than a factor of two. The cross-correlation between the optical and NIR bands also shows that the variations occur simultaneously in these bands. When comparing the optical variability features with large samples of non-blazar AGN and blazars, PBC J2333.9-2343 shows characteristics more similar to the blazar population. According to these results, we interpret the observed variations as flaring events.
We constructed the SED, which we then fitted using a single-zone leptonic model. The SED shows two distinct peaks: the low energy one is well fitted by synchrotron emission, while the high energy peak is dominated by EC from the torus with some contribution from SSC. This SED was compared with the data already presented in Hernandez-Garcia et al. (2017) using VLBA and _XMM-Newton_, to which we added Fermi-LAT data. The jet angle in the fitted models is 3 degrees, indicative of a blazar.
These results and the gamma-ray detection at a 6\(\sigma\) confidence level strongly suggest the presence of a blazar-like nucleus at the center of PBC J2333.9-2343. This galaxy was previously classified as a GRG, suggestive of a change in the direction of the jet, as previously proposed in Hernandez-Garcia et al. (2017). Further evidence in agreement with this scenario is the fact that no connection between the nucleus and the lobes is observed in the deepest radio images, and that historical radio fluxes from the NRAO VLA archive survey revealed variations by a factor of seven about 30 years ago, confirming the pronounced variability in the radio at 20.4 GHz shown in the present work.
In the future we can use resources such as the ZTF or ATLAS alert streams to monitor this or other interesting sources and trigger other instrumentation at different wavebands when a flaring event is detected. In particular, ALeRCE has a watchlist service16 that notifies via email when a source from your own target list generates an alert.
Footnote 16: [https://watchlist.alerce.online/](https://watchlist.alerce.online/)
## Acknowledgements
We acknowledge funding from ANID programs: Millennium Science Initiative ICN12_009 (LHG, AMA, PSS, FEB, FF) and NCN19_058 (PA, PL); CATA-Basal - ACE210002 (FEB), FB210003 (FEB, FF) and BASAL project FB210005 (AMA); and FONDECYT Regular 1190818 (FEB) and 1200495 (FEB). G.B. and F.P. acknowledges financial support under the INTEGRAL ASI-INAF agreement 2019-35-HH.0 and ASI/INAF n. 2017-14-H.0. This work was partially supported by CONACyT (Consejo Nacional de Ciencia y Tecnologia) research grants 280789 (V.M.P.-a., VC, Mexico) and 320987 (V.M.P.-A., Mexico). This work is supported by the MPIfR-Mexico Max Planck Partner Group led by V.M.P.-A, and the MPA-Universidad de Valparaiso Max Planck Partner Group led by PA. VC acknowledges support from the Fulbright - Garcia Robles scholarship. This publication has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 730562 (RadioNet). This research has used data from the SMARTS-1.3m telescope, which is operated as part of the SMARTS Consortium, as part of the approved proposals CN2018B-19,and CN2019A-Fast Track (CNTAC). We acknowledge the use of public data from the _Swift_ data archive (Too ID 10939). Partly based on observations with the 100-m telescope of the MPIfR (Max-Planck-Institut fur Radioastronomie) at Effelsberg. We thank the staff of the Effelsberg-100m telescope, for making these observations possible (Proposal 15-18). The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI: Graham). This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S001609/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile. The ASKAP radio telescope is part of the Australia Telescope National Facility which is managed by Australia's national science agency, CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Research Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Research Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. This paper includes archived data obtained through the CSIRO ASKAP Science Data Archive, CASDA ([https://data.csiro.au](https://data.csiro.au)). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. 
CIRADA is funded by a grant from the Canada Foundation for Innovation 2017 Innovation Fund (Project 35999), as well as by the Provinces of Ontario, British Columbia, Alberta, Manitoba and Quebec. We thank the ALeRCE Broker for making their services public to the scientific community. In this work we used their Web Interface and the ZTF Forced Photometry Notebook.
## Data availability
The data underlying this article were accessed from the ZTF Forced Photometry Service ([https://zfweb.ipac.caltech.edu/cgi-bin/requestForcedPhotometry.cgi](https://zfweb.ipac.caltech.edu/cgi-bin/requestForcedPhotometry.cgi)), the ATLAS Forced Photometry Service ([https://fallingstar-data.com/forcedphot/](https://fallingstar-data.com/forcedphot/)), and the _Swift_ ([https://swift.gsfc.nasa.gov/](https://swift.gsfc.nasa.gov/)) and Fermi ([https://fermi.gsfc.nasa.gov/](https://fermi.gsfc.nasa.gov/)) archives. The Effelsberg, VLBA, SMARTS-1.3m data and the derived data generated in this research will be shared on reasonable request to the corresponding author.
|
2302.07858 | Solutions in $\mathbb{Z}[i]$ of $A^{5}+B^{5}=C^{5}\pm 1$ | We build solutions in $\mathbb{Z}[i]$ of $A^{5}+B^{5}=C^{5}\pm 1$ based on a
method developed by Michael D. Hirschhorn in his article "An Amazing Identity
of Ramanujan", from Mathematics Magazine, Vol. 68, No3 (June 1995). | Dominique Fosse | 2023-02-09T15:44:40Z | http://arxiv.org/abs/2302.07858v1 | # Solutions in \(\mathbb{Z}[i]\) of \(A^{5}+B^{5}=C^{5}\pm 1\)
###### Abstract
In this paper we build solutions in \(\mathbb{Z}[i]\) of \(A^{5}+B^{5}=C^{5}\pm 1\), adapting the method that Michael D. Hirschhorn developed around Ramanujan's identity \((a^{2}+7ab-9b^{2})^{3}+(2a^{2}-4ab+12b^{2})^{3}=(2a^{2}+10b^{2})^{3}+(a^{2}-9ab-b^{2})^{3}\) in "An Amazing Identity of Ramanujan" (Mathematics Magazine, Vol. 68, No. 3, June 1995).
Thus, equation (1) becomes \(A_{n}^{5}+B_{n}^{5}=C_{n}^{5}+d_{n}^{5}\). Now,
\[d_{n}=F_{n+1}^{2}+2F_{n+1}F_{n}-2F_{n}^{2} =F_{n+1}^{2}-F_{n}\left(-2F_{n+1}+2F_{n}\right)\] \[=F_{n+1}^{2}-F_{n}F_{n+2}\qquad\text{using equation (2)}\] \[=2^{n}(-1)^{n}\qquad\text{using equation (4)}\]
With \(z\in\mathbb{C}\), \(|z|<1\), it's easy enough to calculate the following, from equations (5), (6) and (7):
\[\sum_{n\geq 0}A_{n}z^{n} =\frac{4z^{2}+1}{(2z+1)(4z^{2}-8z+1)}\] \[\sum_{n\geq 0}B_{n}z^{n} =\frac{-4z}{(2z+1)(4z^{2}-8z+1)}+i\left(\frac{1-2z}{4z^{2}-8z+1}\right)\] \[\sum_{n\geq 0}C_{n}z^{n} =\frac{4z}{(2z+1)(4z^{2}-8z+1)}+i\left(\frac{1-2z}{4z^{2}-8z+1}\right)\]
And this satisfies \(A_{n}^{5}+B_{n}^{5}=C_{n}^{5}+2^{5n}(-1)^{n}\implies\left(\frac{A_{n}}{2^{n}} \right)^{5}+\left(\frac{B_{n}}{2^{n}}\right)^{5}=\left(\frac{C_{n}}{2^{n}} \right)^{5}+(-1)^{n}\). Let's take \(x:=2z\) and \(\forall n\in\mathbb{N}\), \(a_{n}:=\frac{A_{n}}{2^{n}}\), \(b_{n}:=\frac{B_{n}}{2^{n}}\) and \(c_{n}:=\frac{C_{n}}{2^{n}}\). It follows that
\[\boxed{\sum_{n\geq 0}a_{n}x^{n}=\frac{x^{2}+1}{(x+1)(x^{2}-4x+1)}}\] \[\sum_{n\geq 0}b_{n}x^{n}=\frac{-2x}{(x+1)(x^{2}-4x+1)}+i\left( \frac{1-x}{x^{2}-4x+1}\right)\] \[\sum_{n\geq 0}c_{n}x^{n}=\frac{2x}{(x+1)(x^{2}-4x+1)}+i\left( \frac{1-x}{x^{2}-4x+1}\right)\] \[\implies a_{n}^{5}+b_{n}^{5}=c_{n}^{5}+(-1)^{n}\]
For example,
\[n:=1 \implies 3^{5}+(-2+3i)^{5}=(2+3i)^{5}-1\] \[n:=2 \implies 13^{5}+(-6+11i)^{5}=(6+11i)^{5}+1\] \[n:=3 \implies 47^{5}+(-24+41i)^{5}=(24+41i)^{5}-1\]
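As a quick sanity check, the identity \(a_{n}^{5}+b_{n}^{5}=c_{n}^{5}+(-1)^{n}\) can be verified numerically. The sketch below is our own (the helper sequences `p` and `e` are our names, not the paper's); their linear recurrences are read off the denominators of the generating functions above.

```python
# p: coefficients of 1/((x+1)(x^2-4x+1)) = 1/(1 - 3x - 3x^2 + x^3)
# e: coefficients of (1-x)/(x^2-4x+1)
N = 8
p = [1, 3, 12]
e = [1, 3]
for n in range(3, N):
    p.append(3 * p[-1] + 3 * p[-2] - p[-3])
for n in range(2, N):
    e.append(4 * e[-1] - e[-2])
for n in range(N):
    a = p[n] + (p[n - 2] if n >= 2 else 0)      # a_n (numerator x^2 + 1)
    t = 2 * p[n - 1] if n >= 1 else 0
    b, c = complex(-t, e[n]), complex(t, e[n])  # b_n and c_n
    lhs, rhs = a**5 + b**5, c**5 + (-1)**n
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs))
```

For \(n=1,2,3\) this reproduces the three examples displayed above.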
|
2305.05030 | Adaptive Cross Tubal Tensor Approximation | In this paper, we propose a new adaptive cross algorithm for computing a low
tubal rank approximation of third-order tensors, with less memory and lower
computational complexity than the truncated tensor SVD (t-SVD). This makes it
applicable for decomposing large-scale tensors. We conduct numerical
experiments on synthetic and real-world datasets to confirm the efficiency and
feasibility of the proposed algorithm. The simulation results show more than
one order of magnitude acceleration in the computation of low tubal rank
(t-SVD) for large-scale tensors. An application to pedestrian attribute
recognition is also presented. | Salman Ahmadi-Asl, Anh Huy Phan, Andrzej Cichocki, Anastasia Sozykina, Zaher Al Aghbari, Jun Wang, Ivan Oseledets | 2023-05-08T20:17:50Z | http://arxiv.org/abs/2305.05030v2 | # Adaptive Cross Tubal Tensor Approximation
###### Abstract
In this paper, we propose a new adaptive cross algorithm for computing a low tubal rank approximation of third-order tensors, with less memory and lower computational complexity than the truncated tensor SVD (t-SVD). This makes it applicable for decomposing large-scale tensors. We conduct numerical experiments on synthetic and real-world datasets to confirm the efficiency and feasibility of the proposed algorithm. The simulation results show more than one order of magnitude acceleration in the computation of low tubal rank (t-SVD) for large-scale tensors. An application to pedestrian attribute recognition is also presented.
keywords: Cross tensor approximation, tensor SVD, tubal product Msc: 15A69, 46N40, 15A23 +
Footnote †: journal: Computer Science
## 1 Introduction
Tensors are high-dimensional generalizations of matrices and vectors. Contrary to the rank of matrices, the rank of tensors is not uniquely defined and depends on the chosen decomposition. Different types of tensor decompositions have different rank definitions, such as the Tensor Train (TT) [1], the Tucker decomposition [2] and its special case, the Higher Order SVD (HOSVD) [3], the CANDECOMP/PARAFAC decomposition (CPD) [4; 5], the Block Term decomposition [6], the Tensor Train/Tensor Ring (TT-TR) decomposition [1; 7; 8], and the tubal SVD (t-SVD) [9]. The t-SVD factorizes a tensor into three tensors: two orthogonal tensors and one f-diagonal tensor (to be discussed in Section 3). Like the SVD for matrices, the truncated version of the t-SVD provides the best tubal rank approximation for every unitary invariant tensor norm. The t-SVD has been successfully applied in
deep learning [10; 11], tensor completion [12; 13], image reconstruction [14] and tensor compression [15].
Decomposing big data tensors into the t-SVD format is a challenging task, especially when the data is extremely massive and we cannot view the entire data tensor. The cross, skeleton, or CUR approximation is a useful paradigm widely used for fast low-rank matrix approximation. Achieving a higher compression ratio and improving data interpretability are further motivations to use cross approximation methods. The main feature of the cross algorithms that makes them effective for managing very large-scale data tensors is their ability to use less memory and incur lower computational complexity. Regarding the higher compression capacity, for instance, the cross matrix approximation preserves sparsity in its factor matrices, whereas the SVD of a sparse matrix fails to do so; this results in a more compact data structure. It is also known that cross approximations can provide more interpretable approximations; we refer to [16] for more details.
Due to the mentioned motivations, the cross matrix approximation [17; 18] has been generalized to different types of tensor decompositions such as the TT-Cross [19], Cross-3D [20], FSTD [21], and tubal Cross [22]. The cross matrix approximation is generalized to the tensor case based on the tubal product (t-product) in [22] where some individual lateral and horizontal slices are selected and based on them a low tubal rank approximation is computed. The main drawback of this approach is its dependency on the tubal rank estimation, which may be a difficult task in real-world applications. To tackle this problem, we propose to generalize the adaptive cross matrix approximation [23; 24; 25] to tensors based on the t-product. The idea is to select one actual lateral slice and one actual horizontal slice at each iteration and adaptively check the tubal rank of the tensor.
The generalization of the adaptive cross matrix approximation to tensors based on the t-product is an interesting problem, and in this paper we discuss how to perform it properly. The novelties of this work include:
* A new adaptive tubal tensor approximation algorithm, which estimates the tubal rank and computes the corresponding low tubal rank approximation. The proposed algorithm does not need to use the whole data tensor and at each iteration works only on a part of the horizontal and lateral slices. This facilitates handling large-scale tensors.
* Presenting an application in pedestrian attribute recognition.
The rest of the paper is structured as follows. The basic definitions are given
in Section 2. The t-SVD model is introduced in Section 3. The cross matrix approximation and its adaptive version are discussed in Section 4. Section 5 shows how to generalize the adaptive cross approximation to the tensor case based on the t-product. We compare the computational complexity of the algorithms in Section 6. The experimental results are presented in Section 7, and Section 8 concludes the paper and presents potential future directions.
## 2 Preliminaries
The key notations and concepts used in the rest of the paper are introduced in this section. A tensor, a matrix and a vector are denoted by an underlined bold capital letter, a bold capital letter and a bold lower case letter, respectively. Slices are subtensors generated by fixing all but two modes. Our work is for real-valued third-order tensors, but the generalization to complex and higher-order tensors is also straightforward. For a third-order tensor \(\underline{\mathbf{X}}\), the three types of slices \(\underline{\mathbf{X}}(:,:,k),\ \underline{\mathbf{X}}(:,j,:),\ \underline{\mathbf{X}}(i,:,:)\) are called frontal, lateral and horizontal slices. For a third-order tensor \(\underline{\mathbf{X}}\), the three types of fibers \(\underline{\mathbf{X}}(:,j,k),\ \underline{\mathbf{X}}(i,:,k),\ \underline{\mathbf{X}}(i,j,:)\) are called columns, rows and tubes. The notation "\(\mathrm{conj}\)" means the complex conjugate of all elements (complex numbers) of a matrix. The notations \(\mathbf{X}_{(:,-j)}\) and \(\mathbf{X}_{(-i,:)}\) are used to denote the sub-matrices of the matrix \(\mathbf{X}\) with the \(j\)-th column and the \(i\)-th row removed. The Frobenius norm of tensors/matrices is denoted by \(\|.\|_{F}\) and the Euclidean norm of a vector is shown by \(\|.\|_{2}\). The notation \(|.|\) stands for the absolute value of a real number. We need the subsequent definitions to introduce the tensor SVD (t-SVD) model.
**Definition 1**.: (t-product) Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{2}\times I_{4}\times I_{3}}\), the t-product \(\underline{\mathbf{X}}*\underline{\mathbf{Y}}\in\mathbb{R}^{I_{1}\times I_{4} \times I_{3}}\) is defined as follows
\[\underline{\mathbf{C}}=\underline{\mathbf{X}}*\underline{\mathbf{Y}}=\mathrm{ fold}\left(\mathrm{circ}\left(\underline{\mathbf{X}}\right)\mathrm{unfold}\left( \underline{\mathbf{Y}}\right)\right), \tag{1}\]
where
\[\mathrm{circ}\left(\underline{\mathbf{X}}\right)=\begin{bmatrix}\mathbf{X}^{ (1)}&\mathbf{X}^{(I_{3})}&\cdots&\mathbf{X}^{(2)}\\ \mathbf{X}^{(2)}&\mathbf{X}^{(1)}&\cdots&\mathbf{X}^{(3)}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{X}^{(I_{3})}&\mathbf{X}^{(I_{3}-1)}&\cdots&\mathbf{X}^{(1)}\end{bmatrix},\]
and
\[\mathrm{unfold}(\underline{\mathbf{Y}})=\begin{bmatrix}\mathbf{Y}^{(1)}\\ \mathbf{Y}^{(2)}\\ \vdots\\ \mathbf{Y}^{(I_{3})}\end{bmatrix},\ \ \ \ \underline{\mathbf{Y}}=\mathrm{fold}\left(\mathrm{unfold}\left( \underline{\mathbf{Y}}\right)\right).\]
Here, \(\mathbf{X}^{(i)}=\underline{\mathbf{X}}(:,:,i)\) and \(\mathbf{Y}^{(i)}=\underline{\mathbf{Y}}(:,:,i)\) for \(i=1,2,\ldots,I_{3}\).
We denote by \(\widehat{\mathbf{X}}\), the Fourier transform of \(\underline{\mathbf{X}}\) along its third mode, which can be computed as \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}(\underline{\mathbf{X}},[],3)\). It is known that the block circulant matrix, \(\mathrm{circ}(\underline{\mathbf{X}})\in\mathbb{R}^{I_{1}I_{3}\times I_{2}I_{3}}\), can be block diagonalized, i.e.,
\[(\mathbf{F}_{I_{3}}\otimes\mathbf{I}_{I_{1}})\,\mathrm{circ}\,(\underline{ \mathbf{X}})(\mathbf{F}_{I_{3}}^{-1}\otimes\mathbf{I}_{I_{2}})=\widehat{ \mathbf{X}}, \tag{2}\]
where \(\mathbf{F}_{I_{3}}\in\mathbb{C}^{I_{3}\times I_{3}}\) is the discrete Fourier transform matrix and \((\mathbf{F}_{I_{3}}\otimes\mathbf{I}_{I_{1}})/\sqrt{I_{3}}\) is a unitary matrix. Here, the block diagonal matrix \(\widehat{\mathbf{X}}\) is
\[\widehat{\mathbf{X}}=\begin{bmatrix}\widehat{\underline{\mathbf{X}}}(:,:,1) \\ &\widehat{\underline{\mathbf{X}}}(:,:,2)\\ &&\ddots\\ &&&\widehat{\underline{\mathbf{X}}}(:,:,I_{3})\end{bmatrix}, \tag{3}\]
and we have the following important properties [26, 27]
\[\widehat{\underline{\mathbf{X}}}(:,:,1) \in \mathbb{R}^{I_{1}\times I_{2}}, \tag{4}\] \[\mathrm{conj}(\widehat{\underline{\mathbf{X}}}(:,:,i)) = \widehat{\underline{\mathbf{X}}}(:,:,I_{3}-i+2), \tag{5}\]
for \(i=2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil+1\). The t-product can be equivalently performed in the Fourier domain. Indeed, let \(\underline{\mathbf{C}}=\underline{\mathbf{X}}*\underline{\mathbf{Y}}\), then from the definition of the t-product and the fact that the block circulant matrix can be block diagonalized, we have
\[\begin{array}{rcl}\mathrm{unfold}(\underline{\mathbf{C}})&=&\mathrm{circ}(\underline{\mathbf{X}})\,\mathrm{unfold}(\underline{\mathbf{Y}})\\ &=&(\mathbf{F}_{I_{3}}^{-1}\otimes\mathbf{I}_{I_{1}})((\mathbf{F}_{I_{3}}\otimes\mathbf{I}_{I_{1}})\,\mathrm{circ}\,(\underline{\mathbf{X}})(\mathbf{F}_{I_{3}}^{-1}\otimes\mathbf{I}_{I_{2}}))\\ &&((\mathbf{F}_{I_{3}}\otimes\mathbf{I}_{I_{2}})\,\mathrm{unfold}(\underline{\mathbf{Y}}))\\ &=&(\mathbf{F}_{I_{3}}^{-1}\otimes\mathbf{I}_{I_{1}})\,\widehat{\mathbf{X}}\,\mathrm{unfold}(\widehat{\underline{\mathbf{Y}}}),\end{array} \tag{6}\]
where \(\widehat{\underline{\mathbf{Y}}}=\mathrm{fft}(\underline{\mathbf{Y}},[],3)\). If we multiply both sides of (6) from the left-hand side with \((\mathbf{F}_{I_{3}}\otimes\mathbf{I}_{I_{1}})\), we get \(\mathrm{unfold}(\widehat{\underline{\mathbf{C}}})=\widehat{\mathbf{X}}\,\mathrm{unfold}(\widehat{\underline{\mathbf{Y}}})\), where \(\widehat{\underline{\mathbf{C}}}=\mathrm{fft}(\underline{\mathbf{C}},[],3)\). This means that \(\widehat{\underline{\mathbf{C}}}(:,:,i)=\widehat{\underline{\mathbf{X}}}(:,:,i)\,\widehat{\underline{\mathbf{Y}}}(:,:,i)\). So, it suffices to transform the two given tensors into the Fourier domain and multiply their frontal slices. Then, the resulting tensor in the Fourier domain is transformed back to the original space via the inverse FFT. Note that, due to the equations in (4)-(5), half of the computations are avoided. This procedure is summarized in Algorithm 1.
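For illustration, here is a minimal NumPy sketch of this Fourier-domain t-product (our own Python analogue of Algorithm 1, not the authors' MATLAB code; for simplicity it multiplies all frontal slices instead of exploiting the conjugate symmetry (4)-(5)):

```python
import numpy as np

def t_product(X, Y):
    """t-product of X (I1 x I2 x I3) with Y (I2 x I4 x I3) via the FFT along mode 3."""
    Xh = np.fft.fft(X, axis=2)
    Yh = np.fft.fft(Y, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Xh, Yh)  # frontal-slice-wise matrix products
    return np.fft.ifft(Ch, axis=2).real     # real again for real-valued inputs
```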
**Definition 2**.: (Transpose) The transpose of a tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is denoted by \(\underline{\mathbf{X}}^{T}\in\mathbb{R}^{I_{2}\times I_{1}\times I_{3}}\) and is produced by applying the transpose to all frontal slices of the tensor \(\underline{\mathbf{X}}\) and reversing the order of the second until the last transposed frontal slices.
**Definition 3**.: (Identity tensor) Identity tensor \(\underline{\mathbf{I}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is a tensor whose first frontal slice is an identity matrix of size \(I_{1}\times I_{1}\) and all other frontal slices are zero. It is easy to show \(\underline{\mathbf{I}}*\underline{\mathbf{X}}=\underline{\mathbf{X}}\) and \(\underline{\mathbf{X}}*\underline{\mathbf{I}}=\underline{\mathbf{X}}\) for all tensors of conforming sizes.
**Definition 4**.: (Orthogonal tensor) A tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is orthogonal (under t-product operator) if \(\underline{\mathbf{X}}^{T}*\underline{\mathbf{X}}=\underline{\mathbf{X}}* \underline{\mathbf{X}}^{T}=\underline{\mathbf{I}}\).
**Definition 5**.: (f-diagonal tensor) If all frontal slices of a tensor are diagonal then the tensor is called an f-diagonal tensor.
**Definition 6**.: (Inverse of a tensor) The inverse of a tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\), denoted by \(\underline{\mathbf{X}}^{-1}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\), is the unique tensor satisfying \(\underline{\mathbf{X}}*\underline{\mathbf{X}}^{-1}=\underline{\mathbf{X}}^{-1}*\underline{\mathbf{X}}=\underline{\mathbf{I}}\), where \(\underline{\mathbf{I}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is the identity tensor. The inverse of a tensor can also be computed in the Fourier domain, as described in Algorithm 2. The MATLAB command "inv" in Line 3 computes the inverse of a matrix. The Moore-Penrose (MP) inverse of a tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is denoted by \(\underline{\mathbf{X}}^{\dagger}\in\mathbb{R}^{I_{2}\times I_{1}\times I_{3}}\) and can be computed by Algorithm 2 where "inv" is replaced with the MATLAB function "pinv". Here, "pinv" stands for the MP inverse of a matrix.
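The same slice-wise pattern yields a NumPy sketch of the Moore-Penrose inverse, continuing the `t_product` sketch above (our own illustration, without the half-spectrum shortcut used in Algorithm 2; replacing `pinv` by `inv` gives the inverse of Definition 6):

```python
def t_pinv(X):
    """MP inverse of X under the t-product: slice-wise pinv in the Fourier domain."""
    Xh = np.fft.fft(X, axis=2)
    Ph = np.stack([np.linalg.pinv(Xh[:, :, k]) for k in range(X.shape[2])], axis=2)
    return np.fft.ifft(Ph, axis=2).real
```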
## 3 Tensor SVD (t-SVD)
The tensor SVD (t-SVD) represents a tensor as the t-product of three tensors. The first and last tensors are orthogonal, while the middle tensor is an f-diagonal tensor. To be more precise, let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\); then the t-SVD of the tensor \(\underline{\mathbf{X}}\) is \(\underline{\mathbf{X}}=\underline{\mathbf{U}}*\underline{\mathbf{S}}*\underline{\mathbf{V}}^{T},\) where \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}}\) and \(\underline{\mathbf{V}}\in\mathbb{R}^{I_{2}\times R\times I_{3}}\) are orthogonal tensors and the tensor \(\underline{\mathbf{S}}\in\mathbb{R}^{R\times R\times I_{3}}\) is f-diagonal [9; 28]; see Figure 1 for an illustration of the t-SVD and its truncated version. Note that Algorithm 3 only needs the truncated SVD of the \(\lceil\frac{I_{3}+1}{2}\rceil\) first frontal slices. The generalization of the t-SVD to tensors of order higher than three is done in [29]. Other types of classical matrix decompositions such as QR and LU decompositions can be generalized based on the t-product in straightforward ways.
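A compact NumPy sketch of the truncated t-SVD and the tensor transpose follows (our own illustration of Algorithm 3 and Definition 2, continuing the sketches above; unlike Algorithm 3, it decomposes every frontal slice, and it assumes \(R\leq\min(I_{1},I_{2})\)):

```python
def t_transpose(X):
    """Tensor transpose of Definition 2."""
    Xt = np.transpose(X, (1, 0, 2))
    # keep the first frontal slice, reverse the order of slices 2..I3
    return np.concatenate([Xt[:, :, :1], Xt[:, :, :0:-1]], axis=2)

def t_svd(X, R):
    """Truncated t-SVD sketch: returns U, S, V with X ~ U * S * V^T of tubal rank R."""
    I1, I2, I3 = X.shape
    Xh = np.fft.fft(X, axis=2)
    Uh = np.empty((I1, R, I3), dtype=complex)
    Sh = np.zeros((R, R, I3), dtype=complex)
    Vh = np.empty((I2, R, I3), dtype=complex)
    for k in range(I3):
        u, s, vh = np.linalg.svd(Xh[:, :, k], full_matrices=False)
        Uh[:, :, k], Vh[:, :, k] = u[:, :R], vh[:R, :].conj().T
        Sh[:, :, k] = np.diag(s[:R])
    ifft = lambda T: np.fft.ifft(T, axis=2).real
    return ifft(Uh), ifft(Sh), ifft(Vh)
```

With these helpers, `t_product(t_product(U, S), t_transpose(V))` approximates \(\underline{\mathbf{X}}\).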
The computational complexity of Algorithm 3 is dominated by the FFT of all tubes of an input tensor and also the truncated SVD of the frontal slices in
```
Input : Two data tensors \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{2}\times I_{4}\times I_{3}}\) Output : t-product \(\underline{\mathbf{C}}=\underline{\mathbf{X}}\ast\underline{\mathbf{Y}}\in \mathbb{R}^{I_{1}\times I_{4}\times I_{3}}\)
1\(\underline{\mathbf{X}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2\(\underline{\mathbf{Y}}=\mathrm{fft}\left(\underline{\mathbf{Y}},[],3\right)\);
3for\(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\)do
4\(\underline{\mathbf{C}}\left(.;.;i\right)=\underline{\mathbf{X}}\left(.;.;i\right) \)\(\underline{\mathbf{\hat{Y}}}\left(.;.;i\right)\);
5
6 end for
7for\(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\)do
8\(\underline{\mathbf{C}}\left(.;.;i\right)=\mathrm{conj}(\underline{\mathbf{ \hat{C}}}\left(.;.;I_{3}-i+2\right))\);
9
10 end for
11\(\underline{\mathbf{C}}=\mathrm{ifft}\left(\underline{\mathbf{C}},[],3\right)\);
```
**Algorithm 1**Fast t-product of two tensors [9, 27]
```
Input : The data tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) Output : Tensor inverse \(\underline{\mathbf{X}}^{-1}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\)
1\(\underline{\mathbf{X}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2for\(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\)do
3\(\underline{\mathbf{C}}\left(.;.;i\right)=\mathrm{inv}\left(\widehat{\mathbf{X }}(.;.,i)\right)\);
4
5 end for
6for\(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\)do
7\(\underline{\mathbf{C}}\left(.;.;i\right)=\mathrm{conj}(\underline{\mathbf{C}} \left(.;.;I_{3}-i+2\right))\);
8
9 end for
10\(\underline{\mathbf{X}}^{-1}=\mathrm{ifft}\left(\underline{\mathbf{C}},[],3\right)\);
```
**Algorithm 2**Fast inverse computation of the tensor \(\underline{\mathbf{X}}\)
the Fourier domain. In the literature, some algorithms have been developed to accelerate these computations. For example, using the idea of randomization, we can replace the classical truncated SVD with more efficient and faster approaches such as the randomized SVD [31; 16] or the cross matrix approximation. Although this idea can partially alleviate the computational difficulty of Algorithm 3, we still need to access all elements of the underlying data tensor. For very big data tensors, where even a single pass over the data is prohibitive, it is necessary to develop algorithms that only use a part of the data tensor at each iteration. In this paper, we follow this idea and propose an efficient algorithm for the computation of the t-SVD, which uses only a part of the lateral and horizontal slices of a tensor at each iteration. This significantly accelerates the computations; in some of our simulations, we achieved almost two orders of magnitude of acceleration, which demonstrates the performance of the proposed algorithm. To the best of our knowledge, this is the first adaptive cross algorithm developed for the computation of the t-SVD.
## 4 Matrix cross approximation and its adaptive version
The cross matrix approximation was first proposed in [17] for fast low-rank approximation of matrices. It provides a low-rank matrix approximation based on some actual columns and rows of the original matrix. It has been shown that a cross approximation with the maximum volume of the intersection matrix leads to close to optimal approximation [18]. The adaptive cross approximation
Figure 1: a) Tensor SVD (t-SVD) of the tensor \(\underline{\mathbf{X}}\), b) The truncated t-SVD of the tensor \(\underline{\mathbf{X}}\) for the tubal rank \(R\)[30].
```
Input : The data tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and a target tubal rank \(R\) Output : The truncated t-SVD of the tensor \(\underline{\mathbf{X}}\)
1\(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}}, \left[\!\left.\right|\!\right],3\right)\);
2for\(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\)do
3\(\left[\widehat{\underline{\mathbf{U}}}\left(.;.;i\right),\widehat{\underline{ \mathbf{S}}}(.;.;i),\widehat{\mathbf{V}}(.;.;i)\right]=\mathrm{svds}\left( \widehat{\mathbf{X}}(.;.;i),R\right)\);
4 end for
5for\(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\)do
6\(\widehat{\underline{\mathbf{U}}}\left(.;.;i\right)=\mathrm{conj}(\widehat{ \underline{\mathbf{U}}}\left(.;.;I_{3}-i+2\right))\);
7\(\widehat{\underline{\mathbf{S}}}\left(.;.;i\right)=\widehat{\underline{\mathbf{S} }}\left(.;.;I_{3}-i+2\right)\);
8\(\widehat{\underline{\mathbf{V}}}\left(.;.;i\right)=\mathrm{conj}(\widehat{ \underline{\mathbf{V}}}\left(.;.;I_{3}-i+2\right))\);
9 end for
10\(\underline{\mathbf{U}}_{R}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{U}}}, \left[\!\left.\right|\!\right],3\right)\); \(\underline{\mathbf{S}}_{R}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{S} }},\left[\!\left.\right|\!\right],3\right)\); \(\underline{\mathbf{V}}_{R}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{V} }},\left[\!\left.\right|\!\right],3\right)\)
```
**Algorithm 3**The truncated t-SVD decomposition of the tensor \(\underline{\mathbf{X}}\)
Figure 2: a) One stage of the adaptive cross matrix approximation for rank reduction. The corresponding column and row of the residual matrix \(\mathbf{Z},\) with the same indices as the selected column and row of the original data matrix \(\mathbf{X}\) become zeros, i.e. \(\mathrm{rank}(\mathbf{X}-\mathbf{Y})=\mathrm{rank}(\mathbf{X})-1.\) The rank one matrix approximation \(\mathbf{Y}\) interpolates \(\mathbf{X}\) at the selected column and row. b) One stage of the cross tubal approximation for the tubal rank reduction. The corresponding lateral and horizontal slices of the residual tensor \(\underline{\mathbf{Z}}\), with the same indices as the selected lateral and horizontal slices of the original data tensor \(\underline{\mathbf{X}}\) become zeros, \(\mathrm{rank}(\underline{\mathbf{X}}-\underline{\mathbf{Y}})=\mathrm{rank}( \underline{\mathbf{X}})-1\). The tubal rank one tensor approximation \(\underline{\mathbf{Y}}\) interpolates \(\underline{\mathbf{X}}\) at the selected lateral and horizontal slices.
or Cross2D algorithm [32; 33; 23; 24; 25] sequentially selects a column and a row of the original data matrix and, based on them, computes a rank-1 matrix scaled by the intersection element, as stated in the following theorem; this is precisely the Gaussian elimination process.
**Theorem 1**.: (Rank-1 deflation) Let \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) be a given matrix, and we select the \(i\)-th row and the \(j\)-th column with the nonzero intersection element \(\mathbf{X}(i,j)\). Then the following residual matrix
\[\mathbf{Y}=\mathbf{X}-\frac{1}{\mathbf{X}(i,j)}\mathbf{X}(:,j)\mathbf{X}(i,:),\]
vanishes at the \(i\)-th row and \(j\)-th column, so \(\operatorname{rank}(\mathbf{Y})=\operatorname{rank}(\mathbf{X})-1\).
Proof.: It is obvious that the \(j\)-th column and \(i\)-th row of \(\mathbf{Y}\) are zeros
\[\mathbf{Y}(:,j)=\mathbf{X}(:,j)-\frac{1}{\mathbf{X}(i,j)}\mathbf{ X}(:,j)\mathbf{X}(i,j)=0, \tag{7}\] \[\mathbf{Y}(i,:)=\mathbf{X}(i,:)-\frac{1}{\mathbf{X}(i,j)}\mathbf{ X}(i,j)\mathbf{X}(i,:)=0. \tag{8}\]
To prove Theorem 1, we consider two cases:
1. If \(\mathbf{X}\) is of full-rank, that is, either \(\mathbf{X}(:,j)\notin\operatorname{range}\left(\mathbf{X}_{(:,-j)}\right)\) or \(\mathbf{X}(i,:)\notin\operatorname{range}\left(\mathbf{X}_{(-i,:)}\right)\), then it is straightforward that \(\mathbf{Y}\) has smaller rank than \(\mathbf{X}\).
2. Otherwise, we consider the case \(\mathbf{X}\) is rank-deficient and \(\mathbf{X}(:,j)\) and \(\mathbf{X}(i,:)\) are non zero vectors that \(\mathbf{X}(i,:)\in\operatorname{range}\left(\mathbf{X}_{(-i,:)}\right)\) and \(\mathbf{X}(:,j)\in\operatorname{range}\left(\mathbf{X}_{(:,-j)}\right)\). Without loss of generality, we assume that \(\mathbf{X}(:,j)\) and \(\mathbf{X}(i,:)\) are the last column and the last row of the matrix \(\mathbf{X}\), i.e. \(i=I_{1},\,j=I_{2}\), (see illustration in Figure 3) and \(\operatorname{rank}(\mathbf{X})<\min(I_{1},I_{2})\). Since \(\mathbf{X}(i,:)\in\operatorname{range}\left(\mathbf{X}_{(-i,:)}\right)\) and \(\mathbf{X}(:,j)\in\operatorname{range}\left(\mathbf{X}_{(:,-j)}\right)\), there exist linear combinations such that \[\mathbf{X}(:,j)=\begin{bmatrix}\mathbf{a}\\ c\end{bmatrix}=\begin{bmatrix}\mathbf{Z}\\ \mathbf{b}^{T}\end{bmatrix}\boldsymbol{\alpha},\qquad\mathbf{X}(i,:)^{T}= \begin{bmatrix}\mathbf{b}\\ c\end{bmatrix}=\begin{bmatrix}\mathbf{Z}^{T}\\ \mathbf{a}^{T}\end{bmatrix}\boldsymbol{\beta},\] (9) where \(\boldsymbol{\alpha}\neq 0,\,\boldsymbol{\beta}\neq 0\). This gives \(c=\boldsymbol{\beta}^{T}\mathbf{Z}\boldsymbol{\alpha}\). The rank-1 matrix deflation yields \[\mathbf{Y}=\mathbf{X}-\frac{1}{c}\begin{bmatrix}\mathbf{a}\\ c\end{bmatrix}\begin{bmatrix}\mathbf{b}^{T}&c\end{bmatrix}=\begin{bmatrix} \mathbf{Z}-\frac{1}{c}\mathbf{a}\mathbf{b}^{T}&\boldsymbol{0}\\ \boldsymbol{0}&0\end{bmatrix}.\] (10)
where the top-left submatrix of \(\mathbf{Y}\) has rank-1 reduction from \(\mathbf{Z}\)
\[\mathbf{W}=\mathbf{Z}-\frac{1}{c}\mathbf{a}\mathbf{b}^{T}=\mathbf{Z}-\frac{1}{c }\mathbf{Z}\boldsymbol{\alpha}\,\boldsymbol{\beta}^{T}\mathbf{Z}. \tag{11}\]
We next substitute \(\mathbf{Z}\) by its singular value decomposition \(\mathbf{Z}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}\), where \(\mathbf{S}\) is a diagonal matrix of positive singular values of \(\mathbf{Z}\) and consider
\[\mathbf{W}=\mathbf{U}\left(\underbrace{\mathbf{S}-\frac{1}{c}\mathbf{S}( \mathbf{V}^{T}\boldsymbol{\alpha})(\boldsymbol{\beta}^{T}\mathbf{U})\mathbf{S} }_{\mathbf{K}}\right)\mathbf{V}^{T}. \tag{12}\]
Assume \(\mathbf{d}=\mathbf{V}^{T}\boldsymbol{\alpha}\) and \(\mathbf{e}=\mathbf{U}^{T}\boldsymbol{\beta}\), then from (12), we have
\[\mathbf{K}=\mathbf{S}-\frac{\mathbf{S}\mathbf{d}\mathbf{e}^{T}\mathbf{S}}{ \mathbf{e}^{T}\mathbf{S}\mathbf{d}}. \tag{13}\]
It is straightforward to see that
\[\mathbf{S}^{-1/2}\mathbf{K}\mathbf{S}^{-1/2}=\mathbf{I}-\frac{\mathbf{u} \mathbf{v}^{T}}{\mathbf{v}^{T}\mathbf{u}}, \tag{14}\]
where \(\mathbf{u}=\mathbf{S}^{1/2}\mathbf{d}\) and \(\mathbf{v}=\mathbf{S}^{1/2}\mathbf{e}\). Now, it is readily seen that \(\mathbf{K}\) has a zero singular value, i.e., its rank is reduced from the rank of \(\mathbf{S}\) by 1. Since the multiplication with orthogonal matrices does not change the matrix rank, the proof of the theorem is completed.
**Remark 2**.: An alternative proof for Theorem 1 adopts the following fact proved in [34]. Given \(\mathbf{X}\in\mathbb{R}^{m\times n}\) and assume \(\mathbf{U}\in\mathbb{R}^{m\times k},\,\mathbf{R}\in\mathbb{R}^{k\times k}\), and \(\mathbf{V}\in\mathbb{R}^{n\times k}\). Then
\[\mathrm{rank}(\mathbf{X}-\mathbf{U}\mathbf{R}^{-1}\mathbf{V}^{T})=\mathrm{ rank}(\mathbf{X})-\mathrm{rank}(\mathbf{U}\mathbf{R}^{-1}\mathbf{V}^{T}), \tag{15}\]
Figure 3: Partitioning the matrix \(\mathbf{X}\) for the proof of Theorem 1.
if and only if there exist \(\mathbf{A}\in\mathbb{R}^{n\times k}\) and \(\mathbf{B}\in\mathbb{R}^{m\times k}\) such that \(\mathbf{U}=\mathbf{X}\mathbf{A}\), \(\mathbf{V}=\mathbf{X}^{T}\mathbf{B}\), and \(\mathbf{R}=\mathbf{B}^{T}\mathbf{X}\mathbf{A}\). If we define \(\mathbf{A}=\mathbf{e}_{j}\in\mathbb{R}^{n}\) and \(\mathbf{B}=\mathbf{e}_{i}\in\mathbb{R}^{m}\) as the \(j\)-th and \(i\)-th standard unit vectors1, then we have \(\mathbf{U}=\mathbf{X}\mathbf{A}=\mathbf{X}(:,j)\), \(\mathbf{V}=\mathbf{X}^{T}\mathbf{B}=\mathbf{X}(i,:)^{T}\) and \(\mathbf{R}^{-1}=(\mathbf{e}_{i}^{T}\mathbf{X}\mathbf{e}_{j})^{-1}=\frac{1}{\mathbf{X}(i,j)}\). Now, (15) demonstrates that
Footnote 1: For the standard unit vector \(e_{i}\), the \(i\)th element is 1 and the rest are zero.
\[\mathrm{rank}(\mathbf{X}-\frac{1}{c}\mathbf{X}(:,j)\mathbf{X}(i,:)) = \mathrm{rank}(\mathbf{X})-\mathrm{rank}(\frac{1}{c}\mathbf{X}(:,j )\mathbf{X}(i,:))\] \[= \mathrm{rank}(\mathbf{X})-1.\]
So, this completes the proof.
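Theorem 1 is also easy to check numerically; a short sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))    # rank 4 (almost surely)
i, j = 2, 3
Y = X - np.outer(X[:, j], X[i, :]) / X[i, j]
assert np.allclose(Y[:, j], 0) and np.allclose(Y[i, :], 0)       # interpolation property
assert np.linalg.matrix_rank(Y) == np.linalg.matrix_rank(X) - 1  # rank drops by one
```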
In view of Theorem 1, we see that the corresponding column and row of the residual matrix \(\mathbf{Y}\), with the same indices as the selected column/row of the original data matrix \(\mathbf{X}\), become zero, which means that this approximation interpolates the original data matrix at the mentioned indices and reduces its rank by one order; see Figure 2 (a) for a graphical illustration of this approach. This procedure is repeated by selecting a new column and a new row of the residual matrix, so we can sequentially reduce the matrix rank and interpolate the original matrix at the new columns/rows. The adaptive cross matrix approximation method is summarized in Algorithm 4. Clearly, a breakdown can happen in Algorithm 4 if the denominator \(\mathbf{u}_{k}(i_{k})\) becomes zero. In such a case, we should select a new index \(i_{k}\) for which \(\mathbf{u}_{k}(i_{k})\) is not zero; for example, one can randomly select a new index that has not been chosen previously.
## 5 Proposed adaptive tensor cross approximation based on the t-product
In this section, we show how to generalize the adaptive cross matrix approximation to the tensor case based on the t-product. Compared to the matrix case, instead of a column and a row, here we select a lateral slice and a horizontal slice at each iteration, but the important question is how to use the intersection tube to scale the corresponding tubal rank-1 tensor so that the tubal rank of the residual tensor is reduced by one. We found that the inverse of the intersection tube should be used, and this is proved in Theorem 3.
```
Input : A data matrix \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\), an approximation error \(\epsilon\) Output : Low rank matrix approximation \(\mathbf{X}=\mathbf{U}\mathbf{V}\)
1\(\mathbf{U}=0,\;\mathbf{V}=0,\;\mu=0,\;r_{0}=0,\,j_{1}\) is a random column index
2for\(k=1,2,\ldots,\min(I_{1},I_{2})\)do
3\(\mathbf{u}_{k}=\mathbf{E}_{k-1}(:,j_{k})=\mathbf{X}(:,j_{k})-\mathbf{U}\mathbf{ V}(:,j_{k})\);
4\(i_{k}=\arg\max_{i}|\mathbf{u}_{k}(i)|\);
5\(\mathbf{u}_{k}\leftarrow\mathbf{u}_{k}/\mathbf{u}_{k}(i_{k})\);
6\(\mathbf{v}_{k}^{T}=\mathbf{E}_{k-1}(i_{k},:)=\mathbf{X}(i_{k},:)-\mathbf{U}( i_{k},:)\mathbf{V}\);
7\(j_{k+1}=\arg\max_{j}|\mathbf{v}_{k}(j)|\);
8\(\rho^{2}=\|\mathbf{u}_{k}\|_{2}^{2}\|\mathbf{v}_{k}\|_{2}^{2}\)
9\(\mu^{2}\leftarrow\mu^{2}+\rho^{2}+2\sum_{j=1}^{k-1}\mathbf{V}(j,:)\mathbf{v}_ {k}\mathbf{u}_{k}^{T}\mathbf{U}(:,j)\);
10\(\mathbf{U}\leftarrow[\mathbf{U},\mathbf{u}_{k}]\), \(\mathbf{V}\leftarrow[\mathbf{V};\mathbf{v}_{k}^{T}]\);
11\(r_{k}=r_{k-1}+1\);
12if\(\rho<\epsilon\mu\)then
13 Break
14
15 end if
16
17 end for
```
**Algorithm 4**Adaptive cross approximation algorithm (ACA) [32, 33, 23, 24, 25]
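A NumPy transcription of Algorithm 4 might look as follows (a sketch of ours; for brevity it omits the bookkeeping that prevents re-selecting earlier row/column indices):

```python
import numpy as np

def aca(X, eps=1e-8):
    """Adaptive cross approximation: returns U, V with X ~ U @ V."""
    U, V = [], []
    j, mu2 = 0, 0.0
    for _ in range(min(X.shape)):
        u = X[:, j] - sum(uk * vk[j] for uk, vk in zip(U, V))
        i = int(np.argmax(np.abs(u)))
        if u[i] == 0:            # breakdown: pick a fresh column index in practice
            break
        u = u / u[i]
        v = X[i, :] - sum(uk[i] * vk for uk, vk in zip(U, V))
        j = int(np.argmax(np.abs(v)))
        rho2 = (u @ u) * (v @ v)
        mu2 += rho2 + 2.0 * sum((vk @ v) * (u @ uk) for uk, vk in zip(U, V))
        U.append(u)
        V.append(v)
        if rho2 < eps**2 * mu2:  # stopping rule rho < eps * mu
            break
    return np.column_stack(U), np.vstack(V)
```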
**Theorem 3**.: (Tubal rank-1 deflation) Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) be a given data tensor with sampled lateral and horizontal slices as \(\underline{\mathbf{X}}(:,j,:)\) and \(\underline{\mathbf{X}}(i,:,:)\) with the nonzero intersection tube \(\mathbf{X}(i,j,:)\). Then the residual tensor
\[\underline{\mathbf{Y}}=\underline{\mathbf{X}}-\underline{\mathbf{X}}(:,j,:)*( \underline{\mathbf{X}}(i,j,:))^{-1}*\underline{\mathbf{X}}(i,:,:), \tag{16}\]
vanishes at its \(i\)-th horizontal slice and \(j\)-th lateral slice, and \(\mathrm{rank}(\underline{\mathbf{Y}})=\mathrm{rank}(\underline{\mathbf{X}})-1\).
```
Input : A data tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), an approximation error \(\epsilon\) Output : Low tubal rank tensor approximation \(\underline{\mathbf{X}}=\underline{\mathbf{U}}*\underline{\mathbf{V}}\)
1\(\underline{\mathbf{U}}=0,\;\underline{\mathbf{V}}=0,\;\mu=0,\;r_{0}=0,\,j_{1}\) is a random lateral slice index
2for\(k=1,2,\ldots,\min(I_{1},I_{2})\)do
3\(\underline{\mathbf{u}}_{k}=\underline{\mathbf{E}}_{k-1}(:,j_{k},:)=\underline{ \mathbf{X}}(:,j_{k},:)-\underline{\mathbf{U}}*\underline{\mathbf{V}}(:,j_{k}, :)\);
4\(i_{k}=\arg\max_{i}\;||\underline{\mathbf{u}}_{k}(i,j_{k},:)||_{2}^{2}\);
5\(\underline{\mathbf{u}}_{k}\leftarrow\underline{\mathbf{u}}_{k}*(\underline{ \mathbf{u}}_{k}(i_{k},j_{k},:))^{-1}\);
6\(\underline{\mathbf{v}}_{k}^{T}=\underline{\mathbf{E}}_{k-1}(i_{k},:,:)= \underline{\mathbf{X}}(i_{k},:,:)-\underline{\mathbf{U}}(i_{k},:,:)*\underline{ \mathbf{V}}\);
7\(j_{k+1}=\arg\max_{j}\;||\underline{\mathbf{v}}_{k}(i_{k},j,:)||_{2}^{2}\);
8\(\rho^{2}=\|\underline{\mathbf{u}}_{k}\|_{F}^{2}\|\underline{\mathbf{v}}_{k}\|_ {F}^{2}\);
9\(\mu^{2}\leftarrow\mu^{2}+\rho^{2}+2\|\sum_{j=1}^{k-1}\underline{\mathbf{v}}(j, :,:)*\underline{\mathbf{v}}_{k}*\underline{\mathbf{u}}_{k}^{T}*\underline{ \mathbf{u}}_{k}(:,j,:)\|_{2}^{2}\);
10\(\underline{\mathbf{U}}\leftarrow[\underline{\mathbf{U}},\underline{\mathbf{u}} _{k}]\), \(\underline{\mathbf{V}}\leftarrow[\underline{\mathbf{V}};\underline{\mathbf{v}}_{k }^{T}]\);
11\(r_{k}=r_{k-1}+1\);
12if\(\rho<\epsilon\mu\)then
13break
14
15 end if
16
17 end for
```
**Algorithm 5**Proposed adaptive cross tubal tensor approximation algorithm (ACTA)
Proof.: To prove Theorem 3, we show that the \(i\)-th row and the \(j\)-th column of each frontal slice of the residual tensor \(\underline{\mathbf{Y}}\) are zero. To do this, let us consider the \(k\)-th frontal slice of the residual tensor \(\underline{\mathbf{Y}}\) as \(\underline{\mathbf{Y}}(:,:,k)\). In the Fourier domain, it can be represented as
\[\underline{\mathbf{\hat{Y}}}(:,:,k)=\underline{\mathbf{\hat{X}}}(:,:,k)-\frac{1 }{\underline{\mathbf{\hat{X}}}(i,j,k)}\underline{\mathbf{\hat{X}}}(:,j,k) \underline{\mathbf{\hat{X}}}(i,:,k). \tag{17}\]
In view of Theorem 1, this means that the \(i\)-th row and the \(j\)-th column of the \(k\)-th frontal slice of the tensor \(\underline{\mathbf{Y}}\) in the Fourier domain are zero and its rank is one order lower than the rank of the matrix \(\widehat{\mathbf{X}}(:,:,k)\). So the \(i\)-th row and the \(j\)-th column of all frontal slices \(\underline{\mathbf{Y}}(:,:,k),\,k=1,2,\ldots,I_{3}\), equal zero. This completes the proof.
It is not difficult to see that for second-order tensors (matrices), Equation (16) reduces to the classical matrix cross approximation. At each iteration, we select a lateral slice and a horizontal slice and perform the scaling using the pseudoinverse of the intersection tube. The corresponding scaled tubal rank-1 tensor reduces the tubal rank of the underlying data tensor by one order. It is interesting to note that, similar to the matrix case, where after each iteration the corresponding selected column and row in the residual matrix become zeros, here the corresponding lateral and horizontal slices of the residual tensor vanish. So, naturally, this approximation interpolates the original tensor at the mentioned slices. This procedure can proceed with the residual tensor to reduce the tubal rank sequentially. The generalized adaptive cross tubal approximation method is outlined in Algorithm 5. In Line 10 of Algorithm 5, the new horizontal and lateral slices are concatenated along the second and first modes, respectively. The relative error accuracy is used for the stopping criterion as \(\rho\leq\epsilon\mu\) according to
\[\rho=\|\underline{\mathbf{u}}_{k}*\underline{\mathbf{v}}_{k}^{T} \|_{F} \approx \|\underline{\mathbf{X}}-\underline{\mathbf{U}}*\underline{ \mathbf{V}}\|_{F},\] \[\mu=\|\underline{\mathbf{U}}*\underline{\mathbf{V}}\|_{F} \approx \|\underline{\mathbf{X}}\|_{F},\]
where \(\underline{\mathbf{U}}=[\underline{\mathbf{u}}_{1},\ldots,\underline{ \mathbf{u}}_{k-1}]\) and \(\underline{\mathbf{V}}=[\underline{\mathbf{v}}_{1};\ldots;\underline{ \mathbf{v}}_{k-1}]\). It is necessary to enforce \(i_{k}\neq i_{1},i_{2},\ldots,i_{k-1}\) and \(j_{k}\neq j_{1},j_{2},\ldots,j_{k-1},\) which means that each iteration should produce new indices (different from the others). We also remark that if the size of a frontal/lateral slice is big, one can compress it using the classical cross methods, similar to [20] where the cross approximation is used in two stages for the computation of the Tucker decomposition. Although our results so far are for third-order tensors, clearly they can be straightforwardly generalized to tensors of a higher order than three, according to [29].
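Using the `t_product` helper sketched earlier, a single deflation step of (16) can be written as follows (our own sketch; the tube inverse is elementwise division in the Fourier domain and assumes the intersection tube is invertible):

```python
def tube_inv(t):
    """Inverse of a tube (length-I3 vector) under the t-product."""
    return np.fft.ifft(1.0 / np.fft.fft(t)).real

def tubal_deflation_step(X, i, j):
    """Residual of Eq. (16); its i-th horizontal and j-th lateral slices vanish."""
    lat = X[:, j, :][:, None, :]                # I1 x 1 x I3 lateral slice
    hor = X[i, :, :][None, :, :]                # 1 x I2 x I3 horizontal slice
    tinv = tube_inv(X[i, j, :])[None, None, :]  # inverted intersection tube
    return X - t_product(t_product(lat, tinv), hor)
```

Iterating this step on the residual, with the pivot choices and stopping rule of Algorithm 5, produces the factors \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\).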
## 6 Computational complexity
The adaptive cross tensor algorithm is efficient, as it only works on a lateral slice and a horizontal slice at each iteration. The computational complexity of Algorithm 5 is \(\mathcal{O}((I+J)K\log(K))\), while the computational complexity of the truncated t-SVD for a tensor of size \(I\times J\times K\) is \(\mathcal{O}(IJK\log(K))+\mathcal{O}(IJK\min(I,J))\).
Besides, the truncated t-SVD needs to access and process the whole data tensor, while the proposed algorithm works only on a part of the lateral and horizontal slices at each iteration. It is thus clear that the proposed Algorithm 5 requires much less memory and far fewer computational operations than the t-SVD algorithm. This makes it applicable for decomposing large-scale tensors.
## 7 Experimental Results
We have used Matlab and some functions of the toolbox
[https://github.com/canyilu/Tensor-tensor-product-toolbox](https://github.com/canyilu/Tensor-tensor-product-toolbox)
to implement the proposed algorithm using a laptop computer with 2.60 GHz Intel(R) Core(TM) i7-5600U processor and 8GB memory. We have used two metrics, _relative error_ and Peak signal-to-noise ratio (PSNR) to compare the efficiency of the proposed algorithm with the baselines. The relative error is defined as follows
\[\mathrm{Relative\ error}=\frac{\|\underline{\mathbf{X}}-\underline{\mathbf{ U}}\ast\underline{\mathbf{S}}\ast\underline{\mathbf{V}}^{T}\|_{F}}{\| \underline{\mathbf{X}}\|_{F}}.\]
The PSNR is also defined as
\[\mathrm{PSNR}=10\mathrm{log}_{10}\left(255^{2}/\mathrm{MSE}\right),\]
where \(\mathrm{MSE}=\left\|\underline{\mathbf{X}}-\widehat{\underline{\mathbf{X}}}\right\|_{F}^{2}/\mathrm{num}\left(\underline{\mathbf{X}}\right).\) Note that "num" denotes the number of entries of a given data tensor. We mainly consider three examples. In the first example, we examine the algorithms using low-rank random data tensors. In the second example, we use function-based tensors. In the last example, we use images as real-world data, with application to the image completion problem.
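For reference, both metrics can be computed in a few lines (our sketch, with "num" taken as the total number of tensor entries):

```python
import numpy as np

def relative_error(X, Xhat):
    return np.linalg.norm(X - Xhat) / np.linalg.norm(X)

def psnr(X, Xhat):
    mse = np.linalg.norm(X - Xhat) ** 2 / X.size
    return 10.0 * np.log10(255.0**2 / mse)
```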
**Example 1**.: In this example we consider a random data tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{N\times N\times N}\) with exact tubal rank \(R=30\) for \(N=100,200,\ldots,600\). To generate such a tensor, we considered two standard Gaussian tensors and orthonormalized them. Let us denote these orthogonal parts by \(\underline{\mathbf{U}}\in\mathbb{R}^{N\times R\times N}\) and \(\underline{\mathbf{V}}\in\mathbb{R}^{N\times R\times N}\). Then, we generate a tensor \(\underline{\mathbf{S}}\in\mathbb{R}^{R\times R\times N}\) with only \(R\) nonzero diagonal tubes \(\underline{\mathbf{S}}(i,i,:),\ i=1,2,\ldots,R\), whose elements are also standard Gaussian, and build the tensor \(\underline{\mathbf{X}}=\underline{\mathbf{U}}\ast\underline{\mathbf{S}}\ast\underline{\mathbf{V}}^{T}\), which is used in our simulations. We set \(\epsilon=10^{-8}\) in Algorithm 5. Then, we apply the proposed algorithm to find the tubal rank and the corresponding low tubal rank approximation. We consider 100 Monte Carlo experiments and report the mean of our results (accuracy and running
time). In all our experiments, the proposed approach retrieved the true tubal rank successfully, which convinced us that it works well for finding the tubal rank of a tensor. Then we used the truncated t-SVD and the randomized t-SVD [22] to compute low tubal rank approximations of the underlying data tensor. The running times of the proposed algorithm, the truncated t-SVD, and the randomized t-SVD are compared in Figure 4 (right). The numerical results show an almost two orders of magnitude speed-up of the proposed approach compared with the truncated t-SVD algorithm, while it is also faster than the randomized t-SVD. The accuracy comparison of the algorithms is also presented in Figure 4 (left). This illustrates that the proposed algorithm can provide acceptable results in much less time than the truncated t-SVD algorithm and the randomized t-SVD.
**Example 2**.: In this example, we apply Algorithm 5 to compute low tubal rank approximations of function based tensors. To do so, we consider the following case studies:
* Case study I: \(\quad\underline{\mathbf{X}}(i,j,k)=\frac{1}{(i^{2}+j^{2}+k^{2})^{1/2}}\);
* Case study II: \(\quad\underline{\mathbf{X}}(i,j,k)=\sin{(i+j+k)}+\tanh(i+j+k)\);
* Case study III: \(\quad\underline{\mathbf{X}}(i,j,k)=\frac{1}{(i^{5}+j^{5}+k^{5})^{1/5}}\);
where \(1\leq i,j,k\leq n\) for \(n=100,200,\ldots,600.\) It is not difficult to see that these tensors have low tubal ranks. The numerical tubal ranks for case studies I, II and III for a tensor of size \(100\times 100\times 100\) were 25, 5 and 43, respectively. However, for
Figure 4: The running time and accuracy comparisons of the proposed algorithm and the truncated t-SVD for Example 1.
Figure 5: The running time comparison of the truncated t-SVD and the proposed algorithm for Case I (upper left), Case II (upper right) and Case III (bottom) for Example 2.
larger sizes the numerical tubal rank may change slightly. We applied the proposed approach to the mentioned data tensors in a similar way as for Example 1, to find the numerical tubal rank and the corresponding low tubal rank approximation. Here, for case studies I, II and III, the proposed algorithm with \(\epsilon=10^{-8}\) gave tubal ranks 24, 5 and 42, respectively, which is very close to the true numerical tubal ranks. Then, for these tubal ranks, we applied the truncated t-SVD and the randomized t-SVD to compute a low tubal rank approximation. The running times and relative errors of the solutions of the algorithms are compared in Figure 5 and Table 1, respectively. In view of Figure 5, the advantage of the proposed algorithm over the truncated t-SVD and the randomized t-SVD is evident. These two experiments verified that the proposed approach is applicable to large-scale tensors because it works only on a small part of the data tensor at each iteration, while the classical approaches, e.g. the truncated t-SVD, deal with the whole data tensor. The results in Table 1 also show that the proposed algorithm provides an approximation with almost the same accuracy as the truncated t-SVD, which is known to be the best approximation in the least-squares sense2 for the low tubal rank approximation [9].
Footnote 2: For any unitary invariant tensor norm.
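The function-based test tensors of Example 2 are straightforward to generate; e.g., for \(n=100\) (our sketch):

```python
import numpy as np

n = 100
i, j, k = np.meshgrid(*(np.arange(1.0, n + 1),) * 3, indexing='ij')
X1 = 1.0 / np.sqrt(i**2 + j**2 + k**2)           # case study I
X2 = np.sin(i + j + k) + np.tanh(i + j + k)      # case study II
X3 = 1.0 / (i**5 + j**5 + k**5) ** 0.2           # case study III
```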
**Example 3. Application to tensor completion.** In this example, we show the
Figure 6: The original image, the available with 70% missing pixels (randomly) and the reconstructed images using the truncated t-SVD and the proposed approach for Example 3.
application of the proposed adaptive algorithm to the task of tensor completion. To this end, we consider the benchmark images "Peppers", "Lena" and "House" of size \(256\times 256\times 3\), depicted in Figure 6 (left), and randomly remove 70% of their pixels, as shown in Figure 6 (middle). We use the Peak signal-to-noise ratio (PSNR) to compare the performance of the proposed algorithm with the benchmark algorithm. The tensor decomposition formulation (18) for the tensor completion problem is written as follows
\[\begin{array}{ll}\min\limits_{\underline{\mathbf{X}}}&\|\mathbf{P}_{ \underline{\mathbf{\Omega}}}(\underline{\mathbf{X}})-\mathbf{P}_{\underline{ \mathbf{\Omega}}}(\underline{\mathbf{M}})\|_{F}^{2},\\ \mbox{s.t.}&\quad\quad\mbox{rank}(\underline{\mathbf{X}})=R,\end{array} \tag{18}\]
where the unknown tensor \(\underline{\mathbf{X}}\) is to be determined and is assumed to have a low tensor rank representation, \(\underline{\mathbf{M}}\) is the original data tensor and \(\underline{\mathbf{\Omega}}\) is the index set of known pixels. The projector \(\mathbf{P}_{\underline{\mathbf{\Omega}}}\) is defined as follows
\[\mathbf{P}_{\underline{\mathbf{\Omega}}}(\mathbf{X})=\left\{\begin{array}[] {ll}\mathbf{X}(\mathbf{i})&\mathbf{i}\in\underline{\mathbf{\Omega}},\\ 0&\mathbf{i}\notin\underline{\mathbf{\Omega}},\end{array}\right.\]
where \(\mathbf{i}=(i_{1},i_{2},\ldots,i_{N})\) is an arbitrary multi-index with \(1\leq i_{n}\leq I_{n},\ n=1,2,\ldots,N\). Here, different kinds of tensor ranks and associated tensor decompositions can be considered in the formulation (18). The solution to the minimization problem (18) can be approximated by the following iterative procedure
\[\underline{\mathbf{Y}}^{(n)}=\mathcal{L}(\underline{\mathbf{X}}^{(n)}), \tag{19}\]
\begin{table}
\begin{tabular}{||c|c c c c c||} \multicolumn{7}{c}{Case study I} \\ \hline Methods N & 100 & 200 & 300 & 400 & 500 & 600 \\ \hline \hline Truncated t-SVD [9] & 1.6e-14 & 8.01e-13 & 6.9e-12 & 2.4e-11 & 5.9e-11 & 1.1e-10 \\ Randomized t-SVD [22] & 4.9e-14 & 8.4e-12 & 4.4e-11 & 1.5e-10 & 3.7e-10 & 7.7e-10 \\ Proposed algorithm & 3.1e-14 & 1.01e-12 & 3.5e-11 & 2.95e-10 & 3.2e-10 & 9.7e-10 \\ \hline \multicolumn{7}{c}{Case study II} \\ \hline \hline Truncated t-SVD [9] & 1.0e-15 & 1.6e-15 & 1.3e-15 & 1.7e-15 & 2.09e-15 & 2.5e-15 \\ Randomized t-SVD [22] & 1.2e-15 & 1.5e-15 & 1.6e-15 & 2.4e-15 & 2.06e-15 & 2.3e-15 \\ Proposed algorithm & 1.3e-15 & 1.5e-15 & 1.7e-15 & 1.8e-15 & 2.06e-15 & 2.3e-15 \\ \hline \multicolumn{7}{c}{Case study III} \\ \hline \hline Truncated t-SVD [9] & 2.7e-14 & 1.3e-11 & 1.3e-10 & 5.04e-10 & 1.1e-09 & 2.2e-09 \\ Randomized t-SVD [22] & 1.9e-13 & 1.2e-10 & 1.01e-09 & 3.5e-09 & 6.5e-09 & 1.3e-08 \\ Proposed algorithm & 5.3e-14 & 7.7e-11 & 4.7e-10 & 7.1e-09 & 3.2e-09 & 2.1e-08 \\ \hline \end{tabular}
\end{table}
Table 1: Relative errors of results obtained by the truncated t-SVD and the proposed Algorithm for Example 2.
\[\underline{\mathbf{X}}^{(n+1)}=\underline{\mathbf{\Omega}}\circledast\underline{\mathbf{M}}+(\underline{\mathbf{1}}-\underline{\mathbf{\Omega}})\circledast\underline{\mathbf{Y}}^{(n)}, \tag{20}\]
as described in [35] to complete the unknown pixels, where \(\mathcal{L}\) is an operator that computes a low-rank tensor approximation of the data tensor \(\underline{\mathbf{X}}^{(n)}\), \(\underline{\mathbf{1}}\) is a tensor whose components are all equal to one, \(\circledast\) is the Hadamard (elementwise) product, and \(\underline{\mathbf{\Omega}}\) is identified with the binary indicator tensor of the known pixels. For the low-rank computations in the first step (19), we apply the proposed Algorithm 5 with a given number of iterations instead of a given tolerance to find the lateral and horizontal slice indices, and compute the approximation \(\underline{\mathbf{C}}*\underline{\mathbf{U}}*\underline{\mathbf{R}}\), where \(\underline{\mathbf{U}}=\underline{\mathbf{C}}^{\dagger}*\underline{\mathbf{X}}*\underline{\mathbf{R}}^{\dagger}\) and \(\underline{\mathbf{C}}\), \(\underline{\mathbf{R}}\) are the sampled lateral and horizontal slices, respectively. The tubal rank \(R=70\) was used in our computations. Besides the proposed algorithm, we also used the truncated t-SVD and the randomized t-SVD in our computations. The reconstructed images using the proposed approach and the truncated t-SVD are displayed in Figure 6 (bottom). The running time required to compute these reconstructions and their PSNR are reported in Table 2. The results in Table 2 and Figure 6 clearly illustrate that the proposed adaptive algorithm provides comparable results in much less running time. This clearly shows the feasibility and efficiency of the proposed algorithm for the fast tensor completion task.
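With the `t_svd`, `t_product` and `t_transpose` helpers sketched earlier, iterations (19)-(20) take only a few lines. In this sketch of ours, the truncated t-SVD plays the role of \(\mathcal{L}\) (the paper instead uses the cross approximation \(\underline{\mathbf{C}}*\underline{\mathbf{U}}*\underline{\mathbf{R}}\)), and `mask` is the binary indicator tensor of the known pixels:

```python
def complete(M, mask, R, iters=50):
    """Tensor completion by alternating truncation (19) and data re-insertion (20)."""
    X = mask * M
    for _ in range(iters):
        U, S, V = t_svd(X, R)                           # step (19)
        Y = t_product(t_product(U, S), t_transpose(V))
        X = mask * M + (1.0 - mask) * Y                 # step (20)
    return X
```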
**Example 4. Application to pedestrian attribute recognition (PAR).** In this example, we consider a collection of pedestrian images stacked as a third-order tensor of size \(256\times 384\times 1012\). With an error bound of \(\epsilon=0.1\), we applied the suggested approach to this dataset to determine the relevant tubal rank. The truncated t-SVD of the dataset was then computed for this tubal rank. The images reconstructed for two random samples by the proposed algorithm and by the truncated t-SVD are shown in Figure 7. Here we achieved a \(5\times\) speed-up compared to the truncated t-SVD. The results show that the suggested technique can produce similar results in less computational time. Additionally, the Attribute-specific Localization (ASL) model [38], an effective deep neural network (DNN), was taken into account, as it had provided state-of-the-art results for the PAR problem. In order to create a lightweight model with fewer parameters and lower complexity [39], we first compressed the underlying convolution layers in the ASL model using the Error Preserving Correction-CPD [40] and the SVD. As our test datasets, we also compressed \(30\%\) of the PETA dataset's images using Algorithm 5 (for \(\epsilon=0.1,0.2,\) and \(0.3\)) and compared the ASL model's performance in identifying pedestrian attributes in the compressed and original images. Table 3 displays the outcomes of this experiment. We see that the lightweight model's accuracy for the original and the compressed images is quite similar. It should be noted that the model was not trained on compressed images, though one may do so to improve recognition accuracy. As a result, this concept may be applied to Internet of Things (IoT) applications, where tremendous amounts of data in various shapes and formats are generated (for example, image sensors embedded in mobile cameras produce enormous amounts of data in the form of high-resolution photographs and videos). Here, it is essential to deploy compact DNNs and portable DL models at the edge of the IoT network, along with fast data communication for real-time applications (denoising, defogging, deblurring, segmentation, target detection, and recognition). Using the suggested method, these applications can operate directly on the compressed form of the data.
## 8 Conclusion and future works
In this work, we proposed an adaptive tubal tensor approximation algorithm for the computation of the tensor SVD. The proposed algorithm can estimate the tubal rank of a tensor and provide the corresponding low tubal rank approximation. The experimental results verified the feasibility of the proposed algorithm. Our future work will be to develop a blocked version of the proposed adaptive tubal tensor algorithm; the block version can be further improved using the parallel hierarchical strategy [41], and we will investigate this in future works.
\begin{table}
\begin{tabular}{||c|c|c||} \multicolumn{3}{c}{\(\epsilon=0.3\)} \\ \hline Algorithms & Running Time (s) & Recognition accuracy \\ \hline \hline Truncated t-SVD [9] & 236.45 & 85.86\% \\ Proposed algorithm & 45.56 & 84.36\% \\ \hline \multicolumn{3}{c}{\(\epsilon=0.2\)} \\ \hline Truncated t-SVD [9] & 196.45 & 87.21\% \\ Proposed algorithm & 35.97 & 86.16\% \\ \hline \multicolumn{3}{c}{\(\epsilon=0.1\)} \\ \hline Truncated t-SVD [9] & 150.32 & 88.42\% \\ Proposed algorithm & 27.12 & 87.21\% \\ \hline \end{tabular}
\end{table}
Table 3: Comparing the running times (seconds) and recognition accuracies achieved by the proposed algorithm and the Truncated t-SVD [9], with original accuracy **0.8887**, for Example 4.
Figure 7: Comparing the original images and their compressed forms by the proposed algorithm and the truncated t-SVD for Example 4.
In the matrix case, it is known that the maximum volume (maxvol) algorithm, as a matrix cross approximation method, provides close-to-optimal low-rank approximations. Generalizing the maxvol approach from the matrix case to tensors based on the t-product is our ongoing research work.
## 9 Acknowledgement
The authors would like to thank the editor and the two reviewers for their constructive comments, which have greatly improved the quality of the paper. The work was partially supported by the Ministry of Education and Science of the Russian Federation (grant 075.10.2021.068).
|
2304.05319 | Identifying regions of minimal back-scattering by a
relativistically-moving sphere | The far-field back-scattering amplitude of an electric field from a
relativistically-moving sphere is analyzed. Contrary to prior research, we do
so by expressing the fields in the helicity basis, and we highlight here its
advantages when compared to the commonly-considered parity basis. With the
purpose of exploring specific scattering phenomena considering relativistic
effects, we identify conditions that minimize the back-scattered field, leading
to a relativistic formulation of the first Kerker condition. The requirements
to be satisfied by the sphere are expressed in terms of Mie angles, which
constitute an effective parametrization of any possible optical response a
sphere might have. We are able to identify multiple combinations of Mie angles
up to octupolar order via gradient-based optimization that satisfy our newly
formulated relativistic Kerker condition, yielding minima for the
back-scattered energy as low as 0.016% of the average scattered energy. Our
results can be extended to involve multiple particles forming a metasurface,
potentially having direct implications on the design of light sails as
considered by the Breakthrough Starshot Initiative. | Mitchell R. Whittam, Aristeidis G. Lamprianidis, Yannick Augenstein, Carsten Rockstuhl | 2023-04-11T16:23:44Z | http://arxiv.org/abs/2304.05319v1 | # Identifying regions of minimal back-scattering by a relativistically-moving sphere
###### Abstract
The far-field back-scattering amplitude of an electric field from a relativistically-moving sphere is analyzed. Contrary to prior research, we do so by expressing the fields in the helicity basis, and we highlight here its advantages when compared to the commonly-considered parity basis. With the purpose of exploring specific scattering phenomena considering relativistic effects, we identify conditions that minimize the back-scattered field, leading to a relativistic formulation of the first Kerker condition. The requirements to be satisfied by the sphere are expressed in terms of Mie angles, which constitute an effective parametrization of any possible optical response a sphere might have. We are able to identify multiple combinations of Mie angles up to octupolar order via gradient-based optimization that satisfy our newly formulated relativistic Kerker condition, yielding minima for the back-scattered energy as low as \(0.016\,\%\) of the average scattered energy. Our results can be extended to involve multiple particles forming a metasurface, potentially having direct implications on the design of light sails as considered by the Breakthrough Starshot Initiative.
## I Introduction
The scattering of light by a sphere is a canonical problem in optics and electrodynamics and has been investigated for many years, particularly for stationary spheres [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. The scattering of light by spheres is best described using Mie theory, which involves expressing the incident and scattered electromagnetic fields in terms of vector spherical harmonics (VSHs). The amplitude coefficients that weight these VSHs are collected in vectors, and the vectors of incident and scattered coefficients are mutually linked by a matrix-vector product. Moreover, all optical properties of the object are captured by the corresponding matrix, called the transition or T-matrix. For an arbitrary object, the T-matrix can be dense, but it is diagonal for a sphere, and the diagonal entries are the Mie coefficients [12; 13].
Controlling an object's geometrical and material properties provides a unique way of tailoring the scattered field on demand, and many intriguing aspects have been explored, an example being the so-called Kerker condition [14; 15; 16; 17; 18]. The first Kerker condition contains the necessary composition of multipolar excitations such that the object exhibits zero back-scattering. A second Kerker condition implies a vanishing scattering in the forward direction, but this is considered less often, since optical gain is necessary for its observation [19].
While initially formulated for objects that can be safely described in the dipolar approximation, it was soon realized that similar effects are encountered when capitalizing on higher-order multipole moments. This coined the notion of generalized Kerker conditions [20], and Kerker effects have been explored in a large variety of settings. These studies are motivated by high-impact applications related to nanoantennas, chiral molecules, and metamaterials, to name a few [20; 21; 22; 23; 24; 25].
This paper provides a further, to our knowledge unexplored, perspective on the Kerker effect: it considers the effect in the relativistic regime. The basic setting of our exploration is that of a relativistically moving sphere illuminated with a monochromatic Gaussian beam characterized by an incident angle \(\Theta_{\mathrm{i}}\) relative to the direction of motion of the sphere. Of course, unlike the case of a stationary sphere, one cannot assume that there exists a combination of multipole excitations that yields zero back-scattering once motion is included. However, one can aim to minimize the back-scattering, which depends on the multipolar contribution to the scattering response, for a given speed and incident electric field angle. This leads to an approximate Kerker condition in the case of a relativistically moving sphere.
Our work has clear implications for future technology developments. For example, within the Breakthrough Starshot Initiative [26; 27], micro-gram satellites equipped with light sails, potentially made from metasurfaces consisting of a tailored arrangement of scatterers, are to be accelerated with an Earth-based laser system up to a significant fraction of the speed of light. Using these satellites, neighbouring star systems are to be explored. The design of such systems has many facets, and among them is the accurate description of the optical response from scattering objects in the form of metasurfaces. The formulation of the light scattering by an isolated object under relativistic conditions, as pursued in this contribution, is an important prerequisite to study such more advanced devices.
The structure of the paper is as follows. In Section II, the physical setup is outlined, and all necessary coordinate systems are defined. Moreover, the field of the considered incident beam is transformed from the lab frame to the reference frame of the sphere based on the transformation of each constitutive plane wave of its angular spectrum representation. Afterwards, the scattered field is obtained by solving an ordinary Mie problem in the rest frame of the sphere. We rely on a parametrization of its response using Mie angles [28]. These Mie angles constitute a minimalist model to express all possible responses from a sphere, which allows a generic analysis of the back-scattering response. To conclude Section II, the scattered field will be transformed back to the lab frame, in which the back-scattering is observed.
In Section III, the back-scattering amplitude is visualized with respect to some given Mie angles for a sphere with a fixed velocity and a field with a fixed incident angle. We implement all calculations in the Julia programming language [29] and realize a gradient-based optimization scheme by leveraging automatic differentiation within the JuMP modelling framework [30], much in the spirit of recent works on differentiable physics solvers [31; 32]. Using this scheme, we design spheres that provide minimum values for the back-scattering and identify the corresponding combinations of Mie angles. We find multiple suitable combinations and minimize the back-scattered energy to a negligible \(0.016\,\%\) of the average scattered energy. In Section IV, we conclude our findings.
## II Description of the scattering scenario
Before delving into the mathematical description of the scattered field, it is first necessary to specify the geometry and constraints of the system. We consider a spherical particle moving at a relativistic velocity \(\mathbf{v}=v\hat{\mathbf{z}}\) within a pervading incident electric field \(\mathbf{E}_{\mathrm{i}}(\mathbf{r},t)\) with incident angle \(\Theta_{\mathrm{i}}\) as observed by an external lab frame \(S\). Although an accompanying magnetic field will always exist, to avoid repetition, we omit explicit reference to this. Two further frames are considered, namely the beam's frame \(S^{\parallel}\), in which the beam propagates parallel to its \(z^{\parallel}\)-axis, and the boosted frame \(S^{\prime}\), which represents the inertial reference frame of the sphere (see Fig. 1). Accordingly, the corresponding quantities are denoted without a prime in \(S\) and with a prime in \(S^{\prime}\), while all quantities in \(S^{\parallel}\) are denoted with a \(\parallel\) superscript.
A further quantity of interest is the polar angle \(\Theta_{\mathrm{i}}\) between \(\hat{\mathbf{k}}_{\mathrm{i}}\) and \(\mathbf{v}\), i.e., the angle between the beam's propagation direction and the axis of movement of the scatterer (see Fig. 1). Given the symmetry of the system, we set the azimuthal angle of the incident field \(\Phi_{\mathrm{i}}\) to be zero. Moreover, the direction of back-scattering is given by \(\hat{\mathbf{k}}_{\mathrm{BS}}\), the opposite direction to \(\hat{\mathbf{k}}_{\mathrm{i}}\).
To determine the scattered field in \(S\), we implement the 'frame-hopping method' (FHM) as described in Garner _et al._[33]. For reference, this process is outlined below:
1. Lorentz-boost the incident electric field from \(S\) to \(S^{\prime}\).
2. Solve the scattering problem in \(S^{\prime}\).
3. Inverse Lorentz-boost the scattered field from \(S^{\prime}\) back to \(S\).
The reason for computing the scattered field in \(S^{\prime}\) and not \(S\) is a matter of mathematical simplicity. In \(S^{\prime}\), the scattering calculation is analogous to a stationary system, thus avoiding any superfluous variable transformations.
### Lorentz-boosting the incident field into the scatterer's reference frame
First, we need to consider the incident field in the beam's reference frame \(S^{\parallel}\). As an incident field, we consider a single monochromatic Gaussian beam of well-defined helicity (_i.e._, handedness) expanded in terms of circularly polarized plane waves, which are eigenstates of the electromagnetic wave equation. We use the following ket in abstract Dirac notation to denote such plane waves as eigenstates of free space characterized by helicity \(\lambda^{\parallel}=\pm 1\), temporal frequency \(\omega^{\parallel}\), and direction of propagation \(\hat{\mathbf{k}}^{\parallel}\):
\[\left|\lambda^{\parallel}\ \hat{\mathbf{k}}^{\parallel}\ \omega^{\parallel} \right\rangle\doteq\hat{\mathbf{e}}_{\lambda^{\parallel}}(\hat{\mathbf{k}}^{\parallel}) \exp\{\mathrm{i}\omega^{\parallel}[(\hat{\mathbf{k}}^{\parallel}\cdot\mathbf{r} /c)-t]\}\quad, \tag{1}\]
where the symbol \(\doteq\) refers to the spatiotemporal representation of the plane wave eigenstate. The quantity \(c\) is the speed of light in vacuum, and the polarization unit vector \(\hat{\mathbf{e}}_{\lambda^{\parallel}}(\hat{\mathbf{k}}^{\parallel})\) is given by
\[\hat{\mathbf{e}}_{\lambda^{\parallel}}(\hat{\mathbf{k}}^{\parallel})=\frac{- \lambda^{\parallel}\hat{\theta}(\hat{\mathbf{k}}^{\parallel})-\mathrm{i}\hat{ \phi}(\hat{\mathbf{k}}^{\parallel})}{\sqrt{2}}\quad, \tag{2}\]
where \(\lambda^{\parallel}=\pm 1\) corresponds to left/right circularly-polarized waves. The quantities \(\hat{\theta}\) and \(\hat{\phi}\) are, respectively, the polar and azimuthal spherical unit vectors perpendicular to the direction of propagation \(\hat{\mathbf{k}}^{\parallel}\) that is characterized by the polar and azimuthal angles of propagation \(\theta^{\parallel}\), \(\phi^{\parallel}\) of each constituent plane wave.
Denoting quantities that belong to the incident field with the subscript '\(\mathrm{i}\)', we represent a general electric field in terms of its angular spectrum, i.e., as a plane wave expansion:
\[\left|\mathbf{E}_{\mathrm{i}}^{\parallel}\right\rangle=\sum_{ \lambda^{\parallel}}\int_{0}^{2\pi}\mathrm{d}\phi^{\parallel}\int_{0}^{\pi}\mathrm{d} \theta^{\parallel}\int_{0^{+}}^{\infty}\mathrm{d}\omega^{\parallel}\] \[\mathcal{G}_{\lambda^{\parallel},\mathrm{i}}^{\parallel}(\omega^ {\parallel},\theta^{\parallel},\phi^{\parallel})\left|\lambda^{\parallel}\ \hat{\mathbf{k}}^{\parallel}\ \omega^{\parallel}\right\rangle+\mathrm{c.c.}\quad, \tag{3}\]
where the amplitudes for a monochromatic Gaussian beam focused at the origin of \(S\) of waist \(w_{0}\), frequency \(\omega_{\mathrm{i}}\) and helicity \(\lambda_{\mathrm{i}}\) propagating along the \(+z\)-axis are given by:
\[\mathcal{G}_{\lambda^{\parallel},\mathrm{i}}^{\parallel}(\omega^ {\parallel},\theta^{\parallel},\phi^{\parallel})=E_{0}\sin\!\left(2\theta^{ \parallel}\right)\] \[\cdot\exp\!\left[-\frac{\omega_{\mathrm{i}}^{2}w_{0}^{2}\sin^{2 }(\theta^{\parallel})}{4c^{2}}\right]\] \[\cdot\delta_{\lambda^{\parallel}\lambda_{\mathrm{i}}}\delta( \omega^{\parallel}-\omega_{\mathrm{i}})H(\pi/2-\theta^{\parallel})\quad, \tag{4}\]
where \(E_{0}\) is a constant, and \(H(\pi/2-\theta^{\parallel})\) is the Heaviside step function which eliminates all counter-propagating waves. Moreover, we consider the waist \(w_{0}\) to be very large, such that \(\left|\mathbf{E}_{\mathrm{i}}^{\parallel}\right\rangle\) approximates to a plane wave but is nonetheless still finite in space. The reason for doing this is that, for a regular plane wave infinitely extended in space, the interaction of the incident wave with the moving sphere would be incessant and, therefore, the scattered power flux would have a cylindrical symmetry with respect to the axis of movement of the scatterer. That is, the scattered power flux would be translationally invariant with respect to this axis and would only vary azimuthally. On the other hand, an excitation of finite spatial extent ensures that the interaction of light with the scatterer is localized in space (around the origin of \(S\)), therefore, yielding a spherical-like scattering of waves emanating from the region where the interaction takes place.
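For illustration, the angular part of the amplitude in Eqn. (4) is easy to evaluate directly; the following is a minimal Julia sketch, dropping the helicity and frequency delta factors, with placeholder default values for \(E_0\), \(w_0\), \(\omega_{\mathrm{i}}\), and \(c\) (all names are our own assumptions).

```julia
# Reduced angular-spectrum amplitude of the Gaussian beam, Eqn. (4),
# without the delta factors; the Heaviside step removes counter-propagating
# waves (θ ≥ π/2).
G_par(θ; E0 = 1.0, w0 = 50.0, ωi = 1.0, c = 1.0) =
    θ < π/2 ? E0 * sin(2θ) * exp(-ωi^2 * w0^2 * sin(θ)^2 / (4c^2)) : 0.0
```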
To consider an electric field of arbitrary angle of incidence \(\Theta_{\mathrm{i}}\), we apply a rotation operator \(\hat{\mathbf{R}}_{y}(\Theta_{\mathrm{i}})\) about the \(y\)-axis to Eqn. (3) to pass from a representation of the beam with respect to \(S^{\parallel}\) to one with respect to \(S\), such that
\[\left|\mathbf{E}_{\mathrm{i}}\right\rangle=\hat{\mathbf{R}}_{y}(\Theta_{ \mathrm{i}})\left|\mathbf{E}_{\mathrm{i}}^{\parallel}\right\rangle\quad, \tag{5}\]
where
\[\hat{\mathbf{R}}_{y}(\Theta_{\mathrm{i}})\left|\lambda^{\parallel }\;\hat{\mathbf{k}}^{\parallel}\;\omega^{\parallel}\right\rangle=\sum_{ \lambda}\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\pi}\mathrm{d}\theta\int_{0^{+ }}^{\infty}\mathrm{d}\omega\] \[\mathcal{R}_{\lambda^{\parallel},\lambda}(\omega^{\parallel}, \theta^{\parallel},\phi^{\parallel},\omega,\theta,\phi;\Theta_{\mathrm{i}})\] \[\left|\lambda\;\hat{\mathbf{k}}\;\omega\right\rangle\quad. \tag{6}\]
The transformation coefficients are given by
\[\mathcal{R}_{\lambda^{\parallel},\lambda}(\omega^{\parallel}, \theta^{\parallel},\phi^{\parallel},\omega,\theta,\phi;\Theta_{\mathrm{i}}) =\mathcal{P}(\theta^{\parallel},\phi^{\parallel},\Theta_{\mathrm{i}})\] \[\cdot\delta_{\lambda\lambda^{\parallel}}\] \[\cdot\delta(\omega-\omega^{\parallel})\] \[\cdot\delta\left[\theta-\mathrm{arccos}(\hat{k}_{z})\right]\] \[\cdot\delta\left[\phi-\mathrm{atan2}(\hat{k}_{y},\hat{k}_{x}) \right]\quad, \tag{7}\]
where
\[\begin{pmatrix}\hat{k}_{x}\\ \hat{k}_{y}\\ \hat{k}_{z}\end{pmatrix} =\begin{pmatrix}\cos\Theta_{\rm i}&0&\sin\Theta_{\rm i}\\ 0&1&0\\ -\sin\Theta_{\rm i}&0&\cos\Theta_{\rm i}\end{pmatrix}\begin{pmatrix}\hat{k}_{x} ^{\parallel}\\ \hat{k}_{y}^{\parallel}\\ \hat{k}_{z}^{\parallel}\end{pmatrix}\] \[=\begin{pmatrix}\sin\theta^{\parallel}\cos\phi^{\parallel}\cos \Theta_{\rm i}+\cos\theta^{\parallel}\sin\Theta_{\rm i}\\ \sin\theta^{\parallel}\sin\phi^{\parallel}\\ -\sin\theta^{\parallel}\cos\phi^{\parallel}\sin\Theta_{\rm i}+\cos\theta^{ \parallel}\cos\Theta_{\rm i}\end{pmatrix}\quad, \tag{8}\]
and \(\mathcal{P}(\theta^{\parallel},\phi^{\parallel},\Theta_{\rm i})\) is a prefactor corresponding to the acquired phase due to the rotation:
\[\mathcal{P}(\theta^{\parallel},\phi^{\parallel},\Theta_{\rm i})=\exp[{\rm i} p(\theta^{\parallel},\phi^{\parallel},\Theta_{\rm i})]\quad, \tag{9}\]
with
\[p(\theta^{\parallel},\phi^{\parallel},\Theta_{\rm i}) =\text{atan2}\Big{[}-\lambda\sin(\Theta_{\rm i})\sin\Bigl{(} \phi^{\parallel}\Bigr{)},\] \[\quad\cos\Bigl{(}\theta^{\parallel}\Bigr{)}\sin(\Theta_{\rm i} )\cos\Bigl{(}\phi^{\parallel}\Bigr{)}\] \[+\sin\Bigl{(}\theta^{\parallel}\Bigr{)}\cos(\Theta_{\rm i}) \Big{]}\quad. \tag{10}\]
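The geometric part of this rotation, Eqn. (8), translates directly into code; the following is a small sketch with function names of our own choosing.

```julia
# Rotation matrix about the y-axis and the rotated propagation direction,
# Eqn. (8); θp, φp are the polar/azimuthal angles of a constituent plane
# wave in the beam frame S∥.
rot_y(Θ) = [cos(Θ) 0 sin(Θ); 0 1 0; -sin(Θ) 0 cos(Θ)]
khat_lab(θp, φp, Θi) =
    rot_y(Θi) * [sin(θp) * cos(φp), sin(θp) * sin(φp), cos(θp)]
```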
After doing this, one can follow the first step of the frame-hopping method and compute the Lorentz boost of the incident electric field from \(S\) to \(S^{\prime}\). In App. A, we calculate the Lorentz boost of plane waves. We denote with \(\hat{\mathbf{LB}}_{z}(\beta)\) the operator that boosts fields from \(S\) to \(S^{\prime}\) along the \(z\)-axis with speed \(v=\beta c\), where \(0\leq\beta<1\). This operator acts on the eigenstates of monochromatic plane waves with well-defined helicity in the following way:
\[\hat{\mathbf{LB}}_{z}(\beta)\left|\lambda\ \hat{\mathbf{k}}\ \omega\right\rangle =\sum_{\lambda^{\prime}}\int_{0}^{2\pi}\text{d}\phi^{\prime}\int_{0}^{ \pi}\text{d}\theta^{\prime}\int_{0^{+}}^{\infty}\text{d}\omega^{\prime}\] \[\mathcal{L}_{\lambda\lambda^{\prime}}(\omega,\theta,\phi,\omega^ {\prime},\theta^{\prime},\phi^{\prime};\beta)\] \[\left|\lambda^{\prime}\ \hat{\mathbf{k}}^{\prime}\ \omega^{\prime}\right\rangle\quad, \tag{11}\]
with the transformation coefficients given by
\[\mathcal{L}_{\lambda\lambda^{\prime}}(\omega,\theta,\phi,\omega^ {\prime},\theta^{\prime},\phi^{\prime};\beta) =\mathcal{C}\left(\beta,\theta\right)\] \[\cdot\delta_{\lambda^{\prime}\lambda}\] \[\cdot\delta\left(\omega^{\prime}-\mathcal{C}\left(\beta,\theta \right)\omega\right)\] \[\cdot\delta\left[\theta^{\prime}-\arccos\left(\frac{\cos\theta- \beta}{1-\beta\cos\theta}\right)\right]\] \[\cdot\delta\left(\phi^{\prime}-\phi\right)\quad, \tag{12}\]
where \(\gamma=1/\sqrt{1-\beta^{2}}\), \(\cos\theta=\hat{\mathbf{k}}\cdot\hat{\mathbf{z}}\) and
\[\mathcal{C}\left(\beta,\theta\right)=\gamma\left[1-\beta\cos\theta\right]\quad, \tag{13}\]
which is derived in App. A. We see from Eqn. (12) that
\[\theta^{\prime}=\arccos\left(\frac{\cos\theta-\beta}{1-\beta\cos\theta}\right)\quad, \tag{14}\]
and
\[\omega^{\prime}=\mathcal{C}\left(\beta,\theta\right)\omega \tag{15}\]
correspond to the Lorentz boost of \(\theta\) and \(\omega\), respectively. Since the motion occurs solely along the \(z\)-axis, the azimuthal angle \(\phi\) remains unchanged under the Lorentz boost, that is,
\[\phi^{\prime}=\phi\quad. \tag{16}\]
The Lorentz boost \(\theta^{\prime}\) of \(\theta\) given by Eqn. (14) explains the perceived change in direction of the beam in \(S^{\prime}\) compared to \(S\) as shown by \(\hat{\mathbf{k}}_{\rm i}\) and \(\hat{\mathbf{k}}_{\rm i}^{\prime}\) in Fig. 1. In Fig. 2 a), this is visualized for the Lorentz boost \(\Theta_{\rm i}^{\prime}\) of the polar angle of the incident field with respect to the incident angle \(\Theta_{\rm i}\) as seen in \(S\) and speed ratio \(\beta\). Moreover, the Doppler shift \(\omega^{\prime}\) of \(\omega\) is displayed in Fig. 2 b) with the same functional dependency. Note that, for an incident angle of \(\Theta_{\rm i}=0\) and a speed ratio \(\beta\to 1\), the Doppler-shifted frequency becomes zero. This corresponds to the sphere moving away from the external observer in \(S\) at a speed tending to that of light, thus exhibiting a complete redshift. In other words, the incident wave is perceived by the sphere to be so stretched out that the frequency disappears in its reference frame. Conversely, when \(\Theta_{\rm i}=\pi\)
and \(\beta\to 1\), the wave is seen to be infinitely blueshifted in \(S^{\prime}\), corresponding to a completely compressed wave with infinite frequency. This corresponds to the sphere moving towards the source of the incident field.
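These limiting cases are easy to verify numerically. The following minimal Julia sketch of Eqns. (13)–(15) (function names are ours) reproduces the complete redshift for \(\Theta_{\rm i}=0\) and the growing blueshift for \(\Theta_{\rm i}=\pi\) as \(\beta\to 1\).

```julia
# Aberration (Eqn. (14)), scaling factor C(β, θ) (Eqn. (13)), and Doppler
# shift (Eqn. (15)) for a boost along +z.
boosted_angle(θ, β) = acos((cos(θ) - β) / (1 - β * cos(θ)))
scalefactor(θ, β)   = (1 - β * cos(θ)) / sqrt(1 - β^2)
boosted_freq(ω, θ, β) = scalefactor(θ, β) * ω

# θ = 0: redshift (ω′ → 0 as β → 1); θ = π: blueshift (ω′ diverges).
for β in (0.5, 0.9, 0.99)
    println(β, "  ", boosted_freq(1.0, 0.0, β), "  ", boosted_freq(1.0, π, β))
end
```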
Note that the same expression for the scaling factor \(\mathcal{C}\left(\beta,\theta\right)\) is given by Eqn. (27) in De Cupis _et al._[34]. Importantly, we observe that for Eqn. (12) to be non-zero, the helicity of the field must remain invariant upon the Lorentz boost transformation due to the \(\delta_{\lambda^{\prime}\lambda}\) term. This invariance demonstrates the power of expressing the fields in the helicity instead of the parity basis, that is, specifically making use of circularly-polarized plane waves instead of TE/TM plane waves.
Generally speaking, the change in direction of the beam upon boosting is given by the following transformation of the wavevectors:
\[\hat{\mathbf{k}}^{\prime}=\frac{\hat{\mathbf{k}}+[(\gamma-1)\cos\theta-\gamma \beta]\,\hat{\mathbf{z}}}{\mathcal{C}\left(\beta,\theta\right)}\quad. \tag{17}\]
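A direct numerical rendering of Eqn. (17) is given below (a sketch; the function name is ours); a quick check confirms that the boosted direction remains a unit vector.

```julia
using LinearAlgebra

# Boosted propagation direction, Eqn. (17); khat is a unit 3-vector in S.
function boosted_khat(khat::AbstractVector, β)
    γ = 1 / sqrt(1 - β^2)
    cθ = khat[3]                           # cos θ = k̂ ⋅ ẑ
    C = γ * (1 - β * cθ)                   # scaling factor, Eqn. (13)
    (khat .+ [0, 0, (γ - 1) * cθ - γ * β]) ./ C
end

norm(boosted_khat([1.0, 0.0, 0.0], 0.2))    # ≈ 1.0: stays a unit vector
```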
Finally, putting all the above together, and after some straightforward algebra, we can get the following relation between the amplitudes of the initially considered and non-rotated incident beam in \(S^{\parallel}\) and the rotated one in \(S^{\prime}\):
\[|\mathbf{E}_{\mathrm{i}}^{\prime}\rangle =\hat{\mathbf{LB}}_{z}(\beta)\hat{\mathbf{R}}_{y}(\Theta _{\mathrm{i}})\left|\mathbf{E}_{\mathrm{i}}^{\parallel}\right\rangle\] \[=\sum_{\lambda^{\prime}}\int_{0}^{2\pi}\mathrm{d}\phi^{\prime} \int_{0}^{\pi}\mathrm{d}\theta^{\prime}\int_{0^{+}}^{\infty}\mathrm{d}\omega^ {\prime}\] \[\quad\mathcal{G}^{\prime}_{\lambda^{\prime},\mathrm{i}}(\omega^{ \prime},\theta^{\prime},\phi^{\prime})\left|\lambda^{\prime}\;\hat{\mathbf{k}}^{\prime}\;\omega^{\prime}\right\rangle\] \[+\;\mathrm{c.c.}\quad, \tag{18}\]
where \(\omega^{\prime}\) and \(\hat{\mathbf{k}}^{\prime}\) are determined by Eqns. (15) and (17), respectively, and
\[\mathcal{G}^{\prime}_{\lambda^{\prime},\mathrm{i}}(\omega^{ \prime},\theta^{\prime},\phi^{\prime}) =\mathcal{J}(\theta^{\prime},\phi^{\prime},\Theta_{\mathrm{i}})\] \[\cdot\mathcal{G}^{\parallel}_{\lambda^{\prime},\mathrm{i}}\Bigg{\{} \frac{\omega^{\prime}}{\mathcal{C}\left[\beta,\theta\|(\theta^{\prime},\phi^{ \prime})\right]},\] \[\theta^{\parallel}(\theta^{\prime},\phi^{\prime}),\phi^{ \parallel}(\theta^{\prime},\phi^{\prime})\Bigg{\}}\quad, \tag{19}\]
where \(\theta^{\parallel}(\theta^{\prime},\phi^{\prime})\) and \(\phi^{\parallel}(\theta^{\prime},\phi^{\prime})\) express \(\theta^{\parallel}\) and \(\phi^{\parallel}\) as viewed from \(S^{\prime}\):
\[\theta^{\parallel}(\theta^{\prime},\phi^{\prime}) =\arccos\Bigg\{\frac{1}{\gamma(1+\beta\cos\theta^{\prime})}\cdot\Big[\sin\theta^{\prime}\cos\phi^{\prime}\sin\Theta_{\mathrm{i}}+\gamma(\cos\theta^{\prime}+\beta)\cos\Theta_{\mathrm{i}}\Big]\Bigg\}, \tag{20}\] \[\phi^{\parallel}(\theta^{\prime},\phi^{\prime}) =\mathrm{atan2}[\sin\theta^{\prime}\sin\phi^{\prime},\ \sin\theta^{\prime}\cos\phi^{\prime}\cos\Theta_{\mathrm{i}}-\gamma(\cos\theta^{\prime}+\beta)\sin\Theta_{\mathrm{i}}]\quad, \tag{21}\]
with the Jacobian
\[\mathcal{J}(\theta^{\prime},\phi^{\prime},\Theta_{\mathrm{i}})=\begin{vmatrix} \frac{\partial\phi^{\parallel}}{\partial\theta^{\prime}}&\frac{\partial\theta ^{\parallel}}{\partial\phi^{\prime}}\\ \frac{\partial\phi^{\parallel}}{\partial\theta^{\prime}}&\frac{\partial\phi^{ \parallel}}{\partial\phi^{\prime}}\end{vmatrix} \tag{22}\]
that converts \(\mathrm{d}\phi^{\parallel}\mathrm{d}\theta^{\parallel}\) to \(\mathrm{d}\phi^{\prime}\mathrm{d}\theta^{\prime}\).
Recall that we do this because, for simplicity, we wish to carry out the scattering calculation in \(S^{\prime}\), where it is equivalent to the stationary case. For a given incident field described by Eqn. (4), we can use Eqn. (19) to calculate the amplitudes needed to describe the incident field in \(S^{\prime}\) in terms of the plane-wave representation given by Eqn. (18).
### Solving the scattering problem in the sphere's reference frame
The approach taken to calculate the amplitude of the scattered field begins by expressing the incident field \(|\mathbf{E}_{\mathrm{i}}^{\prime}\rangle\) as a series of spherical waves with respect to the coordinates describing \(S^{\prime}\)[12]:
\[|\mathbf{E}_{\mathrm{i}}^{\prime}\rangle=\sum_{\lambda^{\prime}\ell^{\prime}m^ {\prime}}\int_{0^{+}}^{\infty}\mathrm{d}\omega^{\prime}\mathcal{A}^{\prime}{}_ {\lambda^{\prime}\ell^{\prime}m^{\prime}}(\omega^{\prime})\left|\omega^{\prime} \;\lambda^{\prime}\,\ell^{\prime}\;m^{\prime}\right\rangle^{(1)}+\mathrm{c.c.}\quad, \tag{23}\]
where \(\left|\omega^{\prime}\;\lambda^{\prime}\,\ell^{\prime}\,m^{\prime}\right\rangle^ {(1)}\) signifies a regular VSH attached to \(S^{\prime}\) with frequency \(\omega^{\prime}\), helicity \(\lambda^{\prime}\), multipolar index \(\ell^{\prime}\) (\(\ell^{\prime}=1\) corresponds to dipoles, \(\ell^{\prime}=2\) to quadrupoles, etc.), and angular momentum along the \(z\)-axis \(m^{\prime}=-\ell^{\prime},-(\ell^{\prime}-1),\;...\,,\ell^{\prime}\). The (1) superscript denotes that the VSH corresponds to a spherical Bessel function of the first kind \(j_{\ell^{\prime}}(k^{\prime}r^{\prime})\), and the coefficients of the expansion are given by
\[\mathcal{A}^{\prime}{}_{\lambda^{\prime}\ell^{\prime}m^{\prime}}( \omega^{\prime}) =\int_{0}^{2\pi}\mathrm{d}\phi^{\prime}\int_{0}^{\pi}\mathrm{d} \theta^{\prime}\,\mathcal{G}^{\prime}_{\lambda^{\prime},\mathrm{i}}(\omega^{ \prime},\theta^{\prime},\phi^{\prime})\] \[\cdot\mathcal{S}_{\lambda^{\prime}\ell^{\prime}m^{\prime}}( \omega^{\prime},\hat{\mathbf{k}}^{\prime})\quad, \tag{24}\]
where the transformation coefficients between the plane waves and the spherical waves (under which transformation the helicity and frequency of the waves remain unchanged) are given by:
\[\mathcal{S}_{\lambda^{\prime}\ell^{\prime}m^{\prime}}(\omega^{ \prime},\hat{\mathbf{k}}^{\prime}) =4\pi\,\mathrm{i}^{\ell^{\prime}+2m^{\prime}+1}\,\Omega_{\ell^{\prime}m^{\prime}}\cdot\tau^{(\lambda^{\prime})}_{\ell^{\prime}m^{\prime}}[\theta^ {\prime}(\hat{\mathbf{k}}^{\prime})]\mathrm{e}^{-\mathrm{i}m^{\prime}\phi^{\prime}(\hat{ \mathbf{k}}^{\prime})}\quad, \tag{25}\]
where \(\Omega_{\ell^{\prime}m^{\prime}}\) is a normalization constant and \(\tau^{(\lambda^{\prime})}_{\ell^{\prime}m^{\prime}}[\theta^{\prime}(\hat{ \mathbf{k}}^{\prime})]\) is a function which we define in App. B. The expression given in Eqn. (25) is derived by applying Eqn. (2) to Eqn. (12) in Lamprianidis and Miroshnichenko [35].
For the case of monochromatic excitation (in \(S\)), like the one we consider here, we have that
\[\mathcal{G}^{\parallel}_{\lambda^{\parallel},\mathrm{i}}(\omega^{\parallel}, \theta^{\parallel},\phi^{\parallel})=\mathcal{G}^{\parallel,0}_{ \lambda^{\parallel},\mathrm{i}}(\theta^{\parallel},\phi^{\parallel})\,\delta(\omega^{ \parallel}-\omega_{\mathrm{i}})\quad. \tag{26}\]
Using this expression, we get the following simplified expression for the incident spherical amplitudes in \(S^{\prime}\):
\[\mathcal{A^{\prime}}_{\,\,\lambda^{\prime}\ell^{\prime}m^{\prime}}( \omega^{\prime}) =-4\pi \mathrm{i}^{\ell^{\prime}+2m^{\prime}+1}\Omega_{\ell^{\prime}m^{ \prime}}\tau_{\ell^{\prime}m^{\prime}}^{(\lambda^{\prime})}(\theta_{0}^{\prime})\] \[\cdot\delta\left(\omega^{\prime}\in\left[\frac{\omega_{\rm i}}{ \gamma(1+\beta)},\frac{\omega_{\rm i}}{\gamma(1-\beta)}\right]\right)\] \[\cdot\frac{(1+\beta\cos\theta_{0}^{\prime})}{\beta\sin\theta_{0 }^{\prime}}\] \[\cdot\int_{0}^{2\pi}{\rm d}\phi^{\prime}\,{\rm e}^{-{\rm i}m^{ \prime}\phi^{\prime}}\] \[\cdot\mathcal{P}\left[\theta^{\parallel}(\theta_{0}^{\prime}, \phi^{\prime}),\phi^{\parallel}(\theta_{0}^{\prime},\phi^{\prime}),\Theta_{ \rm i}\right]\] \[\cdot\mathcal{J}(\theta_{0}^{\prime},\phi^{\prime},\Theta_{\rm i })\,\mathcal{G}_{\lambda^{\prime},\,{\rm i}}^{\parallel,0}[\theta^{\parallel}( \theta_{0}^{\prime},\phi^{\prime}),\phi^{\parallel}(\theta_{0}^{\prime},\phi^{\prime})]\quad, \tag{27}\]
where:
\[\theta_{0}^{\prime}=\arccos\left(\frac{\gamma\omega_{\rm i}-\beta^{2}\gamma \omega_{\rm i}-\omega^{\prime}}{\beta\omega^{\prime}}\right)\quad. \tag{28}\]
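For numerical evaluation of Eqn. (27), the inversion in Eqn. (28) can be coded directly; it is only meaningful for \(\omega^{\prime}\) inside the Doppler band appearing in Eqn. (27). A minimal sketch (the function name is ours):

```julia
# Polar angle θ0′ contributing to a given boosted frequency ω′, Eqn. (28);
# valid only for ω′ ∈ [ωi / (γ(1+β)), ωi / (γ(1-β))].
function θ0prime(ωp, ωi, β)
    γ = 1 / sqrt(1 - β^2)
    acos((γ * ωi - β^2 * γ * ωi - ωp) / (β * ωp))
end
```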
Next, in conjunction with Step 2 of the FHM, we need to express the scattered field \(|\mathbf{E}_{\rm s}^{\prime}\rangle\) in a series of radiating VSHs in \(S^{\prime}\), denoted as \(\left|\omega^{\prime}\,\lambda^{\prime}\,\ell^{\prime}\,m^{\prime}\right\rangle ^{(3)}\). Analogous to Eqn. (23), this can be written as
\[|\mathbf{E}_{\rm s}^{\prime}\rangle=\sum_{\lambda^{\prime}\ell^{\prime}m^{ \prime}}\int_{0^{+}}^{\infty}{\rm d}\omega^{\prime}\mathcal{B}_{\lambda^{ \prime}\ell^{\prime}m^{\prime}}^{\prime}(\omega^{\prime})\left|\omega^{\prime} \,\lambda^{\prime}\,\ell^{\prime}\,m^{\prime}\right\rangle^{(3)}+{\rm c.c.}\quad, \tag{29}\]
where the (3) superscript denotes that the VSHs correspond to a spherical Hankel function of the first kind \(h_{\ell^{\prime}}(k^{\prime}r^{\prime})\). Specific expressions of the radiating and regular VSHs are given in App. B. Moreover, the 's' subscript denotes quantities which correspond to the scattered field.
Finally, the scattering coefficients \(\mathcal{B^{\prime}}_{\,\,\lambda^{\prime}\ell^{\prime}m^{\prime}}(\omega^{ \prime})\) can be related to the incident coefficients \(\mathcal{A^{\prime}}_{\,\,\lambda^{\prime}\ell^{\prime}m^{\prime}}(\omega^{ \prime})\) by way of the T-matrix formalism [12]:
\[\mathbf{B}^{\prime}=\mathbf{T}^{\rm H}\mathbf{A}^{\prime}\quad, \tag{30}\]
where \(\mathbf{A}^{\prime}\) and \(\mathbf{B}^{\prime}\) are vectors containing the incident and scattering coefficients, respectively, and \(\mathbf{T}^{\rm H}\) is the corresponding T-matrix expressed in the helicity basis (see App. C). The T-matrix fully describes the scattering response of the individual scatterer in the stationary case, which we can safely use in the rest frame of the sphere. Let us note that the time-invariance of the stationary system implies a matrix that is diagonal with respect to frequency \(\omega^{\prime}\), whereas duality-symmetry implies a diagonal matrix with respect to helicity \(\lambda^{\prime}\), and the spherical symmetry of the scatterer implies a diagonal matrix with respect to the multipolar indices \(\ell^{\prime},m^{\prime}\).
Specifically, for a spherical scatterer, we can write
\[\mathcal{B}_{\lambda^{\prime}\ell^{\prime}m^{\prime}}(\omega^{\prime})=\sum_{ \lambda_{0}}\mathrm{T}_{\lambda^{\prime}\lambda_{0},\ell^{\prime}}(\omega^{ \prime})\mathcal{A}_{\lambda_{0}\ell^{\prime}m^{\prime}}(\omega^{\prime})\quad, \tag{31}\]
where \(\lambda_{0}=\pm 1\) is a dummy index representing helicity and the term \(\mathrm{T}_{\lambda^{\prime}\lambda_{0},\ell^{\prime}}\) is defined at the end of App. C. Moreover, in this work, we will make the assumption that the T-matrix of the scatterer is non-dispersive, _i.e._, invariant with respect to frequency. This assumption is logical as long as we are exciting with a monochromatic beam with a narrow angular spectrum, _i.e._, a large waist. One must consider this, since the plane-wave components of the beam all Doppler-shift differently depending on their polar angles of propagation. However, a small angular width in \(S\) minimizes this difference, thus allowing us to assume a non-dispersive T-matrix in \(S^{\prime}\). As we will see, this assumption significantly simplifies the final equations used for numerical computation.
Finally, we require an expression for the electric field in the far-field region of \(S^{\prime}\). For this, we need to use the following asymptotic expression for the radiating spherical waves:
\[\lim_{\omega^{\prime}r^{\prime}/c\rightarrow\infty}\left|\omega^ {\prime}\,\lambda^{\prime}\,\ell^{\prime}\,m^{\prime}\right\rangle^{(3)} \equiv\left(-{\rm i}\right)^{\ell^{\prime}}\,\mathbf{f}_{\lambda^{\prime}, \ell^{\prime}m^{\prime}}(\hat{\mathbf{r}}^{\prime})\] \[\cdot\frac{{\rm e}^{{\rm i}\omega^{\prime}(r^{\prime}/c-t^{ \prime})}}{\omega^{\prime}r^{\prime}/c}\quad, \tag{32}\]
which, from Eqn. (29), readily gives the following expression for the electric field in the far-field region of \(S^{\prime}\):
\[\mathbf{E}_{\rm s}^{{}^{\prime}\rm ff}(\mathbf{r}^{\prime},t^{ \prime}) =\sum_{\lambda^{\prime}\ell^{\prime}m^{\prime}}\int_{0^{+}}^{+ \infty}{\rm d}\omega^{\prime}\mathcal{B^{\prime}}_{\lambda^{\prime}\ell^{\prime} m^{\prime}}(\omega^{\prime})(-{\rm i})^{\ell^{\prime}}\] \[\cdot\mathbf{f}_{\lambda^{\prime},\ell^{\prime}m^{\prime}}(\hat{\mathbf{r}}^{\prime})\,\frac{{\rm e}^{{\rm i}\omega^{\prime}(r^{\prime}/c-t^{ \prime})}}{\omega^{\prime}r^{\prime}/c}+{\rm c.c.}\quad, \tag{33}\]
where \(\mathbf{f}_{\lambda^{\prime},\ell^{\prime}m^{\prime}}(\hat{\mathbf{r}}^{ \prime})\) is a vector function defined in App. B.
As shown in Garner _et al._[36], the angular density of the total radiation energy flux in a given direction in \(S^{\prime}\) specified by \(\theta^{\prime}\) and \(\phi^{\prime}\), which we denote as \(U^{\prime}(\theta^{\prime},\phi^{\prime})\), is calculated by integrating the squared amplitude of the far-field electric field \(\mathbf{E}_{\rm s}^{\prime\rm ff}(\mathbf{r}^{\prime},t^{\prime})\) over time. As a result, we have
\[U^{\prime}(\theta^{\prime},\phi^{\prime})=\lim_{r^{\prime}\rightarrow\infty} \int_{-\infty}^{\infty}(r^{\prime})^{2}\frac{|\mathbf{E}_{\rm s}^{\prime\rm ff}( \mathbf{r}^{\prime},t^{\prime})|^{2}}{\eta_{0}}{\rm d}t^{\prime}\quad, \tag{34}\]
where \(\eta_{0}\) is the impedance of free space. An expanded, numerically efficient form of Eqn. (34) is given in App. D.
At this point, the second step of the FHM is complete.
### Solution to the scattering problem in the lab frame
To investigate the back-scattering, we analyse the directivity \(D(\theta,\phi)\) of the sphere. This is defined as [37]
\[D(\theta,\phi)=\frac{U(\theta,\phi)}{W_{\rm tot}/4\pi}\quad, \tag{35}\]
where \(U(\theta,\phi)=\sum_{\lambda_{\rm s}}U_{\lambda_{\rm s}}(\theta,\phi)\) is the angular density of the total radiation energy in a given direction in \(S\) specified by \(\theta\) and \(\phi\), \(U_{\lambda_{\rm s}}\) is the component of \(U(\theta,\phi)\) corresponding to the scattered helicity \(\lambda_{\rm s}=\pm 1\), and \(W_{\rm tot}\) is the total scattered energy.
Considering the directivity of the sphere allows us to obtain a physically meaningful and intuitive formulation from which the behaviour of the back-scattering can be interpreted. Qualitatively speaking, the directivity is the ratio of the angular energy density \(U(\theta,\phi)\) to the average energy \(W_{\rm tot}/4\pi\) that would be scattered by an analogous isotropic scatterer. Consequently, a directivity \(>1\) in the backward direction means that the back-scattered energy outweighs the average scattered energy. Conversely, a directivity \(<1\) implies that the back-scattered energy is lower than the average energy scattered by the sphere.
We are now in a position to carry out the final step of the FHM, that is, transforming the directivity from \(S^{\prime}\) back to \(S\). The power of the FHM really becomes apparent here, since the angular energy \(U(\theta,\phi)\) in \(S\) can easily be related to quantities in \(S^{\prime}\). More specifically, we have
\[W_{\rm tot}=\int_{0}^{2\pi}\int_{0}^{\pi}U(\theta,\phi)\sin\theta{\rm d}\theta{ \rm d}\phi\quad, \tag{36}\]
where, as we see from Eqn. (21) in Garner _et al._[36],
\[U(\theta,\phi)=[\gamma\left(1+\beta{\rm cos}\,\theta^{\prime}\right)]^{3}\,U^ {\prime}(\theta^{\prime},\phi^{\prime})\quad, \tag{37}\]
where \(\theta^{\prime}\) and \(\phi^{\prime}\) can be transformed using Eqns. (14) and (16) to obtain an expression for \(U(\theta,\phi)\) in \(S\). For back-scattering, we have \(\theta=\Theta_{\rm BS}\) and \(\phi=\Phi_{\rm BS}\), where
\[\Theta_{\rm BS}=\pi-\Theta_{\rm i}\quad{\rm and}\quad\Phi_{\rm BS}=\pi\quad, \tag{38}\]
respectively.
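A compact numerical rendering of Eqns. (37) and (38) reads as follows (the helper names are ours); here \(\theta^{\prime}\) is assumed to be obtained from the lab-frame angle via Eqn. (14).

```julia
# Lab-frame angular energy density from the boosted-frame one, Eqn. (37),
# and the back-scattering direction, Eqn. (38).
γfac(β) = 1 / sqrt(1 - β^2)
U_lab(Uprime, θprime, β) = (γfac(β) * (1 + β * cos(θprime)))^3 * Uprime
Θ_BS(Θi) = π - Θi        # and Φ_BS = π
```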
The third step of the FHM is now complete, and the back-scattered directivity \(D_{\rm BS}\) of the sphere in \(S\) can be calculated by substituting Eqns. (36) and (37) into Eqn. (35) such that
\[D_{\rm BS}=D(\Theta_{\rm BS},\Phi_{\rm BS})\quad. \tag{39}\]
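For completeness, Eqns. (35)–(36) can be evaluated numerically for any angular energy density; the grid-based quadrature and function names below are our own choices, not the paper's implementation.

```julia
# Directivity D(θ, φ) = U(θ, φ) / (Wtot / 4π), with Wtot from a simple
# quadrature of Eqn. (36) on a regular (θ, φ) grid.
function directivity(U, θ, φ; nθ = 181, nφ = 361)
    θs = range(0, π; length = nθ)
    φs = range(0, 2π; length = nφ)
    dθ, dφ = step(θs), step(φs)
    Wtot = sum(U(t, p) * sin(t) * dθ * dφ for t in θs, p in φs)
    return U(θ, φ) / (Wtot / 4π)
end

# Sanity check: an isotropic scatterer has unit directivity everywhere.
println(directivity((θ, φ) -> 1.0, π/3, π))   # ≈ 1.0
```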
## III Relativistic Kerker condition
### Visualizing the directivity
The final theoretical result of our work has been formulated in Eqn. (39), which expresses the contribution of the back-scattered energy compared to the average scattered energy for a given scenario. While Eqn. (35) is more general, for the sake of this discussion, we choose to investigate a possible suppression of the back-scattering (that is, the first Kerker condition) [38; 39; 40].
As an example, we consider a lossless dielectric sphere and parametrize the multipolar response using what are known as Mie angles [41; 42] (cf. App. E). Each Mie angle is bounded between \(-\pi/2\) and \(\pi/2\), and we consider them to be non-dispersive. A value of zero corresponds to a resonance of the respective Mie coefficient.
Since the minimization of \(D_{\rm BS}\) does not admit a clear analytical solution, we implement numerical routines to identify properties of the sphere that minimize the back-scattering. More specifically, we seek the combination of Mie angles for which the back-scattering is minimal. We carry out the optimization for fixed \(\beta\) and \(\Theta_{\rm i}\); as an example, we consider \(\beta=0.2\) and \(\Theta_{\rm i}=\frac{\pi}{4}\).
Before doing this, it makes sense to visualize how \(D_{\rm BS}\) varies with respect to some chosen Mie angles. For this purpose, we sweep across the possible electric quadrupole (\(\theta_{\rm EQ}\)) and magnetic quadrupole (\(\theta_{\rm MQ}\)) angles while fixing the dipole angles \(\theta_{\rm ED}\) and \(\theta_{\rm MD}\).
We visualize the \(\lambda_{\rm s}=+1\) and \(\lambda_{\rm s}=-1\) components of \(\log_{10}D_{\rm BS}\) with \(\theta_{\rm ED}=\theta_{\rm MD}=\pi/3\) in Fig. 3 a) and Fig. 3 b), respectively, followed by the corresponding total directivity in Fig. 3 c). In all cases, the helicity of the incident field is \(\lambda_{\rm i}=1\). One observes in Fig. 3 b) that the diagonal representing \(\theta_{\rm EQ}=\theta_{\rm MQ}\), that is, when the sphere is a dual scatterer, displays values below \(-30\), implying that this contribution to the back-scattering vanishes at these points. This is expected since the incoming helicity is given by \(\lambda_{\rm i}=+1\) and the scattered helicity in the dual case must remain the same, so we are only left with non-zero values when \(\lambda_{\rm s}=+1\). For comparison, Fig. 3 d) shows the total directivity where \(\theta_{\rm ED}=\pi/9\) and \(\theta_{\rm MD}=-\pi/4\).
### Numerically minimizing the back-scattering
The fact that Fig. 3 b) provides a physically known result is a justification of the numerical implementation and allows us to proceed in minimizing the directivity. A further verification is the fact that the directivity is independent of the incident helicity \(\lambda_{\rm i}\). This is expected since the system is both rotationally and mirror-symmetric about the \(x\)-axis. From the mirror symmetry, \(\lambda_{\rm i}\) would flip sign [43]. However, when rotated to the same position, \(\lambda_{\rm i}\) would preserve its sign. These situations both describe the same physical scenario, that is, a system where the sphere moves in the opposite direction with \({\bf v}\rightarrow-v\hat{\bf z}\) and \(\Theta_{\rm i}\rightarrow\pi+\Theta_{\rm i}\). Therefore, the scattered energy remains unchanged.
To minimize the back-scattering with respect to the Mie angles, we implement the directivity calculation using the Julia programming language [29] and leverage the automatic differentiation capabilities included in the modelling toolkit JuMP [30] for gradient-based optimization. This enables us to efficiently take derivatives of \(D_{\rm BS}\) with respect to Mie angles up to arbitrary order. We then formulate our optimization problem as the minimization of \(D_{\rm BS}\) using the interior-point optimizer IPOPT [44].
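The following schematic Julia/JuMP sketch mirrors this setup with a toy stand-in objective, since the full directivity pipeline of Sections II and III does not fit in a few lines; the function `D_toy`, the six-angle truncation, and all names are illustrative assumptions rather than the authors' code.

```julia
using JuMP, Ipopt

# Toy stand-in for the directivity D_BS of Eqn. (39); the real objective
# would be assembled from Eqns. (27), (31) and App. D.
D_toy(θ...) = sum(abs2, sin.(collect(θ))) + 1e-4

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, -π/2 <= θ[1:6] <= π/2)             # six Mie angles
set_start_value.(θ, rand(6) .* π .- π/2)            # random initialization
register(model, :D_toy, 6, D_toy; autodiff = true)  # AD-backed objective
@NLobjective(model, Min, D_toy(θ[1], θ[2], θ[3], θ[4], θ[5], θ[6]))
optimize!(model)
println(value.(θ), "   objective ≈ ", objective_value(model))
```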
Using this method, we find higher order combinations of Mie angles which yield minima below our defined cut-off point of \(D_{\rm C}=10^{-3}\), that is, the value below which we consider the back-scattering to be negligible. We consider this value appropriate, since it corresponds to a back-scattered energy which contributes a mere \(0.1\,\%\) to the average scattered energy. For a single optimization run, we randomly initialize a set of Mie angles between
\((-\pi/2,\ \pi/2)\) and minimize \(D_{\rm BS}\) with respect to these angles. Owing to quasi-analytical gradients, the optimization quickly converges to high-quality minima one order of magnitude lower than \(D_{\rm C}\). Finding a single set of Mie angles up to octupolar order takes less than a second on average (measured over 100 optimization runs on an Intel Xeon Platinum 8368 CPU @ 2.4 GHz). A selection of possible combinations up to octupolar order is given in Table 1.
These minimized values of \(D_{\rm BS}\) are all smaller than \(D_{\rm C}=10^{-3}\) and thus satisfy our cut-off criterion, providing evidence of the existence of the first Kerker condition in the relativistic regime. The most pronounced minimum found (\(D_{\rm BS}=1.57\times 10^{-4}\)) describes a case where the back-scattered energy contributes a negligible \(0.016\,\%\) to the average scattered energy. It must be emphasized that there exist many combinations of Mie angles that fulfil this condition, making those in Table 1 just a few of many.
## IV Conclusion
The main goal of this paper was to demonstrate the utility of expressing incident and scattered fields in the helicity basis for the case of a sphere moving at a relativistic speed. In doing this, we were able to transform
\begin{table}
\begin{tabular}{r r r r r r r} \hline \hline \(\theta_{\rm ED}\) & \(\theta_{\rm MD}\) & \(\theta_{\rm EQ}\) & \(\theta_{\rm MQ}\) & \(\theta_{\rm EO}\) & \(\theta_{\rm MO}\) & \(D_{\rm BS}\) [\(\times 10^{-4}\)] \\ \hline \(-0.18\) & \(1.21\) & \(1.38\) & \(1.23\) & \(1.54\) & \(1.55\) & \(1.57\) \\ \(-1.39\) & \(-1.32\) & \(-1.12\) & \(-1.44\) & \(-1.51\) & \(-1.57\) & \(1.95\) \\ \(-1.32\) & \(-1.32\) & \(-1.21\) & \(-1.21\) & \(-1.53\) & \(-1.53\) & \(2.02\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of optimized Mie angles for minimum back-scattering up to octupolar order.
fewer variables than in the parity basis, since the helicity remains invariant under the Lorentz boost, thus greatly simplifying calculations. Moreover, we obtained an expression for the back-scattering amplitude of the scattered field observed from an external lab frame in the form of the directivity of the sphere. Finally, the directivity was minimized with respect to Mie angles, providing evidence for the existence of the first Kerker condition in the relativistic regime.
Opportunities for future work are plentiful. Since our implementation can be differentiated with respect to all continuous parameters of the considered system, it is, in principle, possible to optimize not only for the Mie angles but also for parameters beyond the scope of this work, such as the incident angle and the velocity. Furthermore, the current work could be extended to analyze a composition of particles describing a metasurface as opposed to just a single particle. In this case, a cluster T-matrix as described in Mishchenko _et al._[12] would need to be implemented. The motivation for considering a surface of particles is linked to the future light-sail applications proposed by the Breakthrough Starshot Initiative.
Of course, for this to be done, the current model would have to be refined to consider the opposite case of maximum back-scattering, resulting in the ideal case of maximum momentum transfer to the sail [45].
## Data availability
The code used to produce the results in this manuscript can be accessed via the following link: [https://github.com/tfp-photonics/Jorkle.jl](https://github.com/tfp-photonics/Jorkle.jl).
###### Acknowledgements.
M.R.W., A.G.L., and C.R. acknowledge support from the Max Planck School of Photonics, which is supported by BMBF, Max Planck Society, and the Fraunhofer Society. M.R.W. and A.G.L. also acknowledge support from the Karlsruhe School of Optics and Photonics (KSOP). Y.A. and C.R. acknowledge support by the German Research Foundation within the Excellence Cluster 3D Matter Made to Order (EXC 2082/1, project number 390761711) and by the Carl Zeiss Foundation. The optimizations in Section III were carried out on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research.
## Appendix A Lorentz boost of helical plane waves
As stated in Garner _et al._[33], the Lorentz boost of the electric field \(\mathbf{E}(\mathbf{r},t)\) is given by
\[\mathbf{E}^{\prime}(\mathbf{r}^{\prime},t^{\prime}) =\gamma\left[\mathbf{E}(\mathbf{r},t)+v\hat{\mathbf{v}}\times \mathbf{B}(\mathbf{r},t)\right]\] \[+\left(1-\gamma\right)\left[\hat{\mathbf{v}}\cdot\mathbf{E}( \mathbf{r},t)\right]\hat{\mathbf{v}}\quad, \tag{10}\]
where \(\mathbf{B}(\mathbf{r},t)\) is the corresponding magnetic field and \(\gamma=1/\sqrt{1-\beta^{2}}\) with \(\beta=v/c\) being the ratio of the boosting speed to the speed of light and boosting takes place along the direction \(\hat{\mathbf{v}}\). For boosting along \(+\hat{\mathbf{z}}\), the coordinates of the primed (boosted) and unprimed coordinate systems are related with the following formulas:
\[x^{\prime} =x \tag{11}\] \[y^{\prime} =y\] (12) \[z^{\prime} =\gamma(z-\beta ct)\] (13) \[t^{\prime} =\gamma(t-\beta z/c)\quad. \tag{14}\]
Here, we want to consider the boost of a monochromatic plane wave of well-defined helicity \(\lambda\). Specifically, its electric field is given by \(\mathbf{E}(\mathbf{r},t)=\hat{\mathbf{e}}_{\lambda}(\hat{\mathbf{k}})\exp\{\mathrm{i}\omega[(\hat{\mathbf{k}}\cdot\mathbf{r}/c)-t]\}\) (see main text). By making use of the above coordinate transformations we can get the following transformation of the exponent: \(\{\mathrm{i}\omega[(\hat{\mathbf{k}}\cdot\mathbf{r}/c)-t]\}=\{\mathrm{i}\omega^{\prime}[(\hat{\mathbf{k}}^{\prime}\cdot\mathbf{r}^{\prime}/c)-t^{\prime}]\}\), with \(\omega^{\prime},\hat{\mathbf{k}}^{\prime}\) being given by Eqs. (15,17) of the main text. Thus, we have transformed the scalar part of the fields, which gave us the transformed frequency and direction of propagation of the boosted plane wave, \(\omega^{\prime},\hat{\mathbf{k}}^{\prime}\), respectively.
Next, we need to transform the polarization vector. For this we need to take into account that our considered plane wave, being an eigenstate of the helicity operator \(\frac{\nabla\times}{k}\) with well-defined helicity \(\lambda\)[46], has the following property:
\[\nabla\times\mathbf{E}(\mathbf{r},t)=\lambda k\mathbf{E}(\mathbf{r},t)\quad, \tag{15}\]
and, therefore, we get the following for its corresponding magnetic field from Maxwell's equations:
\[\mathbf{B}(\mathbf{r},t)=\frac{\lambda}{\mathrm{i}c}\mathbf{E}(\mathbf{r},t)\quad. \tag{16}\]
Therefore, by substituting the right-hand side of Eqn. (16) into Eqn. (10), we find that
\[\mathbf{E}^{\prime}(\mathbf{r}^{\prime},t^{\prime}) =\gamma\left[\mathbf{E}(\mathbf{r},t)+\frac{\lambda v}{\mathrm{i} c}\hat{\mathbf{z}}\times\mathbf{E}(\mathbf{r},t)\right]\] \[+\left(1-\gamma\right)\left[\hat{\mathbf{z}}\cdot\mathbf{E}( \mathbf{r},t)\right]\hat{\mathbf{z}}\quad. \tag{17}\]
What remains then, is to project the boosted helical polarization vector \(\hat{\mathbf{e}}_{\lambda}(\hat{\mathbf{k}})\) onto the boosted polarization basis \(\hat{\mathbf{e}}_{\lambda^{\prime}}(\hat{\mathbf{k}}^{\prime})\). That is to say, we need to find the coefficients
\(\mathcal{E}_{\lambda\lambda^{\prime}}(\beta,\hat{\mathbf{k}})\) in the expansion below:
\[\gamma\left[\hat{\mathbf{e}}_{\lambda}(\hat{\mathbf{k}})+\frac{ \lambda v}{\mathrm{i}c}\hat{\mathbf{z}}\times\hat{\mathbf{e}}_{\lambda}(\hat{ \mathbf{k}})\right]\] \[+(1-\gamma)\left[\hat{\mathbf{z}}\cdot\hat{\mathbf{e}}_{\lambda} (\hat{\mathbf{k}})\right]\hat{\mathbf{z}}\] \[=\sum_{\lambda^{\prime}}\mathcal{E}_{\lambda\lambda^{\prime}}( \beta,\hat{\mathbf{k}})\hat{\mathbf{e}}_{\lambda^{\prime}}(\hat{\mathbf{k}}^{ \prime})\quad. \tag{100}\]
By making use of the following orthogonality relation [43]:
\[\hat{\mathbf{e}}_{\lambda^{\prime}}(\mathbf{k}^{\prime})\cdot\hat{\mathbf{e}} _{-\lambda^{\prime}_{0}}(\mathbf{k}^{\prime})=-\delta_{\lambda^{\prime}\lambda ^{\prime}_{0}}\quad, \tag{101}\]
we readily get after some algebra that \(\mathcal{E}_{\lambda\lambda^{\prime}}(\beta,\hat{\mathbf{k}})=\delta_{\lambda \lambda^{\prime}}\mathcal{C}_{\lambda}(\beta,\theta)\), with \(\theta\) being the polar angle of the propagation direction \(\hat{\mathbf{k}}\) and \(\mathcal{C}_{\lambda}(\beta,\theta)\) being given by:
\[\mathcal{C}_{\lambda}(\beta,\theta)=\gamma\left[1-\beta\cos\theta\right]\quad. \tag{102}\]
The same expression calculated using the parity basis is given by Eqn. (27) in De Cupis _et al._[34]. Note that the helicity \(\lambda\) of massless particles (and hence, electromagnetic fields) is invariant under Lorentz boosts [47]. Finally, summing up all the above, we get the Lorentz boost transformation given by Eq. (11) in the main text.
## Appendix B Vector spherical harmonics of well-defined helicity
We begin by using the following definition of the spherical harmonics:
\[\mathrm{Y}^{m}_{\ell}(\theta,\phi)\triangleq\Omega_{\ell m}P^{m}_{\ell}(\mathrm{ cos}\theta)\mathrm{e}^{\mathrm{i}m\phi}\quad, \tag{103}\]
where \(P^{m}_{\ell}(\mathrm{cos}\theta)\) is the associated Legendre function of the 1st kind, with \(\Omega_{\ell m}\triangleq\mathrm{i}^{m}\sqrt{\frac{(2\ell+1)(\ell-m)!}{4\pi( \ell+1)(\ell+m)!}}\) being the corresponding normalization factor.
Then, the VSHs of well-defined parity, \(\mathbf{M}^{(j)}_{\ell mk}\) and \(\mathbf{N}^{(j)}_{\ell mk}\), are defined as follows [48]:
\[\mathbf{M}^{(j)}_{\ell mk}\left(\mathbf{r}\right) \triangleq\nabla\times\left[\mathbf{r}z^{(j)}_{\mathrm{M},\ell}( kr)\mathrm{Y}^{m}_{\ell}(\theta,\phi)\right]=\mathrm{i}z^{(j)}_{\mathrm{M},\ell}(kr)\mathbf{m}_{\ell m}(\hat {\mathbf{r}}), \tag{104}\] \[\mathbf{N}^{(j)}_{\ell mk}\left(\mathbf{r}\right) \triangleq\frac{1}{k}\nabla\times\mathbf{M}^{(j)}_{\ell mk}\left( \mathbf{r}\right)=\hat{\mathbf{r}}\frac{\ell(\ell+1)}{kr}z^ {(j)}_{\mathrm{M},\ell}(kr)\mathrm{Y}^{m}_{\ell}(\theta,\phi)+z^{(j)}_{N,\ell}(kr)\mathbf{n}_{\ell m}(\hat{\mathbf{r}})\quad, \tag{105}\]
where
\[\mathbf{m}_{\ell m}(\hat{\mathbf{r}}) =\Omega_{\ell m}\left[\hat{\theta}\tau_{\ell m}(\theta)+\mathrm{ i}\hat{\phi}\tau^{\prime}_{\ell m}(\theta)\right]\mathrm{e}^{\mathrm{i}m\phi}, \tag{106}\] \[\mathbf{n}_{\ell m}(\hat{\mathbf{r}}) =\Omega_{\ell m}\left[\hat{\theta}\tau^{\prime}_{\ell m}(\theta) +\mathrm{i}\hat{\phi}\tau_{\ell m}(\theta)\right]\mathrm{e}^{\mathrm{i}m\phi}\quad. \tag{107}\]
The index \(\ell\) stands for the angular momentum quantum number that takes the values 1,2,... and corresponds to dipoles, quadrupoles, etc., and the index \(m\) stands for the angular momentum along the \(z\)-axis, which takes the values \(-\ell,...,-2,-1,0,1,2,...,\ell\). The superscript \(j\) refers to the corresponding Bessel (\(j=1\)) and Hankel (\(j=3\)) functions, \(z^{(j)}_{\mathrm{M},\ell}(kr)\), of the first kind. The functions \(z^{(j)}_{\mathrm{N},\ell}(kr)\triangleq\frac{1}{kr}\frac{\mathrm{d}}{\mathrm{ d}(kr)}[krz^{(j)}_{\mathrm{M},\ell}(kr)]\) are the corresponding Riccati functions, and \(\tau_{\ell m}(\theta)\triangleq m\frac{P^{m}_{\ell}(\mathrm{cos}\theta)}{ \mathrm{sin}\theta}\) and \(\tau^{\prime}_{\ell m}(\theta)\triangleq\frac{\mathrm{d}}{\mathrm{d}\theta}P^{m}_{\ell}(\mathrm{cos}\theta)\) are the generalized Legendre functions.
The VSHs \(\mathbf{\Lambda}^{(j)}_{\lambda,\ell mk}(\mathbf{r})\) of well-defined helicity \(\lambda=\pm 1\) are defined with respect to the VSHs of well-defined parity according to the formula:
\[\mathbf{\Lambda}^{(j)}_{\lambda,\ell mk}(\mathbf{r}) =\frac{\mathbf{M}^{(j)}_{\ell mk}(\mathbf{r})+\lambda\mathbf{N}^ {(j)}_{\ell mk}(\mathbf{r})}{\sqrt{2}} \tag{108}\] \[=\frac{\lambda}{\sqrt{2}}\frac{\ell(\ell+1)}{kr}z^{(j)}_{\mathrm{ M},\ell}(kr)\mathrm{Y}^{m}_{\ell}(\theta,\phi)\;\hat{\mathbf{r}}\] \[+\sum_{\lambda^{\prime}=\pm 1}\left[\frac{\mathrm{i}z^{(j)}_{\mathrm{M}, \ell}(kr)+\lambda\lambda^{\prime}z^{(j)}_{\mathrm{N},\ell}(kr)}{2}\right.\] \[\cdot\left.\;\;\mathbf{f}_{\lambda^{\prime},\ell m}(\hat{\mathbf{r} })\right]\quad,\]
where we have defined:
\[\mathbf{f}_{\lambda,\ell m}(\hat{\mathbf{r}}) =\frac{\mathbf{m}_{\ell m}(\hat{\mathbf{r}})+\lambda\mathbf{n}_{ \ell m}(\hat{\mathbf{r}})}{\sqrt{2}}\] \[=\Omega_{\ell m}\tau^{(\lambda)}_{\ell m}(\theta)\mathrm{e}^{ \mathrm{i}m\phi}\;\hat{\mathbf{e}}_{\lambda}(\hat{\mathbf{r}}) \tag{109}\]
and
\[\tau^{(\lambda)}_{\ell m}(\theta)=-\tau^{\prime}_{\ell m}(\theta)-\lambda\tau_{ \ell m}(\theta)\quad, \tag{110}\]
which has the property \(\Omega_{\ell,-m}\tau^{(\lambda)}_{\ell,-m}(\theta)=\Omega_{\ell m}\tau^{(- \lambda)}_{\ell m}(\theta)=(-1)^{\ell+m+1}\Omega_{\ell m}\tau^{(\lambda)}_{\ell m }(\pi-\theta)\).
One can show that the functions \(\mathbf{\Lambda}^{(j)}_{\lambda,\ell mk}\) have the property [46]:
\[\frac{\nabla\times}{k}\mathbf{\Lambda}^{(j)}_{\lambda,\ell mk}=\lambda\mathbf{ \Lambda}^{(j)}_{\lambda,\ell mk}\quad, \tag{111}\]
that is, \(\mathbf{\Lambda}^{(j)}_{\lambda,\ell mk}\) is an eigenstate of the helicity operator \(\frac{\nabla\times}{k}\) with eigenvalue \(\lambda\). For the functions \(\mathbf{f}_{\lambda,\ell m}\), there exists the orthogonality property:
\[\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\pi} \mathrm{sin}\theta\mathrm{d}\theta\ \mathbf{f}_{\lambda,\ell m}(\hat{\mathbf{k}})\cdot\left[\mathbf{f}_ {\lambda^{\prime},\ell^{\prime}m^{\prime}}(\hat{\mathbf{k}})\right]^{*}=\delta_{\lambda\lambda^{\prime}}\delta_{\ell\ell^{\prime}} \delta_{mm^{\prime}}\quad. \tag{112}\]
Moreover, if we employ the large argument property of the Hankel functions:
\[z^{(3)}_{\alpha,\ell}(x)\xrightarrow{x\gg 1}\begin{cases}\dfrac{\mathrm{e}^{\mathrm{i}x}}{x}(-\mathrm{i})^{\ell+1}&\text{for}\quad\alpha=\mathrm{M}\\[6pt]\dfrac{\mathrm{e}^{\mathrm{i}x}}{x}(-\mathrm{i})^{\ell}&\text{for}\quad\alpha=\mathrm{N}\end{cases} \tag{113}\]
and also neglect the \(\mathcal{O}(1/(kr)^{2})\) radial term, for the radiating helical VSHs we can get the following asymptotic form in the far field:
\[\left[\mathbf{\Lambda}_{\lambda,\ell mk}^{(3)}\left(\mathbf{r}\right)\right]^{ \mathrm{ff}}=\left(-\mathrm{i}\right)^{\ell}\mathbf{f}_{\lambda,\ell m}(\hat{ \mathbf{r}})\ \frac{\mathrm{e}^{\mathrm{i}kr}}{kr}\quad. \tag{114}\]
## Appendix C T-matrix in the helicity basis for a sphere
In the parity basis, the T-matrix is given by
\[\mathbf{T}=\begin{pmatrix}\mathbf{T}_{\mathrm{NN}}&\mathbf{T}_{\mathrm{MN}}\\ \mathbf{T}_{\mathrm{NM}}&\mathbf{T}_{\mathrm{MM}}\end{pmatrix}\quad, \tag{14}\]
where each element of Eqn. (14) is a diagonal \(\ell_{\mathrm{max}}(2+\ell_{\mathrm{max}})\times\ell_{\mathrm{max}}(2+\ell_{ \mathrm{max}})\) matrix and \(\ell_{\mathrm{max}}\) is the maximum multipolar excitation order of the sphere.
This can be transformed to the T-matrix \(\mathbf{T}^{\mathrm{H}}\) in the helicity basis by using [49]
\[\mathbf{T}^{\mathrm{H}}=\mathbf{P}^{-1}\mathbf{T}\mathbf{P}\quad, \tag{15}\]
where, as can be seen from Eqn. (2), \(\mathbf{P}\) is given by
\[\mathbf{P}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\quad. \tag{16}\]
In the case of a dielectric sphere, we have that \(\mathbf{T}_{\mathrm{MN}}=\mathbf{T}_{\mathrm{NM}}=0\), so Eqn. (15) reduces to
\[\mathbf{T}^{\mathrm{H}} =\begin{pmatrix}\mathbf{T}_{++}&\mathbf{T}_{+-}\\ \mathbf{T}_{-+}&\mathbf{T}_{--}\end{pmatrix}\] \[=\frac{1}{2}\begin{bmatrix}(\mathbf{T}_{\mathrm{NN}}+\mathbf{T}_ {\mathrm{MM}})&(\mathbf{T}_{\mathrm{NN}}-\mathbf{T}_{\mathrm{MM}})\\ (\mathbf{T}_{\mathrm{NN}}-\mathbf{T}_{\mathrm{MM}})&(\mathbf{T}_{\mathrm{NN}}+ \mathbf{T}_{\mathrm{MM}})\end{bmatrix}\quad, \tag{17}\]
where
\[\mathbf{T}_{\mathrm{NN}}=\begin{pmatrix}a_{1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&a_{\ell_{\mathrm{max}}}\end{pmatrix}\quad, \tag{18}\]
\[\mathbf{T}_{\mathrm{MM}}=\begin{pmatrix}b_{1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&b_{\ell_{\mathrm{max}}}\end{pmatrix}\quad, \tag{19}\]
and the values \(a_{\ell}\) and \(b_{\ell}\) are respectively the electric and magnetic Mie coefficients defined in App. E.
The components of the T-matrix \(\mathbf{T}^{\mathrm{H}}\) are written \(\mathrm{T}_{\lambda_{\mathrm{s}}\lambda_{\mathrm{i}},\ell}\), and relate to the entries in \(\mathbf{T}^{\mathrm{H}}\) corresponding to the \(\ell\)'th multipolar order, the incident helicity \(\lambda_{\mathrm{i}}\), and the scattered helicity \(\lambda_{\mathrm{s}}\). That is,

\[\mathrm{T}_{\lambda_{\mathrm{s}}\lambda_{\mathrm{i}},\ell}=\frac{a_{\ell}+\lambda_{\mathrm{i}}\lambda_{\mathrm{s}}b_{\ell}}{2}. \tag{20}\]
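The block transformation of Eqs. (14)-(17) is a one-liner in practice. The sketch below builds a toy parity-basis T-matrix from the lossless Mie-angle parametrization of App. E (the angle values themselves are hypothetical placeholders) and checks the closed form of Eq. (20); for brevity, only one entry per \(\ell\) is kept and the \(m\)-degenerate copies of each diagonal block are suppressed.

```python
import numpy as np

# Hypothetical lossless Mie angles (rad) for ell = 1..3; see App. E.
th_E = np.array([0.40, 0.15, 0.03])
th_M = np.array([0.25, 0.08, 0.01])
alpha, beta = np.pi / 2 - th_E, np.pi / 2 - th_M
a = -1j * np.sin(alpha) * np.exp(-1j * alpha)  # Eq. (49)
b = -1j * np.sin(beta) * np.exp(-1j * beta)    # Eq. (50)

# Parity-basis T-matrix, Eq. (14), with T_MN = T_NM = 0 for a dielectric
# sphere; one entry per ell (the m-degenerate copies are omitted).
Z = np.zeros((3, 3))
T = np.block([[np.diag(a), Z], [Z, np.diag(b)]])

# Basis change of Eqs. (15)-(16), with P acting on the (N, M) block index.
P = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0), np.eye(3))
T_H = np.linalg.inv(P) @ T @ P

# Consistency with Eq. (20): T_{ls li, l} = (a_l + li*ls*b_l)/2.
assert np.allclose(T_H[:3, :3], np.diag((a + b) / 2))  # (+,+) and (-,-)
assert np.allclose(T_H[:3, 3:], np.diag((a - b) / 2))  # (+,-) and (-,+)
print(np.round(T_H, 3))
```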
## Appendix D Computation-friendly expansion of Eqn. (34)
By substituting Eqn. (33) into Eqn. (34) and using the orthogonality relation given by Eqn. (12), we can express the angular energy density \(U^{\prime}(\theta^{\prime},\phi^{\prime})\) in the following form, which is suitable for efficient numerical evaluation:
\[U^{\prime}(\theta^{\prime},\phi^{\prime})=\sum_{\lambda^{\prime}}\frac{4\pi c^{2}}{\eta_{0}}\int_{0^{+}}^{\infty}\mathrm{d}\omega^{\prime}\frac{1}{(\omega^{\prime})^{2}}\left|\sum_{\ell^{\prime}m^{\prime}}\mathcal{B}_{\lambda^{\prime}\ell^{\prime}m^{\prime}}(\omega^{\prime})(-\mathrm{i})^{\ell^{\prime}}\Omega_{\ell^{\prime}m^{\prime}}\tau_{\ell^{\prime}m^{\prime}}^{(\lambda^{\prime})}(\theta^{\prime})\mathrm{e}^{\mathrm{i}m^{\prime}\phi^{\prime}}\right|^{2}=\sum_{\lambda^{\prime}}\sum_{\ell^{\prime}m^{\prime}}\sum_{\tilde{\ell}^{\prime}\tilde{m}^{\prime}}Q_{\lambda^{\prime}\ell^{\prime}m^{\prime}}^{\tilde{\ell}^{\prime}\tilde{m}^{\prime}}(\theta^{\prime},\phi^{\prime})\sum_{\lambda_{0}\tilde{\lambda}_{0}}J_{\lambda_{0}\ell^{\prime}m^{\prime}}^{\tilde{\lambda}_{0}\tilde{\ell}^{\prime}\tilde{m}^{\prime}}\,\mathrm{T}_{\lambda^{\prime}\lambda_{0},\ell^{\prime}}\mathrm{T}_{\lambda^{\prime}\tilde{\lambda}_{0},\tilde{\ell}^{\prime}}^{*}\quad, \tag{21}\]
where the T-matrix elements \(\mathrm{T}_{\lambda^{\prime}\lambda_{0},\ell^{\prime}}\) are defined in App. C. For the latter equation, we have assumed a non-dispersive T-matrix and have also defined the integral:
\[J_{\lambda_{0}\ell^{\prime}m^{\prime}}^{\tilde{\lambda}_{0}\tilde{\ell}^{\prime}\tilde{m}^{\prime}}=\int_{0^{+}}^{\infty}\mathrm{d}\omega^{\prime}\frac{\mathcal{A}_{\lambda_{0}\ell^{\prime}m^{\prime}}(\omega^{\prime})\mathcal{A}_{\tilde{\lambda}_{0}\tilde{\ell}^{\prime}\tilde{m}^{\prime}}^{*}(\omega^{\prime})}{(\omega^{\prime})^{2}}\quad, \tag{22}\]
and the quantity:
\[Q_{\lambda^{\prime}\ell^{\prime}m^{\prime}}^{\tilde{\ell}^{\prime}\tilde{m}^{\prime}}(\theta^{\prime},\phi^{\prime})=\frac{4\pi c^{2}}{\eta_{0}}(-\mathrm{i})^{\ell^{\prime}-\tilde{\ell}^{\prime}}\Omega_{\ell^{\prime}m^{\prime}}\Omega_{\tilde{\ell}^{\prime}\tilde{m}^{\prime}}^{*}\,\tau_{\ell^{\prime}m^{\prime}}^{(\lambda^{\prime})}(\theta^{\prime})\tau_{\tilde{\ell}^{\prime}\tilde{m}^{\prime}}^{(\lambda^{\prime})*}(\theta^{\prime})\mathrm{e}^{\mathrm{i}(m^{\prime}-\tilde{m}^{\prime})\phi^{\prime}}\quad, \tag{23}\]
where \({}^{*}\) denotes the complex conjugate.
Furthermore, we can express the total scattered energy \(W_{\mathrm{tot}}\), given by Eqn. (36), as follows:
\[W_{\mathrm{tot}}=\int_{0}^{\pi}\mathrm{d}\theta\int_{0}^{2\pi}\mathrm{d}\phi\,\sin\theta\,U(\theta,\phi)=\sum_{\lambda^{\prime}}\int_{0^{+}}^{\infty}\mathrm{d}\omega^{\prime}\frac{1}{(\omega^{\prime})^{2}}\sum_{\ell^{\prime}m^{\prime};\,\max\{|m^{\prime}|,1\}\leq\ell\leq\ell^{\prime}}\mathrm{Re}\left\{\mathcal{B}_{\omega^{\prime}\lambda^{\prime}\ell^{\prime}m^{\prime}}\mathcal{B}_{\omega^{\prime}\lambda^{\prime}\ell m^{\prime}}^{*}\,I_{\lambda^{\prime}\ell^{\prime}m^{\prime}}^{\ell}\right\}=\sum_{\lambda^{\prime}}\sum_{\ell^{\prime}m^{\prime};\,\max\{|m^{\prime}|,1\}\leq\ell\leq\ell^{\prime}}\sum_{\lambda_{0}\tilde{\lambda}_{0}}\mathrm{Re}\left\{J_{\lambda_{0}\ell^{\prime}m^{\prime}}^{\tilde{\lambda}_{0}\ell m^{\prime}}\,I_{\lambda^{\prime}\ell^{\prime}m^{\prime}}^{\ell}\,\mathrm{T}_{\lambda^{\prime}\lambda_{0},\ell^{\prime}}\mathrm{T}_{\lambda^{\prime}\tilde{\lambda}_{0},\ell}^{*}\right\} \tag{24}\]
with
\[I_{\lambda^{\prime}\ell^{\prime}m^{\prime}}^{\ell}=2^{4-\delta_{\ell\ell^{\prime}}}\frac{\pi^{2}c^{2}}{\eta_{0}}(-\mathrm{i})^{\ell^{\prime}-\ell}\Omega_{\ell^{\prime}m^{\prime}}^{*}\Omega_{\ell m^{\prime}}^{*}\int_{0}^{\pi}\mathrm{d}\theta\,\sin\theta\left\{\gamma\left[1+\beta\cos\theta^{\prime}(\beta,\theta)\right]\right\}^{3}\tau_{\ell^{\prime}m^{\prime}}^{(\lambda^{\prime})}\left[\theta^{\prime}(\beta,\theta)\right]\tau_{\ell m^{\prime}}^{(\lambda^{\prime})*}\left[\theta^{\prime}(\beta,\theta)\right]\quad, \tag{25}\]
where the expression for \(\theta^{\prime}(\beta,\theta)\) is given by Eqn. (14).
Note the importance of writing Eqn. (46) as nested integrals instead of a standard triple integral. Computationally speaking, we are able to evaluate Eqn. (47) with a very tight (low) tolerance while using a looser tolerance for the other integrals, thus significantly reducing computation time. The reason is that the expansion coefficients \(\mathcal{A}_{\lambda^{\prime}\ell m}(\omega^{\prime})\) (and hence the integrand in Eqn. (47)) form Gaussian-like peaks centered about \(\omega^{\prime}\), which tend to a delta distribution as the speed of the sphere decreases (see Fig. 4). If the tolerance is too loose, the numerical integration could miss this peak entirely, thus ignoring vital nonzero values.
Moreover, by separating the integrals in Eqn. (48) we are able to obtain \(100\times 100\) grids for the directivity (like those used to generate Fig. 3) in as little as \(\sim\)40 s. The reason is that, since the numerically demanding integral \(J_{\lambda_{0}\ell^{\prime}m^{\prime}}^{\tilde{\lambda}_{0}\tilde{\ell}^{\prime}\tilde{m}^{\prime}}\) is independent of the Mie angles, we only need to compute it once for a given incident angle \(\Theta_{\mathrm{i}}\) and speed parameter \(\beta\). If the integrals were combined, \(J_{\lambda_{0}\ell^{\prime}m^{\prime}}^{\tilde{\lambda}_{0}\tilde{\ell}^{\prime}\tilde{m}^{\prime}}\) would be recomputed for each combination of Mie angles, thus significantly increasing computation time.
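The tolerance-splitting strategy described above can be mimicked with standard adaptive quadrature. The toy sketch below (all functions, limits, and widths are stand-ins, not the paper's actual integrands) evaluates a sharply peaked frequency integral once with a tight tolerance, caches it, and reuses it inside a looser angular integral.

```python
import numpy as np
from functools import lru_cache
from scipy.integrate import quad

SIGMA = 1e-3  # toy peak width, mimicking the near-delta A-coefficients at low speed

def peaked(w, w0=1.0):
    """Toy stand-in for |A(w)|^2/w^2: a sharp Gaussian peak at w0."""
    return np.exp(-0.5 * ((w - w0) / SIGMA) ** 2) / w**2

@lru_cache(maxsize=None)
def J_inner():
    """Frequency integral, done once with a tight tolerance so the narrow
    peak is not missed; points=[1.0] hints the adaptive rule at the peak."""
    val, _ = quad(peaked, 0.5, 1.5, epsabs=1e-12, epsrel=1e-12, points=[1.0])
    return val

def U(theta):
    """Toy angular energy density: angle-dependent prefactor times the cached J."""
    return (1 + np.cos(theta) ** 2) * J_inner()

# Outer angular integral with a looser tolerance; J_inner is reused, not recomputed.
W, _ = quad(lambda t: np.sin(t) * U(t), 0.0, np.pi, epsrel=1e-6)
print(W)
```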
## Appendix E Mie angles for a lossless sphere
The electric and magnetic Mie coefficients \(a_{\ell}\) and \(b_{\ell}\), respectively, for each multipolar order \(\ell\), can be represented using the Mie angles \(\theta_{\mathrm{E}\ell}\) and \(\theta_{\mathrm{M}\ell}\). In the lossless case, one can write [28; 42]
\[a_{\ell}=-\mathrm{i}\sin\alpha_{\ell}\exp(-\mathrm{i}\alpha_{\ell}) \tag{49}\]
and
\[b_{\ell}=-\mathrm{i}\sin\beta_{\ell}\exp(-\mathrm{i}\beta_{\ell})\quad, \tag{50}\]
where
\[\alpha_{\ell}=\frac{\pi}{2}-\theta_{\mathrm{E}\ell},\quad-\frac{\pi}{2}\leq \theta_{\mathrm{E}\ell}\leq\frac{\pi}{2}\quad, \tag{51}\]
and
\[\beta_{\ell}=\frac{\pi}{2}-\theta_{\mathrm{M}\ell},\quad-\frac{\pi}{2}\leq \theta_{\mathrm{M}\ell}\leq\frac{\pi}{2}\quad. \tag{52}\]
Note that the convention used in Rahimzadegan _et al._[28] omits the use of the minus sign in Eqns. (49) and (50). In our case, the minus sign is required for energy conservation.
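The role of the minus sign is easy to see numerically: with the convention of Eqs. (49)-(52), each lossless Mie coefficient lies on the circle \(|a_{\ell}+1/2|=1/2\), and the corresponding single-multipole scattering amplitude is unimodular. The check below assumes the standard identification \(S_{\ell}=1+2a_{\ell}\) for this sign convention, which is our reading rather than a statement from the text.

```python
import numpy as np

theta_E = np.linspace(-np.pi / 2, np.pi / 2, 7)  # allowed range, Eq. (51)
alpha = np.pi / 2 - theta_E
a = -1j * np.sin(alpha) * np.exp(-1j * alpha)    # Eq. (49)

# Lossless coefficients sit on the circle |a + 1/2| = 1/2 ...
assert np.allclose(np.abs(a + 0.5), 0.5)
# ... equivalently, the single-multipole S-matrix element is unimodular.
assert np.allclose(np.abs(1.0 + 2.0 * a), 1.0)
print("energy conservation holds for all sampled Mie angles")
```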
|
2307.04557 | Exploring nonstandard quark interactions through solar neutrino studies | We investigate the effects of a Non-Standard Interaction (NSI) extension of
the standard model of particle physics on solar neutrino flavour oscillations.
This NSI model introduces a $U_{Z^\prime}(1)$ gauge symmetry through a
$Z^\prime$ boson that mixes with the photon, creating a neutral current between
active neutrinos and matter fields via a unique coupling to up and down quarks.
The interaction is defined by a single parameter, $\zeta_o$, which is related
to the $Z^\prime$ boson's mass $m_{Z^\prime}$ and coupling constant
$g_{Z^\prime}$. Notably, this model relaxes the bounds on Coherent Elastic
Neutrino-Nucleus Scattering experiments and fits the experimental values of the
anomalous magnetic dipole moment of the muon. In this study, we use solar
neutrino measurements and an up-to-date standard solar model to evaluate the
neutrino flavour oscillations and assess the constraints on $\zeta_o$. Our
study indicates that the NSI model aligns with the current solar neutrino data
when $\zeta_o$ is between $-0.7$ and $0.002$. These models have $\chi^2_{\nu}$
values equal to or better than the standard neutrino flavor oscillation model,
which stands at a $\chi^2_{\nu}$ of 3.12. The best NSI model comes with a
$\zeta_o$ value of -0.2 and a $\chi^2_{\nu}$ of 2.96. Including extra data from
the Darwin experiment in our analysis refines the range of $\zeta_o$ values
from $-0.7$ to $0.002$, down to $-0.5$ to $-0.002$. These results hint at the
possible existence of novel interactions, given that NSI models achieve a
comparable or superior fit to the solar neutrino data when contrasted with the
prevailing standard model of neutrino flavour oscillation. | Ilídio Lopes | 2023-07-10T13:46:48Z | http://arxiv.org/abs/2307.04557v2 | # Exploring nonstandard quark interactions through solar neutrino studies
###### Abstract
We investigate the effects of a nonstandard interaction (NSI) extension of the standard model of particle physics on solar neutrino flavor oscillations. This NSI model introduces a \(U_{Z^{\prime}}(1)\) gauge symmetry through a \(Z^{\prime}\) boson that mixes with the photon, creating a neutral current between active neutrinos and matter fields via a unique coupling to up and down quarks. The interaction is defined by a single parameter, \(\zeta_{o}\), which is related to the \(Z^{\prime}\) boson's mass \(m_{Z^{\prime}}\) and coupling constant \(g_{Z^{\prime}}\). Notably, this model relaxes the bounds on coherent elastic neutrino-nucleus scattering experiments and fits the experimental values of the anomalous magnetic dipole moment of the muon. In this study, we use solar neutrino measurements and an up-to-date standard solar model to evaluate the neutrino flavor oscillations and assess the constraints on \(\zeta_{o}\). Our study indicates that the NSI model aligns with the current solar neutrino data when \(\zeta_{o}\) is between \(-0.7\) and \(0.002\). These models have \(\chi_{\nu}^{2}\) values equal to or better than the standard neutrino flavor oscillation model, which stands at a \(\chi_{\nu}^{2}\) of 3.12. The best NSI model comes with a \(\zeta_{o}\) value of -0.2 and a \(\chi_{\nu}^{2}\) of 2.96. Including extra data from the Darwin experiment in our analysis refines the range of \(\zeta_{o}\) values from \(-0.7\) to \(0.002\), down to \(-0.5\) to \(-0.002\). These results hint at the possible existence of novel interactions, given that NSI models achieve a comparable or superior fit to the solar neutrino data when contrasted with the prevailing standard model of neutrino flavor oscillation.
## I Introduction
Neutrinos are widely regarded as one of the most valuable probes for studying the Standard Model (SM) of elementary particles and fundamental interactions, thanks to their unexpected behavior when compared to other elementary particles (e.g., [1; 2]). This insight has been derived from extensive experimental datasets from detectors around the world. Our knowledge of neutrinos spans many different physical contexts and energy scales, from detecting astrophysical neutrinos with energies ranging from MeV to PeV, to producing them in nuclear reactors and accelerators with energies above MeV and GeV, respectively (e.g., [3; 4]).
Astrophysical neutrinos have been historically at the heart of some of the most compelling challenges to modern physics and astrophysics. This sphere of exploration includes ground-breaking discoveries, such as the detection of solar neutrinos, as evidenced by Davis et al. [5], and the identification of neutrino production in remarkable events like Supernova 1987A, as reported by Hirata et al. [6] and Bionta et al. [7]. Additionally, the recent discovery of high-energy neutrinos sourced from distant celestial entities, as chronicled by the IceCube Collaboration et al. [8], highlights the substantial advancements unfolding within the specialized field of neutrino astronomy. A historical perspective on the critical role of astrophysical neutrinos within modern physics can be gleaned from comprehensive reviews, such as those by Zuber [9], Gerbino and Lattanzi [10], Fuller and Haxton [11], Nakahata [12]. These phenomena have collectively designated neutrinos as the ultimate messengers of novel physics extending beyond the Standard Model's boundaries.
Despite the SM providing the framework for how neutrinos interact with leptons and quarks through weak interactions, many fundamental questions remain unanswered, such as the mechanism for neutrino mass generation or whether neutrinos are Dirac or Majorana particles. For a more detailed account, please refer to the comprehensive reviews by Mohapatra et al. [13] and Athar et al. [14]. These questions provide solid motivation for thoroughly testing the standard picture of the three-neutrino flavor oscillation (e.g., [15; 16]). Specifically, neutrino oscillations over the years have presented compelling evidence for novel physics surpassing the boundaries of the Standard Model, as evidenced by Fukuda et al. [17] and Ahmad et al. [18]. Consequently, they function as a highly effective tool for examining the possible presence of novel particles and their interactions. With the increasing sensitivity of neutrino experiments [19], it is timely to investigate whether there are any new interactions between neutrinos and matter.
The particle physics community has proposed many alternative neutrino physics models to address these questions, including simple extensions to the SM and models addressing the origin of dark matter, dark energy, and experimental neutrino anomalies (e.g., [20; 21; 22; 23; 24]). These models encompass the introduction of novel particles, including new types of fermions and bosons, such as sterile neutrinos and axionlike particles (e.g., [14; 19; 25; 26; 27; 28]).
In this article, we delve into the impact of a new quark-neutrino interaction on the three-neutrino flavor oscillation model [16], which is predicted by the current standard solar model (e.g., [29; 30; 31]). This nonstandard interaction (NSI) model, developed by Bernal and Farzan [32], provides a compelling explanation for some of the unsettled experimental data, including the coherent elastic neutrino-nucleus scattering (\(CE\nu NS\)) experiments [33] and the anomalous magnetic dipole moment of the muon (\(g-2\))\({}_{\mu}\)[34]. This model is based on a \(U(1)\) gauge symmetry, incorporating a light gauge boson
that mixes with the photon [e.g., 35, 36].
The coupling of neutrinos to up (\(u\)-) and down (\(d\)-) quarks is chosen with a ratio that nullifies the contribution to the \(CE\nu NS\) amplitude, relaxing the constraints placed on the NSI model by the \(CE\nu NS\) experimental measurements [33]. Furthermore, the constraints imposed on the parameter space of this model through experimental and observational bounds lead to a solution that is compatible with the \((g-2)_{\mu}\) anomaly.
Here, we present novel constraints on the NSI model using state-of-the-art solar neutrino data and an up-to-date standard solar model [e.g., 37]. Furthermore, we determine the parameter range that is consistent with solar neutrino experimental measurements and predict potential constraints that could be derived from future neutrino experiments.
The article is organized as follows: Sec.II provides a summary of the nonstandard quark-neutrino model used in this work. In Sec.III, we calculate the survival probability of electron neutrinos. Next, Sec.IV presents the constraints obtained from the standard solar model. Finally, Sec.V provides a summary and draws conclusions.
## II Neutrinos and Nonstandard Interaction with quarks
Here, we consider an extension of the standard model of elementary particles and fundamental interactions with a new interaction between active neutrinos and up and down quarks [e.g., 38, 39, 40, 27]. Accordingly, the model's Lagrangian density \(\mathcal{L}\) corresponds to the sum of the standard model's Lagrangian \(\mathcal{L}_{ST}\) plus a nonstandard interaction (\(NSI\)) Lagrangian \(\mathcal{L}_{NSI}\). Hence,
\[\mathcal{L}=\mathcal{L}_{ST}+\mathcal{L}_{NSI}, \tag{1}\]
where \(\mathcal{L}_{NSI}\) is the effective Lagrangian that describes the \(NSI\) contribution resulting from neutrino propagation in matter [e.g., 41, 27]. In this study, we focus on an extension of the standard model by a new local group \(U_{Z^{\prime}}(1)\). \(Z^{\prime}\) denotes the gauge boson of the \(U_{Z^{\prime}}(1)\) symmetry group. We also assume that \(Z^{\prime}\) has a mass \(m_{Z^{\prime}}\) and couples to matter with a coupling constant \(g_{Z^{\prime}}\). \(\mathcal{L}_{NSI}\) now corresponds to an NSI vectorlike interaction [32], such that \(\mathcal{L}_{NSI}\equiv\mathcal{L}_{Z^{\prime}}\), where \(\mathcal{L}_{Z^{\prime}}\) is defined as
\[\mathcal{L}_{Z^{\prime}}=2\sqrt{2}G_{F}\epsilon_{\alpha\beta}^{f}\left(\bar{ \nu}_{\alpha}\gamma_{\mu}\frac{1-\gamma_{5}}{2}\nu_{\beta}\right)\left(\bar{ f}\gamma^{\mu}f\right), \tag{2}\]
where \(\alpha\) and \(\beta\) refer to the neutrino flavors \(e\), \(\mu\) and \(\tau\), and \(f\) and \(\bar{f}\) correspond to the fermions or antifermions: up quarks, down quarks and electrons. The previous Lagrangian [Eq. 2] corresponds to an NSI model with an arbitrary ratio of the NSI couplings to the \(u\)- and \(d\)-quarks [e.g., 43, 42, 35]. Since we are interested only in the contribution of the NSI interaction to the neutrino oscillation experiments, only the vector part of \(\epsilon_{\alpha\beta}^{f}\) contributes to the interaction. Consequently, the coherent forward scattering of neutrinos in matter is unpolarized [e.g., 27]. In the case where \(|\epsilon_{\alpha\beta}^{f}|\sim 1\), the contribution of the NSI becomes as strong as the weak interaction. We note that in the limit \(\epsilon_{\alpha\beta}^{f}=0\) we recover the standard case, for which \(\mathcal{L}=\mathcal{L}_{ST}\) (\(\mathcal{L}_{NSI}=0\)).
Here, we describe the propagation of neutrinos through vacuum and matter employing the three-flavor neutrino oscillation model [e.g., 44, 26, 45]. As usual, we follow the standard convention: \((\nu_{e},\nu_{\tau},\nu_{\mu})\), \((\nu_{1},\nu_{2},\nu_{3})\) and \((m_{1},m_{2},m_{3})\) correspond to the neutrino flavors, the neutrino mass eigenstates and the associated neutrino masses, respectively. Accordingly, the neutrino evolution equation reads
\[i\frac{d\Psi}{dr}=\mathcal{H}_{\nu}\Psi=\left(\mathcal{H}_{\rm vac}+\mathcal{ H}_{\rm mat}\right)\Psi \tag{3}\]
where \(r\) (the distance to the center of the Sun) is the coordinate along the neutrino trajectory, \(\mathcal{H}_{\nu}\) is the Hamiltonian and \(\Psi=(\nu_{e},\nu_{\tau},\nu_{\mu})^{T}\). Conveniently, we can decompose \(\mathcal{H}_{\nu}\) into vacuum and matter components: \(\mathcal{H}_{\rm vac}=\mathrm{UM}^{2}\mathrm{U}^{\dagger}/(2E)\) and \(\mathcal{H}_{\rm mat}\equiv\mathcal{V}\), where \(E\) is the energy of the neutrino, \(\mathrm{M}^{2}=\mathrm{diag}(0,\Delta m_{21}^{2},\Delta m_{31}^{2})\) is the neutrino mass matrix, \(\mathbf{U}\) is a unitary matrix describing the mixing of neutrinos in vacuum, and \(\mathcal{V}\) is a diagonal matrix of Wolfenstein potentials. \(\Delta m_{21}^{2}\) and \(\Delta m_{31}^{2}\) are the mass-squared differences between neutrinos of different mass eigenstates, \(\Delta m_{21}^{2}=m_{2}^{2}-m_{1}^{2}\) and \(\Delta m_{31}^{2}=m_{3}^{2}-m_{1}^{2}\). Moreover, we decompose \(\mathcal{V}\) into two additional components [32], one related to the standard matter interactions and another one to NSI interactions:
\[\mathcal{V}=\mathcal{V}^{SM}+\mathcal{V}^{NSI}, \tag{4}\]
where \(\mathcal{V}^{SM}\) is the standard matter Wolfenstein potential defined as \(\mathcal{V}^{SM}=\mathrm{diag}(V_{e}^{SM},0,0)\), and \(\mathcal{V}^{NSI}\) is the NSI matter Wolfenstein potential defined as \(\mathcal{V}^{NSI}=\mathrm{diag}(0,V_{\mu}^{NSI},V_{\tau}^{NSI})\). Therefore, the nonstandard interactions matrix, symbolized as \(\mathcal{V}^{NSI}\), is a diagonal \(3\times 3\) matrix, mirroring the structure of the standard Wolfenstein potential denoted as \(\mathcal{V}^{SM}\). This process corresponds to a generalisation of the well-known Mikheyev-Smirnov-Wolfenstein effect [MSW; 46, 47]. For the standard Wolfenstein potential for neutrino propagation [16], we conveniently choose to define it as
\[V_{e}^{SM}=\sqrt{2}G_{F}n_{e}(r), \tag{5}\]
where \(G_{F}\) is the Fermi constant and \(n_{e}(r)\) is the number density of electrons inside the Sun.
In this study we focus on the NSI model proposed by Bernal and Farzan [32]. They impose an additional condition on the model: the lepton numbers \(L_{\mu}\) and \(L_{\tau}\) and the baryon numbers \(B_{i}\) of flavor \(i\) (with \(i=1,2,3\) labeling the three generations) combine, for an arbitrary real value of \(c_{o}\), into the gauged charge \(L_{\mu}+L_{\tau}-c_{o}(B_{1}+B_{2})-2B_{3}(1-c_{o})\), which accommodates the B meson anomalies observed at the LHC [48] and under which the model is anomaly-free [49]. This relationship shows that for an arbitrary real number \(c_{o}\neq 2/3\), the \(U_{Z^{\prime}}(1)\) charges of the third generation of quarks differ from those of the first and second generations. In the
model calculated by Bernal and Farzan [32], the nonstandard interaction contribution to the potential, which relates to neutrino propagation in matter, assumes a straightforward form: \(V_{\mu}^{NSI}=V_{\tau}^{NSI}=V_{Z^{\prime}}\). Here, \(V_{Z^{\prime}}\) is defined as
\[V_{Z^{\prime}}=2\sqrt{2}G_{F}n_{e}(r)\epsilon_{Z^{\prime}}(r), \tag{6}\]
demonstrating the relationship between the NSI potential, the Fermi constant (\(G_{F}\)), electron density (\(n_{e}\)), and the NSI strength parameter (\(\epsilon_{Z^{\prime}}\)).
In the previous equation, \(\epsilon_{Z^{\prime}}(r)\) estimates the contribution of the NSI Lagrangian. Here, \(\epsilon_{Z^{\prime}}(r)\) is given by
\[\epsilon_{Z^{\prime}}(r)=\zeta_{o}\frac{n_{n}(r)+n_{p}(r)}{n_{e}(r)}, \tag{7}\]
where \(\zeta_{o}=-c_{o}g_{Z^{\prime}}^{2}/(2\sqrt{2}G_{F}m_{Z^{\prime}}^{2})\), and \(n_{n}(r)\) and \(n_{p}(r)\) are the number densities of neutrons and protons (i.e., of the \(u\)- and \(d\)-quarks they contain) inside the Sun. We note that \(\zeta_{o}\), like \(c_{o}\), can be positive or negative. A detailed account of this model is available in Bernal and Farzan [32], and additional information is available in other related articles [e.g., 50, 51]. Furthermore, we will assume that the \(Z^{\prime}\) boson's mass is sufficiently large that there is no need to consider the size of the medium in the computation of the Wolfenstein potentials [52].
The standard three-flavor neutrino oscillation model features a universal term that applies equally to all active neutrino flavors and hence does not alter the flavor oscillation pattern. This allows us to simplify the model by setting \(\mathcal{V}=\mathcal{V}^{SM}\equiv\mathrm{diag}(V_{e}^{SM},0,0)\), keeping only the charged-current term \(V_{e}^{SM}\). The inclusion of the NSI interaction in the model alters \(\mathcal{V}\) [see Eq. 4] by incorporating a new interaction with the \(u\)- and \(d\)-quarks, as a consequence of which \(\mathcal{V}=\mathrm{diag}(V_{e}^{SM},V_{Z^{\prime}},V_{Z^{\prime}})\). If we now subtract the common term \(V_{Z^{\prime}}\) [Eq. 6] from the diagonal matrix \(\mathcal{V}\) [e.g., 26], the latter takes the simple form \(\mathcal{V}=\mathrm{diag}(V_{\mathrm{eff}},0,0)\), with \(V_{\mathrm{eff}}\equiv V_{e}^{SM}-V_{Z^{\prime}}\) defined as:
\[V_{\mathrm{eff}}=\sqrt{2}G_{F}\;n_{\mathrm{eff}}(r) \tag{8}\]
and \(n_{\mathrm{eff}}(r)\) is the effective number density given by
\[n_{\mathrm{eff}}=n_{e}(r)\left[1-2\epsilon_{Z^{\prime}}(r)\right], \tag{9}\]
where \(\epsilon_{Z^{\prime}}\) is given by Eq. (7).
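Equations (7)-(9) amount to a simple rescaling of the electron density. A minimal sketch follows, with all density profiles left as user-supplied arrays; the numbers below are placeholders, not standard-solar-model values.

```python
import numpy as np

def n_eff(n_e, n_p, n_n, zeta_o):
    """Effective density of Eq. (9), using eps_Z' of Eq. (7)."""
    eps = zeta_o * (n_n + n_p) / n_e
    return n_e * (1.0 - 2.0 * eps)

n_e = np.array([6.0e25, 1.0e25, 1.0e24])  # placeholder profiles, cm^-3
n_p = np.array([5.0e25, 0.9e25, 0.9e24])
n_n = np.array([1.5e25, 0.2e25, 0.2e24])
print(n_eff(n_e, n_p, n_n, zeta_o=0.0))    # standard limit: returns n_e
print(n_eff(n_e, n_p, n_n, zeta_o=-0.2))   # NSI-enhanced effective density
```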
## III Solar neutrinos: survival probability of electron neutrinos
We compute the survival probability of electron neutrinos \(P_{e}(E)\) for several NSI models with different \(\zeta_{o}\) [Eq. 7] values and compare them with the data from recent solar neutrino experiments. Several groups have shown that, to a reasonable approximation, the neutrino flavor oscillations are adiabatic [53, 54, 55]. As such, we can compute a fully analytical \(P_{e}(E)\) expression that agrees with the current solar neutrino data [e.g., 56]. Moreover, many authors opted to include a second-order nonadiabatic contribution in \(P_{e}(E)\) by modifying the original adiabatic \(P_{e}(E)\) expression [e.g., 54, 56, 57, 58, 59]. The reader can find a detailed discussion about nonadiabatic neutrino flavor oscillations in many articles, among others, the following ones: Gonzalez-Garcia and Nir [60], Fantini et al. [61].
Here, we follow a recent review of particle physics on this topic [62], specifically in the computation described in the "Neutrino Masses, Mixing, and Oscillations" section [63, the update of November 2017]. The survival probability of electron neutrinos \(P_{e}(E)\) is given by
\[P_{e}(E)\approx\cos^{4}\left(\theta_{13}\right)P_{e}^{2\nu_{e}}+\sin^{4}\left( \theta_{13}\right) \tag{10}\]
and
\[P_{e}^{2\nu_{e}}(E)=\frac{1}{2}+\left(\frac{1}{2}-P_{\gamma}\right)\cos\left(2 \theta_{12}\right)\cos\left(2\theta_{m}\right). \tag{11}\]
In the previous expression, \(P_{e}^{2\nu_{e}}(E)\) gives the survival probability of electron neutrinos in the two neutrino flavor model (\(\theta_{13}=0\)), \(P_{\gamma}\) computes the probability jumps coming from the nonadiabatic correction, and \(\theta_{m}=\theta_{m}(r_{s})\) is the matter mixing angle [64]. \(\theta_{m}\) is evaluated in the neutrino production (source) region located at a distance \(r_{s}\) from the Sun's center [e.g., 65, 66]. The jump probability \(P_{\gamma}\) reads
\[P_{\gamma}=\frac{e^{-\gamma\sin^{2}\theta_{12}}-e^{-\gamma}}{1-e^{-\gamma}}P_{ \mathrm{H}} \tag{12}\]
where \(\gamma=2\pi h_{\gamma}\Delta m_{21}^{2}/2E\), \(h_{\gamma}\) is the scale height [67] and \(P_{\mathrm{H}}\) is a regular step function. The matter mixing angle [68]\(\theta_{m}\) is given by
\[\cos(2\theta_{m})=\frac{A_{m}}{\sqrt{A_{m}^{2}+\sin^{2}\left(2\theta_{12} \right)}} \tag{13}\]
Figure 1: The survival probability of the electron neutrino \(P_{e}(E)\) [Eqs. 10 and 11] computed for the standard model of neutrino flavor oscillations. We use an updated version of the standard solar model for this calculation. See main text for details. We compute \(P_{e}(E)\) in two cases: one including the probability jump term \(P_{\gamma}\) [Eq. 12] (\(P_{\gamma}\neq 0\), continuous blue curve) and a second one for which \(P_{\gamma}=0\) (dashed red curve). \(P_{\gamma}\) is negligible for most of the neutrino energy interval shown, becoming marginally significant for \(E\geq 50~{}\mathrm{MeV}\).
where \(A_{m}\) reads
\[A_{m}=\cos{(2\theta_{12})}-V_{m}/\Delta m_{21}^{2}. \tag{14}\]
In the standard case [53], it corresponds to \(V_{m}=2V_{e}^{SM}\cos^{2}{(\theta_{13})}E\) where \(V_{e}^{SM}\) is given by Eq. (5). However, \(V_{e}^{SM}(r)\) in this study will be replaced by a new effective potential \(V_{\rm eff}(r)\) given by Eq. (8), with \(n_{\rm eff}(r)\) by Eq. (9).
We remind the reader that we use standard parametrization for the neutrino flavor oscillations: mass square splitting and angle between neutrinos of different flavors [e.g., 69]. Hence, we adopt the recent values obtained by the data analysis of the standard three-neutrino flavor oscillation model obtained by de Salas et al. [70]. Accordingly, for a parametrization with a normal ordering of neutrino masses the mass-square difference and the mixing angles have the following values [see table 3 of 70]: \(\Delta m_{21}^{2}=7.50^{+0.22}_{-0.20}\times 10^{-5}{\rm eV}^{2}\), \(\sin^{2}{\theta_{12}}=0.318\pm 0.016\), and \(\sin^{2}{\theta_{13}}=0.02250^{+0.00055}_{-0.00078}\). Similarly \(\Delta m_{31}^{2}=2.55^{+0.02}_{-0.03}\times 10^{-3}{\rm eV}^{2}\) and \(\sin^{2}{\theta_{23}}=0.574\pm 0.014\).
The maximum production of neutrinos in the Sun's core occurs in a region between 0.01 and 0.25 solar radius, with the neutrino-producing nuclear reactions of the proton-proton chain and carbon-nitrogen-oxygen cycle occurring at different locations [e.g., 30, 71]. These neutrinos, produced at various values of \(r_{s}\), follow paths of different lengths when traveling towards the Sun's surface. Moreover, neutrinos experience varying plasma conditions along their paths, including a rapid decrease of the electron density from the center towards the surface. In general, we expect nonadiabatic corrections to average out and be negligible along the trajectory of the neutrinos, except at the boundaries (layers of rapid potential transition) of the neutrino path, typically around the neutrino production point or at the surface of the Sun. Therefore, we could expect Eq. (11) to be very different when considering such effects. Nevertheless, this is not the case: de Holanda et al. [72] analysed in detail the contribution to \(P_{e}\) [Eq. 10] coming from nonadiabaticity corrections and from variations in the location of neutrino production, i.e., \(r_{s}\), and they found that the impact is minimal. Generally, \(P_{\gamma}=0\) [Eq. 12] corresponds to an adiabatic flavor conversion and \(P_{\gamma}\neq 0\) to a nonadiabatic one. For reference, the conversion is called nonadiabatic only if \(P_{\gamma}\) has a non-negligible value.
We notice that inside the Sun, the number densities of electrons, protons, and neutrons vary considerably among the different neutrino paths. Accordingly, \(n_{e}(r)\), \(n_{p}(r)\) and \(n_{n}(r)\) decrease monotonically from the center towards the surface. As the neutrinos produced in the core propagate towards the surface, a fraction is converted to other flavors. The magnitude of this conversion depends on the neutrino's energy and the coupling constant to electrons, up quarks and down quarks. We remember that in the standard neutrino flavor oscillation model with \(\zeta_{o}=0\), only the \(n_{e}(r)\) contributes to the matter flavor conversion. However, in our NSI model with \(\zeta_{o}\neq 0\), the \(n_{p}(r)\) and \(n_{n}(r)\) also participate in the flavor conversion.
Neutrinos in their path will cross a layer where \(A_{m}=0\) [Eq. 14]. This layer is defined by the resonance condition:
\[V_{m}=\Delta m_{21}^{2}\cos{(2\theta_{12})}. \tag{15}\]
We compute the effective number density associated with the resonance condition by matching Eqs. (14) and (15). Therefore, the \(n_{\rm eff}\) in the resonance layer reads
\[n_{\rm eff}^{o}\equiv n_{\rm eff}(r_{o})=\frac{\Delta m_{21}^{2}\cos{(2\theta _{12})}}{2\sqrt{2}G_{F}E\cos^{2}{(\theta_{13})}}, \tag{16}\]
where \(r_{o}\) is the radius of the layer where the resonance condition \(n_{\rm eff}(r_{o})=n_{\rm eff}^{o}(E)\) occurs; here \(n_{\rm eff}\) is the quantity defined in Eq. (9). In the classic case (\(\epsilon_{Z^{\prime}}=0\)), the effective number density reduces to the electron number density at the resonance layer: \(n_{\rm eff}(r_{o})=n_{e}(r_{o})\). In general, the adiabatic or nonadiabatic nature of the neutrino oscillations depends on the neutrino's energy \(E\) and on the value of the resonance density \(n_{\rm eff}^{o}(E)\) [Eq. 16] relative to the local density: (i) if \(n_{\rm eff}^{o}(E)\gg n_{\rm eff}\), neutrinos oscillate practically as in vacuum; (ii) if \(n_{\rm eff}^{o}(E)\ll n_{\rm eff}\), oscillations are suppressed in the presence of matter [63].
In our models most of the cases correspond to adiabatic transitions, for which \(P_{\gamma}\approx 0\). Nevertheless, it is possible to compute the contribution of the nonadiabatic component \(P_{\gamma}\) to \(P_{e}(E)\) by using Eq. (12) and the following prescription: (i) compute the value of \(n_{\rm eff}^{o}\) (using Eq. 16) for each value of
Figure 2: Survival probability of electron neutrinos in standard and nonstandard interaction (NSI) neutrino flavor oscillation models with distinct couplings to up and down quarks. Colored continuous curves represent \(P_{e}(E)\) [Eq. 10] for various NSI models, accompanied by the corresponding \(\chi_{\nu}^{2}\) values calculated using Eq. (19). The NSI neutrino models include: \(\zeta_{o}=2\) (**gold curve, B**): \(\chi_{\nu}^{2}=111.6\); \(\zeta_{o}=-2\) (**brown curve, C**): \(\chi_{\nu}^{2}=5.26\); \(\zeta_{o}=0.002\) (**blue curve, D**): \(\chi_{\nu}^{2}=3.13\); and \(\zeta_{o}=-0.04\) (**cyan curve, E**): \(\chi_{\nu}^{2}=2.99\). The **red curve (A)** corresponds to the standard neutrino flavor model with \(\chi_{\nu}^{2}=3.12\), and the **green curve (F)** represents the best-fit NSI model with \(\zeta_{o}=-0.2\) and \(\chi_{\nu}^{2}=2.96\). Data points indicate the measured survival probabilities of electron neutrinos by three solar neutrino detectors (SNO, Super-Kamiokande, and Borexino) using a current standard solar model. For further details regarding the figure and data points, please consult the main text and the referenced sources.
\(E\) (with fixed values of \(\Delta m^{2}_{21}\), \(\theta_{12}\) and \(\theta_{13}\)); (ii) calculate the scale height \(h_{\gamma}=|n_{\rm eff}/(dn_{\rm eff}/dr)|_{r_{o}}\) at the point \(r_{o}\) defined by \(n_{\rm eff}(r_{o})=n_{\rm eff}^{o}(E)\); (iii) calculate \(\gamma\) and \(P_{\gamma}\) for that value of \(h_{\gamma}\). The scale height \(h_{\gamma}\) can also be written \(h_{\gamma}=|(d\ln n_{\rm eff}/dr)^{-1}|_{r_{o}}\). Conveniently, to properly incorporate the nonadiabatic correction into Eqs. (11) and (12), we included the step function \(P_{\rm H}\), defined as \(P_{\rm H}(V_{m}-\Delta m^{2}_{21}\cos(2\theta_{12}))\). This function equals 1 for \(\Delta m^{2}_{21}\cos(2\theta_{12})\leq V_{m}\) and 0 otherwise [e.g., 73]. Figure 1 shows \(P_{e}(E)\) for the standard neutrino flavor oscillation model. In any case, in this study we focus on the solar neutrino energy window (\(0.1\) up to \(20\) MeV), where the \(P_{\gamma}\) contribution to \(P_{e}(E)\) is negligible.
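The prescription above, together with Eqs. (10)-(14), is compact enough to sketch in full. In the snippet below (a minimal sketch, not the paper's production code) the Bahcall-style exponential fit to the SSM electron density, the constant nucleon-to-electron ratio entering \(n_{\rm eff}\), and the single production radius \(r_{s}\) are all simplifying assumptions; a convenient by-product of the exponential profile is that the scale height \(h_{\gamma}\) is constant.

```python
import numpy as np

# Oscillation parameters (de Salas et al., normal ordering) and constants.
DM21 = 7.50e-5             # Delta m^2_21 in eV^2
S12, S13 = 0.318, 0.02250  # sin^2(theta_12), sin^2(theta_13)
C2_12 = 1.0 - 2.0 * S12    # cos(2 theta_12)
C13SQ = 1.0 - S13          # cos^2(theta_13)
GF = 1.1663787e-23         # Fermi constant in eV^-2
HBARC = 1.97327e-5         # hbar*c in eV*cm
RSUN = 6.957e10            # solar radius in cm

def n_e(r):
    """Electron density (cm^-3) at fractional radius r; a Bahcall-style
    exponential fit to the SSM, used here purely for illustration."""
    return 245.0 * 6.022e23 * np.exp(-10.54 * r)

def V_eff(r, zeta_o=0.0, ratio=1.0):
    """Effective potential of Eq. (8) in eV; 'ratio' = (n_n + n_p)/n_e is
    held constant (a simplifying assumption) when building n_eff, Eq. (9)."""
    neff = n_e(r) * (1.0 - 2.0 * zeta_o * ratio) * HBARC**3  # cm^-3 -> eV^3
    return np.sqrt(2.0) * GF * neff

def P_e(E, r_s=0.05, zeta_o=0.0, ratio=1.0):
    """Survival probability of Eqs. (10)-(14) for energy E (eV) and a single
    production radius r_s (solar radii); no source averaging is performed."""
    Vm = 2.0 * V_eff(r_s, zeta_o, ratio) * C13SQ * E
    Am = C2_12 - Vm / DM21                                   # Eq. (14)
    c2m = Am / np.sqrt(Am**2 + 4.0 * S12 * (1.0 - S12))      # Eq. (13)
    h = (RSUN / 10.54) / HBARC   # constant scale height of an exp profile
    gam = 2.0 * np.pi * h * DM21 / (2.0 * E)
    PH = 1.0 if Vm >= DM21 * C2_12 else 0.0                  # step function
    Pg = PH * (np.exp(-gam * S12) - np.exp(-gam)) / (1.0 - np.exp(-gam))
    P2 = 0.5 + (0.5 - Pg) * C2_12 * c2m                      # Eq. (11)
    return C13SQ**2 * P2 + S13**2                            # Eq. (10)

print(P_e(10.0e6))  # about 0.32 in the matter-dominated (8B-like) regime
print(P_e(0.2e6))   # about 0.53 in the vacuum-dominated (pp-like) regime
```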
Numerous studies [e.g., 71, 74] have highlighted that the nuclear reactions occurring in the Sun's core produce a significant amount of electron neutrinos. Due to their extensive mean free path, these neutrinos interact minimally with the solar plasma as they travel towards Earth. During their journey, these particles (with energies spanning from \(0.1\) to \(100\) MeV) undergo flavor oscillations: lower-energy neutrinos experience flavor transformations due to vacuum flavor oscillations, while high-energy neutrinos participate in additional flavor oscillations via the MSW effect, or matter flavor oscillations [46, 47]. This additional oscillation mechanism is significantly influenced by both the origin of the neutrino-emitting nuclear reactions and the energy of the produced neutrinos.
Here, we will investigate the influence of these revised NSI neutrino models on the flux variation of different neutrino flavors. Specifically, we will consider how these variations are affected by the local alterations in the distributions of protons and neutrons. This new flavor mechanism will affect all electron neutrinos produced in the proton-proton (PP) chain reactions and carbon-nitrogen-oxygen (CNO) cycle [56, 74]. Therefore, the survival probability of electron neutrinos associated with each nuclear reaction will depend on the location of the neutrino source in the solar interior. A detailed discussion of how the location of solar neutrino sources affects \(P_{e}(E)\) [Eq. 10] can be found in Lopes [56, 74]. The average survival probability of electron neutrinos for each nuclear reaction in the solar interior, i.e., \(P_{e,k}\) (\(\equiv\langle P_{e}(E)\rangle_{k}\)), is computed as
\[P_{e,k}(E)=A_{k}^{-1}\int_{0}^{R_{\odot}}P_{e}(E,r)\phi_{k}(r)4\pi\rho(r)r^{2 }dr, \tag{17}\]
where \(A_{k}\) (\(=\int_{0}^{R_{\odot}}\phi_{k}(r)4\pi\rho(r)r^{2}\;dr\), in which \(\phi_{k}(r)\) is the electron neutrino emission function for the \(k\)-th solar nuclear reaction) is a normalization constant, and \(k\) corresponds to the following solar neutrino sources: \(pp\), \(pep\), \({}^{8}B\), \({}^{7}Be\), \({}^{13}N\), \({}^{15}O\) and \({}^{17}F\).
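A sketch of the source-averaging integral of Eq. (17) follows, reusing the \(P_e\) function from the previous code block; the Gaussian emission shell and the exponential density profile are toy stand-ins for the SSM tabulations of \(\phi_{k}(r)\) and \(\rho(r)\).

```python
import numpy as np
from scipy.integrate import quad

def P_e_avg(E, phi, rho, r_max=0.35, **kw):
    """Source-averaged survival probability, Eq. (17); the common 4*pi
    factor cancels between the numerator and the normalization A_k."""
    w = lambda r: phi(r) * rho(r) * r**2
    num, _ = quad(lambda r: P_e(E, r_s=r, **kw) * w(r), 1e-4, r_max)
    den, _ = quad(w, 1e-4, r_max)
    return num / den

# Toy Gaussian emission shell peaked at 0.05 Rsun (illustrative of 8B) and
# a crude exponential mass-density profile.
phi_8B = lambda r: np.exp(-0.5 * ((r - 0.05) / 0.02) ** 2)
rho = lambda r: np.exp(-10.0 * r)
print(P_e_avg(10.0e6, phi_8B, rho))  # close to the single-radius value above
```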
The probability of electron neutrinos changing flavor is influenced by variables tied to both vacuum and matter oscillations and the intrinsic physics of the Sun's interior. In particular, matter flavor conversion relies significantly on the local plasma conditions. Consequently, the number of electron neutrinos detected on Earth for each "\(k\)" species, indicated by \(\Phi_{\otimes,k}(E)\), differs markedly from the number of electron neutrinos generated by each neutrino-producing nuclear reaction, denoted as \(\Phi_{\odot,k}(E)\). These quantities are related as follows:
\[\Phi_{\otimes,k}(E)=P_{e,k}(E)\;\Phi_{\odot,k}(E), \tag{18}\]
where \(P_{e,k}(E)\) [Eq. 17] is the electron-neutrino survival probability of a neutrino of energy \(E\). In this study \(k\) is equal to \({}^{8}B\) or \({}^{7}Be\).
## IV Constraints on the NSI neutrino model
We now turn our attention to the impact of the nonstandard interaction model on neutrino flavor oscillations, as explored in previous sections. Specifically, we calculate the survival probability of electron neutrinos for varying values of the NSI parameter \(\zeta_{o}\) [Eq. 7]. This analysis applies to an updated standard solar model characterized by low metallicity, or 'low-Z'. A comprehensive explanation of the origins of low-Z solar models is presented in the review article of Haxton et al. [59]. For further exploration of the impact of low metallicity on solar modelling, we refer to the articles by Serenelli et al. [75], Vinyoles et al. [76] and Capelo and Lopes [29].
We obtain the present-day Sun's internal structure using an up-to-date standard solar model that agrees relatively well with current neutrino fluxes and helioseismic datasets. To that end, we use a one-dimensional stellar evolution code that follows the star's evolution from the pre-main-sequence phase until the present-day solar structure: age, luminosity and effective temperature of \(4.57\,{\rm Gyr}\), \(3.8418\times 10^{33}\,{\rm erg\,s^{-1}}\), and
Figure 3: Survival probability of electron neutrinos: Curves labeled \({}^{8}\)**B** correspond to neutrinos generated by the \({}^{8}B\) nuclear reaction (\(\phi_{k}(r)\)), as described in Eq. (17), while the curve labeled as **Ref** represents the survival probability of electron neutrinos [Eq. 10] at the Sun’s center. The figure presents three distinct sets of \(P_{e,k}(E)\) for two NSI models: \(\zeta_{o}=2\) (top set of curves), \(\zeta_{o}=-2\) (lower set of curves), and the standard neutrino flavor model (middle set of curves).
\(5777\;\mathrm{K}\), respectively. Moreover, our solar reference model has the following observed abundance ratio at the Sun's surface: \((Z_{s}/X_{s})_{\odot}=0.01814\), where \(Z_{s}\) and \(X_{s}\) are the metal and hydrogen abundances at the star's surface [29, 31, 77]. The details about the physics of this standard solar model, in which we use the AGSS09 (low-Z) solar abundances [78], are described in Lopes and Silk [30], and Capelo and Lopes [29].
Figure 2 compares our predictions with current solar neutrino data. Each data point illustrated herein represents the measured survival probabilities of electron neutrinos, as captured by three solar neutrino detectors: SNO, Super-Kamiokande, and Borexino. In detail: Borexino data include measurements from \(pp\) reactions (yellow diamond), \({}^{7}Be\) reactions (red upward triangle), \({}^{8}B\) reactions (blue downward triangle), and \({}^{8}B\) reactions in the high-energy region (HER), presented in salmon (HER), orange (HER-I), and magenta (HER-II) circles. SNO's \({}^{8}B\) measurements are denoted by a cyan square, while the joint KamLAND/SNO \({}^{7}Be\) measurements are represented by a green square. Refer to Borexino Collaboration et al. [79], Agostini et al. [80], Bellini et al. [81], Abe et al. [82, 83], Aharmim et al. [84], Cravens et al. [85] and included references for additional insight into this experimental data. The lowest neutrino energy data point relates to the anticipated precision of the Darwin experiment in measuring \(P_{e}\pm\Delta P_{e}\) (\(\zeta_{o}=0\)). Here, \(\Delta P_{e}\) has the potential to be as small as 0.017, as suggested by Aalbers et al. [86]. We compute \(P_{e}\) for several NSI models as given by Eq. (10). The figure shows \(P_{e}\) for the standard three-neutrino flavor model (continuous red curve) and for different NSI models (other continuous colored curves). Only a restricted set of NSI models with relatively low \(\zeta_{o}\) agrees with all the neutrino data. Notably, the NSI models with lower \(\zeta_{o}\) show explicit agreement with the \({}^{8}B\) measurements for neutrino energies just below \(10\,\mathrm{MeV}\) (as depicted in Fig. 2).
For illustration, we present a selection of NSI models that significantly diverge from the standard flavor oscillation model in their impact on \(P_{e}\). The degree of effect in these NSI models depends on the value of \(\zeta_{o}\), the location of neutrino emission, and the energy spectrum of neutrinos from each nuclear reaction. We illustrate this impact in Figs. 3 and 4, demonstrating how the parameter \(\zeta_{o}\) influences neutrino flavor oscillation [refer to Eq. 17] and modulates the \({}^{8}B\) spectrum [see Eq. 18]. To exemplify the influence of the neutrino source location on \(P_{e}\), Fig. 3 displays curves based on the presumption that neutrinos originate from the Sun's center, indicated as "Ref". These curves are then juxtaposed with those derived from neutrinos generated by the \({}^{8}B\) nuclear reaction for a variety of \(\zeta_{o}\) values.
To enhance the robustness of our analysis, we calculate a chi-squared-like statistic (\(\chi^{2}_{\nu}\)-test) that leverages the dependence of \(P_{e}\) on the solar background structure. We define this chi-squared-like test as follows:
\[\chi^{2}_{\nu}=\sum_{i,k}\left(\frac{P^{obs}_{e,k}(E_{i})-P^{th}_{e,k}(E_{i})}{\sigma_{obs}(E_{i})}\right)^{2}. \tag{19}\]
This function compares our theoretical predictions with the empirical data collected by various neutrino experiments, evaluated at the energies \(E_{i}\) at which the survival probability function \(P_{e,k}(E)\), defined in Eq. (17), is measured. Here, the superscripts "obs" and "th" signify the observed and theoretical values, respectively, at the neutrino energy \(E_{i}\). The subscript \(i\) points to specific experimental measurements [refer to Fig. 2], and \(k\) corresponds to the solar neutrino source [see Eq. 17]. The term \(\sigma_{obs}(E_{i})\) represents the error in measurement \(i\). The data points, \(P^{obs}_{e,k}(E_{i})\), are measurements derived from solar neutrino experiments, as cited in Borexino Collaboration et al. [79], Agostini et al. [80], Bellini et al. [81], Abe et al. [82, 83], Aharmim et al. [84], Cravens et al. [85]. Fig. 2 presents the experimental data points, \(P^{obs}_{e,k}(E_{i})\), juxtaposed with the curves of select NSI models. The corresponding \(\chi^{2}_{\nu}\) values for these models are explicitly listed in the figure's caption. In the \(\chi^{2}_{\nu}\) test, as described by Eq. (19), the standard neutrino flavor model yields a \(\chi^{2}_{\nu}\) value of 3.12.
For comparison, when the \(\zeta_{o}\) values are at -2 and 2, the corresponding \(\chi^{2}_{\nu}\) values are 5.26 and 111.6, respectively. Our study reveals that a \(\chi^{2}_{\nu}\) value of 3.12 or less is achieved when \(\zeta_{o}\) lies between -0.7 and 0.002. This result is visually demonstrated in Fig. 5 with a dashed horizontal line intersecting the blue curve, which connects the series of red circles at the points -0.7 and 0.002. According to this preliminary analysis, an NSI neutrino model with \(\zeta_{o}=-0.2\) yields a \(\chi^{2}_{\nu}\) value of 2.96, suggesting a better fit to the solar neutrino data than the standard neutrino flavor model.
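The statistic of Eq. (19) is a plain weighted sum of squared residuals; a minimal sketch follows, with hypothetical numbers standing in for the measured points of Fig. 2 and the model predictions \(P^{th}_{e,k}(E_{i})\).

```python
import numpy as np

def chi2(P_obs, P_th, sigma):
    """Chi-squared-like statistic of Eq. (19) over the measured points."""
    P_obs, P_th, sigma = map(np.asarray, (P_obs, P_th, sigma))
    return np.sum(((P_obs - P_th) / sigma) ** 2)

# Hypothetical numbers, purely to show the call pattern; the real inputs are
# the measured survival probabilities of Fig. 2 and the model P_{e,k}(E_i).
print(chi2([0.57, 0.53, 0.31], [0.55, 0.52, 0.32], [0.09, 0.05, 0.04]))
```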
Figure 4: \({}^{8}B\) solar neutrino spectrum [refer to Eq. 18]: \(\Phi_{\otimes}(E)\) represents the electron neutrino energy spectrum of \({}^{8}B\) neutrinos for the current Sun, measured on Earth and computed for two NSI neutrino models: \(\zeta_{o}=2\) (**gold area, C**) and \(\zeta_{o}=-2\) (**brown area, D**), as well as the standard neutrino flavor oscillation model (**orange area, B**). The **dark blue curve (A)** corresponds to \(\Phi_{\odot}(E)\), the neutrino spectrum emitted from the Sun’s interior. These neutrino spectra calculations utilize an up-to-date standard solar model.
## V Conclusion
Currently, a new class of models based on flavor gauge symmetries with a lighter gauge boson is being proposed in the literature to resolve some of the current particle anomalies in the standard model of physics. These new interactions lead to nonstandard neutral current interactions between neutrinos and quarks. Specifically, we focus on studying and testing an NSI model proposed by Bernal and Farzan [32] that incorporates a new U(1) gauge symmetry through a light gauge boson \(Z^{\prime}\), which mixes with the photon. The interaction leads to a neutral current between active neutrinos and matter fields, with an arbitrary coupling to the up and down quarks. This model has some intriguing features, as it relaxes the bound on the coherent elastic neutrino-nucleus scattering experiments and fits the measured value of the anomalous magnetic dipole moment of the muon.
In this paper, we analyze the impact of the NSI model proposed by Bernal and Farzan [32] on neutrino flavor oscillations, using an up-to-date standard solar model that is in good agreement with helioseismology and neutrino flux datasets. Specifically, we examine the impact of this nonstandard interaction model on the survival probability of electron neutrinos, with a focus on the PP-chain nuclear reactions taking place in the Sun's core. Our results show that the shapes of the neutrino spectra vary with the location of the nuclear reactions in the core, depending on the algebraic value of \(\zeta_{o}\). The effect is particularly visible in the \({}^{8}B\) neutrino spectrum.
We find that the NSI models with \(-0.7\leq\zeta_{o}\leq 0.002\) fit the solar neutrino data equally well as or better than the standard neutrino flavor model. The best NSI model corresponds to \(\zeta_{o}=-0.2\). From Eq. (7), we can derive a relationship between the mass of the \(Z^{\prime}\) boson \(m_{Z^{\prime}}\), the gauge coupling \(g_{Z^{\prime}}\), and the quark charge \(c_{o}\): \(\zeta_{o}=-c_{o}g_{Z^{\prime}}^{2}/(2\sqrt{2}G_{F}m_{Z^{\prime}}^{2})=-0.2\).
In essence, our research underscores the significance of neutrino oscillation analyses in assessing NSI models. Our findings reveal the potential of these neutrino models to refine the parameters of NSI models. This methodology provides a robust and independent means to confirm this class of NSI models, especially as they address certain existing experimental data anomalies, such as those observed in coherent elastic neutrino-nucleus scattering experiments and in measurements of the muon's anomalous magnetic dipole moment.
In the future, the validation or exclusion of such a class of NSI models can be achieved more efficiently with new solar neutrino detectors that can obtain much more accurate measurements [e.g., 87, 88, 89]. For instance, the Darwin experiment [86] is set to generate data that can better determine the survival rate of low-energy electron neutrinos [see Fig. 2]. The figure shows that by factoring in the predicted precision from Darwin and presuming the \(P(E)\) value to be standard at \(E=0.150~{}MeV\) (with \(\zeta_{o}=0.0\)), we anticipate a \(P_{e}\pm\Delta P_{e}\) where \(\Delta P_{e}=0.017\). This additional data point from Darwin, when included in the \(\chi^{2}\) analysis, narrows down the set of NSI models that perform equally well as or better than the standard case in terms of \(\chi^{2}\). Specifically, it shifts the \(\zeta_{o}\) interval from \(-0.7\) to \(0.002\) to a tighter range of \(-0.5\) to \(-0.002\). Furthermore, the addition of this data point also decreases the \(\chi^{2}/\mathrm{d.o.f.}\) value. For reference, in Fig. 5, the models with a \(\mathrm{d.o.f.}\) of 7 display \(\chi_{\nu}^{2}/\mathrm{d.o.f.}\) values that vary from \(0.50\) to \(0.53\) within the \(\zeta_{o}\) range of -1 to 0.2, and hit a local minimum of \(\chi_{\nu}^{2}/\mathrm{d.o.f.}=0.4\) at \(\zeta_{o}=-0.2\). Adding one more data point increases the \(\mathrm{d.o.f.}\) to 8 and adjusts the \(\chi_{\nu}^{2}/\mathrm{d.o.f.}\) range to \(0.48\) to \(0.47\). The local minimum remains at \(\zeta_{o}=-0.2\), but its value reduces to \(\chi_{\nu}^{2}/\mathrm{d.o.f.}=0.37\).
This work emphasizes the significance of NSI models in defining the fundamental properties of particles and their interactions, driving theoretical progress in this research field. As research in experimental neutrino physics continues to advance at a rapid pace, studies of this nature will be critical for comprehensive analysis of neutrino properties [90]. We anticipate that the innovative approach outlined in this paper will offer a fresh perspective for exploring new particle physics interactions using the standard solar model combined with a comprehensive analysis of neutrino flavor oscillation experimental data.
###### Acknowledgements.
The author thanks the anonymous referee for the invaluable input which significantly enhanced the quality of the manuscript. I.L. would like to express gratitude to the Fundacao para a Ciencia e Tecnologia (FCT), Portugal, for
Figure 5: Values of the \(\chi_{\nu}^{2}\)-test plotted against the coupling constant \(\zeta_{o}\) for various NSI neutrino models. The red circles, interconnected by a blue line, represent the varying \(\chi_{\nu}^{2}\) values corresponding to these NSI models, while the green vertical line signifies the \(\chi_{\nu}^{2}\) of the standard neutrino model. The green circles, linked with the salmon line, correspond to the same NSI models, including a data point from the Darwin experiment. In this calculation, we assign the Darwin data point a value corresponding to \(P(E)\), assuming \(\zeta_{o}=0\). The horizontal dashed lines are guides delineating the acceptable range for the two sets of NSI models: those without the Darwin data point (blue line) and those incorporating the Darwin data point (salmon line).
providing financial support to the Center for Astrophysics and Gravitation (CENTRA/IST/ULisboa) through Grant Project No. UIDB/00099/2020 and Grant No. PTDC/FIS-AST/28920/2017.
|
2301.01979 | Theory of shallow and deep boron defects in 4H-SiC | Despite advances toward improving the quality of $p$-type 4H-SiC
substrates and layers, we still have no model capable of accounting for the
multitude of boron-related optical, junction, and paramagnetic resonance
experiments available in the literature. A conspicuous puzzle is the
observation of two shallow boron defects with rather distinct axial
orientations as found by electron paramagnetic resonance (EPR) and electron
nuclear double resonance (ENDOR) data. This feature is not observed in material
doped with other group-III elements. Another open issue involves conflicting
conclusions from photoluminescence and EPR studies of a deeper boron center,
which has been linked to rather distinct models, either based on substitutional
or vacancy-related boron defects. We unlock these and other problems by means
of first-principles calculations, where the temperature-dependent stability,
the electronic activity, and the paramagnetic response of boron defects in
4H-SiC are investigated. | Vitor J. B. Torres, Ivana Capan, José Coutinho | 2023-01-05T09:29:40Z | http://arxiv.org/abs/2301.01979v1 | # Theory of shallow and deep boron defects in 4H-SiC
###### Abstract
Despite advances toward improving the quality of \(p\)-type 4H-SiC substrates and layers, we still have no model capable of accounting for the multitude of boron-related optical, junction, and paramagnetic resonance experiments available in the literature. A conspicuous puzzle is the observation of two shallow boron defects with rather distinct axial orientations as found by electron paramagnetic resonance (EPR) and electron nuclear double resonance (ENDOR) data. This feature is not observed in material doped with other group-III elements. Another open issue involves conflicting conclusions from photoluminescence and EPR studies of a deeper boron center, which has been linked to rather distinct models, either based on substitutional or vacancy-related boron defects. We unlock these and other problems by means of first-principles calculations, where the temperature-dependent stability, the electronic activity, and the paramagnetic response of boron defects in 4H-SiC are investigated. [_Pre-print published in Physical Review B **106**, 224112 (2022)]_
DOI:10.1103/PhysRevB.106.224112
Point defects; Wide band gap semiconductors; Electron paramagnetic resonance; Density functional calculations
## I Introduction
Due to its rugged properties, including mechanical, thermal, and chemical stability, a large breakdown field, and the possibility of growing both electronic-grade \(n\)- and \(p\)-type layers, 4H silicon carbide (4H-SiC) is nowadays a semiconductor with an important and growing market in power electronics (used in electric vehicles, power supplies, motor control circuits, and inverters) [1; 2]. SiC also finds applications in fundamental and emerging fields like high-energy particle detection [3] and quantum technologies [4; 5; 6; 7].
The \(p\)-type dopants are usually boron, aluminum, and gallium. As for the former, there is ample evidence that its incorporation leads to the appearance of two types of acceptors, often referred to as _shallow_ and _deep_ boron centers, owing to the relative depth of their respective levels within the band gap [8; 9]. The two boron species diffuse differently -- boron-implanted/diffused layers show heterogeneous incorporation, where the deep center dominates the profile tails [10; 11; 12]. While the assignment of the shallow species to substitutional boron on the Si site (B\({}_{\text{Si}}\)) seems consensual, the origin of the deep hole trap has remained elusive. Photoluminescence studies favor a boron atom on the carbon site (B\({}_{\text{C}}\)) [13], magnetic resonance experiments point to a boron-vacancy complex [14; 15], whereas first-principles results suggest either B\({}_{\text{C}}\)[11; 16] or a boron-silicon-antisite pair [17].
Another problem is that boron is often present in SiC as a contaminant in trace concentrations. The deep species, also referred to as the D-center, is of particular concern, especially in \(n\)-type SiC where it is negatively charged under equilibrium conditions. This state is a potential trap for holes, threatening the functioning of bipolar devices or \(n\)-type detectors [18].
A possible route for the elimination of the D-center involves thermal oxidation [19; 20; 21]. However, the impact of boron-related minority carrier lifetime degradation is not necessarily detrimental. The effect was actually explored to improve the switching time characteristics of \(p\)-i-\(n\) diodes, and that was attributed to the effect of a localized lifetime control in the intrinsic layer due to carrier recombination at deep boron traps [22; 23].
The D-center has been known since early deep level transient spectroscopy (DLTS) studies of B-doped 6H-SiC, where two nearly overlapping peaks corresponding to electronic transitions at \(E_{\text{v}}+0.63\) eV and \(E_{\text{v}}+0.73\) eV were revealed [24]. Suttrop _et al._[8] found that in addition to the deep boron center (measured in that work as a single DLTS peak at \(E_{\text{v}}+0.58\) eV), a hole trap at \(E_{\text{v}}+0.30\) eV was also present, and it was assigned to the shallower boron acceptor.
The presence of the D-center in the 4H polytype was also confirmed using DLTS by Sridhara _et al._[9]. The level was placed at \(E_{\text{v}}+0.55\) eV (assuming a \(T^{-2}\)-corrected cross-section), again without resolving a double peak structure. Although the shallower species could not be found by DLTS (the Si/C ratio of the samples did not favor its formation), admittance spectroscopy measurements of Si-poor samples arrived at an acceptor level for shallow boron in the range 284-295 meV above \(E_{\text{v}}\)[9].
Recently, Laplace-DLTS and Laplace-minority carrier transient spectroscopy (Laplace-MCTS) measurements were carried out for studying the shallow and deep boron centers in 4H-SiC [25]. Estimated activation energies for hole emission were respectively 0.27 eV and 0.60 eV. From Laplace-MCTS, it was shown that the D-center consists of two components, D1 and D2 with nearly 1:1 intensity ratio, respectively estimated at \(E_{\text{v}}+0.49\) eV and \(E_{\text{v}}+0.57\) eV. The pair of traps was assigned to boron at two different carbon sublattice locations in 4H-SiC. The peak of the shallow boron species was structureless. If it corresponded to the superposition of more than one point defect (in different sublattice sites), they were indistinguishable as far as the resolution offered by Laplace-MCTS.
Early electron paramagnetic resonance (EPR) studies [26] indicated that the symmetry of the shallow boron species in 6H-SiC experienced a remarkable change upon lowering the
temperature. In the 6H phase, two cubic (\(k_{1}\) and \(k_{2}\)) and one hexagonal (\(h\)) sites are available for B\({}_{\text{Si}}\) substitution. While above \(T\) = 50 K the EPR signals related to all three substitutions show a trigonal pattern, below that temperature the \(k\)-related signals lower their symmetry to monoclinic. The \(h\)-related signal preserves C\({}_{3v}\) symmetry for temperatures as low as 5 K.
These findings were confirmed later by electron nuclear double resonance (ENDOR) spectroscopy [27; 28]. The defect structure was interpreted as comprising a B-C broken bond, where boron is threefold coordinated (connected to three C ligands), while the remaining C atom holds a hole that is responsible for 40% of the total spin density. Strikingly, whereas the C radical is aligned along the main crystallographic axis for the case of B\({}_{\text{Si}}\) sitting on the \(h\) site, for some reason, boron on the cubic sites leaves a C dangling bond aligned along a basal B-C direction. Analogous observations were reported in 4H-SiC samples [29; 30].
The deep boron center also has EPR-related signals, and several experiments produced a wealth of data (see Refs. [15; 14] and references therein). The defect has a spin-1/2 paramagnetic state, but unlike the shallow boron center, both \(h\)- and \(k\)-related signals show the same alignment along the hexagonal \(c\) axis, with a small basal anisotropy. Minute \({}^{13}\)C satellite lines were detected around the main signals, the \({}^{11}\)B hyperfine interactions were negligible, and no large \({}^{29}\)Si interactions were observed either. However, the spin density was found to be almost 100% localized on Si ligands. Based on the data, a model combining a boron on a silicon position with an adjacent carbon vacancy (B\({}_{\text{Si}}\)-\(V_{\text{C}}\)) was proposed. The structure comprises an inert boron atom and three Si radicals edging the \(V_{\text{C}}\) unit, thus explaining the electronic and magnetic activity [14; 15]. An obvious difficulty of this model is that for some reason, the pair would have to be invariably formed with an alignment along the \(c\) axis. Such preferential alignment is not supported by first-principles modeling. In fact, the calculations also show that the lowest-lying level of B\({}_{\text{Si}}\)-\(V_{\text{C}}\) is a donor in the upper half of the gap, and therefore, the complex is not compatible with the D-center [16; 17].
While early semi-empirical Hartree-Fock calculations using small H-terminated SiC clusters predicted that B\({}_{\text{Si}}\) adopts an off-center configuration [31; 32], subsequent supercell calculations within the local density approximation (LDA) to density functional theory (DFT) led to ambiguous conclusions. Accordingly, some authors justified the off-site location of B\({}_{\text{Si}}\) with a Jahn-Teller (JT) effect [16; 33]. Others found an effective-mass-like defect with no distortion at all [34]. Finally, the authors of Ref. [35] found that the LDA cannot describe the shallow boron state due to overmixing with the valence band. After applying a scissors correction to the band gap during the self-consistent Kohn-Sham cycle, they obtained a pronounced JT distortion toward \(C_{1h}\) symmetry and a prominent \({}^{13}\)C-hyperfine interaction due to a C-radical [35]. Although they account for the measured localization of the spin density, these results cannot explain the different symmetries of \(k\)- and \(h\)-related boron EPR signals in both 4H- and 6H-SiC.
It is well known that local and semilocal approximations to DFT poorly describe insulator/semiconductor band gaps, making the discussion of defect properties, in particular those that involve gap states, vulnerable. For instance, several insufficiencies of conventional DFT and advancements in modeling the electronic structure of defects in SiC were presented in Ref. [36]. Among the findings it was shown that hybrid DFT, which replaces a fraction of the local exchange potential by a (possibly screened) Fock exchange contribution, can provide a reliable electronic structure of defects in SiC where a local density description fails. We revisited the theory of substitutional boron defects to verify whether modern electronic structure calculation methods, in particular hybrid density functional theory, can shed light on the open issues described above. After detailing the methods employed in Sec. II, we report on the physical picture of Si and C replacements by boron in 4H-SiC (Secs. III.1 and III.2). The following three sections connect our findings with photoluminescence and junction capacitance spectroscopies (Sec. III.3), with finite-temperature effects on the preferential formation of Si or C substitutions (Sec. III.4), as well as with the available EPR/ENDOR measurements (Sec. III.5).
We show that B\({}_{\text{Si}}\) and B\({}_{\text{C}}\) defects nicely explain the optical, capacitance and magnetic measurements related to shallow and deep boron centers in 4H-SiC, respectively. Importantly, it is argued that the _shallow_ label attributed to B\({}_{\text{Si}}\) should be interpreted as _shallower_ than the deep boron center. In other words, the B\({}_{\text{Si}}\) center has the characteristics of a localized and deep hole trap and not of an effective mass theory (EMT) dopant. The EMT picture for B\({}_{\text{Si}}\) has been advocated based on (semi-)local density functional results, but we show that higher level hybrid DFT predicts a strong atomistic relaxation upon hole capture at a \(\sim\) 0.3 eV deep trap, making the model compatible with the magnetic resonance observations. We rule out an assignment of deep boron to B\({}_{\text{Si}}\)-\(V_{\text{C}}\) based on the calculated \(g\) tensor elements. Along the paper, we also solve several problems, most notably we explain the observation of different orientations of \(g\) tensor and hyperfine interactions for shallow boron on cubic and hexagonal sites and the distinct temperature-dependence of the \(g\) tensors of both centers.
## II Theoretical Methods
First-principles calculations were carried out using the density functional Vienna _ab initio_ simulation package (VASP) [37; 38; 39; 40], employing the projector-augmented wave method, thus avoiding explicit treatment of core states [41]. A basis set of plane-waves with kinetic energies of up to 400 eV was used to describe the Kohn-Sham states. Total energies were evaluated self-consistently, using the hybrid density functional of Heyd-Scuseria-Ernzerhof (HSE06) [42; 43] with a numerical accuracy of 10\({}^{-7}\) eV. When compared to generalized gradient approximation (GGA) calculations [44] -- which underestimate the band gap of SiC by nearly one half -- the HSE06 functional has the main advantage of predicting a Kohn-Sham band gap width of 3.17 eV for 4H-SiC. This figure should be compared to the experimental value of 3.27 eV [45].
Defect energies were found using 400-atom (defect-free) supercells of 4H-SiC (with hexagonal shape), obtained by replication of 5\(\times\)5\(\times\)2 primitive cells, into which boron defects were inserted. The equilibrium (calculated) lattice parameters of 4H-SiC were \(a=3.071\) Å and \(c=10.052\) Å. These are close to the experimental values of \(a=3.079\) Å and \(c=10.081\) Å [46].
Defect structures were first optimized within the HSE06 approximation using \(\mathbf{k}=\Gamma\) to sample the Brillouin zone (BZ), until the largest force became lower than 0.01 eV/Å. In a second step, electronic total energies of the obtained structures were found from single-point calculations with the band structure sampled at a \(\Gamma\)-centered 2\(\times\)2\(\times\)2 mesh of \(\mathbf{k}\)-points (also within HSE06). In line with Gerstmann _et al._[35], we found that structural optimizations of \(\mathrm{B_{Si}}\) defects within the GGA led to erroneous results due to overmixing of gap states with the valence band top. An analogous effect attributed to the overmixing of a carbon interstitial (\(\mathrm{C_{i}}\)) level, in this case with the SiC conduction band bottom, was also pointed out by Gouveia and Coutinho [47], and will be further discussed below.
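For orientation, the two-step protocol above could be scripted along the following lines with ASE's VASP interface. This is a hedged sketch rather than the production setup used in this work: the structure file name and the substituted site index are hypothetical, and only the settings mentioned above are reproduced.

```python
from ase.io import read
from ase.calculators.vasp import Vasp

prim = read('4H-SiC_primitive.cif')   # hypothetical local structure file
cell = prim.repeat((5, 5, 2))         # 400-atom hexagonal supercell
cell[0].symbol = 'B'                  # substitute boron (site index is illustrative)

# Step 1: HSE06 relaxation, Gamma-only sampling, forces below 0.01 eV/Angstrom
calc = Vasp(xc='hse06', encut=400, ediff=1e-7,
            kpts=(1, 1, 1), gamma=True, ibrion=2, ediffg=-0.01)
cell.calc = calc
e_relaxed = cell.get_potential_energy()

# Step 2: single-point total energy on a Gamma-centered 2x2x2 mesh
calc.set(kpts=(2, 2, 2), ibrion=-1, nsw=0)
e_total = cell.get_potential_energy()
```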
Electronic transitions of boron defects were calculated by finding the Fermi energy at crossing points of formation energies for different charge states \(q\). Defect formation energies (\(E_{\mathrm{f}}\)) were obtained as a function of the chemical potential of the "sample" constituents, according to the usual formalism (see for instance Refs. [48; 49]),
\[E_{\mathrm{f}}(\mu_{i},\mu_{\mathrm{e}};q)=E_{\mathrm{elec}}(q)-\sum_{i}n_{i} \mu_{i}-n_{\mathrm{e}}\mu_{\mathrm{e}}. \tag{1}\]
The first term on the right-hand side of Eq. 1 is given by \(E_{\mathrm{elec}}(q)=\tilde{E}_{\mathrm{elec}}(q)+E_{\mathrm{corr}}(q)\), and refers to the electronic energy of the periodic calculation \(\tilde{E}_{\mathrm{elec}}\) shifted by \(E_{\mathrm{corr}}\) to remove the effect of the artificial and infinite array of localized charges when the charge state is \(q\neq 0\). For that we use the method proposed by Freysoldt, Neugebauer, and Van de Walle [50], generalized for anisotropic materials by Kumagai and Oba [51]. The method uses the axial and transverse dielectric constants of 4H-SiC, calculated as \(\epsilon^{\parallel}=10.65\) and \(\epsilon^{\perp}=9.88\), respectively [52]. See Ref. [53] (and also Refs. [54; 55; 56; 50; 51; 52]) for convergence tests of the formation energy of boron defects upon varying the boundary conditions. The second and third terms sum up the chemical potentials \(\mu_{i}\) of the \(n_{i}\) neutral atomic species and \(n_{\mathrm{e}}=-q\) extra electrons (with respect to the neutral state) that form the problem. The electronic chemical potential is \(\mu_{\mathrm{e}}=E_{\mathrm{v}}+E_{\mathrm{F}}\), where \(E_{\mathrm{v}}\) and \(E_{\mathrm{F}}\) are the valence band top and Fermi energies, respectively. The former is obtained as the highest occupied state in a bulk supercell, whereas the latter is an independent variable.
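As a concrete illustration of Eq. (1) and of the level-crossing construction that follows from it, consider the sketch below. All inputs are placeholders to be read from actual supercell calculations, and the helper names are ours.

```python
def formation_energy(e_elec, e_corr, q, sum_n_mu, e_v, e_fermi):
    """Eq. (1): E_f = E_elec(q) + E_corr(q) - sum_i n_i*mu_i - n_e*mu_e,
    with n_e = -q and mu_e = E_v + E_F (all energies in eV)."""
    return (e_elec + e_corr) - sum_n_mu + q * (e_v + e_fermi)

def acceptor_level(e_neutral, e_minus, e_corr_minus, e_v):
    """(-/0) transition: Fermi energy (relative to E_v) at which
    E_f(q=0) and E_f(q=-1) cross; the chemical potentials cancel out."""
    return (e_minus + e_corr_minus) - e_neutral - e_v
```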
Chemical potentials for \(i=\{\mathrm{Si,C}\}\) were calculated as
\[\mu_{i}=\mu_{i}^{0}+(1-f_{i})\Delta E_{\mathrm{SiC}}^{\mathrm{f}}, \tag{2}\]
where \(\mu_{i}^{0}\) are energies per atom in pure silicon or carbon (diamond phase), \(\Delta E_{\mathrm{SiC}}^{\mathrm{f}}\) is the heat of formation of SiC estimated as \(\Delta E_{\mathrm{SiC}}^{\mathrm{f}}=\mu_{\mathrm{SiC}}^{0}-\mu_{\mathrm{Si}}^{0}-\mu_{\mathrm{C}}^{0}=-0.62\) eV, with \(\mu_{\mathrm{SiC}}^{0}\) being the energy per SiC formula unit in a perfect 4H-SiC crystal. This result is close to the enthalpy of formation \(\Delta H_{\mathrm{SiC}}^{\mathrm{f}}=-0.72\) eV measured at standard conditions [57]. Eq. 2 allows for a variation of the chemical potentials in the range \(\mu_{i}^{0}+\Delta E_{\mathrm{SiC}}^{\mathrm{f}}\leq\mu_{i}\leq\mu_{i}^{0}\) subject to \(0\leq f_{i}\leq\sum_{i}f_{i}=1\), with the upper limit representing \(i\)-rich conditions during the material growth. We will calculate the relative energy of different boron defects, all of which possess a single boron atom. Although it is an irrelevant quantity for this purpose, the chemical potential of boron (\(\mu_{\mathrm{B}}\)) was found from the \(\alpha\)-rhombohedral ground state phase [12 atoms per unit cell with \(R\bar{3}m\) space group (group No. 166)], with equilibrium lattice parameters \(a=5.029\) Å and \(\alpha=58^{\circ}\) [58].
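A few lines make the bookkeeping of Eq. (2) explicit; the function and variable names are ours. Note that \(\mu_{\mathrm{Si}}+\mu_{\mathrm{C}}=\mu_{\mathrm{SiC}}^{0}\) by construction, i.e., both species remain in equilibrium with the 4H-SiC host.

```python
def chem_potentials(f_si, mu0_si, mu0_c, dE_f=-0.62):
    """Eq. (2) with f_C = 1 - f_Si; f_si = 1 recovers Si-rich growth,
    f_si = 0 recovers C-rich (Si-poor) growth. Energies in eV."""
    mu_si = mu0_si + (1.0 - f_si) * dE_f
    mu_c = mu0_c + f_si * dE_f
    return mu_si, mu_c
```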
We also examined the relative stability of boron acceptors on different lattice sites at finite temperatures. The range of temperatures close to those experienced during epitaxial growth are of particular importance. In this case intrinsic conditions apply, and for acceptors with levels in the lower part of the gap the relevant charge state is the negative one. The difference in the Helmholtz free energy of formation between two boron dopants replacing different crystalline species is obtained as,
\[F(\mathrm{B_{Si}^{-}})-F(\mathrm{B_{C}^{-}})=\Delta F_{\mathrm{elec}}+\Delta F _{\mathrm{vib}}+\mu_{\mathrm{Si}}-\mu_{\mathrm{C}}. \tag{3}\]
In the above, \(\Delta F_{\mathrm{elec}}=F_{\mathrm{elec}}(\mathrm{B_{Si}^{-}})-F_{\mathrm{elec} }(\mathrm{B_{C}^{-}})\) is the electronic free energy difference between the two defects, where \(F_{\mathrm{elec}}\) is replaced by the stationary solution of the electronic problem, \(E_{\mathrm{elec}}\) (obtained within hybrid density functional theory). This approximation essentially neglects electronic entropy, and it is justified by the depth of the electronic levels and the negligible density of defect states at the Fermi level under intrinsic conditions [59]. The second term on the right-hand side of Eq. 3 accounts for the vibrational free energy difference between the defects, and for each we have,
\[F_{\mathrm{vib}}(T)=k_{\mathrm{B}}T\sum_{i=1}^{3N-3}\ln\left[2\sinh\left(\frac{ \hbar\omega_{i}}{2k_{\mathrm{B}}T}\right)\right]. \tag{4}\]
The summation above runs over \(3N-3\) vibrational modes of the \(N\)-atom defective supercell, with respective angular frequencies \(\omega_{i}\). Symbols \(k_{\mathrm{B}}\) and \(\hbar\) refer to the Boltzmann and reduced Planck constants, respectively. It is noted that Eq. 4 already accounts for zero-point motion. Chemical potentials in Eq. 3 were found from Eq. 2, after adding a vibrational term \(F_{\mathrm{vib}}/N\) to \(\mu_{\mathrm{Si}}^{0}\) and \(\mu_{\mathrm{C}}^{0}\), obtained from respective supercells of silicon and diamond made of \(N=64\) atoms, and with the temperature set to \(T=273.15\) K. Analogously, a vibrational free energy term \(2F_{\mathrm{vib}}/N\) was added to \(\mu_{\mathrm{SiC}}^{0}\). For further details regarding the calculation of defect free energies, we direct the reader to Refs. [59; 60; 61] and references therein.
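The free-energy bookkeeping of Eqs. (3) and (4) is simple enough to sketch; the frequencies would come from the dynamical matrix described below, and the variable names are ours.

```python
import numpy as np

KB = 8.617333e-5   # Boltzmann constant (eV/K)

def f_vib(temp, mode_energies_ev):
    """Eq. (4): harmonic vibrational free energy, zero-point motion included.
    mode_energies_ev holds hbar*omega_i (in eV) for the 3N-3 modes."""
    x = np.asarray(mode_energies_ev) / (2.0 * KB * temp)
    return KB * temp * np.sum(np.log(2.0 * np.sinh(x)))

def delta_f(e_bsi, e_bc, fvib_bsi, fvib_bc, mu_si, mu_c):
    """Eq. (3): F(B_Si^-) - F(B_C^-), with electronic entropy neglected."""
    return (e_bsi - e_bc) + (fvib_bsi - fvib_bc) + (mu_si - mu_c)
```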
The vibrational mode frequencies of 4H-SiC cells containing boron defects were evaluated in \(N=72\)-atom cells (3\(\times\)3\(\times\)1 primitive cells). We considered the participation of all atoms in the dynamical matrix, whose elements were found from the force derivatives with respect to the atomic positions [61].
The \(g\) tensor and hyperfine (HF) interactions of paramagnetic boron defects were calculated using the gauge including projector augmented wave (GIPAW) method [62] as implemented in the QUANTUM ESPRESSO package [63, 64]. The GIPAW method is based on self-consistent density functional perturbation theory, describing the applied magnetic field and spin-orbit couplings as perturbations. The current implementation pertaining to the \(g\) tensor calculation is limited to local and semilocal functionals. Hence, for these calculations, the Kohn-Sham states were found within the GGA [44]. We used hexagonal supercells of 256 atoms, a \(\Gamma\)-centered BZ sampling mesh of 2\(\times\)2\(\times\)2, and a plane-wave cutoff \(E_{\text{cut}}=612\) eV (45 Ry). The computation of reciprocal space derivatives to obtain spin currents in linear magnetic response makes the calculation of \(g\) tensors rather sensitive to \(\mathbf{k}\)-point sampling [65]. Convergence issues can be especially severe for states whose \(g\) values show large deviations from that of the free electron. For that reason, we also tested a denser \(3\times 3\times 3\) grid in the evaluation of \(g\) values for neutral \(\text{B}_{\text{Si}}\).
Due to erroneous geometries obtained for \(\text{B}_{\text{Si}}\) defects within GGA, atomistic structures for the GIPAW calculations were found at the HSE06 level (using the VASP code). Such a combined approach was successfully used in a recent study of defects in \(\text{Ga}_{2}\text{O}_{3}\) [66].
As for the HF coupling tensors \(A\), they describe the interaction between the electron spin of a paramagnetic state and magnetic nuclei at the defect core. For an axial state along an arbitrary principal direction 3, the transverse principal values obey \(A_{1}=A_{2}\), and the HF tensor can be described by isotropic (\(a\)) and anisotropic (\(b\)) hyperfine constants, which relate to the diagonalized tensor components as \(a=(2A_{1}+A_{3})/3\) and \(b=(A_{3}-A_{1})/3\) [27]. The evaluation of the HF tensors relies on the accurate computation of the spin density in the vicinity of the nuclei of interest, and for the case of the isotropic term (also known as Fermi contact), it involves the description of the electron density within the core region. Therefore the use of pseudopotentials implies a core reconstruction from the pseudo-wavefunctions [67].
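In these conventions the mapping between the principal values and the \((a,b)\) pair is a one-liner; the inverse relations are included for convenience.

```python
def hf_iso_aniso(a1, a3):
    """Axial hyperfine tensor (A1 = A2): isotropic (Fermi contact) and
    anisotropic constants, a = (2*A1 + A3)/3 and b = (A3 - A1)/3."""
    return (2.0 * a1 + a3) / 3.0, (a3 - a1) / 3.0

def hf_principal(a, b):
    """Inverse relations: A1 = A2 = a - b and A3 = a + 2*b."""
    return a - b, a + 2.0 * b
```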
## III Results
### Boron on the silicon site: shallow boron
We start by looking at the boron impurity on the Si site. In the neutral charge state, the boron atom was clearly displaced from the perfect lattice site after optimizing the energy with respect to the atomistic geometry. Essentially, boron formed three B-C bonds, leaving an unsaturated C radical. The on-site structure was metastable with a small \(\sim 0.1\) eV barrier along the way toward the off-site ground state structure.
4H-SiC has two distinct sublattice sites, namely cubic (\(k\)) and hexagonal (\(h\)), and for each, substitutional boron atoms can form two types of C radicals, namely those polarized along the hexagonal axis of the crystal (labeled with 'a' and standing for 'axial') and those polarized along the basal bond directions (labeled with 'b' and standing for 'basal'). This leads to a total of four possible defect configurations to consider.
Among all structures, those depicted in Figs. 1(a) and 1(b), namely \(\text{B}_{\text{Si}}(k_{\text{b}})\) and \(\text{B}_{\text{Si}}(h_{\text{a}})\), were the most stable at \(k\) and \(h\) sites, respectively. The B atom in both structures displays threefold coordination, where three short (1.65 Å) B-C bonds contrast with the \(\sim\!2.42\) Å long separation between B and the C radical (see dashed lines in Fig. 1). See Ref. [53] (and also Ref. [68]), which provides further geometrical details of the structures. While B-C bond lengths are essentially the same for all configurations, the longer B-C distance can vary by about 0.04 Å, depending on the specific site and orientation. The energies of the two most stable neutral states, namely \(\text{B}_{\text{Si}}^{0}(k_{\text{b}})\) and \(\text{B}_{\text{Si}}^{0}(h_{\text{a}})\), differ by 0.02 eV only, whereas \(\text{B}_{\text{Si}}^{0}(k_{\text{a}})\) and \(\text{B}_{\text{Si}}^{0}(h_{\text{b}})\) are metastable, respectively at 0.11 eV and 0.05 eV above the ground state \(\text{B}_{\text{Si}}^{0}(h_{\text{a}})\). The reason for the breaking of the B-C bond along different directions for \(\text{B}_{\text{Si}}(k)\) and \(\text{B}_{\text{Si}}(h)\) will become evident when we discuss the electronic structure of the center further below. We summarize the above results in the lower part of the configurational coordinate diagram represented in Fig. 1(c).
Figure 1: Low energy structures of neutral \(\text{B}_{\text{Si}}\) at (a) \(k\) and (b) \(h\) sites of 4H-SiC, respectively, and configurational coordinate diagram of neutral and negatively charged states (c). \(g\) tensor principal directions of neutral states are also shown. Boron, carbon and silicon are shown in black, gray and white, respectively. All energies in the diagram are in eV. Energies located below the energy minima are relative to the \(\text{B}_{\text{Si}}^{0}(h_{\text{a}})\) ground state. Energies next to arrow heads are relative to the state next to the arrow base. See Ref. [53] for details regarding the barrier calculations.

The above threefold coordinated \(\text{B}_{\text{Si}}\) defects are markedly different from those found from previous local density functional calculations. \(\text{B}_{\text{Si}}\) in 3C-SiC was essentially reported as a fourfold coordinated center, showing only slightly different B-C bond lengths due to a weak JT-driven \(C_{3v}\) distortion [33, 16]. Fourfold coordination was also found for B\({}_{\text{Si}}\) in 4H-SiC [34]. The neutral state was in this case interpreted as a shallow acceptor, binding a diffuse hole with the character of an EMT state. These conclusions are clearly at variance with our results -- we find that (1) the paramagnetic B\({}^{0}_{\text{Si}}\) state is a singlet, showing the highest symmetry allowed by the crystalline host, _i.e._, it is immune to the JT effect, and (2) it is strongly localized on the carbon radical next to boron, which is not in line with an EMT state.
An explanation for the above conflict was put forward by Gerstmann _et al._[35], who interpreted the prediction of an effective-mass character for B\({}_{\text{Si}}\) as a failure of LDA, and as a corollary, a failure to describe the measured \({}^{13}\)C hyperfine data: "_like the well-known underestimation of the fundamental band gap, the localization of this defect state is also strongly underestimated_". Accordingly, the LDA gap is about 50% narrower than the measured value, and for that reason, the C dangling bond state becomes artificially over-mixed with the SiC valence states. On the other hand, the non-local HSE06 functional predicts a 3.2 eV wide-gap for 4H-SiC, allowing the singlet acceptor state to emerge above the valence band top.
An analogous effect was found by Gouveia and Coutinho [47] for C\({}_{\text{i}}\) in 3C-SiC, but in this case involving the mixing of a gap level with the conduction band. Based on analysis of the Kohn-Sham data of structures ranging between the \(D_{2d}\) (ground state with spin-1) and \(C_{1h}\) (metastable and diamagnetic) structures of C\({}^{0}_{\text{i}}\), an overestimated mixing between the C\({}_{\text{i}}\) highest occupied level and the conduction band states was attributed to the narrow (semi-)local band gap, which favored the incorrect \(C_{1h}\) structure. Besides the exchange-correlation treatment, this effect may depend on other factors, most notably the dispersion of the defect state (the mixing could be \(\mathbf{k}\)-point dependent), the sampling of the BZ, or the size/shape of the supercells. The authors of Ref. [47] used 512-atom cubic cells with the Brillouin zone sampled at \(\Gamma\). More recently, Schultz _et al._[69] found that upon improving the sampling to \(2\times 2\times 2\) (using identical supercells and GGA-level exchange-correlation treatment), the correct spin-1 \(D_{2d}\) state could be recovered. In Ref. [36] it was noted that the \(C_{1h}\) metastable structure of C\({}^{0}_{\text{i}}\) was the most stable when using 216-atom cells with \(\Gamma\)-sampling even at hybrid-DFT level. However, upon adding \(\mathbf{k}\)-points away from \(\Gamma\) to the sampling mesh (where the gap is wider), the correct \(D_{2d}\) configuration was also recovered. The result of Ref. [70], where the \(C_{1h}\) structure was originally proposed as the most stable, was therefore attributed to errors related to calculation settings.
One could ask whether, for the case of B\({}_{\text{Si}}\), the above effect results from poor sampling of the Brillouin zone. In Ref. [53] we demonstrate that the valence band edge overmixing of B\({}_{\text{Si}}\) at the GGA level is a robust result, found even for high-density \(\mathbf{k}\)-point samplings (up to \(4\times 4\times 4\)).
From inspection of the band structure of defective supercells we arrived at the orbital model for B\({}^{0}_{\text{Si}}\) depicted on the left hand side of Fig. 2. It consists of a schematic diagram without spin resolution. Upward/downward arrows simply reflect the electron occupancy. The model postulates how the sp\({}^{2\uparrow\uparrow\uparrow}\) states of atomic B(sp\({}^{2}\)) unfold under the effect of a trigonal crystal field of B(sp\({}^{2}\)-\(C_{3v}\)), and how these hybridize with the silicon vacancy states (\(V^{0}_{\text{Si}}\)) to produce the electronic structure of B\({}^{0}_{\text{Si}}\). Accordingly, three short B-C bonds of B\({}_{\text{Si}}\) are formed with the participation of six electrons on low-energy bonding states \(a_{1}+e\). These result from overlap of \(a_{1}\) and \(e(xy)\) states localized on three C atoms edging \(V^{0}_{\text{Si}}\), with \(a_{1}\)(s) and \(e(p_{x}p_{y})\) of threefold coordinated B(sp\({}^{2}\)-\(C_{3v}\)). Both \(a_{1}+e\) and corresponding anti-bonding states \(a_{1}^{*}+e^{*}\) of B\({}^{0}_{\text{Si}}\) are resonant with the valence and conduction bands, respectively. The weak interaction of \(a_{1}(z)\) (localized on the fourth carbon radical of \(V_{\text{Si}}^{0}\)) with the \(a_{1}(\text{p}_{z})\) state from the displaced boron atom leaves the former within the gap and semioccupied. The \(a_{1}(z)\) state is the C radical responsible for the acceptor activity of \(\text{B}_{\text{Si}}\); the short covalent B-C bonds naturally explain the off-site distortion without invoking a JT effect.

Figure 2: Schematic one-electron models of B\({}^{0}_{\text{Si}}\) (left) and B\({}^{0}_{\text{C}}\) (right) impurities in 4H-SiC constructed by hybridization of valence states from sp\({}^{2}\) and sp\({}^{3}\) atomic boron (middle-left and middle-right) with states from the silicon and carbon vacancies (left and right), respectively. Labeling of states is according to the \(C_{3v}\) point group, except for isolated atoms in the middle. Some labels include the direction of the wave function polarization within parentheses. The diagrams are spin-averaged with upward/downward arrows indicating the level occupancy.
A picture close to that of Fig. 2 was discussed in the literature nearly three decades ago by Bratus and co-workers [31]. From analysis using a linear combination of atomic orbitals (LCAO), it was argued that the \(\text{sp}^{2}+\text{p}_{z}\) hybridization of boron on the Si site was more stable than \(\text{sp}^{3}\) simply because (1) the covalent radius of B is much smaller than that of Si and (2) the three bonds of B(\(\text{sp}^{2}\)) with carbon (\(\sim\) 1.6 A long) are considerably shorter than the host Si-C bonds (\(\sim\) 1.9 A). Among the main conclusions was also the description of \(\text{B}_{\text{Si}}^{0}\) ground state as a singlet, and consequently, that the observed displacement of B from the perfect crystalline site could be explained without a Jahn-Teller effect.
Subsequent studies by Petrenko _et al._[32], now using a semi-empirical modified neglect of diatomic overlap method, also supported a pronounced off-site location for \(\text{B}_{\text{Si}}\) in SiC. The crystalline host was approximated as a hydrogen-saturated spherical cluster of \(\sim\) 90 SiC atoms (3C phase). With the emergence of first-principles local density functional supercell calculations, a fourfold coordinated structure for \(\text{B}_{\text{Si}}\) with effective-mass character became favored, suggesting that the findings of Ref. [32] resulted from limitations of the method employed. For instance, one could argue that due to quantum confinement and underscreening effects, the band gap of the small clusters was rather wide. That effect could have eliminated the mixing of \(a_{1}(z)\) with the valence band, thus favoring the \(\text{sp}^{2}\)-like bonding of boron. Another argument cautioning against the off-site location of \(\text{B}_{\text{Si}}\) is the fact that such relaxations are often overestimated when modeling defects in H-terminated clusters.
On the contrary, we argue that the (semi-)local density functional results for neutral \(\text{B}_{\text{Si}}\) are spurious, that hybrid DFT finds the correct off-site location of the B atom, and the LCAO-based arguments of Bratus _et al._[31] were essentially correct after all. Figure 3 depicts the spin density in the vicinity of neutral \(\text{B}_{\text{Si}}(h)\) in 4H-SiC as found for (a) the off-site ground state configuration within hybrid-DFT/HSE06 and (b) the on-site ground state configuration within conventional DFT/GGA. Both isosurfaces have the same spin density cutoff (0.003 e/Å\({}^{3}\)). They depict the border within which the magnitude of the spin density is above the specified threshold. Figure 3(a) shows that the amplitude of the spin density near the core of threefold coordinated \(\text{B}_{\text{Si}}^{0}\) is much larger than in the fourfold coordinated configuration. In the latter case, many isosurface _bubbles_ (with that specific spin density magnitude) are scattered across the supercell volume, hidden behind the spheres and cylinders used to represent atoms and bonds. Upon decreasing the cutoff by half, no spin density isosurface could be seen for the fourfold coordinated boron, while the p-like state of threefold coordinated boron was well visible. This is consistent with deep threefold and shallow fourfold states, respectively. Clearly, the DFT/GGA approximation predicts a diffuse state with very little localization at the core of the defect.
Still regarding the bonding character of \(\text{B}_{\text{Si}}^{0}\) in SiC, we note that this center is isovalent to substitutional nitrogen on the Si site of SiC (\(\text{N}_{\text{Si}}\)) [71] as well as substitutional nitrogen in diamond (\(\text{N}_{\text{s}}\)) [72]. Within a simple Lewis picture, \(\text{B}_{\text{Si}}^{0}\) can be represented as [\(\equiv\text{B}_{\text{Si}}\)\(\bullet\)C\(\equiv\)], where each horizontal bar stands for a single C-B or C-Si bond, and the bullet is an unpaired electron. Analogously, \(\text{N}_{\text{Si}}^{0}\) in SiC and neutral substitutional N in diamond can be described as [\(\equiv\text{N}_{\text{Si}}\): \(\bullet\)C\(\equiv\)] and [\(\equiv\)N\({}_{\text{s}}\): \(\bullet\)C\(\equiv\)], respectively, where the dots ":" represent a lone-pair of electrons tightly bound to nitrogen and deep within the valence band. Like the B species in the Si site of SiC, N atoms with four carbon nearest neighbors become threefold coordinated next to a paramagnetic C radical. However, unlike \(\text{B}_{\text{Si}}\), local and semilocal density functional calculations account well for their off-site structure [73, 71, 74]. Although an explanation for such behavior is outside the scope of the present work, we speculate that short C-N bonds combined with Coulomb repulsion between the N lone pair and the unpaired electron on the C dangling bond could be important ingredients for the stabilization of the off-site configuration. A strong indication in favor of this argument is that while the C-radical of \(\text{B}_{\text{Si}}^{0}\) in SiC induces a semioccupied state low in the gap, C-radicals of \(\text{N}_{\text{Si}}\) in SiC and \(\text{N}_{\text{s}}\) in diamond lead to semioccupied states in the upper half of the gap, suggesting a stronger repulsion of the unpaired electron in the N-related defects.
The semioccupied \(a_{1}^{\uparrow}(z)\) singlet of \(\text{B}_{\text{Si}}^{0}\) in 4H-SiC is represented in Fig. 2 just above the valence band top. A spin-averaged calculation of \(\text{B}_{\text{Si}}^{0}(h_{\text{a}})\) reveals that this level is located 0.52 eV above the highest occupied Kohn-Sham level from the bulk. On the other hand, in a spin-polarized calculation the spin-up \(a_{1}^{\uparrow}(z)\) level lies within the valence band (the highest occupied state is bulk-like), while the spin-down component of \(a_{1}(z)\) is 1.47 eV above the \(E_{\text{v}}\) level. This picture is indicative of deep acceptor activity.
Figure 3: Spin density isosurface (cutoff 0.003 e/Å\({}^{3}\)) of neutral \(\text{B}_{\text{Si}}\) defects at the \(h\) site of 4H-SiC. (a) Off-site threefold coordinated ground state configuration obtained within HSE06. (b) Four-fold configuration as found from a GGA-level calculation. Si, C and B atoms are shown in white, gray and black, respectively.

Upon atomic relaxation of negatively charged defects (\(\text{B}_{\text{Si}}^{-}\)), we found that independently of the lattice site and initial configuration, the boron atom moved to the perfect substitutional site, thus forming four nearly equivalent 1.77 Å long B-C bonds. Concurrently, the \(a_{1}(\mathrm{p}_{z})\) state of boron increased its mixing with \(a_{1}(z)\) from \(V_{\mathrm{Si}}\) to form the fourth B-C bond. The resulting \(a_{1}(z)\) bond state of \(\mathrm{B}_{\mathrm{Si}}^{-}\) became resonant with the valence band, and the Kohn-Sham band gap was left clean. This does not imply that \(\mathrm{B}_{\mathrm{Si}}^{-}\) cannot capture a hole to become neutral. It does not imply that it is a shallow acceptor either. As will be shown in Sec. III.3, hole capture is accompanied by reconfiguration to the threefold coordinated structure, making the hole trap relatively deep. As summarized in Fig. 1(c), the energy of \(\mathrm{B}_{\mathrm{Si}}^{-}(h)\) was found slightly lower (by 0.04 eV) than that of \(\mathrm{B}_{\mathrm{Si}}^{-}(k)\).
Now we look at the origin of the site-dependent alignment of \(\mathrm{B}_{\mathrm{Si}}^{0}\) in 4H-SiC (the arguments discussed below apply to other polytypes as well). The analysis is best followed with the help of Fig. 4. In 4H-SiC, the stacking of SiC dimers along the \(c\) axis occurs according to an A-B-C-B sequence, where A and C are hexagonal bilayers and B are cubic bilayers. Importantly, while hexagonal SiC dimers (type A and C) are replicated in steps of length \(c\) along the main crystallographic direction (where \(c\) is the axial lattice parameter), cubic bilayers (type B) are repeated in \(c/2\)-long steps. This results in a _wavier_ electrostatic potential and a stronger electric field in crystalline regions along type B columns (see Fig. 4).
The \(a_{1}^{\uparrow}(z)\) state on the C radical of \(\mathrm{B}_{\mathrm{Si}}^{0}(k_{\mathrm{a}})\) interacts with the extensive \(3\mathrm{sp}^{3}\) valence electrons of the nearest Si atom along the axis, only \(c/2-r_{0}=3.14\) Å away from carbon, where \(r_{0}\) is the Si-C bond length (see left hand side of Fig. 4). This repulsion effectively raises the energy of \(\mathrm{B}_{\mathrm{Si}}^{0}(k_{\mathrm{a}})\) by 0.11 eV with respect to \(\mathrm{B}_{\mathrm{Si}}^{0}(h_{\mathrm{a}})\). In the latter case, the Si atom on the back of the C\(\cdots\)B\({}_{\mathrm{Si}}(h_{\mathrm{a}})\) unit is \(c-r_{0}=8.17\) Å away from C (see right hand side of Fig. 4).
For symmetry reasons, the above analysis cannot be strictly applied to \(\mathrm{B}_{\mathrm{Si}}^{0}(k_{\mathrm{b}})\) and \(\mathrm{B}_{\mathrm{Si}}^{0}(h_{\mathrm{b}})\) defects (with C radicals polarized along Si-C basal bonds). However, analogous conclusions may be drawn by inspecting the amount of empty space between the carbon radical and the nearest atom along the B\(\cdots\)C direction. As depicted in the middle of Fig. 4, in a pristine 4H-SiC crystal, that distance is \(3c/4-r_{0}=5.64\) Å for both \(k\) and \(h\) sites, thus lying right between the lower and upper limits of the axially distorted configurations. This is consistent with the energy ordering found for \(\mathrm{B}_{\mathrm{Si}}^{0}(h_{\mathrm{a}})<\mathrm{B}_{\mathrm{Si}}^{0}(k_{\mathrm{b}})\sim\mathrm{B}_{\mathrm{Si}}^{0}(h_{\mathrm{b}})<\mathrm{B}_{\mathrm{Si}}^{0}(k_{\mathrm{a}})\).

Figure 4: Location and possible alignments of \(\mathrm{B}_{\mathrm{Si}}\) defects on the \(\{11\bar{2}0\}\) plane of 4H-SiC. Silicon and carbon atoms are shown in white and gray. Boron and carbon at the core of the defect are represented as black and gray-haloed circles. For the sake of clarity, all atomic positions are those of the perfect crystal. The C-B broken bond of the neutral state is represented as a dotted line. Relevant distances are indicated next to arrows, where \(c\) and \(r_{0}\) are the axial lattice parameter and the Si-C bond length, respectively. Stacking (A, B, C) and site (\(k\), \(h\)) indexes are also indicated.
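The empty-space distances quoted above follow directly from the calculated lattice constant. A quick check, with an assumed Si-C bond length of \(r_{0}\approx 1.89\) Å, reproduces the quoted figures to within rounding:

```python
c, r0 = 10.052, 1.89   # axial lattice parameter and assumed Si-C bond (Angstrom)
print(f"k_a     :  c/2 - r0 = {c / 2 - r0:.2f} A")        # ~3.14 A, strongest repulsion
print(f"h_a     :  c   - r0 = {c - r0:.2f} A")            # ~8.16 A, weakest repulsion
print(f"k_b/h_b : 3c/4 - r0 = {3 * c / 4 - r0:.2f} A")    # ~5.65 A, intermediate
```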
The reorientation barrier between basal and axial distortions of neutral \(\mathrm{B}_{\mathrm{Si}}\) defects was found from a batch of nudged elastic band (NEB) calculations encompassing five intermediate structures between initial and final states. See Ref. [53] (and also Ref. [75]) for details of the barrier calculations. From the results we find activation barriers of 0.04 eV and 0.06 eV for \(k_{\mathrm{a}}\to k_{\mathrm{b}}\) and \(h_{\mathrm{b}}\to h_{\mathrm{a}}\) reorientations. These jumps involve a return from metastable to lowest energy structures of \(\mathrm{B}_{\mathrm{Si}}^{0}\) in \(k\) and \(h\) sites, respectively. These figures are reflected in the diagram of Fig. 1(c). Given the above meV-range barriers, the metastable states are probably not formed, even at liquid-He temperature.
The reorientation of the C radical of \(\mathrm{B}_{\mathrm{Si}}(k)\) between equivalent basal orientations was also investigated using the NEB method. We found that \(\mathrm{B}_{\mathrm{Si}}(k)\) has to surmount a barrier of 0.09 eV to perform a \(k_{\mathrm{b}}\to k_{\mathrm{b^{\prime}}}\) jump between neighboring alignments with the same energy. Hence, above a certain (low) temperature, \(\mathrm{B}_{\mathrm{Si}}(k)\) is likely to roam around all equivalent \(k_{\mathrm{b}}\) distortions, showing effective thermally averaged \(C_{3v}\) symmetry.
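An order-of-magnitude feel for this "certain (low) temperature" comes from an Arrhenius estimate. The attempt frequency of \(\sim\!10^{13}\) Hz below is our assumption (a typical phonon frequency), not a calculated quantity:

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant (eV/K)

def hop_rate(e_a, temp, nu0=1e13):
    """Arrhenius hopping rate for a reorientation barrier e_a (eV)."""
    return nu0 * np.exp(-e_a / (KB * temp))

for temp in (20.0, 45.0, 80.0):
    print(f"T = {temp:3.0f} K : {hop_rate(0.09, temp):.1e} Hz")
# ~1e-10 Hz at 20 K (static), ~1e3 Hz at 45 K, ~1e7 Hz at 80 K (motionally averaged)
```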
### Boron on the carbon site: deep boron
Regarding the boron replacement of carbon (\(\mathrm{B}_{\mathrm{C}}\)), we found that the boron impurity sits very close to the crystalline site. Very small B-Si bond distortions were obtained when symmetry breaking was allowed during the relaxations. From inspection of the Kohn-Sham band structure we found that the on-site configuration (with \(C_{3v}\) symmetry) introduces a deep doublet state in the gap. In a spin-averaged calculation of a trigonal \(\mathrm{B}_{\mathrm{C}}^{0}(h)\) defect, a pair of doubly degenerate Kohn-Sham states occupied by three electrons appears at 0.29 eV above the highest occupied level from the bulk. On the other hand, in a spin-polarized calculation of the same structure the spin-up \(e^{\uparrow\uparrow}(xy)\) level lies at 0.06 eV above \(E_{\mathrm{v}}\), whereas the spin-down counterpart \(e^{\downarrow}(xy)\) is 0.44 eV above \(E_{\mathrm{v}}\). Note that these figures neglect any Jahn-Teller relaxation and electron-phonon coupling effects (the occupation of the doublets was fixed -- not variational).
A simplified bond orbital model for neutral B\({}_{\text{C}}\) is shown on the right half of Fig. 2. It represents the conversion of atomic boron B(sp\({}^{3}\)) states under the effect of a trigonal crystal field, B(sp\({}^{3}\)-C\({}_{3v}\)), and the hybridization of the latter with \(a_{1}^{\uparrow\downarrow}+a_{1}^{\uparrow\downarrow}(z)+e(xy)\) states of the carbon vacancy (where boron is sitting). The Si radicals edging the \(V_{\text{C}}\) defect are considerably more diffuse than the C radicals in \(V_{\text{Si}}\), and therefore their overlap with boron is significant for all states. The result is the formation of bonding \(a_{1}^{\uparrow\downarrow}+a_{1}^{\uparrow\downarrow}(z)\) and anti-bonding \(a_{1}^{*}+a_{1}^{*}(z)\) singlets within the valence and conduction bands, respectively, while a partially occupied \(e^{\uparrow\downarrow\uparrow}(xy)\) doublet is left in the gap. The \(a_{1}\) and \(a_{1}(z)\) states are respectively located on basal and axial B-Si bonds, while the components of \(e(xy)\) are B-centered p\({}_{x}\)- and p\({}_{y}\)-like states overlapping basal bonds only. It is clear that any electronic activity of B\({}_{\text{C}}\) must be ascribed to the \(e(xy)\) state.
Upon monoclinic distortion (\(C_{1h}\) symmetry), the \(e^{\uparrow\downarrow\uparrow}(xy)\) neutral state can either split into \(a^{\prime\prime}\uparrow\downarrow(x)+a^{\prime}\uparrow(y)\) or \(a^{\prime}\uparrow\downarrow(y)+a^{\prime\prime}\uparrow(x)\) states with net spin \(S=1/2\). Here \(a^{\prime}\) and \(a^{\prime\prime}\) are respectively symmetric and anti-symmetric with respect to a \(\{2\bar{1}\bar{1}0\}\) mirror plane. While \(a^{\prime\prime}\) is a p\({}_{x}\)-like state with a node coincident with the mirror plane, \(a^{\prime}\) is p\({}_{y}\)-like with a node on the boron atom and polarized along \(\langle 01\bar{1}0\rangle\). Irrespective of the lattice site, we found that the most stable JT-distorted configuration of B\({}_{\text{C}}^{0}\) involved a minute (\(\sim 0.06\) Å) displacement of boron along \(\langle 01\bar{1}0\rangle\), leading to two shorter B-Si bonds (and a slightly elongated one). That configuration corresponds to the electronic state \(a^{\prime}\uparrow\downarrow+a^{\prime\prime}\uparrow\). The alternative \(a^{\prime\prime}\uparrow\downarrow+a^{\prime}\uparrow\) state was metastable by 15 meV only. Overall, B\({}_{\text{C}}^{0}(k)\) was more stable than B\({}_{\text{C}}^{0}(h)\) by 39 meV.
Interestingly, and despite the minute JT-driven bond deformations, the relaxation energy with respect to the high-symmetry (\(C_{3v}\)) state was about 0.25 eV for both B\({}_{\text{C}}^{0}(k)\) and B\({}_{\text{C}}^{0}(h)\). This is a surprisingly large value, and as far as we could find, it is not an artifact. The electronic occupancy of the high symmetry state (at the JT singularity) was not variational during the self-consistent cycle, and each pair of spin components of the doublet kept equal occupancy.
While the JT relaxation energy is a considerable barrier to surmount at liquid-He temperature, the question is -- how likely is it that boron jumps between neighboring off-axis configurations, showing a dynamic Jahn-Teller effect? There are in total 6 possible JT displacements of boron away from the perfect C-site. They comprise alternating \(a^{\prime}\uparrow\downarrow+a^{\prime\prime}\uparrow\) and \(a^{\prime\prime}\uparrow\downarrow+a^{\prime}\uparrow\) states around the hexagonal axis of the crystal, defined by a rotation angle of \(\pi/3\). Jumping between neighboring structures involves a displacement of the B atom of only 0.04 Å. Although the barrier was not calculated with a proper transition-state method, it was estimated from the energy of the structure midway between two neighboring JT configurations. The small traveling distance of the B atom justifies this simple approach. Accordingly, we found that the rotation barrier is about 15 meV for both B\({}_{\text{C}}^{0}(k)\) and B\({}_{\text{C}}^{0}(h)\). Such a minute figure is smaller than the zero-point energy of an oscillating B-Si bond, suggesting that the B\({}_{\text{C}}^{0}\) defects effectively roam around the \(c\) axis, thus showing a dynamic-JT effect even at liquid-helium temperature.
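To put the 15 meV barrier against a zero-point scale: assuming a representative B-Si stretching frequency of \(\sim\!600\) cm\(^{-1}\) (an assumption on our part; no boron local mode was resolved, cf. Sec. III.4), the zero-point energy comfortably exceeds the barrier:

```python
CM1_TO_EV = 1.239842e-4       # 1 cm^-1 in eV
nu = 600.0                    # assumed B-Si mode frequency (cm^-1)
zpe = 0.5 * nu * CM1_TO_EV    # hbar*omega/2
print(f"ZPE = {zpe * 1e3:.0f} meV vs 15 meV barrier")   # ~37 meV > 15 meV
```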
In the negative charge state, the doublet becomes fully occupied and B\({}_{\text{C}}^{-}\) recovers the full trigonal symmetry of the C-site. In this charge state, the impurity at the \(k\)-site is 76 meV more stable than at the \(h\)-site.
### Connection with optical and junction spectroscopy
The formation energy of boron impurities, obtained according to Eq. 1, is shown in Fig. 5. There we show the results for the formation energy of B\({}_{\text{Si}}\) and B\({}_{\text{C}}\) defects in 4H-SiC under carbon-rich and carbon-poor conditions (left- and right-hand side diagrams, respectively), as a function of the Fermi energy (referenced to the valence band top). Solid and dashed lines refer to boron defects located at \(k\) and \(h\) sites, respectively.

Clearly, and in agreement with previous findings [16; 33], in carbon-rich material, where depletion of Si is favored, B\({}_{\text{Si}}\) has lower formation energy than B\({}_{\text{C}}\). The opposite is found for C-poor material. At growth temperatures, where the Fermi level can be assumed to be at mid-gap, the formation energy of B\({}_{\text{Si}}\) is 0.74-0.85 eV lower than that of B\({}_{\text{C}}\) in C-rich samples. On the other hand, B\({}_{\text{C}}\) is more stable than B\({}_{\text{Si}}\) by 0.36-0.48 eV in C-poor samples. The ranges result from considering \(k\) and \(h\) sites for each impurity.
Figure 5 shows that both B\({}_{\text{Si}}\) and B\({}_{\text{C}}\) are single acceptors. The defects adopt a negative charge state for a wide range of Fermi levels, and we did not find donor transitions or additional acceptor transitions within the gap.
Figure 5: Formation energy diagrams of B\({}_{\text{Si}}\) (blue lines) and B\({}_{\text{C}}\) (red lines) in 4H-SiC. Solid and dashed lines represent formation energies of boron defects located at \(k\) and \(h\) sites, respectively.

Considering the lowest-energy configurations of neutral B\({}_{\text{Si}}\) defects at \(k\) and \(h\) sites, we place the acceptor levels of B\({}_{\text{Si}}(k)\) and \(\text{B}_{\text{Si}}(h)\) at \(E_{\text{v}}+0.34\) eV and \(E_{\text{v}}+0.32\) eV, respectively. These results are shown graphically in Fig. 1(c), and they indicate that the binding energy of the hole to \(\text{B}_{\text{Si}}\) is almost independent of the lattice site, despite the adoption of rather distinct crystalline alignments by neutral \(\text{B}_{\text{Si}}(k_{\text{b}})\) and \(\text{B}_{\text{Si}}(h_{\text{a}})\) ground states.
These results are in line with the observation of a single peak by DLTS and Laplace-DLTS related to a hole trap of shallow boron at \(E_{\text{v}}+0.27\) eV [8; 18; 25]. Despite the agreement, we note that the calculated difference between the acceptor levels of \(\text{B}_{\text{Si}}\) at the \(k\) and \(h\) sites (20 meV) is smaller than the typical error of the method employed for the calculation. Additionally, the detection of a single peak by the Laplace-DLTS technique suggests that the difference could be even smaller, or that one of the configurations is dominant. The calculated relative energies of \(\text{B}_{\text{Si}}(k)\) and \(\text{B}_{\text{Si}}(h)\) do not support the second possibility.
An important question relates to the mechanism behind the capture of holes by \(\text{B}_{\text{Si}}^{-}\). After all, the band structure of a supercell with this defect state shows a clean band gap. Our findings indicate that the mechanism involves a strong electron-phonon coupling, much like in a polaronic trapping effect [77]. Essentially, the off-site distortion of \(\text{B}_{\text{Si}}^{-}\) raises an occupied level above the valence band top, which is then stabilized upon hole capture. The first stage (level raising above \(E_{\text{v}}\)) translates into the surmounting of a capture barrier, estimated to be of the order of 0.1 eV. See Ref. [53] (and also Refs. [78; 79; 80; 81; 82]) for details regarding the raising of the level above \(E_{\text{v}}\) and the estimation of the capture barrier.
Regarding boron on the carbon site, we find \((-/0)\) transitions at \(E_{\text{v}}+0.63\) eV and \(E_{\text{v}}+0.67\) eV for \(\text{B}_{\text{C}}(k)\) and \(\text{B}_{\text{C}}(h)\), respectively. Neutral ground states with electronic configuration \(a^{\prime}\uparrow\downarrow+a^{\prime\prime}\uparrow\) were considered in our calculations. These figures agree well with early and recent measurements in 6H- and 4H-SiC [8; 9; 24; 25; 18], which indicate a transition of deep boron in the range 0.5-0.7 eV above the valence band top.
The separation between calculated levels of \(\text{B}_{\text{C}}(k)\) and \(\text{B}_{\text{C}}(h)\) is small, \(\approx\) 40 meV, but about twice as large as the analogous figure obtained for \(\text{B}_{\text{Si}}\). Again, this difference is smaller than the error of the calculations, and therefore should be considered with due care. Considering that the signal of the D center was recently shown to comprise two equally intense peaks separated by nearly 0.1 eV, our results support the view that these peaks arise from two nearly equivalent deep boron acceptors: a "shallower" configuration sitting at the cubic carbon site and a "deeper" one replacing the hexagonal site. These correspond to measured transitions at \(E_{\text{v}}+0.49\) eV and \(E_{\text{v}}+0.57\) eV, respectively [25].
### Finite temperature calculations
Up until now, our results refer to zero temperature conditions, not even accounting for differences in zero-point motion between \(\text{B}_{\text{Si}}\) and \(\text{B}_{\text{C}}\) species. However, at high temperatures the effect of entropy on the relative stability of \(\text{B}_{\text{Si}}\) and \(\text{B}_{\text{C}}\) can be relevant. To strengthen our conclusions, we evaluated their respective free energies of formation at high temperatures, in particular under intrinsic conditions. For the sake of testing the methodology we calculated the specific heat at constant volume for bulk 4H-SiC as,

\[c_{\rm v}(T)=-T\left(\frac{\partial^{2}F_{\rm vib}}{\partial T^{2}}\right), \tag{5}\]

and the result is shown in Fig. 6. In that plot, we also report several data points recorded during experiments at constant pressure for \(\alpha\)-SiC (6H-SiC) [76].

Figure 6: Specific heat of 4H-SiC calculated at constant volume within the harmonic approximation (solid line). Circles represent measured data for \(\alpha\)-SiC, obtained under constant pressure conditions as reported in Ref. [76]. The calculation employed Eq. 5 and considered a total of 213 vibrational frequencies from a 72-atom 4H-SiC supercell.

Figure 7: (a) Free energy of \(\text{B}_{\text{Si}}^{-}(k)\) with respect to that of \(\text{B}_{\text{C}}^{-}(k)\) in 4H-SiC as a function of temperature under Si-poor conditions. The Fermi level was considered to be located at midgap. (b) Concentration ratio of \(\text{B}_{\text{Si}}^{-}\) to that of \(\text{B}_{\text{C}}^{-}\) as a function of the stoichiometric growth conditions (represented by \(f_{\text{Si}}\)), for selected temperatures.
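Eq. (5) is straightforward to evaluate numerically once the mode spectrum is at hand. A self-contained sketch with a single placeholder oscillator follows (the real calculation uses the 213 supercell mode frequencies):

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant (eV/K)

def f_vib(temp, mode_energies_ev):
    """Eq. (4): F_vib = kB*T * sum_i ln[2*sinh(hbar*omega_i / (2*kB*T))]."""
    x = np.asarray(mode_energies_ev) / (2.0 * KB * temp)
    return KB * temp * np.sum(np.log(2.0 * np.sinh(x)))

def c_v(temp, mode_energies_ev, dt=1.0):
    """Eq. (5): c_v = -T * d^2F_vib/dT^2, via central finite differences."""
    f = lambda t: f_vib(t, mode_energies_ev)
    return -temp * (f(temp + dt) - 2.0 * f(temp) + f(temp - dt)) / dt**2

# Sanity check: a single 100 meV mode approaches the classical limit kB at high T
print(c_v(2000.0, [0.100]) / KB)   # ~0.99
```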
The calculated specific heat describes the measurements very well up to nearly \(T\sim 800\) K, when anharmonic effects start to gain importance, and beyond which the calculated free energy and its derivatives become more qualitative. In Ref. [61], we demonstrated that these calculations cannot be improved by enlarging the supercells. Also important is the fact that the constant volume calculations match well the constant pressure measurements across a wide range of temperatures. The reason is hinted at by the minute thermal expansion of crystalline SiC, which is about \(5\times 10^{-6}\) K\({}^{-1}\) for temperatures as high as 1000 \({}^{\circ}\)C [46].
The calculated difference in the free energy of formation \(\Delta F(k)=F(\mathrm{B}_{\rm Si}^{-}(k))-F(\mathrm{B}_{\rm C}^{-}(k))\), is shown in Fig. 7(a) in the temperature range \(T=600\)-1800 K. The quantity represented refers to impurities located in cubic sites. For boron defects at the hexagonal sites the \(T\)-dependence of the analogous quantity was almost identical, although its magnitude increased by about 0.1 eV. Figure 7(a) shows that \(\mathrm{B}_{\rm Si}^{-}\) increases its relative stability with respect to \(\mathrm{B}_{\rm C}^{-}\) by almost 0.05 eV when raising the temperature from 1000 K to 2000 K. The implication of this result is illustrated in Fig. 7(b) where we plot the concentration ratio of \(\mathrm{B}_{\rm Si}^{-}\) to \(\mathrm{B}_{\rm C}^{-}\) defects as a function of the stoichiometric conditions (represented by \(f_{\rm Si}\)), at different temperatures. Under equilibrium, the concentration ratio is given by
\[\frac{\left[\mathrm{B}_{\rm Si}^{-}\right]}{\left[\mathrm{B}_{\rm C}^{-}\right]}=\frac{1}{2}\left[\exp\left(-\frac{\Delta F(k)}{k_{\rm B}T}\right)+\exp\left(-\frac{\Delta F(h)}{k_{\rm B}T}\right)\right], \tag{6}\]
where \(\Delta F(k)\) and \(\Delta F(h)\) are free energy differences \(F(\mathrm{B}_{\rm Si}^{-})-F(\mathrm{B}_{\rm C}^{-})\) pertaining to \(k\) and \(h\) sites, respectively [as represented in Fig. 7(a)]. For chemical vapor deposition grown material, reactors typically run at temperatures of about 1600-1650 \({}^{\circ}\)C (\(T\sim 1900\) K) [83]. Under these conditions we estimate \(\left[\mathrm{B}_{\rm Si}^{-}\right]/\left[\mathrm{B}_{\rm C}^{-}\right] \approx 200\) and about 0.1 for a Si-poor and Si-rich stoichiometry, respectively. It is evident that even in the limit of Si-poor growth, which is the most favorable for the introduction of \(\mathrm{B}_{\rm Si}\), thermodynamics imposes the formation of deep boron centers with a concentration about two orders of magnitude below that of the shallow counterpart. Figure 7(b) shows that even at \(T=1700\) K the \(\left[\mathrm{B}_{\rm Si}^{-}\right]/\left[\mathrm{B}_{\rm C}^{-}\right]\) ratio is nearly 350, and probably the elimination of \(\mathrm{B}_{\rm C}\) cannot be achieved during growth.
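Plugging representative numbers into Eq. (6) reproduces the quoted ratios; the \(\Delta F\) values below are read off Fig. 7(a) by eye and are therefore approximate:

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant (eV/K)

def ratio(df_k, df_h, temp):
    """Eq. (6): site-averaged Boltzmann ratio [B_Si^-]/[B_C^-]."""
    return 0.5 * (np.exp(-df_k / (KB * temp)) + np.exp(-df_h / (KB * temp)))

# Si-poor conditions, dF(k) ~ -0.80 eV and dF(h) ~ -0.90 eV (approximate read-off)
print(ratio(-0.80, -0.90, 1900.0))   # ~1.9e2, cf. the ~200 quoted above
print(ratio(-0.80, -0.90, 1700.0))   # ~3.5e2, cf. the ~350 quoted above
```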
We finally note that from the calculated vibrational mode frequencies, we could not find boron-related modes outside the spectrum of the crystalline density of states. Therefore any boron vibrational mode must be resonant, and most certainly hard to detect experimentally.
### Connection with EPR
Figure 1 readily explains the rather distinct EPR signals of shallow boron at \(k\) and \(h\) sites, as well as their temperature dependence [26]. While \(\mathrm{B}_{\rm Si}^{0}(h)\) finds its ground state forming a paramagnetic p-like orbital on the C atom of a broken B-C bond along the main crystalline axis, the \(\mathrm{B}_{\rm Si}^{0}(k)\) lowest energy configuration has an analogous p-orbital (and a B-C broken bond) but it is now along the direction of a basal bond of the crystal.
The upper part of Tab. 1 records the calculated \(g\) tensors of shallow \(\mathrm{B}_{\rm Si}^{0}\) defects in 4H-SiC, along with the corresponding quantities measured by EPR [29]. For trigonal states (\(C_{3v}\) symmetry), the main \(g_{3}\) component is assumed to be parallel to the main crystallographic \(c\) axis. For monoclinic states (\(C_{1h}\) symmetry), \(g_{1}\) is perpendicular to the \(\{2\bar{1}\bar{1}0\}\) symmetry plane, while \(g_{2}\) and \(g_{3}\) are rotated by an angle \(\theta\) away from \(\langle 0\bar{1}10\rangle\) and \(\langle 0001\rangle\) directions, respectively. Figures 1(a) and 1(b) show this convention graphically for \(\mathrm{B}_{\rm Si}^{0}(k_{\rm b})\) with a broken B-C bond on the \(\{2\bar{1}\bar{1}0\}\) mirror plane and for \(\mathrm{B}_{\rm Si}^{0}(h_{\rm a})\), respectively.
Ground-states \(\mathrm{B}_{\rm Si}^{0}(k_{\rm b})\) and \(\mathrm{B}_{\rm Si}^{0}(h_{\rm a})\) have a calculated main
\begin{table}
\begin{tabular}{l c c c c c c} & \(T\) (K) & Sym & \(g_{1}\) & \(g_{2}\) & \(g_{3}\) & \(\theta\) (\({}^{\circ}\)) \\ \hline \(\mathrm{B}_{\rm Si}^{0}(k_{\rm b})\) & & \(C_{1h}\) & 2.0068 & 2.0078 & 2.0028 & 70 \\ EPR [29] & 4.2-45 & \(C_{1h}\) & 2.0059 & 2.0069 & 2.0025 & 69 \\ \(\mathrm{B}_{\rm Si}^{0}(k_{\rm dyn})\) & & \(C_{3v}\) & 2.0051 & 2.0051 & 2.0073 & 0 \\ EPR [29] & 61-83 & \(C_{3v}\) & 2.0046 & 2.0046 & 2.0064 & 0 \\ \(\mathrm{B}_{\rm Si}^{0}(h_{\rm a})\) & & \(C_{3v}\) & 2.0089 & 2.0089 & 2.0022 & 0 \\ EPR [29] & 4.2-83 & \(C_{3v}\) & 2.0070 & 2.0070 & 2.0019 & 0 \\ \hline \(\mathrm{B}_{\rm Si}(k)\)-\(V_{\rm C}^{0}(k)\) & & \(C_{1h}\) & 2.0041 & 2.0065 & 2.0028 & 78 \\ \(\mathrm{B}_{\rm Si}(k)\)-\(V_{\rm C}^{0}(k_{\rm dyn})\) & & \(C_{3v}\) & 2.0035 & 2.0035 & 2.0063 & 0 \\ \(\mathrm{B}_{\rm C}^{0}(k)\) & & \(C_{1h}\) & 2.0050 & 2.0205 & 2.0279 & 13 \\ \(\mathrm{B}_{\rm C}^{0}(k_{\rm dyn})\) & & \(C_{3v}\) & 2.0129 & 2.0129 & 2.0275 & 0 \\ EPR [14] & 4 & \(\sim C_{3v}\) & 2.0 & 2.0 & 2.029 & \(\sim 0\) \\ \(\mathrm{B}_{\rm C}^{0}(h)\) & & \(C_{1h}\) & 2.0056 & 2.0173 & 2.0246 & 13 \\ \(\mathrm{B}_{\rm C}^{0}(h_{\rm dyn})\) & & \(C_{3v}\) & 2.0116 & 2.0116 & 2.0240 & 0 \\ EPR [14] & 4 & \(\sim C_{3v}\) & 2.0 & 2.0 & 2.024 & \(\sim 0\) \\ \end{tabular}
\end{table}
Table 1: Calculated principal values (\(g_{1}\), \(g_{2}\) and \(g_{3}\)) of the gyromagnetic \(g\) tensor of paramagnetic boron-related defects at \(k\) and \(h\) sites of 4H-SiC. Assignments of experimental values from EPR signals of shallow (top) and deep (bottom) boron centers are indicated by their location in the table. For trigonal states (\(C_{3v}\)), \(g_{3}\) is parallel to the main crystallographic \(c\) axis. For monoclinic states (\(C_{1h}\)), \(g_{1}\) is perpendicular to the \(\{2\bar{1}\bar{1}0\}\) symmetry plane, while \(g_{2}\) and \(g_{3}\) are rotated by an angle \(\theta\) away from \(\langle 0\bar{1}10\rangle\) and \(\langle 0001\rangle\) directions, respectively. Dynamic trigonal states (labeled with a subscripted “dyn”) refer to averaged \(g\) tensors involving three symmetrically equivalent \(C_{1h}\) states (see text). Also indicated are the temperatures of the measurements.
\(g\) tensor component \(g_{3}\)\(\sim\) 2.002 along the C radical, making an angle with the [0001] direction of \(\theta\) = 70\({}^{\circ}\) and 0\({}^{\circ}\), respectively (see also Fig. 1). The \(g\) tensors are nearly or perfectly axial for both static B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)) and B\({}^{0}_{\rm{Si}}\)(\(h_{\rm{a}}\)) structures, resulting from the conspicuous alignment of the spin density on the carbon radical as Fig. 3(a) clearly displays. The match with the measurements carried out at low temperature (\(T\lesssim 45\) K) is excellent, both in terms of magnitude (error \(\lesssim 0.0005\)) and monoclinic angle (error \(\sim\) 1\({}^{\circ}\)). The error bar of the components perpendicular to the C-radical (\(g_{1}\) and \(g_{2}\)) is about 4 times larger, but still, the agreement is deemed very good, especially considering that both calculated and observed \(g\) values show identical trends in terms of axial character and anisotropy: \(g_{2}-g_{1}\) = 0.0010 for B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)), and \(g_{3}-g_{1}\) = \(-\)0.0067 for B\({}^{0}_{\rm{Si}}\)(\(h_{\rm{a}}\)).
The calculated \(g\) tensor components of Tab. 1 were found by sampling the band structure over a \(2\times 2\times 2\) mesh of \(\mathbf{k}\)-points. A denser \(3\times 3\times 3\)-mesh calculation for B\({}^{0}_{\rm{Si}}\)(\(h_{\rm{a}}\)) gave \(g_{1}=g_{2}=2.0083\) and \(g_{3}=2.0015\), which deviate from the coarser-mesh results by at most 0.0007. Most importantly, the relative magnitude of the axial and transverse components is similar in both calculations and matches the observations very well.
As discussed at the end of Sec. III.1, the activation energy for rotation of the broken bond of B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)) around [0001] was estimated at about 0.1 eV, allowing the structure to jump between all three equivalent alignments at rather low temperatures. This result is consistent with the observed rise in symmetry of the EPR signal assigned to shallow boron in cubic sites, from monoclinic to trigonal above \(T\)\(\sim\) 45 K. We argue that above this temperature, the B\({}^{0}_{\rm{Si}}\)(\(k\)) defect jumps between three equivalent monoclinic configurations at a rate much faster than the inverse of the EPR recording time. The result is the observation of a "dynamic" state with effective \(C_{3v}\) symmetry (hereafter labeled with a "dyn" subscript), whose \(g\) tensor is estimated by averaging over all three equivalent \(C_{1h}\) orientations. The calculated axial component \(g_{3}\) = 2.0073 of B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{dyn}}\)), now along [0001], mostly inherits contributions from \(g_{2}\) = 2.0078 of the static B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)) configurations [see Fig. 1(a)], thus becoming the largest component. This contrasts with \(g_{3}\) of B\({}^{0}_{\rm{Si}}\)(\(h_{\rm{a}}\)), which is the smallest component of that configuration. The magnitude of the calculated \(g\) values of B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{dyn}}\)) agrees very well with those assigned to shallow boron on the cubic site measured in the temperature range \(T\) = 61-83 K (error \(<0.001\)). The calculated anisotropy \(g_{3}-g_{1}\approx 0.002\) for B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{dyn}}\)) differs from the measurements by only 0.0004.
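The thermal averaging just described is easy to verify numerically: build the static \(g\) tensor in crystal coordinates, replicate it under the 120\({}^{\circ}\) rotations about [0001], and average. The following sketch (our own script, using only the static B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)) entries of Tab. 1 as input) reproduces the dynamic values \(g_{1}=g_{2}\approx 2.0051\) and \(g_{3}\approx 2.0073\).

```python
import numpy as np

# Static principal g values and monoclinic angle of B_Si^0(k_b) from Tab. 1.
g1, g2, g3 = 2.0068, 2.0078, 2.0028
theta = np.deg2rad(70.0)

# Principal axes in crystal coordinates: e1 is perpendicular to the mirror
# plane; e3 points along the C radical, tilted by theta away from [0001].
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, np.cos(theta), -np.sin(theta)])
e3 = np.array([0.0, np.sin(theta), np.cos(theta)])
G = g1 * np.outer(e1, e1) + g2 * np.outer(e2, e2) + g3 * np.outer(e3, e3)

def rot_z(phi):
    """Rotation matrix about the hexagonal [0001] axis."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Average over the three symmetrically equivalent C_1h orientations.
G_dyn = sum(rot_z(p) @ G @ rot_z(p).T for p in np.deg2rad([0.0, 120.0, 240.0])) / 3
print(np.round(np.linalg.eigvalsh(G_dyn), 4))  # -> [2.0051 2.0051 2.0073]
```

The averaged tensor is axial about [0001] by construction, which is exactly the motional narrowing scenario invoked above.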
The coupling of the unpaired spin of B\({}^{0}_{\rm{Si}}\) defects with \({}^{13}\)C and \({}^{11}\)B magnetic isotopes quantifies the magnitude and shape of the spin density at the core of the defect. \({}^{11}\)B and \({}^{13}\)C hyperfine data were recorded experimentally at 3.4 K [26] and 1.5 K [28] by EPR and ENDOR, respectively. Under these conditions B\({}^{0}_{\rm{Si}}\) defects are static and the HF signals could be resolved. The calculated principal values of the HF tensors due to interactions with \({}^{13}\)C and \({}^{11}\)B elements at the broken C-B bond of B\({}^{0}_{\rm{Si}}\) defects (\(k_{\rm{b}}\) and \(h_{\rm{a}}\) structures) are reported in Tab. 2. Also reported are the isotropic and anisotropic HF constants (\(a\) and \(b\), respectively), which assume an axial character for the wave function of the unpaired electron. The upper and lower halves of the table show the results for boron located on cubic and hexagonal sublattice sites, respectively. The experimental data accompanying the calculations relate to boron defects in 6H-SiC samples [26; 28].
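For an axial hyperfine tensor, the isotropic and anisotropic constants follow directly from the principal values, assuming the standard parametrization \(A_{\parallel}=a+2b\) and \(A_{\perp}=a-b\). With the calculated \({}^{13}\)C values of B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)) from Tab. 2,

\[a=\frac{A_{1}+A_{2}+A_{3}}{3}=\frac{34+34+183}{3}\approx 84\ \mathrm{MHz},\qquad b=\frac{A_{3}-a}{2}=\frac{183-84}{2}\approx 50\ \mathrm{MHz},\]

in agreement with the tabulated entries.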
The calculations confirm that the paramagnetic state is essentially axial along direction 3 (see principal directions and monoclinic angle \(\theta\) in Fig. 1). Differences between \(A_{1}\) and \(A_{2}\) were always lower than 1 MHz. Theory and experiment indicate a relatively large \({}^{13}\)C Fermi contact (\(a\)\(\sim\) 80-90 MHz), in close mutual agreement, reflecting the strong localization on the C radical. The calculated anisotropic \({}^{13}\)C HF constants (\(b\)\(\sim\) 50 MHz) are also in fair agreement with the EPR data (\(b\)\(\sim\) 40 MHz). Although not statistically meaningful, the error bar of the calculated HF constants (considering the measurements reported in Tab. 2) is estimated as \(\lesssim\) 10 MHz. We also note that the isotropic HF constants slightly underestimate previous calculations based on the local density approximation (LDA) [35]. This is interpreted as a tendency of the GGA to underlocalize the electron density, in contrast to the overlocalization of the LDA.
Regarding the \({}^{11}\)B HF interactions, like their measured analogues, the amplitudes are very small (a few MHz). Unlike the calculations, the measured Fermi contact is negative. Still, the discrepancy is well within the estimated error. Hence, along with the \(g\) tensors, the HF calculations provide compelling support for the assignment of B\({}^{0}_{\rm{Si}}\) to the EPR/ENDOR data as reproduced in Tabs. 1 and 2.
The above HF interaction calculations were carried out using the GIPAW code within the GGA to the exchange-correlation potential. We performed test calculations at the HSE06 level (using the VASP code) and found that the Fermi
\begin{table}
\begin{tabular}{l c c c c c c}
Defect & \(A_{1}\) & \(A_{2}\) & \(A_{3}\) & \(a\) & \(b\) & \(\theta\) (\({}^{\circ}\)) \\ \hline
\({}^{13}\)C-B\({}_{\rm{Si}}\)(\(k_{\rm{b}}\)) & 34 & 34 & 183 & 84 & 50 & 72 \\
\({}^{13}\)C-EPR [26] & 48 & 48 & 169 & 88 & 40 & \(\sim\)70 \\
C-\({}^{11}\)B\({}_{\rm{Si}}\)(\(k_{\rm{b}}\)) & 0 & 0 & 6 & 2 & 2 & 74 \\
\({}^{11}\)B-ENDOR [28] & \(-\)6.78 & \(-\)6.78 & 2.40 & \(-\)3.72 & 3.06 & \(\sim\)70 \\ \hline
\({}^{13}\)C-B\({}_{\rm{Si}}\)(\(h_{\rm{a}}\)) & 30 & 30 & 182 & 81 & 51 & 0 \\
\({}^{13}\)C-EPR [26] & 48 & 48 & 173 & 90 & 42 & 0 \\
C-\({}^{11}\)B\({}_{\rm{Si}}\)(\(h_{\rm{a}}\)) & 2 & 2 & 8 & 4 & 2 & 0 \\
\({}^{11}\)B-ENDOR [28] & \(-\)3.88 & \(-\)3.88 & 4.85 & \(-\)0.97 & 2.91 & 0 \\
\end{tabular}
\end{table}
Table 2: Calculated principal values of the hyperfine tensors (\(A_{1}\), \(A_{2}\) and \(A_{3}\)) for \({}^{13}\)C and \({}^{11}\)B species located in the broken C-B bond of B\({}^{0}_{\rm{Si}}\)(\(k_{\rm{b}}\)) and B\({}^{0}_{\rm{Si}}\)(\(h_{\rm{a}}\)) defects in 4H-SiC. Isotropic and anisotropic HF constants (\(a\) and \(b\)) are also shown; they assume an axial state pointing along direction 3. For the trigonal B\({}^{0}_{\rm{Si}}\)(\(h\)) defect, directions 1 and 2 lie in the basal plane. For the monoclinic B\({}^{0}_{\rm{Si}}\)(\(k\)) defect, \(A_{1}\) is the component perpendicular to the \(\{2\bar{1}\bar{1}0\}\) mirror plane, while \(A_{2}\) and \(A_{3}\) are rotated by an angle \(\theta\) away from the \(\langle 0\bar{1}10\rangle\) and \(\langle 0001\rangle\) directions, respectively (see Fig. 1). EPR data from \({}^{13}\)C-enriched samples (\({}^{13}\)C-EPR) [26] and \({}^{11}\)B-ENDOR [28] are also included for comparison. These were obtained in 6H-SiC and pertain to boron defects at \(k_{1}\) (\(k_{1}\) and \(k_{2}\) data are very similar) and \(h\) sites. All HF couplings are in MHz.
contact terms were about a factor of two larger. The HSE06-level dipolar terms were similar to those found using the semilocal functional. Such a discrepancy was also reported in Ref. [66] for the evaluation of the isotropic coupling constant using semilocal and hybrid functionals, and this calls for further investigation.
Regarding the deep boron species, among the arguments behind its assignment to a B\({}_{\text{Si}}\)-\(V_{\text{C}}\) structure were the negligible \({}^{13}\)C and \({}^{11}\)B hyperfine satellites next to the main signal, as well as a pronounced localization of the spin density on Si atoms [14; 15]. Unfortunately, the dynamic Jahn-Teller effect makes any comparison between the measurements and the static HF calculations rather difficult -- unlike the Zeeman effect, the \({}^{29}\)Si HF interactions are intermittent due to rotation of the nodal wave function.
We calculated the \(g\) tensor for neutral B\({}_{\text{Si}}(k)\)-\(V_{\text{C}}^{0}(k)\), with both the B atom and the vacancy aligned along the crystalline main axis. The ground state structure involves an electronically-inert threefold coordinated B atom next to three Si radicals edging the C-vacancy, two of which reconstruct to form an elongated bond due to the JT effect (see Ref. [15] and references therein). Most of the spin density of this complex is localized on a single Si dangling bond polarized toward the center of the vacancy and the B atom. Although the distance between B and the Si radical is approximately the separation between second neighbors of the crystal, Si radical states are rather extended in space. In fact, considering its symmetry and character, the paramagnetic state must have a finite amplitude on the B atom, which conflicts with the negligible \({}^{11}\)B hyperfine satellites and already disfavors the B\({}_{\text{Si}}\)-\(V_{\text{C}}^{0}\) model.
The calculated \(g\) tensor of B\({}_{\text{Si}}\)-\(V_{\text{C}}^{0}\) allows us to draw more definite conclusions. The static JT distorted state of B\({}_{\text{Si}}(k)\)-\(V_{\text{C}}^{0}(k)\) with \(C_{1h}\) symmetry has a main \(g_{3}\) component along the Si dangling bond, which makes an angle of \(\theta=78^{\circ}\) with \([0001]\); this was not observed at a temperature as low as \(T=4\) K. Instead, two nearly axial EPR signals with a main axis along \([0001]\) were reported [14]. The magnitudes of the calculated \(g\) values also differ markedly from the observed ones. Even considering a dynamic JT state (with effective \(C_{3v}\) symmetry), the calculated effective \(g\) value of B\({}_{\text{Si}}(k)\)-\(V_{\text{C}}^{0}(k_{\text{dyn}})\) along the \(c\) axis (\(g_{3}=2.0063\)) is too small when compared to its measured counterpart (\(g_{3}=2.029\)) [14].
In Sec. III.2 it was shown that the paramagnetic state of B\({}_{\text{C}}^{0}\) has spin-1/2, and that it derives from a partially occupied, JT distorted \(e(xy)\) doublet. As detailed on the right-hand side of Fig. 2, this manifold derives from the boron p\({}_{x}\)p\({}_{y}\) states, which are nodal on the boron atom as well as along the \(c\) axis. The spin density of the JT distorted configurations \(a^{\prime\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime}{}^{\uparrow}\) and \(a^{\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime\prime}{}^{\uparrow}\) is depicted in Figs. 8(a) and 8(b). Such a shape anticipates a very small spin localization on the B atom. For the \(a^{\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime\prime}{}^{\uparrow}\) ground state, the amplitude is zero on boron and high on two basal Si ligands (Si\({}_{2}\) and Si\({}_{3}\)).
The spin density of the ground state \(a^{\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime\prime}{}^{\uparrow}\) configuration of B\({}_{\text{C}}^{0}(k)\) is shown in close-up in Fig. 8(c). The case of B\({}_{\text{C}}^{0}(h)\) is analogous and a similar discussion applies. The figure also depicts the principal directions of the \(g\) values with respect to the crystalline axes. As for the shallow boron defect, trigonal (\(C_{3v}\)) states have their main \(g_{3}\) component along the \([0001]\) hexagonal axis. Also, monoclinic (\(C_{1h}\)) states have \(g_{1}\) perpendicular to the \(\{2\bar{1}\bar{1}0\}\) plane, while \(g_{2}\) and \(g_{3}\) are rotated by an angle \(\theta\) away from \(\langle 0\bar{1}10\rangle\) and \(\langle 0001\rangle\), respectively.
Let us first consider the case of static JT distorted configurations. These correspond to monoclinic states with calculated \(g\) values of \(g_{1}\approx 2.005\), \(g_{2}\approx 2.017\)-\(2.020\) and \(g_{3}\approx 2.024\)-\(2.028\). The latter is rotated away from \([0001]\) by \(\theta=13^{\circ}\) only. Although the magnitude of \(g_{3}\) is not far from the measured axial \(g\) values, the monoclinic rotation angle was not observed.
Considering that B\({}_{\text{C}}^{0}\) is predicted to show a dynamic JT effect, the effective \(g\) values are better estimated via averaging over symmetrically equivalent alignments. Hence, we find \(g_{1}=g_{2}\approx 2.012\) for both B\({}_{\text{C}}^{0}(k_{\text{dyn}})\) and B\({}_{\text{C}}^{0}(h_{\text{dyn}})\), whereas \(g_{3}\approx 2.028\) and \(2.024\) for B\({}_{\text{C}}^{0}(k_{\text{dyn}})\) and B\({}_{\text{C}}^{0}(h_{\text{dyn}})\), respectively. As reported in Tab. 1, the calculated main \(g_{3}\) values are in
Figure 8: Spin-density isosurface (cutoff 0.02 e/Å\({}^{3}\)) of a neutral B\({}_{\text{C}}\) defect at the \(k\) site of 4H-SiC. (a) and (b) depict two alternative Jahn-Teller distorted states, namely \(a^{\prime\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime}{}^{\uparrow}\) (metastable) and \(a^{\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime\prime}{}^{\uparrow}\) (ground state), which lead to opposite displacements of the B atom along \([01\bar{1}0]\) (see arrows). In (c) we depict the \(a^{\prime}{}^{\uparrow\downarrow}\!+\!a^{\prime\prime}{}^{\uparrow}\) ground state along with the principal directions of the calculated \(g\) tensor (see text). Si, C and B atoms are shown in white, gray and black, respectively.
excellent agreement with the axial \(g\) values observed for deep boron defects in 4H-SiC [14]. The basal \(g\) values also compare well with the corresponding measured figures (\(\sim\) 2.0), although these are accompanied by relatively large error bars due to random \(g\)-strain broadening effects [15].
The nodal state shown in Fig. 8(c) strongly overlaps with two of the Si atoms connected to boron (Si\({}_{2}\) and Si\({}_{3}\)). The two other Si ligands are nodal (Si\({}_{1}\) and Si\({}_{4}\)) and have no overlap with the spin density. We suggest that the dynamic JT effect on this defect could be responsible for an intermittent localization on all atoms, thus explaining the weak and broad hyperfine signals detected for \({}^{11}\)B, \({}^{13}\)C, and \({}^{29}\)Si. Finally, we also note that the dynamical nature of the ground state of B\({}_{\rm C}^{0}\), and a possible occupancy of both \(a^{\prime\prime}{}^{\uparrow\downarrow}+a^{\prime}{}^{\uparrow}\) and \(a^{\prime}{}^{\uparrow\downarrow}+a^{\prime\prime}{}^{\uparrow}\) states above a few tens of kelvin, could explain the broadening and quenching of the main EPR signals of deep boron above \(T\approx 30\) K [14].
## IV Conclusions
We reported on first-principles hybrid density functional calculations of boron defects in 4H-SiC. Besides defect structures and electronic transition levels, defect free energies at finite temperatures, \(g\) tensor calculations and hyperfine coupling constants were also reported. The vibrational contribution to the free energies, as well as the one-electron states for the calculation of the paramagnetic properties, were found within a semilocal approximation to the electronic exchange and correlation interactions.
We support the assignment of the shallow boron species to B\({}_{\rm Si}\). In the neutral state, these defects possess a threefold coordinated B atom next to an unsaturated C radical. We remind the reader that this structure was obtained when the atomistic relaxation was performed within hybrid DFT. Lower level GGA calculations led to fourfold coordinated boron atoms. In line with arguments already reported [35], the erroneous GGA structure derives from the overmixing between the acceptor state of boron and the valence band top of the crystal. However, unlike Ref. [35], we conclude that the neutral B\({}_{\rm Si}\) defect adopts a singlet state. The axially distorted structure of this defect (along the \(c\) axis) conserves the maximum point group symmetry of the 4H-SiC crystal (\(C_{3v}\)). The displacement from the perfect lattice site can be explained by the host crystal field. Hence, the off-site structure cannot be justified by a Jahn-Teller effect -- it is simply driven by the short covalent radius of boron compared to that of Si.
As a word of caution, we note that the relative energy of on-site and off-site B\({}_{\rm Si}^{0}\) states cannot be easily obtained with the present method. If the fourfold coordinated B\({}_{\rm Si}^{0}\) is a diffuse effective-mass-like state, it could be disfavored due to the artificial confinement effect of the supercell approximation [80]. Still, even if that were the case, only the off-site threefold coordinated B\({}_{\rm Si}^{0}\) model (and not the EMT model) could account for the measurements.
The C radicals on cubic and hexagonal B\({}_{\rm Si}\) defects are polarized differently, _i.e._, along basal and axial bond directions of the crystal, respectively. This feature has been previously detected by EPR but left unexplained. We demonstrate that it results from distinct crystal fields acting on each sublattice site.
Substitutional boron on the carbon site (B\({}_{\rm C}^{0}\)) is a dynamic Jahn-Teller system with a "Mexican hat" like potential. The potential ripples for rotation around the symmetry axis of the undisturbed state are only 15 meV high. This figure is lower than the zero-point energy of the defect, implying that it shows effective trigonal symmetry, even at liquid-helium temperature.
B\({}_{\rm Si}\) and B\({}_{\rm C}\) are both single acceptors. Despite adopting rather different alignments in the crystal, the acceptor levels of B\({}_{\rm Si}(k)\) and B\({}_{\rm Si}(h)\) are estimated in a narrow range, \(E_{\rm v}+(0.34\)-\(0.32)\) eV. This could explain the observation of a single transition by Laplace-DLTS for shallow boron. The acceptor level of B\({}_{\rm C}\) is anticipated at \(E_{\rm v}+(0.63\)-\(0.67)\) eV, in excellent agreement with the D-center transition level measured in the range 0.5-0.7 eV above \(E_{\rm v}\). Our results suggest that the recently reported Laplace-DLTS experiments unfolding the D-center signal into two components relate to a "shallower" configuration sitting at the cubic carbon site and a "deeper" one replacing the hexagonal site.
From the calculated free-energies of B\({}_{\rm Si}\) and B\({}_{\rm C}\), we found that under typical growth temperatures, the equilibrium concentration ratio \([{\rm B_{Si}}]/[{\rm B_{C}}]\approx 200\) and about 0.1 for a Si-poor and Si-rich stoichiometry, respectively. This leads us to the conclusion that formation of B\({}_{\rm C}\) cannot be avoided during growth when boron is present, and contamination of n-type layers with boron could limit the mobility and lifetime of holes due to trapping and recombination at deep B\({}_{\rm C}\) acceptors.
We demonstrated that the EPR measurements of shallow boron can be described by a site- and temperature-dependent \(g\) tensor of B\({}_{\rm Si}\). Below \(\sim\) 50 K, neutral B\({}_{\rm Si}\) defects at \(k\) and \(h\) sites show static \(C_{1h}\) and \(C_{3v}\) symmetry, with comparable \(g\) values along the carbon radical p-state, respectively \(g_{3}=2.0028\) and \(2.0022\). These figures compare very well with 2.0025 and 2.0019 from the measurements, respectively. Above \(\sim\)50 K, the EPR signal related to the hexagonal species remains unchanged. However, the B-C broken bond in B\({}_{\rm Si}(k)\) can reorient by surmounting a barrier of about 0.1 eV, and the estimated _thermally-averaged_\(g_{3}\) value (now parallel to [0001]) increases to 2.0073 (to be compared to 2.0064 from the measurements).
Calculations of the gyromagnetic tensor are complemented with calculations of the most prominent \({}^{13}\)C and \({}^{11}\)B hyperfine splitting interactions involving core atoms at the threefold coordinated B\({}_{\rm Si}^{0}\) defects. The results agree well with the measurements both in terms of magnitude and axial direction of the interactions.
Our results rule against the assignment of a B\({}_{\rm Si}\)-\(V_{\rm C}\) complex to the deep boron defect. Both the directions and the magnitudes of the calculated \(g\) values for this complex differ markedly from the observations. Combined with previous calculations, which concluded that B\({}_{\rm Si}\)-\(V_{\rm C}\) is a donor without levels in the lower half of the gap [17], we can definitively abandon the idea of a relation between the deep boron center and B\({}_{\rm Si}\)-\(V_{\rm C}\).
Instead we assign deep boron to B\({}_{\text{C}}\). The calculated \(g\) values for B\({}_{\text{C}}\) show excellent agreement with the measurements for deep boron if we account for the dynamics of the defect. We argue that the dynamic Jahn-Teller effect, along with the nodal shape of the paramagnetic state, could explain the weak and broad hyperfine signals related to \({}^{11}\)B, \({}^{13}\)C and \({}^{29}\)Si. Additionally, by considering B\({}_{\text{C}}\) as being responsible for the deep boron spectra, and hence ruling out the B\({}_{\text{Si}}\)-\(V_{\text{C}}\) model, we naturally avoid having to justify the otherwise inexplicable formation of B\({}_{\text{Si}}\)-\(V_{\text{C}}\) defects with exclusively axial orientations as observed by EPR.
## Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
###### Acknowledgements.
The present work was supported by the NATO Science for Peace and Security Programme, project no. G5674. JC and VJBT acknowledge the FCT through projects LA/P/0037/2020, UIDB/50025/2020 and UIDP/50025/2020.
|
2307.05891 | PID-Inspired Inductive Biases for Deep Reinforcement Learning in
Partially Observable Control Tasks | Deep reinforcement learning (RL) has shown immense potential for learning to
control systems through data alone. However, one challenge deep RL faces is
that the full state of the system is often not observable. When this is the
case, the policy needs to leverage the history of observations to infer the
current state. At the same time, differences between the training and testing
environments makes it critical for the policy not to overfit to the sequence of
observations it sees at training time. As such, there is an important balancing
act between having the history encoder be flexible enough to extract relevant
information, yet be robust to changes in the environment. To strike this
balance, we look to the PID controller for inspiration. We assert the PID
controller's success shows that only summing and differencing are needed to
accumulate information over time for many control tasks. Following this
principle, we propose two architectures for encoding history: one that directly
uses PID features and another that extends these core ideas and can be used in
arbitrary control tasks. When compared with prior approaches, our encoders
produce policies that are often more robust and achieve better performance on a
variety of tracking tasks. Going beyond tracking tasks, our policies achieve
1.7x better performance on average over previous state-of-the-art methods on a
suite of locomotion control tasks. | Ian Char, Jeff Schneider | 2023-07-12T03:42:24Z | http://arxiv.org/abs/2307.05891v2 | # PID-Inspired Inductive Biases for Deep Reinforcement Learning in Partially Observable Control Tasks
###### Abstract
Deep reinforcement learning (RL) has shown immense potential for learning to control systems through data alone. However, one challenge deep RL faces is that the full state of the system is often not observable. When this is the case, the policy needs to leverage the history of observations to infer the current state. At the same time, differences between the training and testing environments makes it critical for the policy not to overfit to the sequence of observations it sees at training time. As such, there is an important balancing act between having the history encoder be flexible enough to extract relevant information, yet be robust to changes in the environment. To strike this balance, we look to the PID controller for inspiration. We assert the PID controller's success shows that only summing and differencing are needed to accumulate information over time for many control tasks. Following this principle, we propose two architectures for encoding history: one that directly uses PID features and another that extends these core ideas and can be used in arbitrary control tasks. When compared with prior approaches, our encoders produce policies that are often more robust and achieve better performance on a variety of tracking tasks. Going beyond tracking tasks, our policies achieve 1.7x better performance on average over previous state-of-the-art methods on a suite of high dimensional control tasks. 1
Footnote 1: Code available at [https://github.com/IanChar/GPIDE](https://github.com/IanChar/GPIDE)
## 1 Introduction
Deep reinforcement learning (RL) holds great potential for solving complex tasks through data alone, and there have already been exciting applications of RL to playing video games [64], fine tuning language models [47], and control of robots [2]. Despite these successes, there still remain significant challenges in controlling real-world systems that stand in the way of realizing RL's full potential [18]. One major hurdle is the issue of partial observability, resulting in a Partially Observable Markov Decision Process (POMDP). In this case, the true state of the system is unknown and the policy must leverage its history of observations. Another hurdle stems from the fact that policies are often trained in an imperfect simulator, which is likely different from the true environment. Combining these two challenges necessitates striking a balance between extracting useful information from the history and avoiding overfitting to modelling error. Therefore, introducing the right inductive biases to the training procedure is crucial.
The use of recurrent network architectures in deep RL for POMDPs was one of the initial proposed solutions [25] and remains a prominent approach for control tasks [41; 68; 44]. These architectures are certainly flexible; however, it is unclear whether they are the best choice for control tasks,
especially since they were originally designed with other applications in mind such as natural language processing.
In contrast with deep RL methods, the Proportional-Integral-Derivative (PID) controller remains a cornerstone of modern control systems despite its simplicity and the fact it is over 100 years old [5; 42]. PID controllers are single-input single-output (SISO) feedback controllers designed for tracking problems, where the goal is to maintain a signal at a given reference value. The controller adjusts a single actuator based on the weighted sum of three terms: the current error between the signal and its reference, the integral of this error over time, and the temporal derivative of this error. PID controllers are far simpler than recurrent architectures and yet are still able to perform well in SISO tracking problems despite having no model for the system's dynamics. We assert that PID's success teaches us that in many cases only two operations are needed for successful control: summing and differencing.
To investigate this assertion, we conduct experiments on a variety of SISO and multi-input multi-output (MIMO) tracking problems using the same featurizations as a PID controller to encode history. We find that this encoding often achieves superior performance and is significantly more resilient to changes in the dynamics during test time. The biggest shortcoming with this method, however, is that it can only be used for tracking problems. As such, we propose an architecture that is built on the same principles as the PID controller, but is general enough to be applied to arbitrary control problems. Not only does this architecture exhibit similar robustness benefits, but policies trained with it achieve an average of _1.7x better performance_ than previous state-of-the-art methods on a suite of high-dimensional control tasks.
## 2 Preliminaries
**The MDP and POMDP.** We define the discrete time, infinite horizon Markov Decision Process (MDP) to be the tuple \((\mathcal{S},\mathcal{A},r,T,T_{0},\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function, \(T:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) is the transition function, \(T_{0}\in\Delta(\mathcal{S})\) is the initial state distribution, and \(\gamma\) is the discount factor. We use \(\Delta(\mathcal{S})\) to denote the space of distributions over \(\mathcal{S}\). Importantly, the Markov property holds for the transition function, i.e. the distribution over the next state \(s^{\prime}\) depends only on the current state, \(s\), and current action, \(a\). Knowing previous states and actions does not provide any more information. The objective is to learn a policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) that maximizes \(J(\pi)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t},s_{t+1})\right]\), where \(s_{0}\sim T_{0}\), \(a_{t}\sim\pi(s_{t})\), and \(s_{t+1}\sim T(s_{t},a_{t})\). When learning a policy, it is often key to learn a corresponding value function, \(Q^{\pi}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), which outputs the expected discounted returns after playing action \(a\) at state \(s\) and then following \(\pi\) afterwards.
In a Partially Observable Markov Decision Process (POMDP), the observations that the policy receives are not the true states of the process. In control this may happen for a variety of reasons such as noisy observations made by sensors, but in this work we specifically focus on the case where aspects of the state space remain unmeasured. In any case, the POMDP is defined as the tuple \((\mathcal{S},\mathcal{A},r,T,T_{0},\Omega,\mathcal{O},\gamma)\), where \(\Omega\) is the space of possible observations, \(\mathcal{O}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\Omega)\) is the conditional distribution of seeing an observation, and the rest of the elements of the tuple remain the same as before. The objective remains the same as the MDP, but now the policy and value functions are not allowed access to the state.
Crucially, the Markov property does not hold for observations in the POMDP. That is, where \(o_{1:t+1}:=o_{1},o_{2},\ldots,o_{t+1}\) are observations seen at times \(1\) through \(t+1\), \(o_{1:t-1}\not\perp o_{t+1}|o_{t},a_{t}\). A naive solution to this problem is to instead have the policy take in the history of the episode so far. Of course, it is usually infeasible to learn a policy that takes in the entire history for long episodes since the space of possible histories grows exponentially with the length of the episode. Instead, one can encode the information into a more compact representation. In particular, one can use an encoder \(\phi\) which outputs an encoding \(z_{t}=\phi(o_{1:t},a_{1:t-1},r_{1:t-1})\) (note that encoders need not always take in the actions and rewards). Then, the policy and Q-value functions are augmented to take in \((o_{t},z_{t})\) and \((o_{t},a_{t},z_{t})\), respectively.
**Tracking Problems and PID Controllers.** We first focus on the tracking problem, in which there is a set of signals that we wish to maintain at given reference values. For example, in espresso machines the temperature of the boiler (i.e. the signal) must be maintained at a constant reference temperature, and a controller is used to vary the boiler's on-off time so the temperature
is maintained at that value [37]. Casting tracking problems as discrete time POMDPs, we let \(o_{t}=\left(x_{t}^{(1)},\ldots,x_{t}^{(M)},\sigma_{t}^{(1)},\ldots,\sigma_{t}^{( M)}\right)\) be the observation at time \(t\), where \(x_{t}^{(i)}\) and \(\sigma_{t}^{(i)}\) are the \(i^{\text{th}}\) signal and corresponding reference value, respectively. The reward at time \(t\) is simply the negative error summed across dimensions, i.e. \(-\sum_{m=1}^{M}\left|x_{t}^{(m)}-\sigma_{t}^{(m)}\right|\).
When dealing with a single-input single-output (SISO) system (with one signal and one actuator that influences the signal), one often uses a Proportional-Integral-Derivative (PID) controller: a feedback controller that is often paired with feedforward control. This controller requires no knowledge of the dynamics, and simply sets the action via a linear combination of three terms: the error (P), the integral of the error (I), and the derivative of the error (D). When comparing other architectures to the PID controller, we will use orange colored text and blue colored text to highlight similarities between the I and D terms, respectively. Concretely, the policy corresponding to a discrete-time PID controller is defined as
\[\pi^{\text{PID}}(o_{t})=K_{P}(x_{t}^{(1)}-\sigma_{t}^{(1)})+K_{I}\sum_{i=1}^{t}(x_{i}^{(1)}-\sigma_{i}^{(1)})\,dt+K_{D}\frac{\left(x_{t}^{(1)}-\sigma_{t}^{(1)}\right)-\left(x_{t-1}^{(1)}-\sigma_{t-1}^{(1)}\right)}{dt} \tag{1}\]
where \(K_{P}\), \(K_{I}\), and \(K_{D}\) are scalar values known as gains that must be tuned. PID controllers are designed for SISO control problems, but many real-world systems are multi-input multi-output (MIMO). In the case of MIMO tracking problems, where there are \(M\) signals with \(M\) corresponding actuators, one can control the system with \(M\) separate PID controllers. However, this assumes there is a clear breakdown of which actuator influences which signal. Additionally, there are often interactions between the different signals, which the PID controllers do not account for. Beyond tracking problems, it is less clear how to use PID controllers without substantial engineering efforts.
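For concreteness, a minimal discrete-time implementation of (1) might look as follows. This is our own sketch, not the paper's released code; the sign convention follows (1), so the gains would typically be chosen negative in order to drive the tracking error toward zero.

```python
class PIDController:
    """Minimal discrete-time PID controller for a SISO tracking task."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running integral of the tracking error
        self.prev_error = None   # error at the previous time step

    def act(self, signal, reference):
        error = signal - reference
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```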
## 3 Methodology
To motivate the following, consider the task of controlling a tokamak: a toroidal device that magnetically confines plasma and is used for nuclear fusion. Nuclear fusion holds the promise of providing an energy source with few drawbacks and an abundant fuel source. As such, there has recently been a surge of interest in applying machine learning [1; 11], and especially RL [12; 39; 15; 65; 59; 58; 38], for tokamak control. However, applying deep RL has the same problems as mentioned earlier; the state is partially observable since there are aspects of the plasma's state that cannot be measured in real time, and the policy must be trained before-hand on an imperfect simulator since operation of the actual device is extremely expensive.
How should one choose a historical encoder with these challenges in mind? Previous works [44; 40] suggest using Long Short Term Memory (LSTM) [27], Gated Recurrent Units [13], or transformers [63]. These architectures have been shown to be powerful tools in natural language processing, where there exist complicated relationships between words and how they are positioned with respect to each other. However, do the same complex temporal relationships exist in something like tokamak control? The fact that PID controllers have been successfully applied for feedback control on tokamaks suggests this may not be the case [66; 23]. In reality, the extra flexibility of these architectures may become a hindrance when deployed on the physical device if they overfit to quirks in the simulator.
In this section, we present two historical encoders that we believe have good inductive biases for control. They are inspired by the PID controller in that they only sum and difference in order to combine information throughout time. Following this, in Section 5, we empirically show the benefits of these encoders on a number of control tasks including tokamak control.
**The PID Encoder.** Under the framework of a policy that uses a history encoder, the standard PID controller (1) is simply a linear policy with an encoder that outputs the tracking error, the integral of the tracking error, and the derivative of the tracking error. This notion can be extended to MIMO problems and arbitrary policy classes, resulting in the _PID-Encoder_ (PIDE). Given input \(o_{1:t}\), this encoder outputs a \(3M\) dimensional vector consisting of \((x_{t}^{(m)}-\sigma_{t}^{(m)})\), \(\sum_{i=1}^{t}(x_{i}^{(m)}-\sigma_{i}^{(m)})dt\), and \(\frac{\left(x_{t}^{(m)}-\sigma_{t}^{(m)}\right)-\left(x_{t-1}^{(m)}-\sigma_{t-1}^{(m)}\right)}{dt}\quad\forall m=1,\ldots,M\). For SISO problems, policies with this encoder have access to the same information as a PID controller. However, for MIMO problems the policy has
access to all the information that each PID controller, acting in isolation, would have. Ideally a sophisticated policy would coordinate each actuator setting well.
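A sketch of this featurization is given below; the function name and array layout are our own, not taken from the released implementation.

```python
import numpy as np

def pide_features(signals, references, dt):
    """PIDE encoding: P, I and D features for each of the M channels.

    signals, references: arrays of shape (t, M) holding x_i and sigma_i.
    Returns the 3M-dimensional encoding for the current time step t.
    """
    errors = signals - references                       # tracking errors, (t, M)
    p = errors[-1]                                      # current error
    i = errors.sum(axis=0) * dt                         # integral of the error
    d = (errors[-1] - errors[-2]) / dt if len(errors) > 1 else np.zeros_like(p)
    return np.concatenate([p, i, d])
```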
**The Generalized PID Encoder.** A shortcoming of PIDE is that it is only applicable to tracking problems since it operates over the tracking error explicitly. A more general encoder should instead accumulate information over arbitrary features of each observation. With this in mind, we introduce the _Generalized-PID-Encoder_ (GPIDE).
GPIDE consists of a number of "heads", each accumulating information about the history in a different manner. When there are \(H\) heads, GPIDE forms history encoding, \(z_{t}\), through the following:
\[v_{i}^{h}=f_{\theta}^{h}(\text{concatenate}(o_{i-1},a_{i-1},r_{i-1},o_{i}-o_{i-1}))\qquad\forall i\in\{1,\dots,t\},\ h\in\{1,\dots,H\}\] \[w_{t}^{h}=\ell^{h}(v_{1:t}^{h})\qquad\forall h\in\{1,\dots,H\}\] \[z_{t}=g_{\theta}(\text{concatenate}(w_{t}^{1},w_{t}^{2},\dots,w_{t}^{H}))\]
Here, GPIDE is parameterized by \(\theta\). For head \(h\), \(f_{\theta}^{h}\) is a linear projection of the previous observation, action, reward, and difference between the current and previous observation to \(\mathbb{R}^{D}\), and \(\ell^{h}\) is a weighted summation of these projections. \(g_{\theta}\) is a decoder which combines all of the information from the heads. A diagram of this process is shown in Figure 1.
Notice that the key aspects of the PID controller are present here. The difference in observations is explicitly taken before the linear projection \(f_{\theta}^{h}\). We found that this simple method works best for representing differences when the observations are scalar descriptions of the state (e.g. joint positions). Although we do not consider image observations in this work, we imagine a similar technique could be done by taking the differences in image encodings. Like the integral term of the PID, \(\ell^{h}\) also accumulates information over time. In the following, we consider several possibilities for \(\ell^{h}\), and we will refer to these different choices as "head types" throughout this work. We omit the superscript \(h\) below for notational convenience.
**Summation.** Most in line with PID, the projections can be summed, i.e. \(\ell(v_{1:t})=\sum_{i=1}^{t}v_{i}\).
**Exponential Smoothing.** In order to weight recent observations more heavily, exponential smoothing can be used. That is, \(\ell(v_{1:t})=(1-\alpha)^{t-1}v_{1}+\sum_{i=2}^{t}\alpha(1-\alpha)^{t-i}v_{i}\), where \(0\leq\alpha\leq 1\) is the smoothing parameter. Unlike summation, this head type cannot accumulate information in the same way because it is a convex combination.
**Attention.** Instead of hard-coding a weighted summation of the projections, this weighting can be learned through attention [63]. Attention is one of the key components of transformers because of its ability to learn relationships between tokens. To implement this, two additional linear functions should be learned that project \(\text{concatenate}(o_{i-1},a_{i-1},r_{i-1},o_{i}-o_{i-1})\) to \(\mathbb{R}^{D}\). These new projections are referred to as the key and query vectors, denoted as \(k_{i}\) and \(q_{i}\) respectively. The softmax between their inner products is then used to form the weighting scheme for \(v_{1:t}\). We can rewrite the first two
Figure 1: **Architecture for GPIDE. The diagram shows how one encoding, \(z_{t}\), is formed. Each of the gray, rounded boxes corresponds to one of the heads that makes up GPIDE. Each green box shows a function to be learned from data, and the orange box shows the weighted summation of all previous vectors, \(v_{1:t}^{h}\). We write the difference in observations in blue text to highlight the part of GPIDE that relates to a PID controller’s D term. Note that \(q_{1:t}^{h}\) and \(k_{1:t}^{h}\) only play a role in this process if head \(h\) uses attention; as such, we write these terms in gray text.**
steps of GPIDE for a head that uses attention as
\[v_{i},k_{i},q_{i}=f_{\theta}(\text{concatenate}(o_{i-1},a_{i-1},r_{i-1},o_{i}-o_{i-1}))\qquad\forall i\in\{1,\dots,t\}\] \[w_{1:t}=\ell(q_{1:t},k_{1:t},v_{1:t})=\text{softmax}\left(\frac{q_{1:t}k_{1:t}^{T}}{\sqrt{D}}\right)v_{1:t}\]
Here, \(q_{1:t}\), \(k_{1:t}\), and \(v_{1:t}\) are treated as \(t\times D\) dimensional matrices. Since it results in a convex combination, attention has the capacity to reproduce exponential smoothing but not summation.
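The three head types admit a compact sketch; the function names below are ours and this is not the released implementation. Each head consumes projections \(v\) (and, for attention, \(q\) and \(k\)) of shape (T, D) for a length-T history and returns the per-step outputs \(w_{1:T}\).

```python
import torch

def summation_head(v):
    """w_t = sum_{i<=t} v_i: the PID-like accumulation of information."""
    return torch.cumsum(v, dim=0)

def smoothing_head(v, alpha):
    """Exponential smoothing: w_t = alpha * v_t + (1 - alpha) * w_{t-1}, w_1 = v_1.
    (Loop written for clarity; it is not autograd-friendly as-is.)"""
    w = torch.empty_like(v)
    w[0] = v[0]
    for t in range(1, v.shape[0]):
        w[t] = alpha * v[t] + (1.0 - alpha) * w[t - 1]
    return w

def attention_head(q, k, v):
    """Causal attention: each w_t is a convex combination of v_1, ..., v_t."""
    T, D = v.shape
    scores = (q @ k.T) / D ** 0.5
    mask = torch.ones(T, T).triu(diagonal=1).bool()   # hide future positions
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

Note that the summation head is the only one whose outputs can grow without bound, which is precisely the property that lets it mimic a PID integral term.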
## 4 Related Work
A control task may be partially observable for a myriad of reasons including unmeasured state variables [24; 68; 25], sensor noise [41], and unmeasured system parameters [69; 48]. When there are unmeasured system parameters, this is usually framed as a meta-reinforcement learning (MetaRL) [67] problem. This is a specific subclass of POMDPs where there is a collection of MDPs, and each episode, an MDP is sampled from this collection. Although these works do consider system parameters varying between episodes, the primary focus of the experiments usually tends to be on the multi-task setting (i.e. different reward functions instead of transition functions) [70; 16; 54]. We consider not only differing system parameters but also the presence of unmeasured state variables; therefore, the class of POMDPs considered in this paper is broader than the one studied in MetaRL.
Using recurrent networks has long been an approach for tackling POMDPs [25], and is still a common way to do so in a wide variety of settings [17; 67; 64; 44; 41; 68; 60; 10; 2]. Moreover, implementations are publicly available both for on-policy [35; 26] and off-policy [44; 68; 10] algorithms, making it an easy pick for those wanting a quick solution. Some works [28; 70; 24; 16; 3] use recurrent networks to estimate the belief state [32], which is a distribution over the agent's true state. However, Ni et al. [44] recently showed that well-implemented, recurrent versions of SAC [22] and TD3 [20] perform competitively with many of these specialized algorithms. In either case, we believe works that estimate the belief state are not in conflict with our own since their architectures can be modified to use GPIDE instead of a recurrent unit.
Beyond recurrent networks, there has been a surge of interest in applying transformers to reinforcement learning [34]. However, we were unable to find many instances of transformers being used as history encoders in the online setting, perhaps because they are difficult to train. Parisotto et al. [49] introduced a new architecture to remedy these difficulties; however, Melo [40] applied transformers to MetaRL and asserted that careful weight initialization is the only thing needed for stability in training. We note that GPIDE with only attention heads is similar to a single multi-headed self-attention block that appears in many transformer architectures; however, we show that attention is the least important type of head in GPIDE and often hurts performance (see Section 5.3).
Perhaps closest to our proposed architecture is PEARL [54], which does a multiplicative combination of Gaussian distributions corresponding to each state-action-reward tuple. However, their algorithm is designed for the MetaRL setting specifically. Additionally, we note that the idea of summations and averaging has been shown to be powerful in prior works. Specifically, Oliva et al. [46] introduced the Statistical Recurrent Unit, an alternative architecture to LSTMs and GRUs that leverages moving averages and performs competitively across several supervised learning tasks.
Lastly, we note there are many facets of RL where improvements can be made to robustness, and many works focus on altering the training procedure. They use techniques such as optimizing the policy's worst-case performance [53; 31] or using variational information bottlenecking (VIB) [4] to limit the information used by the policy [36; 29; 19]. In contrast, our work specifically focuses on how architecture choices of history encoders affect robustness, but we note our developments can be used in conjunction with these other directions, possibly resulting in improved robustness.
## 5 Experiments
In this section, we experimentally compare PIDE and GPIDE against recurrent and transformer encoders. In particular, we explore the following questions:
* How does the performance of a policy using PIDE or GPIDE do on tracking problems? In addition, how well can policies adapt to different system parameters and how robust to modelling error are they on these problems? (Section 5.1)
* Going beyond tracking problems, how well does GPIDE perform on high dimensional locomotion control tasks? (Section 5.2)
* How important is each type of head in GPIDE? (Section 5.3)
For the following tracking problems we use the Soft Actor Critic (SAC) [22] algorithm with each of the different methods for encoding observation history. Following Ni et al. [44], we make two separate instantiations of the encoders for the policy and value networks, respectively. Since the tracking problems are relatively simple, we use a small policy network consisting of 1 hidden layer with 24 units; however, we found that we still needed to use a relatively large Q network consisting of 2 hidden layers with 256 units each to solve the problems. All hyperparameters remain fixed across baselines and tracking tasks; only the history encoders change.
For the recurrent encoder, we use a GRU and follow the implementation of Ni et al. [44] closely. Our transformer encoder closely resembles the GPT2 architecture [52], and it also includes positional encodings for the observation history. For GPIDE, we use \(H=6\) heads: one summation head, two attention heads, and three exponential smoothing heads (with \(\alpha=0.25,0.5,1.0\)). This choice was not optimized, but rather was picked so that all types of heads were included and so that GPIDE has roughly the same number of parameters as our GRU baseline. For additional reference we compare each of these RL methods with a tuned PID controller. Not only do PID controllers have an incredibly small number of parameters compared to the other RL-based controllers, but the training procedure is also much more straightforward since it can be posed as a black-box optimization over the returns. All methods are built on top of the rlkit library [51]. More details about implementations and hyperparameters can be found in Appendices A and B, respectively. Implementations can be found at [https://github.com/IanChar/GPIDE](https://github.com/IanChar/GPIDE).
### Tracking Problems
In this subsection we consider a number of tracking problems. For each environment, the observation consists of the current signals, the reference values, and additional information about the last action made. Unless stated otherwise, the reward is as described in Section 2. More information about environments can be found in Appendix C. To make a fair comparison against PID controls, we choose to only encode the history of observations. For evaluation, we use 100 fixed settings of the environment (each setting consists of targets and system parameters). To avoid overfitting to these 100 settings, we used a separate set of 100 settings and averaged over 3 seeds when developing our methods. We evaluate policies throughout training, but report the average over the last 10% of evaluations as the final returns. We allow each policy to collect one million environment transitions, and all scores are averaged over 5 seeds. Lastly, each table shows scores formed by scaling the returns by the best and worst average returns across all methods in a particular variant of the environment, where scores of 0 and 100 correspond to the worst and best returns respectively.
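Concretely, with \(J_{\text{best}}\) and \(J_{\text{worst}}\) denoting the best and worst average returns across all methods for a given environment variant, the reported score is (in our notation)

\[\text{score}=100\times\frac{J(\pi)-J_{\text{worst}}}{J_{\text{best}}-J_{\text{worst}}}.\]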
**Mass Spring Damper Tracking.** The first tracking task is the control of a classic 1D toy physics system in which there is a mass attached to a wall by a spring and damper. The goal is then to apply a force to the mass in order to move it to a given reference location. There are three system parameters to consider here: the mass, spring constant, and damping factor. We also consider the substantially more difficult problem in which there are two masses sandwiched between two walls, and the masses are connected to the walls and each other by springs and dampers (see Appendix C.1 for a diagram of this). Overall there are eight system parameters (three spring constants, three damping factors, and two masses) and two actuators (a force applied to each mass). We refer to the first problem as Mass-Spring-Damper (MSD) and the second problem as Double-Mass-Spring-Damper (DMSD).
Additionally, we test how adaptive these policies are by changing system parameters in a MetaRL-type fashion (i.e. for each episode we randomly select system parameters and then fix them for the rest of the episode). Similar to Packer et al. [48], we train the policies on three versions of the environment: one with no variation in system parameters, one with a small amount of variation, and one with a large amount of variation. We evaluate all policies on the version of the environment with large system parameter variation to test generalization capabilities.
Table 1 shows the scores achieved for each of the settings. While GRU and transformers seem to do a good job at encoding history for the MSD environment, both are significantly worse on the more complex DMSD task when compared to our proposed encoders. This is especially true for GRU, which performs worse than two independent PID controllers for every configuration. Additionally, while it seems that GRU can generalize to large amounts of variation in system parameters when a
small amount is present, it fails horribly when trained on fixed system parameters. On the other hand, transformers are able to generalize surprisingly well when trained on both fixed system parameters and with small variation. We hypothesize the autoregressive nature of GRU may make it particularly susceptible to overfitting. Comparing PIDE and GPIDE, we see that PIDE tends to shine in the straightforward cases where there is little change in system parameters, whereas GPIDE is able to adapt when there is a large variation in parameters since it has additional capacity.
**Navigation Environment.** To emulate the setting where the policy is trained on an imperfect simulator, we consider an environment in which the agent is tasked with moving itself across a surface to a specified 2D target as quickly and efficiently as possible. At every point in time, the agent can apply some force to move itself, but a penalty term proportional to the magnitude of the force is subtracted from the reward. Suppose that we have access to a simulator of the environment that is perfect except for the fact that it does not model friction between the agent and the surface. We refer to this simulator and the real environment as the "No Friction" and "Friction" environment, respectively. In both environments, the mass of the agent is treated as a system parameter that is sampled for each episode; however, the Friction environment has a larger range of masses and also randomly samples the coefficient of friction each episode.
Figure 2 shows the average returns recorded during training for both navigation environments and when the policies trained in No Friction are evaluated in Friction. A table of final scores can be found in Appendix D.3. One can see that GPIDE not only achieves the best returns in the environments it was trained in, but is also robust when going from the frictionless environment to the one with friction. On the other hand, PIDE has less capacity and therefore cannot achieve the same results; however, it is immediately more robust than the other methods, although it begins to overfit over time. It is also clear that using GRU is less sample efficient and less robust to changes in the test environment.
\begin{table}
\begin{tabular}{l|c c c|c c}
\hline
Environment (Train/Test) & PID Controller & GRU & Transformer & PIDE & GPIDE \\ \hline
MSD Fixed/Fixed & \(0.00\pm 3.96\) & \(83.73\pm 3.48\) & \(85.79\pm 1.98\) & \(\mathbf{100.00\pm 0.66}\) & \(83.72\pm 2.86\) \\
MSD Small/Small & \(0.00\pm 5.58\) & \(\mathbf{100.00\pm 1.59}\) & \(73.27\pm 4.98\) & \(75.51\pm 1.31\) & \(80.21\pm 8.59\) \\
MSD Fixed/Large & \(36.58\pm 2.86\) & \(0.00\pm 3.42\) & \(\mathbf{53.70\pm 1.71}\) & \(34.92\pm 0.93\) & \(29.55\pm 2.32\) \\
MSD Small/Large & \(43.52\pm 2.82\) & \(\mathbf{87.63\pm 2.28}\) & \(81.44\pm 0.82\) & \(53.21\pm 1.31\) & \(68.03\pm 4.43\) \\
MSD Large/Large & \(45.60\pm 1.71\) & \(\mathbf{100.00\pm 0.61}\) & \(92.60\pm 1.49\) & \(69.88\pm 0.69\) & \(93.03\pm 1.27\) \\ \hline
Average & 25.14 & 74.27 & \(\mathbf{77.36}\) & 66.70 & 70.91 \\ \hline \hline
DMSD Fixed/Fixed & \(24.33\pm 3.97\) & \(0.00\pm 8.69\) & \(22.05\pm 3.58\) & \(\mathbf{100.00\pm 1.08}\) & \(76.23\pm 6.26\) \\
DMSD Small/Small & \(16.17\pm 3.09\) & \(0.00\pm 7.79\) & \(43.74\pm 3.70\) & \(\mathbf{100.00\pm 0.94}\) & \(86.74\pm 3.94\) \\
DMSD Fixed/Large & \(63.59\pm 2.91\) & \(0.00\pm 2.28\) & \(59.84\pm 1.13\) & \(\mathbf{78.77\pm 1.16}\) & \(63.89\pm 2.16\) \\
DMSD Small/Large & \(70.35\pm 1.44\) & \(39.26\pm 2.37\) & \(73.81\pm 1.60\) & \(88.52\pm 0.83\) & \(\mathbf{89.66\pm 1.33}\) \\
DMSD Large/Large & \(78.77\pm 1.97\) & \(52.01\pm 2.01\) & \(84.45\pm 1.41\) & \(86.90\pm 0.18\) & \(\mathbf{100.00\pm 0.91}\) \\ \hline
Average & 50.64 & 18.25 & 56.78 & \(\mathbf{90.84}\) & 83.30 \\ \hline \hline
Total Average & 37.89 & 46.26 & 67.07 & \(\mathbf{78.77}\) & 77.11 \\ \hline
\end{tabular}
\end{table}
Table 1: **Mass Spring Damper Task Results**. The scores presented are averaged over five seeds and we show the standard error for each score.
Figure 2: **Average Returns for Navigation Environments.** The curves show the average over five seeds and the shaded region shows the standard error. For this plot, we allowed for 5x the normal budget to allow all methods to converge. We omit the PID controllers from this plot since they get substantially worse returns.
**Tokamak Control.** For our last tracking experiment we return to tokamak control. In particular, we focus on the DIII-D tokamak, a device operated by General Atomics in San Diego, California. We aim to control two quantities: \(\beta_{N}\), the normalized ratio between plasma and magnetic pressure, and rotation, i.e. how fast the plasma is spinning around the toroid. These are important quantities to track because \(\beta_{N}\) serves as an approximate economic indicator and rotation control of the plasma has been suggested to be key for stability [7; 61; 9; 55; 50]. The policy has control over the eight neutral beams [21], which are able to inject power and torque by blasting neutrally charged particles into the plasma. Importantly, two of the eight beams can be oriented in the opposite direction from the others, which decouples the total combined power and torque to some extent (see Figure 3).
To emulate the sim-to-real training experience, we create a simulator based on the equations described in Boyer et al. [8] and Scoville et al. [57]. This simulator has two major shortcomings: it assumes that certain states of the plasma (e.g. its shape) are fixed for entire episodes, and it assumes that there are no events that cause loss of confinement of the plasma. We make up for part of the former by randomly sampling plasma states each episode. The approximate "real" environment addresses these shortcomings by using a network trained on historical data as the transition function (an approach which has been shown to model the true system relatively accurately [12; 59; 58; 1]). The network has access to a greater set of the plasma's state in order to predict \(\beta_{N}\) and rotation, and we "replay" historical data in order to emulate the evolution of the plasma's state for each episode. Furthermore, the additional information provided to the network is rich enough that loss of confinement events play a role in the dynamics.
We consider two versions of this task: the first is a SISO task where total power is controlled to achieve a \(\beta_{N}\) target, and the second is a MIMO task where total power and torque are controlled to achieve \(\beta_{N}\) and rotation targets. The results for both of these tasks are shown in Table 2. Most of the RL techniques are able to do well if tested in the same environment they were trained in; the exception to this is PIDE, which, curiously, is unable to perform well in the simulator environment. While no reinforcement learning method matches the robustness of a PID controller, policies trained with GPIDE fare significantly better.
### High Dimensional Locomotion
Moving past tracking problems, we evaluate GPIDE on the PyBullet [14] benchmark proposed by Han et al. [24] and adapted in Ni et al. [44]. The benchmark has four robots: halfcheetah, hopper, walker, and ant. For each of these, either the current position information or velocity information is hidden from the agent. Except for GPIDE and transformer encoder, we use all of the performance traces given by Ni et al. [44]. In addition to SAC, they also train using PPO [56], A2C [43], TD3 [20], and VRM [24], a variational method that uses recurrent units to estimate the belief state. We reproduce as much of the training and evaluation procedure as possible, including using the same
\begin{table}
\begin{tabular}{l|l l l|l l} \hline Environment (Train/Test) & PID Controller & GRU & Transformer & PIDE & GPIDE \\ \hline \(\beta_{N}\)-Track Sim/Sim & \(40.69\pm 0.32\) & \(\mathbf{100.00\pm 0.20}\) & \(97.56\pm 0.19\) & \(0.00\pm 1.05\) & \(98.33\pm 0.41\) \\ \(\beta_{N}\)-Track Sim/Real & \(\mathbf{89.15\pm 0.99}\) & \(40.96\pm 5.45\) & \(40.05\pm 11.91\) & \(0.00\pm 21.04\) & \(55.21\pm 4.44\) \\ \(\beta_{N}\)-Track Real/Real & \(98.45\pm 0.77\) & \(98.24\pm 0.38\) & \(98.74\pm 0.29\) & \(\mathbf{100.00\pm 0.23}\) & \(99.30\pm 0.64\) \\ \hline Average & 76.10 & 79.73 & 78.79 & 33.33 & **84.28** \\ \hline \hline \(\beta_{N}\)-Rot-Track Sim/Sim & \(0.00\pm 0.83\) & \(99.06\pm 0.22\) & \(96.22\pm 0.94\) & \(67.98\pm 0.50\) & \(\mathbf{100.00\pm 0.29}\) \\ \(\beta_{N}\)-Rot-Track Sim/Real & \(\mathbf{83.71\pm 2.64}\) & \(39.76\pm 5.84\) & \(33.31\pm 0.69\) & \(0.00\pm 8.89\) & \(51.00\pm 1.92\) \\ \(\beta_{N}\)-Rot-Track Real/Real & \(92.02\pm 0.84\) & \(98.34\pm 0.52\) & \(96.32\pm 0.31\) & \(98.21\pm 0.23\) & \(\mathbf{100.00\pm 0.46}\) \\ \hline Average & 58.58 & 79.05 & 75.28 & 55.40 & **83.67** \\ \hline \hline Total Average & 67.34 & 79.39 & 77.03 & 44.36 & **83.97** \\ \hline \end{tabular}
\end{table}
Table 2: **Tokamak Control Task Results.** The scores presented are averaged over five seeds and we show the standard error for each score.
Figure 3: **Illustration of DIII-D from Above.** Each beamline in the figure contains two independent beams (yellow boxes). The plasma is rotating counter-clockwise and the two beams in the bottom left of the figure are oriented in the counter-current direction, allowing power and torque to be decoupled. This figure gives a rough idea of beam positioning but is not physically accurate.
hyperparameters in the SAC algorithm and giving the history encoders access to actions and rewards. For more information see Appendix B.2. Table 3 shows the performance of GPIDE along with a subset of best performing methods (more results can be found in Appendix D.5). These results make it clear that GPIDE is powerful in arbitrary control tasks besides tracking, as it dominates performance for every robot except HalfCheetah. The average score it achieves across all tasks is a 73% improvement over TD3-GRU, which we believe is the previous state-of-the-art.
### GPIDE Ablations
To investigate the role of each type of head, we reran all experiments with three variants of GPIDE: one with six exponential smoothing heads (ES), one with five exponential smoothing heads and one summation head (ES + Sum), and one with six attention heads (see Appendix B.3 for details). Table 4 shows the differences in the average scores for each environment. The first notable takeaway is that having summation is often important in some of the more complex environments. The other takeaway is that much of the heavy lifting is being done by the exponential smoothing. GPIDE fares far worse when only having attention heads, especially in DMSD and the PyBullet environments.
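To make the exponential smoothing heads concrete, below is a minimal NumPy sketch of what one such head could compute; the function name, the six smoothing factors, and the feature dimensions are illustrative assumptions on our part, not the implementation used in the paper.

```python
import numpy as np

def exponential_smoothing_head(history, alpha):
    """One exponential-smoothing head (a sketch, not the authors' code):
    returns an exponentially weighted average of the feature history,
    where alpha in (0, 1] controls how quickly old observations decay.
    `history` has shape (T, d), ordered oldest to newest."""
    T = history.shape[0]
    # geometric weights: an observation k steps in the past gets (1 - alpha)^k
    weights = alpha * (1.0 - alpha) ** np.arange(T - 1, -1, -1)
    weights /= weights.sum()  # normalize so the head outputs a weighted average
    return weights @ history  # shape (d,)

# hypothetical usage: six heads with different smoothing factors, concatenated
# into a single encoding of the history
feats = np.random.randn(50, 8)
encoding = np.concatenate([exponential_smoothing_head(feats, a)
                           for a in (0.9, 0.5, 0.25, 0.1, 0.05, 0.01)])
```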
We visualize some of the attention schemes learned by GPIDE for MSD with small variation and HalfCheetah (Figure 4). While the attention scheme learned for MSD could potentially be useful since it recalls information from near the beginning of the episode when the most movement is happening, it appears that the attention scheme for HalfCheetah is simply a poor reproduction of exponential smoothing, making it redundant and suboptimal. In fact, we found this phenomenon to be true across all attention heads and PyBullet tasks. We believe that the periodicity that appears here is due to the oscillatory nature of the problem and lack of positional encoding (although we found including positional encoding degrades performance).
## 6 Discussion
In this work, we introduced the PIDE and GPIDE history encoders to be used for reinforcement learning in partially observable control tasks. Although both are far simpler than prior methods of encoding, they often result in powerful yet robust controllers. We hope that this work inspires the research community to think about how pre-existing control methods can inform architecture choices.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline & MSD & DMSD & Navigation & \(\beta_{N}\) Track & \(\beta_{N}\)-Rot Track & PyBullet \\ \hline ES & +2.69\% & -11.14\% & -0.11\% & +2.57\% & +0.29\% & +5.81\% \\ ES + Sum & -8.33\% & +5.49\% & -1.65\% & +4.22\% & +0.76\% & +11.00\% \\ Attention & -0.36\% & -54.95\% & -3.91\% & -8.85\% & -7.55\% & -39.44\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: **GPIDE Ablation Percent Difference for Average Scores. All final scores can be found in Appendix D.**
Figure 4: **Averaged Attention Schemes for MSD-Small and HalfCheetah-P. Each y-position on the grid corresponds to an amount of history being recorded, and each x-position corresponds to a time point in that history. As such, each of the left-most points are the oldest observation in the history, and the diagonals correspond to the most recent observation. The darker the blue, the greater the weight that is assigned to that time point.**
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline Environment & PPO-GRU & TD3-GRU & VRM & SAC-Transformer & SAC-GPIDE \\ \hline HalfCheetah-P & \(27.09\pm 7.85\) & \(\mathbf{85.80\pm 5.15}\) & \(-107.00\pm 1.39\) & \(37.00\pm 9.97\) & \(82.63\pm 3.46\) \\ Hopper-P & \(49.00\pm 5.22\) & \(84.63\pm 8.33\) & \(3.53\pm 1.63\) & \(59.54\pm 19.64\) & \(\mathbf{93.27\pm 13.56}\) \\ Walker-P & \(1.67\pm 4.39\) & \(29.08\pm 9.67\) & \(-3.89\pm 1.25\) & \(24.89\pm 14.80\) & \(\mathbf{96.61\pm 1.60}\) \\ Ant-P & \(39.48\pm 3.74\) & \(-36.36\pm 3.35\) & \(-36.39\pm 0.17\) & \(-10.57\pm 2.34\) & \(\mathbf{66.66\pm 2.94}\) \\ HalfCheetah-V & \(19.68\pm 11.71\) & \(\mathbf{59.03\pm 2.88}\) & \(-80.49\pm 2.97\) & \(-41.31\pm 26.15\) & \(20.39\pm 29.60\) \\ Hopper-V & \(13.86\pm 4.80\) & \(57.43\pm 8.63\) & \(10.08\pm 3.51\) & \(0.28\pm 8.49\) & \(\mathbf{90.98\pm 4.28}\) \\ Walker-V & \(8.12\pm 5.43\) & \(-4.63\pm 1.30\) & \(-1.80\pm 0.70\) & \(-8.21\pm 1.31\) & \(\mathbf{36.90\pm 16.59}\) \\ Ant-V & \(1.43\pm 3.26\) & \(17.03\pm 6.55\) & \(-13.41\pm 0.12\) & \(0.81\pm 1.31\) & \(\mathbf{18.03\pm 5.10}\) \\ \hline \hline Average & 20.04 & 36.50 & -28.67 & 7.80 & **63.18** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **PyBullet Task Results. Each score is averaged over four seeds and we report the standard errors. Unlike before, we scale the returns by the returns of an oracle policy (i.e. one which sees position and velocity) and a policy which does not encode any history. For the environment names, “P” and “V” denote only position or only velocity in the observation, respectively.**
**Limitations.** There are many different ways a control task may be partially observable, and we do not believe that our proposed methods are solutions to all of them. For example, we do not think GPIDE is necessarily suited for tasks where the agent needs to remember events (e.g. picking up a key to unlock a door). Additionally, this work focuses on cases where observations are in the form of scalar descriptors; we leave extending these ideas to images as future work.
|
2306.17164 | On-Surface Synthesis and Characterization of a High-Spin
Aza-[5]-Triangulene | Triangulenes are open-shell triangular graphene flakes with total spin
increasing with their size. In the last years, on-surface-synthesis strategies
have permitted fabricating and engineering triangulenes of various sizes and
structures with atomic precision. However, direct proof of the increasing total
spin with their size remains elusive. In this work, we report the combined
in-solution and on-surface synthesis of a large nitrogen-doped triangulene
(aza-[5]-triangulene) and the detection of its high spin ground state on a
Au(111) surface. Bond-resolved scanning tunneling microscopy images uncovered
radical states distributed along the zigzag edges, which were detected as weak
zero-bias resonances in scanning tunneling spectra. These spectral features
reveal the partial Kondo screening of a high spin state. Through a combination
of several simulation tools, we find that the observed distribution of radical
states is explained by a quintet ground state (S = 2), instead of the expected
quartet state (S = 3/2), confirming the positively charged state of the
molecule on the surface. We further provide a qualitative description of the
change of (anti)aromaticity introduced by N-substitution, and its role in the
charge stabilization on a surface, resulting in a S = 2 aza-[5]-triangulene on
Au(111). | Manuel Vilas-Varela, Francisco Romero-Lara, Alessio Vegliante, Jan Patrick Calupitan, Adrián Martínez, Lorenz Meyer, Unai Uriarte-Amiano, Niklas Friedrich, Dongfei Wang, Natalia E. Koval, María E. Sandoval-Salinas, David Casanova, Martina Corso, Emilio Artacho, Diego Peña, Jose Ignacio Pascual | 2023-06-29T17:59:22Z | http://arxiv.org/abs/2306.17164v1 | # On-Surface Synthesis and Characterization of a High-Spin Aza-[5]-Triangulene
###### Abstract
Triangulenes are open-shell triangular graphene flakes with total spin increasing with their size. In the last years, on-surface-synthesis strategies have permitted fabricating and engineering triangulenes of various sizes and structures with atomic precision. However, direct proof of the increasing total spin with their size remains elusive. In this work, we report the combined in-solution and on-surface synthesis of a large nitrogen-doped triangulene (aza-[5]-triangulene) and the detection of its high spin ground state on a Au(111) surface. Bond-resolved scanning tunneling microscopy images uncovered radical states distributed along the zigzag edges, which were detected as weak zero-bias resonances in scanning tunneling spectra. These spectral features reveal the partial Kondo screening of a high spin state. Through a combination of several simulation tools, we find that the observed distribution of radical states is explained by a quintet ground state (\(S=2\)), instead of the expected quartet state (\(S=3/2\)), confirming the positively charged state of the molecule on the surface. We further provide a qualitative description of the change of (anti)aromaticity introduced by N-substitution, and its role in the charge stabilization on a surface, resulting in a \(S=2\) aza-[5]-triangulene on Au(111).
## I Introduction
Triangulenes are triangular-shaped polybenzenoid hydrocarbons with edges formed by \(n\) zig-zag units (hence, [n]-triangulene) and a non-zero electronic spin ground state. The \(\pi\)-conjugated lattice of [n]-triangulene is frustrated, depicting a non-Kekulé structure with _n-1_ unpaired \(\pi\)-electrons,[1; 2; 3; 4; 5] forming an electronic ground state with a net spin \(S=(n-1)/2\).[6] The linear increase of spin state with the triangulene size \(n\) endows these systems with a strong potential for becoming functional platforms for molecular spintronics and quantum computing applications.[1; 7; 8; 9; 10]
Owing to their open-shell character, the solution synthesis of triangulenes is very challenging.[11; 12; 13] Lately, on-surface-synthesis (OSS) strategies[14; 15] have been demonstrated to be a viable route for the fabrication of atomically perfect triangulenes with increasing size[16; 17; 18; 19; 20] (some of these shown in Fig. 1a). Interestingly, OSS can also produce more complex triangulene nanostructures,[21; 22; 23; 24; 25; 26; 27] with a variety of magnetic properties emerging from the exchange interaction between triangulene units. Overall, the net spin state of triangulene derivatives is associated with an imbalance in the number of carbon sites in the two alternating triangular sublattices (\(N_{A}\) and \(N_{B}\)), following Ovchinnikov's rule,[6] \(S=\frac{1}{2}|N_{A}-N_{B}|\).
In addition to tailoring nanographene's shape and size, OSS strategies have also been applied to insert heteroatoms or functional groups for modifying the electronic properties of graphene-based nanostructures.[28; 29; 30; 31; 32; 33; 34; 35] Ovchinnikov[6] predicted that heteroatom substitution in alternant sublattices acts as a defect that modifies the sublattice imbalance and, hence, the resulting spin state.[36; 37] However, recent results on aza-[3]-triangulene (A3T)[35; 32; 12] found that the nitrogen substitution in minority sites reduces the spin state from \(S=1\) in all-carbon [3]-triangulene (3T) to \(S=1/2\) in A3T. This is in apparent contradiction with Ovchinnikov's prediction that a larger spin ground state (\(S=3/2\) in A3T) shall be expected when minority sites are removed. Investigation of this apparent paradox requires experimental access to larger triangulene structures, and determination of their spin state.
Despite the successful imaging of triangulenes on metal surfaces using low-temperature scanning tunneling microscopy (STM),[17; 18; 19; 20; 35; 39] the resolution of their intrinsic spin state has been hampered by the lack of spin-sensitive signals. Owing to the very weak magnetic anisotropy of carbon systems, the \(\pi\)-magnetism is fairly isotropic and paramagnetic, thus difficult to access by spin-polarized tunneling microscopy.[40] Instead, a zero-bias resonance due to the Kondo-screening[41; 42] of spins by the metallic substrate has been normally used as an unequivocal fingerprint of a spin-polarized ground state
in triangulenes.[20; 35; 43] Unfortunately, the universal behavior of Kondo screening determines that the associated zero-bias resonance decreases its intensity with increasing spin values,[22; 35; 44] requiring very low temperatures for its detection.
In this work, we present an OSS strategy for engineering a large nitrogen-doped triangulene, the aza-[5]-triangulene (A5T, rectangle in Fig. 1a), and demonstrate that it lies in an S=2 ground state on a Au(111) surface. The synthesis route involved the targeted thermal cyclodehydrogenation of a trisubstituted A3T derivative (**1** in Fig. 1b) over the gold substrate. Combining bond-resolved STM and spectral maps of the density of states close to the Fermi level, we show that A5T lies in a high-spin ground state, comprised of four singly-occupied states. Our density functional theory (DFT) simulations reveal that the nitrogen heteroatom, located in the majority sublattice in A5T, reduces the S=2 spin of the pristine [5]-triangulene (5T) down to 3/2. Although this coincides with the prediction of Ovchinnikov's rule, it originates from the addition of an extra electron to the \(\pi\)-conjugated system, donated by the N atom, which also induces a marked anti-aromatic character around the aza group. On the metal substrate, the A5T system transfers this extra electron to the metal underneath, i.e., it is oxidized to A5T\({}^{+}\), recovering the quintet ground state of the parent 5T flake. These results validate predictions for high-spin nanographenes[18; 45] and further demonstrate that such a high-spin state survives over a metallic substrate.
Figure 1: **a)** Selected previously studied triangulene nanostructures of different sizes and doping. **b)** Synthetic route to obtain the A5T precursor **1** by solution chemistry. **c)** Two resonance structures of target A5T showing different locations of Clar sextets (in grey) and \(\pi\)-radicals. The C-C bonds highlighted in dark blue indicate the ones formed via on-surface-assisted cyclodehydrogenation (CDH). The light blue spots mark the sites where H atoms were removed via on-surface-assisted dehydrogenation (DH). **d)** Resulting STM image (\(V=1.5\) V, \(I=10\) pA) of molecular precursor **1** deposited on a Au(111) surface after annealing the sample to 330\({}^{\circ}\)C. Yellow square indicates a fully planar structure. **e)** STM image (\(V=1\) V, \(I=10\) pA) of the A5T. **f)** Constant-height bond-resolved STM (BR-STM) image (\(V=5\) mV) of the A5T performed with a CO-functionalized tip.
## Results and Discussion
**Solution and on-surface synthesis of A5T:** Inspired by the precursor design used by Su et al. for the generation of 5T,[18] we envisioned the synthesis of A5T by on-surface planarization of the A3T derivative \(\mathbf{1}\), which is substituted with three dimethylphenyl groups (Fig. 1b). Compound \(\mathbf{1}\) was obtained by solution chemistry in four steps from amine \(\mathbf{3}\).[35] First, sequential treatment with lithium aluminum hydride (LiAlH\({}_{4}\)), followed by oxidation with pyridinium chlorochromate (PCC), afforded tri-aldehyde \(\mathbf{4}\) in 54% yield. Then, the addition of three equivalents of organolithium derivative \(\mathbf{5}\), followed by a BF\({}_{3}\)-promoted three-fold intramolecular Friedel-Crafts reaction, led to the formation of the A5T precursor \(\mathbf{1}\) in 49% yield.
Precursor \(\mathbf{1}\) was deposited on a Au(111) substrate at room temperature and under ultra-high-vacuum conditions. To obtain the targeted A5T, six new C-C bonds had to be created, and fifteen hydrogen atoms had to be removed from its bulky three-dimensional structure. We annealed the pre-covered substrate to 330 \({}^{\circ}\)C (see methods) to induce cyclodehydrogenation (CDH) and dehydrogenation (DH) of the precursor and obtained a sample with planar molecular platforms, as we confirmed using low-temperature STM. Images like the one in Fig. 1d show that some molecules appear with bright lobes corresponding to sp\({}^{3}\) carbon atoms that survived the CDH/DH step, while others underwent covalent couplings between flakes, forming larger structures. These partially reacted molecules tend to self-assemble and \(\pi\)-stack and can be planarized by tip manipulation (see Fig. S1). Nevertheless, fully planarized structures corresponding to the target A5T molecule (square in Fig. 1d, zoomed in Fig. 1e) can be found, allowing an in-depth study of their electronic and magnetic structure.
Figure 1f shows a high-resolution STM image of the intact A5T flake from Fig. 1e, obtained with a CO-functionalized tip at a low bias (typically \(\leq 5\) mV) and scanning in constant-height mode with a tunneling resistance of around 20 M\(\Omega\).[49; 50] This allows us to achieve a bond-resolved (BR) image that shows the honeycomb lattice of the A5T and confirms the successful synthesis. In some cases, remnant H-atoms from incomplete CDH show up in the BR images as deformations of the zigzag edges (e.g., as in Fig. S2). However, they were removed by high-energy electron tunneling,[43] until the final product, A5T, was obtained, as in Fig. 1f. Interestingly, the current contrast in the BR images is not homogeneous across the A5T backbone but appears higher along the zigzag edges and darker at the center and at the three vertexes. The larger current over the edges suggests the presence of zero-energy features, normally related to spin states. The lower tunneling current over the A5T center is probably due to both the absence of zero-energy features and a small out-of-plane distortion of the N towards the Au(111) surface. The darker rings at the vertexes
Figure 2: **a)** Low-energy dI/dV spectra taken on the spots marked by colored circles in b) at \(T=1.2\) K. Dashed lines are fits of the three spectra using a Frota function,[46] from which the indicated Kondo temperatures (\(T_{K}=\sqrt{\mathrm{FWHM}^{2}-(2\pi k_{B}T)^{2}}/2k_{B}\)[47]) are estimated assuming a strong-coupling Kondo regime (\(T<T_{K}\)). Spectra are vertically shifted for clarity. **b)** BR-STM image (\(V=5\) mV) of A5T. **c)** Low-energy dI/dV spectral line along one edge of A5T, marked by an arrow in b), obtained with a CO-functionalized tip. **d)** Magnetic field-dependent low-energy dI/dV measurements. Spectra were taken on the green spot marked in b). Dashed lines correspond to the fits of the Kondo resonances in a magnetic field for a spin-2 model including third-order terms with the code from Ternes.[48] Spectra were shifted vertically for clarity. The inset shows the average splitting of the Kondo peak fits as a function of the magnetic field for three sets of measurements (on each of the spots of the A5T in image b)). The splittings follow the line \(E_{\mathrm{ZS}}=g\mu_{B}SB\) with a fitted \(g=1.23\pm 0.12\). More details about the procedure are in Fig. S4.
also point to spin-free regions, such as one would expect if highly stable Clar sextets [51; 52] were localized at these sites. Tentatively, one may thus compare this tunneling-current distribution with the dominant resonance structures expected for A5T (Fig. 1c), and find that structure **A5T-2** appears closer to the experimental tunneling current maps.
**Analysis of the Kondo effect to reveal a high spin state:** In order to experimentally address the spin ground state of A5T, we performed tunneling spectroscopy at \(T=1.2\) K, measuring the low-bias differential conductance signal (dI/dV) over various parts of the molecule. Fig. 2a shows \(dI/dV\) vs. bias spectra taken on the central rings of the three edges of the A5T molecule (colored dots in Fig. 2b). Narrow zero-bias spectral peaks reveal the existence of a Kondo-screened spin state at these sites. To capture the spatial distribution of the spin signal, we measured a spectral profile along the edges of the A5T molecule (Fig. 2c). The Kondo resonance extends along the three central rings with its maximum intensity above the central one but decreases towards the vertexes, as well as over the central region of the molecule as in Fig. S3. This distribution of the Kondo signal agrees with the tunneling current variations of the BR-images. The map of Fig. 2c also reveals that the amplitude of the Kondo peak is very small, only slightly larger than the small inelastic steps at \(\pm 5\) mV (also in Fig. S3) attributed to the excitation of the frustrated translational vibrational mode of the CO molecule attached to the tip apex. [53]
The narrow line-width (FWHM \(<\) 2 mV) of the Kondo resonances is consistent with a Kondo temperature \(\sim 10\) K (Fig. 2a), larger than the temperature of the measurement, indicating that the A5T molecules lie in the strong-coupling Kondo regime. However, zero-bias peaks with such a small amplitude are normally observed when the molecular system has a spin larger than \(S=1/2\) and the Kondo effect at the measuring temperature only applies to some of the spin channels, i.e., an underscreened Kondo effect. [22] The underscreened Kondo character of A5T can be further demonstrated by measuring the evolution of the zero-bias resonance under increasing magnetic fields. Fig. 2d shows the spectra measured on the green spot marked in Fig. 2b as the magnetic field is ramped up to 2.9 T. The Kondo resonance already splits at low values of the magnetic field, instead of slowly broadening with increasing magnetic field, as expected for a fully screened case. [43] As shown in the inset of Fig. 2d, the splitting energy follows the Zeeman energy of an \(S=1/2\) spin in a magnetic field, indicating that the Kondo screening is not complete. [22] Hence, the Kondo feature suggests a large spin ground state for A5T, but we cannot obtain a precise indication of its total spin from its magnetic-field dependence. This is because, in the absence of magnetic anisotropy, the splitting of the Kondo resonance always follows \(g\mu_{B}\), the \(S=1/2\) Zeeman excitation energy. [44]
**Identification of frontier states:** To find out the spin state, we analyze the orbital structure around the Fermi energy (E\({}_{F}\)) of the system using dI/dV maps and compare them to results from DFT and mean-field Hubbard (MFH) simulations (see methods). Fig. 3a shows a set of selected differential conductance maps at different bias values around zero bias, i.e. the Fermi level (see Fig. S5 for additional bias values in the range of (-1.85, 2.05) V). Although the dI/dV plots do not show marked resonances, the dI/dV maps exhibit clear shapes, corresponding to the local density of states of the different molecular orbitals of A5T. Figs. 3b and 3c show the wavefunctions and the energy-level diagram, respectively, of the spin-unrestricted orbitals of A5T computed by DFT (see methods) around the Fermi level (\(\pm 0.75\) eV). The \(C_{3v}\) symmetry of the molecule on the surface imposes degeneracy on the \(\psi_{1}\) and \(\psi_{2}\) orbitals, while the two others remain non-degenerate. For the case of A5T (mid panel in Fig. 3c), \(\psi_{1}^{o(u)}\), \(\psi_{2}^{o(u)}\), and \(\psi_{3}^{o(u)}\) correspond to the singly (un)occupied molecular orbitals SOMO (SUMO), while \(\psi_{4}\) remains fully occupied. This results in an \(S=3/2\) spin ground state, also in good agreement with the MFH calculations in Fig. S6.
To reveal the spin state on the surface, we compare the orbital distribution of the computed SOMOs with the dI/dV maps of Fig. 3a. The most important feature here is that the zero-bias map, which reproduces the amplitude distribution of the Kondo resonance, follows the characteristic shape of the \(\psi_{4}\) orbital, accounting for a larger signal on the edges than on the vertexes. This indicates that \(\psi_{4}\) is the Kondo-screened state, evidencing its singly occupied character on the Au(111) substrate, as it would be in the case of the cationic species A5T\({}^{+}\) in the right panel of Fig. 3c. The rest of the SOMOs can also be recognized in the dI/dV maps of Fig. 3a (\(\psi_{1,2}^{o}\) at \(\sim\)-1.15 eV, and the correlated pair \(\psi_{3}^{o}\) and \(\psi_{3}^{u}\) at \(\sim\) -0.15 eV and 0.25 eV, respectively). In other words, the \(S=2\) ground state is a consequence of the charge donation from \(\psi_{4}^{o/u}\) to the surface, resulting in a positively charged molecule, as similarly observed in A3T. [1]
The effect of the N heteroatom substitution can be deduced by comparing its orbital structure with that of the all-carbon [5]-triangulene (5T). [18] Owing to its particular topology, 5T has four SOMOs, i.e., a nullity \(\eta\)=4. [1] This gives rise to a \(S=2\) ground state in the presence of Coulomb correlations, as confirmed by DFT (Fig. 3c and Fig. S7) and MFH (Fig. S6). Substituting the central carbon atom with a nitrogen atom does not distort the orbital shapes (see orbital shapes of 5T in Fig. S6 and S7), but simply adds an extra electron into the \(\pi\)-conjugated network. This "extra" \(\pi\)-electron populates the \(\psi_{4}\) state, which is the state with the largest amplitude at the N site (Fig. 3b and Fig. S7), and, consequently reduces the spin to \(S=3/2\) (from left to mid panel in Fig. 3c). Contrary to the smaller A3T, the extra electron does not lead to Jahn-Teller distortions, [38] because here it populates a non-degenerate level. Consequently, the lack of stabilizing Jahn-Teller distortion keeps the \(\psi_{4}\) state closer to the Fermi level than the other frontier orbitals.
Therefore, the electron added by the N heteroatom to the conjugated \(\pi\)-system remains the most valent and is donated to the Au(111) substrate, eventually leading to the oxidized A5T (A5T\({}^{+}\)), which adopts the \(S=2\) ground state of 5T on the surface.
**Global and local (anti)aromaticity:** Spectral maps of Kondo-amplitude like in Fig. 1f, 2b and 3a resolved a peculiar pattern of unpaired electron localization at the zigzag edges of the flakes. In fact, these maps resemble a resonant structure like **A5T-2** in Fig. 4a, with radical states delocalized within the middle zigzag edges, rather than the **A5T-1** structure, where radicals lie at the triangulene vertexes. Here, we theoretically analyze the (anti)aromaticity of the A5T molecule and show that such an electron delocalization pattern is a fingerprint for the spin state of the flake on the surface.
In Fig. 4a we plot the anisotropy of the induced current density (ACID) [54] and the nucleus-independent chemical shift [55] (NICS indexes in ppm included in the ACID maps of Fig. 4) computed for the A5T molecule in the gas phase. The A5T molecule has a peripheral triangular rim formed by 30 C-atoms (dark blue C-C bonds in Fig. 4), which would fulfill the 4n+2 Hückel rule for aromaticity if each atom contributed a \(\pi\)-electron. However, the corresponding ACID plot for a neutral A5T finds a very small ring current around this outer 30-carbon circuit, which is a sign of the globally non-aromatic nature of the outer rim. This points to the presence of unpaired electrons at the edges or vertexes, reducing the Hückel electron count, as described by both structures **A5T-1** and **A5T-2** in Fig. 4a. In fact, NICS indexes find that the benzene rings at the periphery present some (local) aromatic character (NICS(1)\({}_{zz}<0\) ppm), with larger negative NICS at the middle-edge rings, and to some extent also at the rings at the vertexes. This indicates a larger probability for hosting Clar sextets at these sites, [56] as
Figure 3: **a)** dI/dV maps at different bias acquired at constant current (\(I=500\) pA) of A5T. The one corresponding to 0.005 V maps the orbital that contributes the most to the Kondo resonance. In this case, instead of the dI/dV signal, the current is recorded at constant height with a CO-functionalized tip at larger distances from the molecule as compared to the BR images. **b)** Molecular orbital isosurfaces of the A5T and the A5T\({}^{+}\) of the SOMO-SUMO obtained from spin-polarized DFT. **c)** Energy levels diagram obtained from spin-polarized DFT calculations close to Fermi energy for the case of 5T, A5T, and A5T\({}^{+}\).
in the **A5T-1** structure. However, the ACID plot of neutral A5T also displays an anticlockwise paramagnetic current flow in the inner rim (light blue C-C bonds in Fig. 4), which can be traced to the [12]annulene moiety present in the **A5T-2** structure, expected to exhibit global antiaromaticity. This is confirmed by the large positive NICS values (NICS(1)\({}_{zz}\) = 42.4 ppm) of the three central rings. Based on these electron-delocalization and aromaticity indices, we conclude that the neutral A5T molecule lies in a superposition of both the non-aromatic **A5T-1** and the antiaromatic **A5T-2** resonance structures.
The antiaromatic character of the inner rim of A5T contrasts with its aromatic character in the all-carbon 5T flake (Fig. 4b). Our simulations (both DFT and MFH) suggest that the local antiaromaticity over the center is brought about by the extra \(\pi\) electron inserted by the aza group, because it doubly populates the state with the largest orbital amplitude around the center (\(\psi_{4}\)), and forces the formation of the [12]annulene inner rim. The aza-moiety thus endows the flake with an intrinsic tendency to donate charge by inserting a hole into the doubly occupied \(\psi_{4}\) state, because this reduces the antiaromaticity of the center, a process that is favored by substrates with high electron affinity, such as Au(111).
The A5T\({}^{+}\) cation is built up by a hole extending along the \(\psi_{4}\) orbital (Fig. 3b), which is the Kondo screening channel. In fact, the \(\psi_{4}\)-hole contributes to breaking the conjugation of the [12]annulene circuit. The corresponding ACID plot in Fig. 4c shows no paratropic currents, and a pattern very similar to that of the all-carbon 5T flake (Fig. 4b) appears. The transition from antiaromatic to non-aromatic character is confirmed by the NICS(1)\({}_{zz}\) values computed for A5T\({}^{+}\) (Fig. 4c). The three inner rings become non-aromatic in the cationic flake, and the vertexes now host the most aromatic rings (Clar sextets). Therefore, we conclude that the preferential spin localization observed in the experimental Kondo map of Fig. 3a (also hinted at by the BR-STM images of Figs. 1f and 2b) reproduces the features of the cationic A5T\({}^{+}\) species and, hence, confirms its \(S=2\) ground state.
## Conclusion
In summary, we have described a route to fabricate a large aza-triangulene flake on a metal substrate and demonstrated that it lies in a high spin state. The fabrication was realized by combining solution synthesis of rationally-designed molecular precursors, and on-surface synthesis by thermally-induced dehydrogenation reactions. We demonstrated the magnetic state by detecting a very weak Kondo resonance originating from a molecular state with a hole radical caused by charge donation to the surface. Combining experimental orbital maps, Kondo amplitude maps, DFT and MFH simulations, and calculated ACID plots and NICS indexes, we determined that the aza-triangulene flake lies in the cationic \(S=2\) state on the Au(111) substrate, thus representing a high-spin nanographene. Our work confirms that aza-triangulenes are prone to act as charge donors, as a way to increase their aromatic character.
## Acknowledgements
The authors gratefully acknowledge financial support from grants No. PID2019-107338RB, PID2019-109555GB-I00, FIS2017-83780-P, and CEX2020-001038-M funded by MCIN / AEI / 10.13039 / 501100011033, the ELKARTEK project BRTA QUANTUM (no. KK-2022/00041), the European Regional Development Fund, and the European Union (EU) H2020 program through the FET Open project SPRING (grant agreement No. 863098). F.R.-L. thanks the Spanish Ministerio de
Figure 4: Map of the computed anisotropy of the induced current density (ACID), where the numbers inside the benzene rings indicate the nucleus-independent chemical shift \(ZZ\) indexes 1 Å above the molecular plane (NICS(1)\({}_{zz}\)), and resonance forms for the case of **a)** A5T, **b)** 5T, and **c)** A5T\({}^{+}\). The C-C bonds in dark/light blue correspond to the outer/inner rims.
Educación y Formación Profesional through the PhD scholarship no. FPU20/03305. M.E.S.-S. acknowledges funding by UK Research and Innovation under the UK government's Horizon Europe funding guarantee (grant number EP/X020908/1). We thank Thomas Frederiksen, Sofia Sanz, and Ricardo Ortiz for fruitful discussions.
|
2302.05097 | CCDN: Checkerboard Corner Detection Network for Robust Camera
Calibration | Aiming to improve the checkerboard corner detection robustness against the
images with poor quality, such as lens distortion, extreme poses, and noise, we
propose a novel detection algorithm which can maintain high accuracy on inputs
under multiple scenarios without any prior knowledge of the checkerboard
pattern. This whole algorithm includes a checkerboard corner detection network
and some post-processing techniques. The network model is a fully convolutional
network with improvements of loss function and learning rate, which can deal
with the images of arbitrary size and produce correspondingly-sized output with
a corner score on each pixel by efficient inference and learning. Besides, in
order to remove the false positives, we employ three post-processing techniques
including threshold related to maximum response, non-maximum suppression, and
clustering. Evaluations on two different datasets show its superior robustness,
accuracy and wide applicability in quantitative comparisons with the
state-of-the-art methods, like MATE, ChESS, ROCHADE and OCamCalib. | Ben Chen, Caihua Xiong, Qi Zhang | 2023-02-10T07:47:44Z | http://arxiv.org/abs/2302.05097v1 | # CCDN: Checkerboard Corner Detection Network for Robust Camera Calibration
###### Abstract
Aiming to improve the checkerboard corner detection robustness against images of poor quality, such as lens distortion, extreme poses, and noise, we propose a novel detection algorithm which can maintain high accuracy on inputs under multiple scenarios without any prior knowledge of the checkerboard pattern. This whole algorithm includes a checkerboard corner detection network and some post-processing techniques. The network model is a fully convolutional network with improvements of loss function and learning rate, which can deal with the images of arbitrary size and produce correspondingly-sized output with a corner score on each pixel by efficient inference and learning. Besides, in order to remove the false positives, we employ three post-processing techniques including threshold related to maximum response, non-maximum suppression, and clustering. Evaluations on two different datasets show its superior robustness, accuracy and wide applicability in quantitative comparisons with the state-of-the-art methods, like MATE, ChESS, ROCHADE and OCamCalib.
Keywords: Camera Calibration, Checkerboard Corner Detection, Robustness, Fully Convolutional Network.
## 1 Introduction
Camera calibration is a classic task in machine vision whose purpose is to estimate the intrinsic parameters as well as the distortion coefficients of a camera; the most widely used calibration pattern is the planar checkerboard. Compared with other types of patterns, such as three-dimensional objects [1], circles [2] and self-identifying patterns [3], the checkerboard pattern is more robust with respect to distortion bias and perspective bias [4]. It is also suitable for 3D pose estimation and localization in robot vision, and easy to generate at a low price. However, checkerboard images of poor quality, suffering from low resolution, lens distortion, extreme poses, or sensor noise, can lead to inaccurate inner-corner detection and failed camera calibration.
There are various methods for checkerboard corner detection. Harris [5], SUSAN [6] and their improved versions [7, 8] adopt distinct corner features to find target points, but these do not generally work well on checkerboard patterns. Wang et al. [9] treat a checkerboard corner as the intersection of two adjacent grid lines, which can detect
checkerboard patterns with small lens distortion successfully, but it is less accurate for wide-angle cameras. ChESS [10] uses specific features of the circular neighborhood around the corners to select candidates, and it is faster and accurate in most cases. However, this method produces many false detections and depends heavily on a hand-crafted threshold. The widely used checkerboard detection algorithm embedded in OpenCV is based on the work of Vezhnevets [11]. It applies erosion to separate the black quadrangles, then combines them to construct the checkerboard and calculate the inner corners. Rufli et al. extend this algorithm in OCamCalib [12] to be more robust to lens distortion, although for low-resolution and highly distorted images its detection performance is not as good as that of ROCHADE [13], a more complex combination of general image features, especially under the strong perspective distortion often present in wide-baseline stereo setups. Moreover, the aforementioned algorithms need to know the number of squares of the calibration pattern in advance. Some algorithms attempt to use machine learning methods to detect the corners, such as FAST [14] and FAST-ER [15]. A foray into neural networks is MATE [16], which consists of three convolutional layers that extract the intrinsic features effectively, but it may produce more false positives, even for medium- and high-resolution images with little lens distortion.
In this paper we propose a fully convolutional neural network (CNN) model, namely the checkerboard corner detection network (CCDN), to find the inner corners of checkerboards efficiently under multiple scenarios. This model can take an image of any size as input and output a response map of corresponding spatial dimensions with a corner score on each pixel. Aided by three post-processing techniques (a threshold related to the maximum response, non-maximum suppression, and clustering) that eliminate false positives in different cases, the model detects checkerboard corners more accurately and robustly.
The outline of this paper is as follows. Section 2 details the architecture and properties of our checkerboard corner detection network and its training, as well as the post-processing techniques. The following section describes the datasets for training and testing. Experiments and results are discussed in Section 4. Conclusions are given in the last section.
## 2 Methodology
The whole algorithm can be divided into two parts: the first is a fully convolutional network for extracting a series of corner candidates, which is detailed in Section 2.1; the second, comprising a threshold related to the maximum response, non-maximum suppression and clustering, is described in Section 2.2 and eliminates the false positives.
### Checkerboard Corner Detection Network
**Architecture**. A checkerboard corner detection network is presented which can take an image of any size as input and output the response map of corresponding spatial dimensions with a corner score on each pixel. As depicted in Fig. 1, this network consists of
six convolutional layers, of which the first and the fourth are followed by max-pooling layers of size 2\(\times\)2. The ReLU non-linearity [17, 18] is applied to the output of each convolutional layer as the activation function.
The kernels of the first layer are intended to extract useful features from the input image, so their spatial support radius should be set large enough to suppress the effect of blur and noise. As shown in [16], a larger radius may lose some recall of the real corners, while a smaller one may falsely detect background pixels as checkerboard corners. To make a tradeoff between recall and precision, here we choose a spatial support radius of four pixels for the first layer, which is shown to be sufficient for corner detection by our model in the result section.
The first convolutional layer filters the input gray-scale image \(X\) into 20 channels \(L_{1,i}(X)\) with kernels \(W_{1,i}\) of size 9\(\times\)9\(\times\)1 and biases \(b_{1,i}\):
\[L_{1,i}(X)(x,y)=\max\big((W_{1,i}\times X)(x,y)+b_{1,i},\,0\big),\qquad\forall i=1\ldots 20 \tag{1}\]
The second convolutional layer takes the max-pooled output of the first layer as input and filters it into 20 channels with kernels \(W_{2,i,j}\) of size 3\(\times\)3\(\times\)20 and biases \(b_{2,j}\), while the third and fourth convolutional layers, with filters of the same size, follow without any pooling:
\[L_{c,j}(X)(x,y)=\max\Big(\sum_{i=1}^{20}\big(W_{c,i,j}\times L_{c-1,i}(X)\big)(x,y)+b_{c,j},\,0\Big)\qquad\forall c=2,3,4;\ j=1\ldots 20 \tag{2}\]
The fifth convolutional layer has 20 kernels of size 3\(\times\)3\(\times\)20 connected to the max-pooled output of the fourth layer, analogous to Eq. (2). The last convolutional layer combines the 20 channels resulting from the fifth layer into a single response map, with a single small filter of size 3\(\times\)3\(\times\)20. The output of this layer is given as:
Figure 1: An illustration of the architecture of CCDN. It is a fully convolutional network with six convolutional layers, and the first and fourth are followed by max-pooling layers. The output is a single-channel response map with same size to the input. The activation functions following each convolutional layer are ReLUs.
\[L_{6}(X)(x,y)=\max\Big(\sum_{i=1}^{20}\big(W_{6,i}\times L_{5,i}(X)\big)(x,y)+b_{6},\,0\Big) \tag{3}\]
Note that the stride of the kernels in both the convolutional layers and the max-pooling layers is one pixel, and zero padding is applied so that the output feature map has the same dimensions as the input image. This network can be tailored towards application-specific scenarios, since its capacity varies with its depth and settings. We initialize the weights in all convolutional layers from a zero-mean Gaussian distribution with standard deviation 0.1, and the neuron biases with the constant 0.1.
Considering the gray-scale input and the spatial support of 9\(\times\)9 for the first filters, our net has 16301 parameters to train, which is slightly more than MATE (only 2939 parameters), but much fewer than other types of object detection networks [19-21]. A smaller spatial support yields more effective input samples (291716 for MATE vs. 296100 for our net, for a 640\(\times\)480 image) with less overlap. Furthermore, compared with the convolutional layers in MATE, this net is deeper with more filters, which can extract more features adapted to various scenarios, with no significant increase in time consumption, as well as less risk of overfitting.
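For concreteness, the following is a minimal Keras sketch of the architecture as we read it from the text and Fig. 1; the builder function is our own naming, and this should be taken as an assumed translation rather than the authors' released implementation.

```python
import tensorflow as tf
from tensorflow.keras import initializers, layers

def build_ccdn():
    """Sketch of CCDN: a gray-scale image of any size in, a same-sized
    single-channel corner-response map out."""
    w_init = initializers.RandomNormal(stddev=0.1)  # zero-mean Gaussian weights
    b_init = initializers.Constant(0.1)             # constant 0.1 biases
    conv = dict(padding="same", activation="relu",
                kernel_initializer=w_init, bias_initializer=b_init)
    x_in = layers.Input(shape=(None, None, 1))
    x = layers.Conv2D(20, 9, **conv)(x_in)                           # layer 1
    x = layers.MaxPool2D(pool_size=2, strides=1, padding="same")(x)  # stride-1 pooling
    for _ in range(3):                                               # layers 2-4
        x = layers.Conv2D(20, 3, **conv)(x)
    x = layers.MaxPool2D(pool_size=2, strides=1, padding="same")(x)
    x = layers.Conv2D(20, 3, **conv)(x)                              # layer 5
    out = layers.Conv2D(1, 3, **conv)(x)                             # layer 6
    return tf.keras.Model(x_in, out)
```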
**Loss Function**. For training this net, we assign a binary class label of being a corner or not to each input sample: the ground-truth corner locations are assigned a positive label (=1), while the non-corner locations are assigned a negative label (=0); the corner label of the binary ground-truth image is then denoted as \(G(x,y)\). All the parameters of the neural net are collected into a single vector \(\bar{p}\). Unlike MATE, we use cross entropy instead of mean squared error as the loss function, since it is more suitable for discrete output variables [22]. The total loss function is defined as:
\[L(\bar{p})=\frac{1}{2}\lambda\parallel\bar{p}\parallel_{2}^{2}-\sum_{(x,y)\in\Omega}\begin{cases}\frac{1}{N_{p}}\log(a(x,y)),&\text{where }G(x,y)=1,\\ \frac{1}{N_{N}}\log(1-a(x,y)),&\text{where }G(x,y)=0,\end{cases} \tag{4}\]
where \(a(x,y)\) denotes the clipped output of the last layer as:
\[a(x,y)=\begin{cases}\min(\max(10^{-6},\,L_{6}(X)(x,y)),\,1),&\text{where }G(x,y)=1,\\ \min(\max(0,\,L_{6}(X)(x,y)),\,1-10^{-6}),&\text{where }G(x,y)=0.\end{cases} \tag{5}\]
and thus the loss function is meaningful for all pixels' responses. Note that there are only a few true positives (49, 54, 81, and 156 for our training set) among all effective input samples in an image, and averaging the loss over all locations may make the net mistake all of them for non-corner locations. In order to eliminate the effect of this disparity, in which the negative samples dominate, we normalize each term of the loss by the number of ground-truth positives (\(N_{p}\)) and negatives (\(N_{N}\)), respectively. In addition, we use \(L^{2}\) parameter regularization to reduce the net's overfitting. \(\lambda\) is a balancing parameter that
weights the contribution of the regularization term relative to the cross-entropy term; here we set \(\lambda=0.01\).
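A minimal TensorFlow sketch of Eqs. (4)-(5) is given below; the function name is ours, and clipping both branches symmetrically to \([10^{-6},1-10^{-6}]\) is a small simplification of Eq. (5).

```python
import tensorflow as tf

def ccdn_loss(g, response, params, lam=0.01, eps=1e-6):
    """Class-normalized cross entropy (Eq. 4) on the clipped response
    (Eq. 5) plus L2 weight decay. `g` is the binary ground-truth map,
    `response` the network output, `params` the weight tensors."""
    pos = tf.cast(tf.equal(g, 1.0), tf.float32)
    neg = 1.0 - pos
    n_p = tf.maximum(tf.reduce_sum(pos), 1.0)   # number of corner pixels
    n_n = tf.maximum(tf.reduce_sum(neg), 1.0)   # number of non-corner pixels
    a = tf.clip_by_value(response, eps, 1.0 - eps)
    ce = -(tf.reduce_sum(pos * tf.math.log(a)) / n_p
           + tf.reduce_sum(neg * tf.math.log(1.0 - a)) / n_n)
    l2 = 0.5 * lam * tf.add_n([tf.reduce_sum(tf.square(p)) for p in params])
    return ce + l2
```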
**Training**. This network can be trained by back-propagation and stochastic gradient descent (SGD). We use a batch size of 20 images (about 900 to 3120 positive labels) and a momentum of 0.9 [18]. The learning rate is initially set to 0.01, and then decreases exponentially as the training progresses. The learning rate \(v_{i}\) of the \(i\)-th iteration is expressed as:
\[v_{i}=v_{0}\sigma^{\left\lfloor i/\tau\right\rfloor} \tag{6}\]
where \(v_{0}\) is the initial learning rate, \(\sigma\) is the decay rate, and \(\tau\) represents the number of iterations required to train all the training images once, equal to the total number of training samples divided by the number of samples in each batch. \(\left\lfloor i/\tau\right\rfloor\), with \(\left\lfloor\cdot\right\rfloor\) denoting the floor operation, guarantees that the decayed learning rate follows a staircase function, so that all samples are trained with the same rate. Applying exponential decay to the learning rate not only lets the net approach the optimal solution quickly in the early stage of training, but also guarantees that it will not fluctuate too much in the later stage, so as to get closer to the local optimal solution. Our implementation uses TensorFlow [23].
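In modern TensorFlow, the schedule of Eq. (6) maps onto a standard staircase decay; the sketch below assumes the hyperparameters stated in the text (initial rate 0.01, momentum 0.9, batch size 20, 8000 training images), while the decay rate \(\sigma=0.9\) is an illustrative assumption, since its value is not given.

```python
import tensorflow as tf

# Staircase exponential decay, Eq. (6): v_i = v_0 * sigma ** floor(i / tau)
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=8000 // 20,  # tau: iterations per pass over the training set
    decay_rate=0.9,          # sigma (assumed value for illustration)
    staircase=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)
```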
### Techniques for Eliminating the False Positives
The output of this network is a response map with a corner score on each pixel location, and the map is of same spatial dimensions as the input. This section introduces three efficient techniques combined to find the correct checkerboard corners effectively, for they are designed to eliminate false positive points in different cases.
**Threshold Related to Maximum Response**. As the loss function explained above shows, our model is a binary network for checkerboard corner detection. During the training process it pushes the responses of corner locations toward 1 and the responses of non-corner locations toward 0. Thus we can set a threshold to distinguish them, and the threshold can be adjusted to a higher value (for more precision) or a lower value (for more recall). Furthermore, different images contain different scenes and their response values follow different distributions, so a globally fixed threshold is not valid for all of them.
By observing the distribution of the response values, we find that the responses of the ground-truth corner locations are often higher than 1, and even some false positives may get a value close to 1, since neither cross entropy nor mean squared error imposes any constraint on the output. This is probably one of the principal reasons why MATE (with a fixed threshold of 0.5) fails to reject false positives even for pictures with little lens distortion and noise. However, we also find that the number of corner points is approximately linear in the maximum of the responses. Here we set half of the maximum as the threshold, which proves effective in most cases.
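In code, this adaptive threshold is a one-liner; a sketch (with an assumed helper name):

```python
import numpy as np

def adaptive_threshold(response):
    """Keep every pixel whose response exceeds half of the per-image
    maximum; returns candidate (row, col) locations."""
    return np.argwhere(response > 0.5 * response.max())
```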
**Non-maximum Suppression**. After the threshold processing, the locations with lower responses are treated as false positives and removed. However, due to errors in the manual annotation and the locally optimal learning of neural networks, many locations in the immediate neighborhood of corners or near the borders of the image may have responses only slightly lower than those of the corners, and the threshold cannot effectively eliminate them. Non-maximum suppression (NMS), which has been used effectively in many object detection algorithms to resolve the high overlap between predicted bounding boxes, can be adapted to our model with few modifications. We construct bounding boxes (of size \(4\times 4\) pixels) centered on the remaining locations and apply NMS to them based on the sorted response values; satisfactory results are obtained with an overlap threshold of 0.5. In the result section we will show that NMS can eliminate the double detections without harming the ultimate detection accuracy.
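A minimal NumPy sketch of this step follows; the \(4\times 4\) box size and the 0.5 overlap threshold come from the text, while the greedy implementation details are our assumptions.

```python
import numpy as np

def nms(candidates, scores, box=4, iou_thresh=0.5):
    """Greedily keep the highest-scoring candidate and drop any remaining
    candidate whose box-pixel square overlaps it with IoU above iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # overlap of axis-aligned squares of side `box` centered on the points
        d = np.abs(candidates[rest] - candidates[i])
        inter = np.clip(box - d[:, 0], 0, None) * np.clip(box - d[:, 1], 0, None)
        iou = inter / (2 * box * box - inter)
        order = rest[iou <= iou_thresh]
    return candidates[keep]
```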
**Clustering.** For pictures with complex scenes, there are many false positives whose appearance is very similar to that of the corners; their response values are therefore almost the same as those of the corners, so the techniques mentioned above cannot distinguish them well. Considering that the checkerboard has a very regular geometric structure, while the false positives are distributed randomly and somewhat away from the checkerboard in the image, we can use clustering algorithms to separate them. The k-means++ method is a widely used clustering technique that assigns all points to clusters by minimizing the squared distance between points in the same cluster [24]. Here we apply this method to the remaining responses with \(k=10\), then calculate the number of points \(N_{i}\) (\(i=1\ldots k\)) in each cluster and eliminate the points in any cluster with \(N_{i}\leq 2\).
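The sketch below illustrates this filtering with scikit-learn's k-means++ implementation; the helper name is an assumption, and it presumes at least \(k\) surviving candidates.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_filter(candidates, k=10):
    """Cluster the surviving candidate coordinates with k-means++ and
    drop every cluster holding two points or fewer, which are assumed
    to be isolated false positives away from the checkerboard."""
    labels = KMeans(n_clusters=k, init="k-means++").fit_predict(candidates)
    counts = np.bincount(labels, minlength=k)
    return candidates[counts[labels] > 2]
```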
## 3 The datasets
The datasets for training and testing our model should be large enough and should cover lens distortion, extreme poses and sensor noise. The training is performed on two image series: images captured by us directly and digitally augmented versions of these captured images.
For generating abundant training images, we used checkerboards with \(7\times 7\), \(6\times 9\), \(7\times 11\), \(9\times 9\), and \(12\times 13\) inner corners as calibration patterns. Each pattern was placed under various circumstances to capture the datasets, with the background intentionally cluttered to simulate realistic calibration environments. In order to make our model rotation-invariant (to some extent) and intensity-invariant, we rotated the original images by 90, 180, and 270 degrees and reversed the intensities of half of those pictures randomly. The camera we used has little lens distortion and good capture conditions without much noise, so we artificially added both radial and tangential distortion as well as Gaussian noise, as in MATE, to augment these pictures. Finally, all images (a total of 8900) were converted to gray-scale images and resized to VGA
resolution of 640\(\times\)480 pixels (an optional operation), as illustrated in Fig. 2. Among them, 8000 images were randomly selected as the training set, and the rest were used as the validation set.
The datasets for testing the generalization performance of our model consist of two parts: the uEye and GoPro datasets from ROCHADE [13]. The uEye dataset (with a resolution of 1280\(\times\)1024 pixels) has slight lens distortion and serves to evaluate the robustness against perspective transforms and noise. The GoPro dataset (with a resolution of 4000\(\times\)3000 pixels) is down-sampled to half-resolution and used to illustrate the robustness against lens distortion. These two datasets are shown in Fig. 3.
Figure 3: Two examples from the uEye (a) and GoPro (b) datasets[13] with perspective transforms and lens distortion. The presented images are resized with nearest neighbor interpolation.
Figure 2: Several sample images from the augmented data set. The top row shows the original image, the gray-scale image and the intensity-inverted image. The second row shows the image rotated 180 degrees, the image with Gaussian noise and the image with lens distortion.
Checkerboard corner detection is a supervised learning task, so the ground-truth corner locations should be obtained accurately. First, we annotated the four outer corners of the checkerboard manually, and then the interior corners were interpolated and converged locally to the saddle points. Finally, we checked and removed the wrong corners. After annotation we normalized the per-pixel values to between 0 and 1, in correspondence with the corner label mentioned in subsection 2.1, so that to some extent the response value can also be regarded as the probability of being a corner.
## 4 Experiments and Results
Two groups of experiments are presented to provide a detailed study of the proposed model's performance. The first experiment tests the learning ability of cross entropy as the loss function compared with the mean squared error (MSE) used in MATE, as well as the feasibility of the learning rate with exponential decay. In order to make a quantitative comparison, we use the mean squared value (MSV) of the difference between the real label and the predicted value over all pixels in the validation images.
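For reference, the MSV metric is simply the per-pixel mean squared difference between the ground-truth label map and the predicted response map; a one-function sketch:

```python
import numpy as np

def msv(ground_truth, prediction):
    """Mean squared value of the per-pixel difference (the validation
    metric plotted in Fig. 4)."""
    return np.mean((ground_truth - prediction) ** 2)
```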
We can see from Fig. 4(a) that cross entropy achieves a result similar to MSE after 2000 epochs. However, the neural network with cross entropy drove down the cost rapidly, while the MSV of the network with MSE decreased much more slowly for the first 150 epochs, as shown in Fig. 4(b). The results agree with the view in [22] that mean squared error usually learns slowly when used with gradient-based optimization. Taken together, the cost value decreased rapidly at first but gradually slowed down without large fluctuations; this is consistent with the design of the exponentially decaying learning rate, whose purpose is to reach a local optimal solution. The purpose of this experiment is not to show the learning ability of the whole model, but only to illustrate that the presented loss function and exponentially decaying learning rate are feasible techniques.
Figure 4: MSV versus Training time (epochs) on the validation dataset. The neural network with cross entropy (red line) is equivalent to that with MSE (blue line), except that the initial learning rates were chosen independently to make training as good as possible.
In the second group of experiments, we performed several quantitative comparisons with the state-of-the-art methods MATE, ChESS, ROCHADE, and OCamCalib with respect to accuracy, missed corner rate, double detection rate, and the number of false positives on the testing datasets. For every image, the distance between each detected corner and the closest ground-truth corner is calculated; if this distance is less than five pixels, the detected point is regarded as a true corner. Accuracy denotes the average of these distances. The missed corner rate shows how many ground-truth corners are detected as non-corners, and the double detection rate indicates how often several nearby points are detected for the same corner. False positives count how many non-corner locations are detected as corners over the whole dataset.
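Per image, these metrics can be computed as in the sketch below and then aggregated over the dataset; the exact tie-breaking rules are our assumptions, as only the five-pixel matching criterion is specified.

```python
import numpy as np
from scipy.spatial.distance import cdist

def evaluate(detections, ground_truth, max_dist=5.0):
    """Accuracy, missed-corner rate, double-detection rate, and false
    positives for a single image."""
    det = np.asarray(detections, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    if len(det) == 0:
        return dict(accuracy=np.nan, missed=1.0, double=0.0, false_pos=0)
    d = cdist(det, gt)
    nearest_gt = d.argmin(axis=1)        # closest ground truth per detection
    nearest_dist = d.min(axis=1)
    true_mask = nearest_dist < max_dist  # detections counted as true corners
    matched, counts = np.unique(nearest_gt[true_mask], return_counts=True)
    return dict(
        accuracy=float(nearest_dist[true_mask].mean()) if true_mask.any()
                 else np.nan,
        missed=1.0 - len(matched) / len(gt),        # ground truths not found
        double=float((counts > 1).sum()) / len(gt), # multiply-detected corners
        false_pos=int((~true_mask).sum()),
    )
```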
Public implementations of the last three algorithms were used in this evaluation. However, since the training details and hyperparameters of MATE are not available, we used the best published results reported in [16]. The results of these experiments are shown in Table 1 and Table 2.
The results above show that the proposed model does not lose performance compared to these state-of-the-art methods, in terms of either precision or recall. Regarding accuracy, missed corner rate, and double detection rate, our algorithm achieves much better detection results than MATE, as well as ChESS and ROCHADE. In particular, accuracy can be further improved by using sub-pixel precision approaches [13, 25]. By adopting a threshold relative to the maximum response, NMS, and clustering, the number of false positives is reduced significantly compared to MATE, and even remains zero on the images with lens distortion. OCamCalib performs best on the two datasets, but it requires the number of squares in the checkerboard pattern in advance (which is why it produces no false positives or double detections), and it can only be used on checkerboards with a wide white border. Thus our model outperforms all tested methods in generalization performance without any prior knowledge and is more adaptable to complex scenarios such as checkerboards with intensity reversal.
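A compact sketch of the thresholding and non-maximum suppression steps is given below (the clustering step is omitted); the relative threshold and window size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def select_corners(response, rel_thresh=0.3, win=5):
    """Keep pixels that exceed a threshold relative to the maximum response
    and are local maxima within a (2*win+1)^2 neighborhood."""
    mask = response >= rel_thresh * response.max()
    local_max = response == maximum_filter(response, size=2 * win + 1)
    ys, xs = np.nonzero(mask & local_max)
    return np.stack([xs, ys], axis=1)  # candidate corner locations (x, y)
```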
| Method | Accuracy (px) | Missed Corners (%) | Double Detections (%) | False Positives |
| --- | --- | --- | --- | --- |
| CCDN | 0.812 | 1.169 | 0.000 | 93 |
| MATE | 1.009 | 3.065 | 0.809 | 492 |
| ChESS | 0.946 | 3.398 | 0.000 | 11 |
| ROCHADE | 1.510 | 2.895 | 0.000 | 1 |
| OCamCalib | 0.319 | 0.000 | 0.000 | 0 |

Table 1: Results on the uEye dataset
| Method | Accuracy (px) | Missed Corners (%) | Double Detections (%) | False Positives |
| --- | --- | --- | --- | --- |
| CCDN | 0.576 | 0.907 | 0.000 | 0 |
| MATE | 0.835 | 4.566 | 4.556 | 389 |
| ChESS | 1.389 | 5.481 | 0.222 | 56 |
| ROCHADE | 1.807 | 5.593 | 0.000 | 3 |
| OCamCalib | 0.458 | 0.537 | 0.000 | 0 |

Table 2: Results on the GoPro dataset
## 5 Conclusion
In this paper we have presented a novel checkerboard corner detection algorithm that finds the inner corners of checkerboards with high robustness in most situations. The algorithm consists of a checkerboard corner detection network (CCDN) and several post-processing techniques. CCDN is a fully convolutional neural network containing six convolutional layers and about 16000 parameters. It realizes a complex and efficient combination of detected features to select checkerboard corner candidates, with two improvements concerning the loss function and the learning rate. A threshold relative to the maximum response, non-maximum suppression, and clustering are combined as post-processing to eliminate false positives in different cases. Quantitative comparisons on two different datasets show that it outperforms the state-of-the-art methods MATE, ChESS, ROCHADE, and OCamCalib without any prior knowledge of the checkerboard pattern. It can thus be seen as a specific corner detector that is accurate, robust, and suitable for automatic detection.
## Acknowledgement
This work is partially supported by the National Natural Science Foundation of China (Grant No. 51335004 and No. 91648203) and the International Science & Technology Cooperation Program of China (Grant No. 2016YFE0113600).
|
2306.08044 | Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning
Approach to Critical Care | Most medical treatment decisions are sequential in nature. Hence, there is
substantial hope that reinforcement learning may make it possible to formulate
precise data-driven treatment plans. However, a key challenge for most
applications in this field is the sparse nature of primarily mortality-based
reward functions, leading to decreased stability of offline estimates. In this
work, we introduce a deep Q-learning approach able to obtain more reliable
critical care policies. This method integrates relevant but noisy intermediate
biomarker signals into the reward specification, without compromising the
optimization of the main outcome of interest (e.g. patient survival). We
achieve this by first pruning the action set based on all available rewards,
and second training a final model based on the sparse main reward but with a
restricted action set. By disentangling accurate and approximated rewards
through action pruning, potential distortions of the main objective are
minimized, all while enabling the extraction of valuable information from
intermediate signals that can guide the learning process. We evaluate our
method in both off-policy and offline settings using simulated environments and
real health records of patients in intensive care units. Our empirical results
indicate that pruning significantly reduces the size of the action space while
staying mostly consistent with the actions taken by physicians, outperforming
the current state-of-the-art offline reinforcement learning method conservative
Q-learning. Our work is a step towards developing reliable policies by
effectively harnessing the wealth of available information in data-intensive
critical care environments. | Ali Shirali, Alexander Schubert, Ahmed Alaa | 2023-06-13T18:02:57Z | http://arxiv.org/abs/2306.08044v2 | # Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning Approach to Critical Care
###### Abstract
Most medical treatment decisions are sequential in nature. Hence, there is substantial hope that reinforcement learning may make it possible to formulate precise data-driven treatment plans. However, a key challenge for most applications in this field is the sparse nature of primarily mortality-based reward functions, leading to decreased stability of offline estimates. In this work, we introduce a deep Q-learning approach able to obtain more reliable critical care policies. This method integrates relevant but noisy intermediate biomarker signals into the reward specification, without compromising the optimization of the main outcome of interest (e.g. patient survival). We achieve this by first pruning the action set based on all available rewards, and second training a final model based on the sparse main reward but with a restricted action set. By disentangling accurate and approximated rewards through action pruning, potential distortions of the main objective are minimized, all while enabling the extraction of valuable information from intermediate signals that can guide the learning process. We evaluate our method in both off-policy and offline settings using simulated environments and real health records of patients in intensive care units. Our empirical results indicate that pruning significantly reduces the size of the action space while staying mostly consistent with the actions taken by physicians, outperforming the current state-of-the-art offline reinforcement learning method conservative Q-learning. Our work is a step towards developing reliable policies by effectively harnessing the wealth of available information in data-intensive critical care environments.1
Footnote 1: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
## 1 Introduction
Specifying a reward function poses a challenge in many (applied) reinforcement learning (RL) settings. In the presence of multiple desirable factors, it is often unclear how to obtain a single scalar reward. Additionally, even when a single outcome is of interest, it may be too rare to learn from directly with limited data. To address this issue, less accurate but more frequent reward proxies can be incorporated. However, if these proxies are included in the reward specification without sufficient consideration, the resulting policy may deviate from optimizing the true reward.
This challenge frequently arises in the medical context where RL's promise is to obtain personalized treatment policies. Many medical decisions require physicians to periodically select among multiple treatments or to prescribe the right diagnostic tests at the right time. The sequential nature of these problems and the increasing availability of health data make it a potential use case of RL. In many settings, patient survival is a critical outcome of interest, but it is a rare event that only provides a delayed and sparse signal. As an alternative, medical risk scores can be used as more frequent intermediate rewards. However, these scores are only approximations of a patient's true individualized severity and may be biased for certain populations (Miller et al., 2021). Hence, determining the most effective way to integrate information from these intermediate signals into RL policies still remains a challenge.
In recent years, there has been a surge of interest in exploring the prospects of RL in critical care settings (Henry et al., 2015; Prasad et al., 2017; Raghu, Komorowski, Celi, et al., 2017; Raghu, Komorowski, and Singh, 2018; Cheng, Prasad, and Barbara E. Engelhardt, 2018; Komorowski et al., 2018; Lin et al., 2018; Peine et al., 2021; Fatemi, Killian, et al., 2021). Different studies have taken different approaches towards defining the reward function. For instance, focusing on the task of managing vasopressor and intravenous fluids in sepsis patients, Komorowski et al. (2018) solely reward the agent based on 90-day survival, while Raghu, Komorowski, Celi, et al. (2017) have further included intermediate rewards based on the SOFA score (Vincent et al., 1996) and arterial lactate levels.
As mentioned earlier, directly incorporating more frequent but less accurate intermediate reward signals comes at a cost. Distortions from these rewards can cause the final policy to deviate from the primary outcome of interest. Furthermore, the varying complexity in the structures of different rewards can pose unique learning challenges for an RL agent. As a result, there is a risk that the agent could primarily optimize toward rewards that are easier to learn, inadvertently neglecting the genuine, more complex outcomes of interest.
In order to overcome this challenge, we propose a new two-stage algorithm that partially disentangles learning from inaccurate intermediate rewards and learning from the sparse reward of interest. In the first stage, we propose a multi-objective Q-learning algorithm that enables us to _prune_ the action set based on all available rewards without committing to an explicit relationship between them. Pruning is a relaxed version of a policy that, instead of prescribing a single action to take, removes unpromising actions that are unlikely to improve any linearly scalarized reward. This relaxation minimizes the possible distortion due to less accurate rewards while guiding the second stage, in which we run standard Q-learning on the pruned action set.
We evaluate our framework in both off-policy and offline settings using simulated environments and real data. In the off-policy setting of the OpenAI Gym Lunar Lander environment (Brockman et al., 2016), we show that our algorithm can effectively prune the action space and that this reduction in complexity significantly improves subsequent learning based on the sparse reward.
We further evaluate our framework in an offline setting using real health records of septic patients in the intensive care unit (ICU). We demonstrate that simply linearly combining rewards does not effectively improve standard Q-learning. However, by using our two-stage algorithm, medical scores can effectively enhance the policy outcome. Additionally, we find that the pruning step significantly reduces the number of available actions without discarding the actions chosen by physicians in the majority of cases.
A desirable characteristic of a reliable offline RL policy is to adhere to the expert policy, in this case, the physician policy, as closely as possible while incurring minimal performance losses compared to an unrestricted policy. Conservative Q-learning (CQL) (Kumar et al., 2020) is a prevalent method used to enforce such similarity. Our results demonstrate that our method achieves a high similarity to physician actions comparable with CQL while attaining superior performance. This indicates that our approach is promising in obtaining relevant offline policies while avoiding out-of-distribution actions.
Reinforcement learning holds the promise of deriving personalized treatment policies in healthcare that cater to the unique history of each patient. However, achieving this requires models that can effectively assimilate information from diverse inputs, while also being able to accurately distinguish between key outcomes and potentially biased or less accurate proxies. Our work presents a new approach that takes us closer to achieving this goal. Our two-stage algorithm allows us to effectively integrate information from intermediate severity proxies, resulting in better performance compared to standard conservative Q-learning methods. This is particularly important in critical care processes, where intermediate indicators are abundant, and it is crucial that computational methods can accurately respond to these signals when providing recommendations.
The code to derive the presented results is available at:
[https://github.com/alishiraliGit/multi-criteria-deep-q-learning](https://github.com/alishiraliGit/multi-criteria-deep-q-learning)
## 2 Related Work
Multi-Objective Reinforcement Learning.The main body of work on multi-objective (or multi-criteria) reinforcement learning (RL) can be divided into single-policy methods and multiple-policy methods. We refer the reader to Roijers et al. (2013) and Hayes et al. (2022) for a survey of the field. Single-policy methods (Mannor and Shimkin, 2001; Tesauro et al., 2007) reduce multiple objectives into a single scalar
reward assuming a known user-specified or context-driven preference over different objectives and then seek to maximize the scalarized objective using standard RL techniques. The major difference between different single-policy approaches lies in the way in which they attempt to derive preferences over objectives. These methods, however, cannot be used when the preferences themselves are unknown.
In contrast, multi-policy methods aim to estimate the Pareto frontier based on a set of policies. One way to achieve this is by training an ensemble of policies based on different reward scalarizations (Natarajan and Tadepalli, 2005; Van Moffaert, Drugan, and Nowe, 2013; Mossalam et al., 2016; Cheng, Prasad, and Barbara E Engelhardt, 2018; Xu et al., 2020). These methods require exhaustive training and in some cases non-trivial scalarizations. Other methods extend the standard scalar variables in RL algorithms to vector variables and use updating rules in the vector space. Most related to our proposed method of multi-objective Q-learning (Section 4.1) are such value-based methods. Early work in this direction explored the problem of acquiring all Pareto optimal policies simultaneously (Barrett and Narayanan, 2008; Hiraoka, Yoshida, and Mishima, 2008; Iima and Kuroe, 2014). Their methods focused on applications in online settings and on small, finite-state spaces. Lizotte and Laber (2016) extended Barrett and Narayanan (2008)'s framework to real-valued state features but with an exponential complexity in time horizon and in the size of the action space. In contrast, our deep Q-learning method runs for arbitrarily long trajectories in polynomial time by using an approximate problem formulation. Furthermore, our familiar formulation enables our method to leverage the recent advances in offline Q-learning.
Overall, a key focus of this research area lies in either inferring optimal preference weights or directly uncovering a set of Pareto-optimal policies. Instead, to the best of our knowledge, this study is the first to leverage multi-objective RL to prune the action space by identifying and eliminating inferior state-action pairs in order to reduce the complexity of the learning problem for subsequent learning tasks.
Non-Deterministic Policies.While the mathematical framework we employ to devise a pruning function in phase 1 of our algorithm echoes certain aspects of the research on non-deterministic policies (or set-valued policies) (Fard and Pineau, 2011; Tang, Modi, M. Sjoding, et al., 2020), our methodology diverges in various ways. Firstly, non-deterministic policies sacrifice performance to provide a set of potential solutions instead of a single distinct action. In contrast, our method intends to improve policy performance. This is achieved as it combines a set-valued policy formulation to prune the action space in phase 1, allowing for more effective policies in phase 2. While non-deterministic policies remain prescriptive, this way our pruning function largely acts as a prohibitive measure that eliminates suboptimal choices from the action set.
Reinforcement Learning in Health.In recent years, RL has been the focus of several studies in healthcare. See (Liu et al., 2020) for a review. There has been a particular focus on personalized treatment plans for sepsis patients in the ICU (Henry et al., 2015; Futoma et al., 2017; Raghu, Komorowski, Celi, et al., 2017; Komorowski et al., 2018; Saria, 2018; Peng et al., 2018; Tang, Modi, M. W. Sjoding, et al., 2020; Raghu, Komorowski, and Singh, 2018). However, these studies either leverage sparse mortality information alone to assign rewards or combine multiple intermediate sepsis risk proxies based on a deterministic weight. To the best of our knowledge, our approach is the first attempt to explicitly address the challenges imposed by sparse rewards in this context by proposing to prune the action space before estimating a final policy.
Dead-End State Identification.Given the focus of the pruning stage of our algorithm on eliminating dominated actions, our method is related to work on dead-end identification in RL (Irpan et al., 2018; Fatemi, Sharma, et al., 2019; Fatemi, Killian, et al., 2021). For instance, Fatemi, Killian, et al. (2021) developed an RL algorithm to identify and avoid "medical dead-end" states, defined as states from which no action can be made to achieve a positive terminal outcome (e.g. survival). While the aim of this method is conceptually close to the goal of our pruning stage, we propose a distinct technical approach to achieve this goal. Fatemi, Killian, et al. (2021)'s work focuses on classifying states that should be avoided; in contrast, our method directly identifies and excludes dominated state-action pairs. Furthermore, our method does not stop at the identification of risky actions but instead leverages the pruned action space for the subsequent training of an optimal policy.
## 3 Notation and Preliminaries
Reinforcement Learning.Following the terminology commonly adopted in the area of RL, we imagine an agent interacting with an environment. As the agent executes an _action_, denoted as \(a\in\mathcal{A}\), in a given _state_, denoted as \(s\in\mathcal{S}\), a _reward_, expressed as \(r(s,a)\), materializes, and the environment's state subsequently updates to \(s^{\prime}\). The agent can then utilize this reward as feedback to make more optimal future action choices. The agent's decision-making process is often elucidated through a _policy_. In general, a policy \(\pi:\mathcal{S}\to\Delta(\mathcal{A})\) provides a distribution over potential actions for each state. A deterministic policy \(\pi:\mathcal{S}\to\mathcal{A}\) singles out a specific action given the current state. Our study particularly focuses on _offline_ RL, where the aim is to devise an effective policy based on previously collected data. Simultaneously, our algorithms are also applicable in _off-policy_ settings where the agent interacts with the environment at selected time steps. We denote the available data as a set of transitions \(\mathcal{D}\). Each transition is a tuple \((s,a,s^{\prime},r)\) demonstrating that at state \(s\), the action \(a\) is taken, which resulted in a state transition to \(s^{\prime}\) and a reward of \(r\).
Q-Learning.A standard (scalar) _Q-function_\(Q^{\pi}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) estimates the value of policy \(\pi\) at state \(s\) given that the agent takes action \(a\) at \(s\) and follows policy \(\pi\) thereafter. This value signifies a discounted sum of rewards, with any rewards realized at a later timestep being discounted by a factor of \(\gamma\). At times, we omit the superscript \(\pi\) as our focus is on the optimal policy. The deep learning structure that implements the Q-function is occasionally referred to as a _Q-network_. We use the terms Q-function and Q-network interchangeably. In the context of Q-Learning, we sample a batch of transitions \(\mathcal{B}\subseteq\mathcal{D}\) and employ the Bellman equation to update the Q-network:
\[Q(s,a)\gets r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime}). \tag{1}\]
Let the Q-network be parameterized by \(\theta\). Then the above notation is a shorthand to update \(\theta\) from its current value \(\theta_{0}\) by minimizing the loss
\[\mathcal{L}(\theta)=\sum_{(s,a,s^{\prime},r)\in\mathcal{B}}\bigl{(}Q_{\theta} (s,a)-r-\gamma\max_{a^{\prime}}Q_{\theta_{0}}(s^{\prime},a^{\prime})\bigr{)}^ {2}. \tag{2}\]
Commonly, we implement this update via a single or a couple of gradient descent steps. However, Q-learning based on the above formulation suffers from a fast-moving target and overestimation. These challenges can be addressed by employing a target network \(Q^{\prime}\) along with an update rule, recognized as double Q-learning (Van Hasselt, Guez, and Silver, 2016):
\[Q(s,a)\gets r+\gamma\,Q^{\prime}\bigl{(}s^{\prime},\operatorname*{arg\, max}_{a^{\prime}}Q(s^{\prime},a^{\prime})\bigr{)}. \tag{3}\]
In the context of double Q-learning, the target network \(Q^{\prime}\) is updated by \(Q^{\prime}\gets Q\) after hundreds or thousands of updates to \(Q\). The final deterministic policy is then extracted as \(\pi(s;Q)=\operatorname*{arg\,max}_{a}Q(s,a)\).
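In a deep Q-network implementation, the target of Equation 3 might be computed as in the following PyTorch sketch; the terminal-state masking via `done` is an implementation detail we add for completeness, not part of the equation.

```python
import torch

def double_q_target(q_net, target_net, r, s_next, done, gamma=1.0):
    """Bellman target of Eq. (3): the online network picks the action,
    the target network evaluates it."""
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)    # argmax_a' Q
        q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # Q'(s', a*)
        return r + gamma * (1.0 - done) * q_eval
```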
(Suboptimal) Q-Learning With Softmax Policies.In the original Bellman equation, the optimal policy is greedy and deterministic with respect to the Q-values, achieved through the use of the \(\operatorname*{arg\,max}\) function to locate the most effective action. However, empirical evidence has shown that applying a softmax operator tends to yield superior policies, particularly in the context of deep Q-networks. This relaxation has also garnered recent theoretical support (Song, Parr, and Carin, 2019). Formally, given a Q-function \(Q\), we can derive a stochastic softmax policy \(\pi^{\beta}\) as
\[\pi^{\beta}(a|s;Q)=\frac{\exp\big{(}\beta Q(s,a)\big{)}}{\sum_{\bar{a}}\exp \big{(}\beta Q(s,\bar{a})\big{)}}, \tag{4}\]
where \(\beta\) is an inverse temperature parameter. Note that in the limit of \(\beta\to\infty\), the softmax policy converges to a deterministic \(\operatorname*{arg\,max}\) policy. Let us define
\[\operatorname*{softmax}_{a^{\prime}}Q(s^{\prime},a^{\prime})\coloneqq\sum_{a^ {\prime}}\pi^{\beta}(a^{\prime}|s^{\prime};Q)\,Q(s^{\prime},a^{\prime}). \tag{5}\]
We dropped the dependence on \(\pi^{\beta}\) from the softmax notation for convenience. Note that \(\operatorname{softmax}_{a^{\prime}}Q(s^{\prime},a^{\prime})\to\max_{a^{\prime}}Q(s^{ \prime},a^{\prime})\) as \(\beta\to\infty\). A Bellman update using softmax can then be obtained by substituting the max with a softmax operator in Equation 1. Similarly, we can implement double Q-learning using a softmax policy:
\[Q(s,a)\gets r+\gamma\sum_{a^{\prime}}\pi^{\beta}(a^{\prime}|s^{\prime};Q) \,Q^{\prime}\big{(}s^{\prime},a^{\prime}\big{)}. \tag{6}\]
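The softmax policy of Equation 4 and the corresponding backup operator of Equation 5 translate directly into code; a minimal PyTorch sketch:

```python
import torch

def softmax_policy(q_values, beta):
    """pi^beta(a|s) of Eq. (4); q_values has shape (batch, |A|)."""
    return torch.softmax(beta * q_values, dim=1)

def softmax_backup(q_values, beta):
    """The softmax operator of Eq. (5): the expectation of Q under pi^beta."""
    return (softmax_policy(q_values, beta) * q_values).sum(dim=1)
```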
Vector-Valued Reward and Q-Function.In many practical scenarios, the reward from an interaction can have multiple dimensions. For example, consider the case where there is a sparse main reward \(r_{*}\) as well as a set of noisy but more frequent rewards. In these cases, we gather all the rewards into a _vector reward_\(\mathbf{r}\in\mathbb{R}^{d}\) and use \(r_{i}\) to refer to the \(i^{\text{th}}\) reward. We introduce a _vector-valued Q-function_\(\mathbf{Q}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{d}\) which for any state-action pair outputs a vector of estimated values. \(Q_{i}\) denotes the \(i^{\text{th}}\) dimension of \(\mathbf{Q}\) that encodes the value from \(r_{i}\). We discuss learning vector-valued Q-functions in Section 4.1.
## 4 Methods
The main objective of this study is to formulate an approach that facilitates the efficient incorporation of frequent yet imprecise reward signals while preventing such signals from causing the policy to diverge from maximizing the primary sparse reward of interest. We hypothesize that while some biomarkers or clinical risk scores might be too noisy to be directly used in deriving an optimal policy, they may still be useful to _identify actions to avoid_. Hence, we propose a two-stage algorithm: In the first phase, we learn a vector-valued Q-function relying on (noisy) intermediate rewards. This Q-function will be the basis to _prune_ the action set at each state. Then, in the second phase, we search for the optimal policy based on the (accurate) sparse reward. Actions dropped in the first phase won't be available to the policy of the second phase, which reduces the complexity of the learning problem and thus facilitates learning from the sparse reward.
### Phase 1: Multi-Objective Deep Q-Learning
In this section, we first review direct ways to incorporate noisy intermediate reward signals into a reinforcement learning objective through explicit scalarization and discuss why this is challenging in our setting. We then motivate and propose a vector-valued Q-learning algorithm that avoids any explicit scalarization. We later augment our method for the offline setting by applying techniques from conservative Q-learning. We conclude the description of phase 1 by developing a stochastic policy based on the trained vector-valued Q-function.
Challenges of Single-Policy Methods.A direct method to combine a sparse main reward and intermediate reward signals is the scalarization of the reward vector \(\mathbf{r}\). In the multi-objective RL literature, this approach is generally referred to as single-policy methods. In its simplest form, consider a linearly weighted combination \(\mathbf{w}^{\mathsf{T}}\mathbf{r}\), where \(\mathbf{w}\in\mathbb{R}^{d}\) determines the relative importance of each reward aspect. We assume \(w_{i}\geq 0\) and \(\sum_{i=1}^{d}w_{i}=1\). Leveraging the combined reward as the standard Q-learning objective, we can then obtain the Q-function \(Q^{\mathbf{w}}\) and its corresponding (possibly stochastic) policy \(\pi^{\beta}(\cdot|\cdot;Q^{\mathbf{w}})\). However, this approach comes with multiple challenges. First, we might not know the best \(\mathbf{w}\) a priori. The selection of the weighting \(\mathbf{w}\) entails a trade-off between the smoothness of learning and the accuracy of approximating the main reward: the stronger a frequent intermediate reward is represented in the combined reward, the smoother the learning, but since the intermediate rewards carry a high level of noise, the final policy might deviate from maximizing \(r_{*}\) in an unwanted way. Second, a good \(\mathbf{w}\) to integrate the noisy intermediate reward signals might not exist at all. To facilitate learning, the intermediate rewards must be present in the scalarized reward with at least some minimal weight; at that point, however, their signal might already be so strong that the Q-network mainly exploits the easy-to-learn intermediate rewards. Our experiments in the context of treating sepsis in an ICU setting suggest that such direct incorporation of intermediate reward signals into the objective leads to a deterioration of policy performance compared to a policy trained on the sparse reward alone.
Challenges of Multi-Policy Methods With Ensemble of Policies.One way to get around the challenges of committing to a single weighting \(\mathbf{w}\) of rewards is to consider a set of weightings and obtain an ensemble of policies. Although this method does not yield a single policy, it can still prune the action set at each state by removing the actions not prescribed by any policy in the ensemble. The possibility of having multiple weightings addresses our uncertainty about the best weighting, but multiple challenges remain: First, as mentioned previously, there might not be a good \(\mathbf{w}\) at all. In this case, policies in the ensemble are either unstable, if their corresponding \(\mathbf{w}\) is heavily concentrated on the true sparse reward \(r_{*}\), or they are largely affected by noisy intermediate rewards. Second, training a comprehensive ensemble of models for a high-dimensional reward space is computationally expensive. To address these problems, we follow Barrett and Narayanan (2008) and Lizotte and Laber (2016) and suggest a vector-valued Q-learning approach that simultaneously learns optimal policies for several weightings of interest.
A Vector-Valued Q-Learning Approach.Given the aforementioned challenges of incorporating intermediate rewards through explicit scalarization, we present a Q-learning approach to obtain a vector-valued Q-function \(\mathbf{Q}\). We aim to obtain a vector-valued Q-function such that \(Q^{\mathbf{w}}(s,a)\approx\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s,a)\) for almost every \(\mathbf{w}\in\mathcal{W}\). Such a Q-function addresses the previously mentioned challenges as it does not commit to any explicit \(\mathbf{w}\) when incorporating intermediate rewards into learning and will subsequently be at the heart of our action-pruning approach in phase 2. In the following, we first show that a naive extension of the Bellman update to vector-valued Q-networks does not allow us to derive a universal update rule. Hence, to obtain a vector-valued Bellman update in the familiar form of Equation 1, we relax the problem in two ways: First, instead of estimating exact Q-functions, we conservatively approximate them. We show that our approximation may only underestimate values from actions unlikely to be taken by any optimal policy, and thus should have a minimal effect on performance. Second, we limit the weightings of interest by considering a prior \(\mathcal{P}\) over \(\mathcal{W}\). A good prior should be both inclusive and spread-out. It should assign enough weight to the true reward, ensuring its presence in the pruning policy with a non-negligible probability, while avoiding concentration that leads to the same problems as fixing a specific \(\mathbf{w}\).
First, to show that a naive extension of the Bellman update of \(Q^{\mathbf{w}}\) cannot provide a universal updating rule for \(\mathbf{Q}\), consider a fixed \(\mathbf{w}\) and rewrite the Bellman update of \(Q^{\mathbf{w}}\) assuming \(Q^{\mathbf{w}}(s,a)=\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s,a)\):
\[\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s,a)\leftarrow\mathbf{w}^{\mathsf{T}}\mathbf{r}+\gamma\operatorname*{softmax}_{a^{\prime}}\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s^{\prime},a^{\prime}). \tag{7}\]
The above update depends on \(\mathbf{w}\) and in general, cannot be true for every \(\mathbf{w}\in\mathcal{W}\). To observe this, note that the left-hand side of Equation 7 is linear in \(\mathbf{w}\), but the right-hand side is not.2
Footnote 2: In the case of deterministic policies, the right-hand side will be piecewise linear.
To make the update of Equation 7 universally true for every \(\mathbf{w}\), we propose a linear conservative approximation:
\[\operatorname*{softmax}_{a^{\prime}}\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s^{\prime},a^{\prime})\approx\sum_{a^{\prime}}\pi^{\beta}(a^{\prime}|s^{\prime};\hat{\mathbf{w}}^{\mathsf{T}}\mathbf{Q})\;\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s^{\prime},a^{\prime}), \tag{8}\]
where \(\hat{\mathbf{w}}\sim\mathcal{P}\). To see that this approximation is conservative for any \(\hat{\mathbf{w}}\), note that for large values of \(\beta\), \(\pi^{\beta}\) converges to a deterministic policy which chooses \(\hat{a}=\operatorname*{arg\,max}_{a^{\prime}}\hat{\mathbf{w}}^{\mathsf{T}}\mathbf{Q}(s^{\prime},a^{\prime})\) and we have
\[\operatorname*{softmax}_{a^{\prime}}\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s^{\prime},a^{\prime})\rightarrow\max_{a^{\prime}}\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s^{\prime},a^{\prime})\geq\mathbf{w}^{\mathsf{T}}\mathbf{Q}(s^{\prime},\hat{a}). \tag{9}\]
Using the approximation in Equation 8, both sides of Equation 7 become linear in \(\mathbf{w}\) and we can thus derive a vector update for \(\mathbf{Q}\) independent of \(\mathbf{w}\):
\[\mathbf{Q}(s,a)\leftarrow\mathbf{r}+\gamma\sum_{a^{\prime}}\pi^{\beta}(a^{\prime}|s^{\prime};\hat{\mathbf{w}}^{\mathsf{T}}\mathbf{Q})\;\mathbf{Q}(s^{\prime},a^{\prime}). \tag{10}\]
The approximation of Equation 8 can be loose for a specific \(\mathbf{w}\) if the optimal action for \(\hat{\mathbf{w}}\) at state \(s^{\prime}\) diverges significantly from the optimal action for \(\mathbf{w}\). We address this potential issue in the following manner. First, we consider a prior \(\mathcal{P}\) over \(\mathcal{W}\). This prior can rule out implausible weightings. Second, when considering a fixed \(\mathbf{w}\), if \(\pi^{\beta}(a|s;\mathbf{w}^{\mathsf{T}}\mathbf{Q})\ll 1\), we do not need to worry about the approximation being inaccurate as it most likely underestimates the value of an action unlikely to be taken. Conversely, if \(\pi^{\beta}(a|s;\mathbf{w}^{\mathsf{T}}\mathbf{Q})\) is substantial,
we desire a tight approximation. Hence, we only require \(\hat{\mathbf{w}}\) to be a close approximation for a \(\mathbf{w}\) that results in the selection of action \(a\) at state \(s\). Formally, given the prior \(\mathcal{P}\) and the observation that action \(a\) is taken at state \(s\), we can procure a posterior over \(\mathcal{W}\):
\[\mathcal{P}(\mathbf{w}|s,a)\propto\mathcal{P}(\mathbf{w})\;\pi^{\beta}(a|s;\mathbf{w}^{\mathsf{T}}\mathbf{Q}). \tag{11}\]
We then draw \(\hat{\mathbf{w}}\) from this posterior. Any posterior sampling method can be used here. For simplicity, we use one iteration of particle filtering, which essentially amounts to sampling particles from \(\mathcal{P}\), reweighting them according to \(\pi^{\beta}\), normalizing the weights, and then resampling.
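A minimal sketch of this sampling step, assuming the likelihood \(\pi^{\beta}(a|s;\mathbf{w}^{\mathsf{T}}\mathbf{Q})\) of the observed action is available as a callable, could look as follows:

```python
import numpy as np

def sample_posterior_w(prior_particles, likelihood_fn, rng=None, n_out=1):
    """One iteration of particle filtering for the posterior of Eq. (11):
    reweight prior samples of w by pi^beta(a|s; w^T Q), then resample.
    prior_particles: (n, d) array of weightings drawn from the prior P;
    likelihood_fn(w): probability of the observed action a at state s."""
    rng = rng or np.random.default_rng()
    weights = np.array([likelihood_fn(w) for w in prior_particles])
    weights = weights / weights.sum()   # normalize to a distribution
    idx = rng.choice(len(prior_particles), size=n_out, p=weights)
    return prior_particles[idx]
```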
To further improve the stability and performance of the algorithm, we use double-Q learning (Van Hasselt, Guez, and Silver, 2016) by introducing another vector-valued Q-network \(\mathbf{Q}^{\prime}\) as the target network. The target network \(\mathbf{Q}^{\prime}\) is updated after multiple updates of \(\mathbf{Q}\) by copying the weights from \(\mathbf{Q}\). Our update rule for \(\mathbf{Q}\) is:
\[\mathbf{Q}(s,a)\leftarrow\mathbf{r}+\gamma\sum_{a^{\prime}}\pi^{\beta}(a^{\prime}|s^{\prime};\hat{\mathbf{w}}^{\mathsf{T}}\mathbf{Q})\;\mathbf{Q}^{\prime}(s^{\prime},a^{\prime}), \tag{12}\]
where we draw \(\hat{\mathbf{w}}\) from posterior \(\mathcal{P}(\cdot|s,a)\). We will refer to this method as _multi-objective Q-learning (MQL)_ in the following sections.
Applying Techniques From Conservative Q-Learning.Offline Q-learning is prone to overestimating Q-values for states and actions not present in the dataset. Conservative Q-learning (CQL) has been proposed to mitigate this issue (Kumar et al., 2020). CQL seeks to prevent the overestimation of Q-values for out-of-distribution \((s,a)\) pairs by enforcing a conservative loss function. Note that our proposed approximate update rule of Equation 12 is already conservative. However, we can further increase its conservativeness towards actions that are already present in the data by applying a technique similar to CQL. Specifically, let \(\mathbf{Q}\) be parameterized by \(\theta\). We use an augmented loss function
\[\begin{split}\mathcal{L}_{\alpha}\big{(}\theta;(s,a,s^{\prime}, \mathbf{r})\big{)}=\mathcal{L}\big{(}\theta;(s,a,s^{\prime},\mathbf{r})\big{)}\\ +\frac{\alpha}{d}\sum_{i\in[d]}\Big{[}\log\big{(}\sum_{\bar{a}} \exp Q_{i,\theta}(s,\bar{a})\big{)}-Q_{i,\theta}(s,a)\Big{]},\end{split} \tag{13}\]
where \(\mathcal{L}\big{(}\theta;(s,a,s^{\prime},\mathbf{r})\big{)}\) is the loss from the updating rule of Equation 12:
\[\big{\|}\mathbf{Q}_{\theta}(s,a)-\mathbf{r}-\gamma\sum_{a^{\prime}}\pi^{\beta}(a^{\prime}|s^{\prime};\hat{\mathbf{w}}^{\mathsf{T}}\mathbf{Q})\;\mathbf{Q}^{\prime}(s^{\prime},a^{\prime})\big{\|}_{2}^{2}. \tag{14}\]
For a sampled batch of data \(\mathcal{B}\), the overall loss thus becomes:
\[\mathcal{L}_{\alpha}=\sum_{(s,a,s^{\prime},\mathbf{r})\in\mathcal{B}}\mathcal{L}_ {\alpha}\big{(}\theta;(s,a,s^{\prime},\mathbf{r})\big{)}. \tag{15}\]
We will refer to this method as _multi-objective conservative Q-learning (MCQL)_.
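The conservative term of Equation 13 for a vector-valued Q-network might be implemented as in the following PyTorch sketch; averaging over the batch instead of summing is a common implementation choice, not part of the equation.

```python
import torch

def mcql_penalty(q_vec, actions, alpha):
    """Conservative term of Eq. (13). q_vec: (batch, |A|, d) vector-valued
    Q-values; actions: (batch,) indices of the logged actions."""
    logsumexp = torch.logsumexp(q_vec, dim=1)                 # (batch, d)
    idx = actions.view(-1, 1, 1).expand(-1, 1, q_vec.size(-1))
    q_taken = q_vec.gather(1, idx).squeeze(1)                 # (batch, d)
    return alpha * (logsumexp - q_taken).mean()               # mean over i and batch
```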
A Stochastic Policy Out of Vector-Valued Q-Function.Neither MQL nor MCQL requires an explicit policy to be derived from \(\mathbf{Q}\) for training in the offline setting. However, in the off-policy setting, the agent interacts with the environment and therefore needs a policy at different points during training. We can define a stochastic policy based on \(\mathbf{Q}\) for a prior \(\mathcal{P}\) over weightings as
\[\pi^{\beta}_{\mathcal{P}}(a|s)\coloneqq\mathbb{E}_{\mathbf{w}\sim\mathcal{P}}\big{[}\pi^{\beta}(a|s;\mathbf{w}^{\mathsf{T}}\mathbf{Q})\big{]}. \tag{16}\]
In practice, the agent interacts with the environment by sampling from \(\pi^{\beta}_{\mathcal{P}}\) in two steps:
1. Draw \(\mathbf{w}\sim\mathcal{P}\).
2. Draw \(a\sim\pi^{\beta}(\cdot|s;\mathbf{w}^{\mathsf{T}}\mathbf{Q})\).
The importance of \(\pi^{\beta}_{\mathcal{P}}\) extends beyond the off-policy setting as we will use it to prune actions at each state in the next section.
### Phase 2: Q-Learning with Pruning
The new notion of stochastic policy \(\pi^{\beta}_{\mathcal{P}}\) in Equation 16 can be interpreted as a relaxation to the notion of policy with explicit dependence on a specific \(\mathbf{w}\). An important property of \(\pi^{\beta}_{\mathcal{P}}\) is its inclusiveness, meaning that as long as the accurate reward \(r_{*}\) is sufficiently strongly represented in some \(\mathbf{w}\) with a noteworthy measure, the action maximizing \(r_{*}\) will be present in \(\pi^{\beta}_{\mathcal{P}}\) with a non-negligible probability. Therefore, a reasonable way to prune the action set is to drop actions with \(\pi^{\beta}_{\mathcal{P}}(a|s)\) below a threshold. But calculating \(\pi^{\beta}_{\mathcal{P}}(a|s)\) requires a posterior calculation that can be computationally hard. Thus, we propose the following pruning function \(\Pi^{\beta}:\mathcal{S}\to 2^{\mathcal{A}}\) by sampling from \(\pi^{\beta}_{\mathcal{P}}\):
1. At state \(s\), draw \(m\) actions \(a^{(k)}\sim\pi^{\beta}_{\mathcal{P}}(\cdot|s)\) for \(k\in[m]\):
    1. Draw \(m\) samples \(\mathbf{w}^{(k)}\sim\mathcal{P}\) for \(k\in[m]\).
    2. For each \(\mathbf{w}^{(k)}\), draw \(a^{(k)}\sim\pi^{\beta}(\cdot|s;\mathbf{w}^{(k)^{\mathsf{T}}}\mathbf{Q})\).
2. Set \(\Pi^{\beta}(s)=\{a^{(k)}\mid k\in[m]\}\).
The choice of \(m\) and \(\beta\) determines the expected size of the set \(\Pi^{\beta}(s)\). As a rule of thumb, actions with \(\pi^{\beta}_{\mathcal{P}}(a|s)<1/m\) are unlikely to remain in the action set after pruning. In practice, we fix \(m\) to at least three times \(|\mathcal{A}|\) to make sure that all actions have a chance to be in \(\Pi^{\beta}(s)\), while very rare actions usually do not appear. Tuning the value of \(\beta\) then allows us to control the pruning strictness, as larger values move the policy closer to a deterministic policy. We treat \(\beta\) as a hyperparameter.
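A minimal sketch of \(\Pi^{\beta}\) at a single state, using a Dirichlet prior as in our experiments, could look as follows:

```python
import numpy as np

def prune_actions(q_vec, beta, m, rng=None):
    """Sample Pi^beta(s): m draws from pi^beta_P at a single state.
    q_vec: (|A|, d) vector-valued Q-values at state s."""
    rng = rng or np.random.default_rng()
    n_actions, d = q_vec.shape
    kept = set()
    for _ in range(m):
        w = rng.dirichlet(10.0 * np.ones(d))       # w ~ P (Dirichlet prior)
        logits = beta * (q_vec @ w)                # beta * w^T Q(s, .)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        kept.add(int(rng.choice(n_actions, p=p)))  # a ~ pi^beta(.|s; w^T Q)
    return sorted(kept)                            # the pruned action set
```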
For a choice of \(\beta\) we then run standard double-Q learning using only the sparse main reward and restricting the action set to the actions chosen by pruning function \(\Pi^{\beta}\). In doing so, we revisit Equation 3:
\[Q(s,a)\gets r+\gamma\,Q^{\prime}\big{(}s^{\prime},\operatorname*{arg\, max}_{a^{\prime}\in\Pi^{\beta}(s^{\prime})}Q(s^{\prime},a^{\prime})\big{)}. \tag{17}\]
It is worth noting that a similar modification of the loss function as in our discussion of conservative Q-learning (Kumar et al., 2020) yields a conservative version of Q-learning with pruning.
Examining Equation 17, we observe that \(\Pi^{\beta}(s^{\prime})\) is the only place where we incorporate noisy intermediate reward signals. Consequently, our final policy focuses solely on optimizing the accurate reward, ensuring that the original objective remains unaltered. We claim that the disentanglement of noisy and accurate rewards through this two-stage algorithm enables us to effectively incorporate information from intermediate signals to simplify the learning problem while minimally manipulating the policy's true objective.
We refer to the two-stage algorithm presented in this paper as _Pruned QL_, which consists of MQL in the first phase and Q-learning with pruning in the second phase. In the offline setting, we call the conservative version consisting of MCQL in the first phase and CQL with pruning in the second phase, _Pruned CQL_.
## 5 Synthetic Experiments
We first present the capability of our method within the OpenAI Gym Lunar Lander environment (Brockman et al., 2016), which provides the advantage of allowing us to observe the performance of the learned policy when rolled out. In Lunar Lander, the objective is to successfully land a small spaceship on the moon. In addition to the ultimate goal, three intermediate rewards related to the shape and fuel efficiency of the lander are also available, which can guide the landing process. The availability of a sparse main objective and informative intermediate signals makes Lunar Lander an appropriate choice for evaluating our proposed approach.
In particular, we conduct two experiments. First, we examine whether incorporating intermediate signals through action pruning is effective. To do so, we train a (double) Q-learning baseline model3 on the sparse reward and compare it to our Pruned QL method. Please refer to the note at the end of this section for the model parameters. Figure 1(a) shows the development of the best policy returns as we interact more with the environment. After \(500,000\) iterations, the baseline method struggles to learn a good policy solely based on the sparse reward and yields an average return of \(-70\), close to the minimum of \(-100\). In contrast, our Pruned QL approach, which also only leverages sparse rewards in its final learning phase, achieves a substantially higher best return of around \(50\) within a shorter time. These results hold for varying pruning strengths \(\beta\). It is important to note that Pruned QL benefits from additional reward signals during its first stage of MQL, making a direct comparison with the baseline not entirely fair. Nonetheless, these results are promising in that they underscore the potential of MQL-based action space pruning.
We next demonstrate how Pruned QL behaves as the intermediate reward signal becomes less precise. To achieve this, we conduct a second experiment in which we add white noise with standard deviations of \(0.1\), \(1\), and \(10\) to the intermediate rewards. Figure 1(b) depicts the results. We observe that Pruned QL remains robust until the white noise standard deviation is raised to \(10\), where it does only marginally better than the baseline. This result illustrates two important characteristics of our method. First, it shows that Pruned QL is generally highly robust to noise. Second, the fact that the performance drops when the noise standard deviation is increased to \(10\) indicates that the intermediate rewards considered in this method must carry some relevant information about the sparse outcome of interest. At a white noise standard deviation of \(10\), the signal-to-noise ratio (SNR) of the shape reward is less than \(0.5\) and the SNR of the fuel rewards is less than \(0.02\), diluting nearly the entirety of the reward signal. Further evidence on the noise-robustness of different pruning strengths is provided in Section A.1.
Model Parameters.In both experiments on Lunar Lander we used similar parameters. We used a Dirichlet distribution with parameter \(10\) as the prior over the weighting of rewards.4 We did not discount future rewards (\(\gamma=1\)). The Q-function is implemented as a \(3\)-layer feed-forward neural network with ReLU activation. We used the Adam optimizer with a learning rate of \(10^{-4}\). The frequency of updating the target network is once every \(3000\) updates of the Q-network. Please refer to the accompanying code for the remaining Q-learning parameters.
Footnote 4: We chose a Dirichlet distribution as our prior because it is flexible, intuitive, and often the first choice for modeling nonnegative variables that sum to one, like probabilities. Our results are not sensitive to this parameter choice as long as the chosen prior is not very spread-out or very concentrated.
## 6 Real-World Data Experiments
### Cohort and Study Design
We evaluate our framework in a real-life offline learning setting by training and evaluating policies for the management of vasopressor and intravenous (IV) fluids in septic patients at intensive care units (ICU).
Figure 1: Learning curves of Pruned QL vs. best baseline Q-network under (a) different pruning strength, and (b) varying noise level.
Existing works (Komorowski et al., 2018; Peng et al., 2018; Fatemi, Killian, et al., 2021) in this area have mainly focused on reward specifications based on 90-day mortality. In contrast, we investigate whether Pruned CQL can facilitate learning superior policies by incorporating information from intermediate severity proxies such as the SOFA score (Vincent et al., 1996) and the patient's lactate level.
Data.We use data from a cohort of septic ICU patients in the MIMIC (Medical Information Mart for Intensive Care)-III dataset (v1.4) (Johnson et al., 2016). This dataset consists of deidentified electronic health records of over \(40,000\) patients that were admitted to the critical care units of the Beth Israel Deaconess Medical Center in Boston between 2001 and 2012. To construct our sample, we followed the preprocessing steps applied in Komorowski et al. (2018). We excluded patients who were younger than 18 years at ICU admission or for whom mortality or intravenous fluid intake was not documented. A trajectory includes data from 24 hours before the sepsis onset up until 48 hours after the onset, recorded in 4-hour intervals. For every trajectory, we extracted a set of 48 variables, including demographics, the Elixhauser comorbidity index (Elixhauser et al., 1998), vital signs, laboratory test results, and medication dosing decisions. All models are trained on 80% of the data, validated on 5%, and tested on 15% for the final evaluation.
Action Set and State Space.Following previous work by Komorowski et al. (2018) and Fatemi, Killian, et al. (2021), we discretize actions into 25 treatment choices (5 discrete levels for IV fluids and 5 discrete levels for vasopressor dosage). To derive the state space, we applied K-means clustering to the 44-dimensional features and obtained 752 clusters. As the resulting cluster indicators were not numerically meaningful, we additionally applied the continuous bag of words method (Mikolov et al., 2013) to the patients' state trajectories, which provided us with a 13-dimensional representation for each cluster. Thus, our problem formulation is based on a discrete action space of 25 treatment options and a 13-dimensional state representation. The resulting dataset consists of \(20,912\) ICU stay trajectories, of which \(4,917\) resulted in patient death within 90 days of the patient's critical care visit.
Reward Specification.We assign a reward of 100 to the final state of a trajectory if the patient survives for at least 90 days past ICU admission, and a reward of \(-100\) otherwise. Achieving a low 90-day mortality is the ultimate goal of our learning agent but since this is a sparse and delayed signal, we further include four medically-motivated rewards that are observed more frequently throughout the patient's stay:
* **One-period SOFA score change**: The negative of the one-period change in SOFA score.
* **Two-period SOFA score change**: The negative of the two-period change in SOFA score.
* **One-period lactate level change**: The negative of the one-period change in the lactate level.
* **Two-period lactate level change**: The negative of the two-period change in the lactate level.
SOFA (Vincent et al., 1996) is a medical risk score that summarizes the extent of a patient's organ failure and in recent years has become a key indicator of the sepsis syndrome (Lambden et al., 2019). Arterial lactate levels are an important biomarker for septic shock because they are closely associated with cellular hypoxia. Sepsis can cause an imbalance in oxygen supply and demand, resulting in inadequate delivery of oxygen to the cells and tissues. This can lead to anaerobic metabolism and the production of lactate as a byproduct. Increased arterial lactate levels, therefore, indicate that there is a mismatch between oxygen supply and demand and that the body is experiencing hypoxia (Gernardin et al., 1996). Reward values are scaled to have a standard deviation of one before being fed to the algorithms.
Model Parameters.In this offline RL setting, we employ Pruned CQL with a conservativity level of \(\alpha=0.001\) in all experiments. As the trajectories tend to be short in this setting, we do not discount future rewards (\(\gamma=1\)). The Q-function is implemented as a 3-layer feed-forward neural network with ReLU activation. We used a Dirichlet distribution with parameter 10 as the prior \(\mathcal{P}\) but results are not sensitive to this choice as long as the prior is not very broad or very concentrated. We treated \(\beta\) as a hyperparameter that takes a value from 20, 40, or 160. We used the Adam optimizer with a learning rate of \(10^{-5}\) in the first phase and \(10^{-4}\) in the second phase. The target network is updated after every 1000 and 8000 updates of
the Q-network in the first and second phases, respectively. The baseline model employs the same parameters as the second-phase model. Please refer to the accompanying code for the remaining Q-learning parameters.
### Policy Evaluation Approach
In the offline RL setting, it is not feasible to roll out the learned policies and observe their returns. Therefore, we will evaluate the policies using weighted importance sampling. Additionally, we will use a descriptive measure to assess whether the learned Q-functions are capable of distinguishing between high and low mortality trajectories.
There are several approaches to estimating policy values in offline settings, each with its own limitations. Interested readers may refer to Tang and Wiens (2021) for an empirical comparison of different methods and Gottesman et al. (2018) for practical considerations. One effective class of methods for policy evaluation in the offline setting is the class of importance sampling methods, which estimate the policy value by re-weighting trajectories based on their relative likelihood of occurring. However, this requires having a stochastic policy and knowledge of the physician's policy. To accommodate our final phase 2 deterministic policy, we thus follow the recommendation of Tang, Modi, M. W. Sjoding, et al. (2020) and soften our policy: \(\tilde{\pi}(a|s)=(1-\epsilon)\mathbbm{1}\{a=\pi(s)\}+\frac{\epsilon}{|\mathcal{A}|-1}\mathbbm{1}\{a\neq\pi(s)\}\). Here \(\pi\) and \(\tilde{\pi}\) are the original deterministic policy and its softened version, respectively, and \(\epsilon\) is a hyperparameter set to \(\epsilon=0.01\). To calculate the importance ratio, we approximate the physician's policy with a stochastic policy by training a multi-class logistic regression with cross-entropy loss. We then implement the weighted importance sampling (WIS) method to estimate policy values as it has a lower variance than ordinary importance sampling methods. Detailed information on the formulation of WIS can be found in Gottesman et al. (2018) and Tang and Wiens (2021).
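A minimal sketch of this estimator, assuming undiscounted returns (\(\gamma=1\)) and access to the fitted behavior probabilities, could look as follows; the data structures are illustrative.

```python
import numpy as np

def wis_value(trajectories, pi_eval, pi_behavior, eps=0.01, n_actions=25):
    """Weighted importance sampling value of the softened deterministic
    policy. trajectories: lists of (s, a, r) steps; pi_eval(s): the
    deterministic action; pi_behavior(a, s): estimated physician prob."""
    ratios, returns = [], []
    for traj in trajectories:
        rho = 1.0
        for s, a, _ in traj:
            p = 1 - eps if a == pi_eval(s) else eps / (n_actions - 1)
            rho *= p / pi_behavior(a, s)
        ratios.append(rho)
        returns.append(sum(r for _, _, r in traj))  # undiscounted, gamma = 1
    ratios = np.asarray(ratios)
    return float((ratios * np.asarray(returns)).sum() / ratios.sum())
```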
One drawback of the WIS estimator is that it is biased toward the behavioral policy. Hence, we further analyze a descriptive measure to provide a supplemental perspective to the established off-policy evaluation with WIS. Specifically, we aim to evaluate the capability of our Q-function \(Q\) to differentiate between trajectories that resulted in patient survival and those that led to patient death. This is relevant since a high capacity of a Q-function to distinguish between low- and high-risk state-action pairs increases our confidence that choosing the actions with the highest predicted Q-value is beneficial. We adopt the following approach to estimate such a descriptive measure. Consider a trajectory \(\tau\) from the test data and let \(r_{*}^{\tau}\) be the mortality-based reward assigned to its final state. As noted above, \(r_{*}^{\tau}=100\) if the patient survives and \(r_{*}^{\tau}=-100\) otherwise. We compute \(Q(s,a)\) for each state-action pair \((s,a)\in\tau\). Note that \(Q(s,a)\) is supposed to be the expected return of taking action \(a\) at state \(s\) and following the optimal policy thereafter, not the return \(r_{*}^{\tau}\) of the physician's policy. But if the optimal policy is sufficiently similar to the physicians', \(Q(s,a)\) shall reflect \(r_{*}^{\tau}\), and we would like to see a larger \(Q(s,a)\) if \(\tau\) results in patient survival. To measure this association, we bin the Q-values into quartiles and calculate the mortality rate in the lower quartile range, \(MR(Q_{1})\), and the mortality rate in the upper quartile range, \(MR(Q_{3})\). We then report the difference \(\Delta MR=MR(Q_{1})-MR(Q_{3})\). A larger \(\Delta MR\) indicates that the policy is more effective in distinguishing survival from death trajectories.
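A minimal sketch of this measure, where `died` flags 90-day mortality for the trajectory that each state-action pair belongs to, could look as follows:

```python
import numpy as np

def delta_mr(q_values, died):
    """Delta MR: mortality rate in the lowest Q-value quartile range minus
    the rate in the highest. q_values and died (0/1) are per state-action
    pair."""
    q1, q3 = np.quantile(q_values, [0.25, 0.75])
    mr_low = died[q_values <= q1].mean()    # MR(Q1)
    mr_high = died[q_values >= q3].mean()   # MR(Q3)
    return mr_low - mr_high
```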
### Policy Evaluation Results
We first observe that directly incorporating rewards based on the SOFA score and arterial lactate level does not improve performance. In fact, testing a range of relative weights, we found that the policies that performed the best were those that assigned the lowest weight to the intermediate rewards. This observation suggests that directly including intermediate signals may lead to more challenges than benefits in this setting. We further discuss this phenomenon and potential explanations for it in Section B.1.
We next evaluate whether our two-stage Pruned CQL algorithm can leverage the additional information provided by the intermediate rewards to achieve a better \(\Delta MR\) and a higher policy value estimated by WIS. The results of our evaluation are summarized in Table 1. We note four key observations. First, Pruned CQL often achieves better \(\Delta MR\) and WIS values compared to CQL, indicating that our two-stage strategy has been effective in extracting information from the intermediate rewards. Second, Pruned CQL with \(\beta=160\) is superior to the physician's policy, whose observed value is \(51.9\). Since WIS is conservative in nature, this is an encouraging performance for the obtained policy; the standard CQL method does not show such promise. Third, one potential concern regarding action pruning is that it may eliminate important options from the action space. However, even when we perform strong pruning with \(\beta=160\), the performance of the final policy is not substantially affected; in fact, the final policies became more similar to those of the physicians, suggesting that the pruning procedure primarily removed less relevant actions. Last but not least, our method often exhibits lower variance, here measured across ten random seeds, which offers higher reliability.
Furthermore, Table 1 demonstrates that more stringent pruning leads to policies that are more similar to those of physicians. This is expected, as stricter pruning corresponds to a greater reliance on intermediate severity indicators, which are important in the physician's workflow. Similarly, as expected, a higher value of the conservativity \(\alpha\) results in greater agreement between the CQL policies and the actions taken by physicians. Notably, however, at similar levels of agreement with the actions taken by physicians, the Pruned CQL policy outperforms the baseline CQL policy in terms of \(\Delta MR\) and policy value. This effect is also evident in Figure 2, where we have plotted \(\Delta MR\) and the WIS value against the similarity to physician actions. The different points in the figure correspond to varying levels of \(\alpha\) and \(\beta\). The different trends for \(\Delta MR\) and the WIS-based value might originate from the fact that WIS is biased towards policies more similar to the behavior policy. The results indicate that pruning is more effective than a standard conservative loss in learning effective RL policies while enforcing closeness to the behavior policy. Overall, this recommends action pruning as a promising avenue for incorporating human guidelines into policy optimization.
To further analyze the Q-values obtained from Pruned CQL, we calculated Q-values for each state-action pair in the test data and divided them into 25 bins. For each bin, we calculated the probability of patient survival. Figure 3 presents the survival rate per bin for Pruned CQL with a conservativity level of \(\alpha=0.001\) and pruning strictness of \(\beta=40\). The plot shows a strong positive relationship, indicating a survival rate below 30% for state-action pairs with the lowest Q-values and over 80% for those with the highest Q-values. These results suggest that the Pruned CQL policy is based on a Q-function that effectively distinguishes high- and low-risk state-action pairs, which is crucial for identifying relevant policies. In Section B.2 we demonstrate similar results for pruning strictness levels of \(\beta=20,160\).

| Model | \(\Delta MR\) (%) | Value (WIS) | Similarity to physicians (%) |
| --- | --- | --- | --- |
| CQL (\(\alpha=0.001\)) | \(24.6\pm 1.0\) | \(14\pm 18\) | \(10.4\pm 0.7\) |
| CQL (\(\alpha=0.005\)) | \(23.6\pm 1.0\) | \(35\pm 21\) | \(18.6\pm 0.6\) |
| CQL (\(\alpha=0.01\)) | \(22.6\pm 0.9\) | \(26\pm 25\) | \(26.2\pm 1.0\) |
| Pruned CQL (\(\alpha=0.001\), \(\beta=20\)) | \(24.5\pm 0.9\) | \(32\pm 12\) | \(11.3\pm 1.2\) |
| Pruned CQL (\(\alpha=0.001\), \(\beta=40\)) | \(25.2\pm 0.6\) | \(41\pm 15\) | \(15.6\pm 0.9\) |
| Pruned CQL (\(\alpha=0.001\), \(\beta=160\)) | \(24.2\pm 1.1\) | \(66\pm 19\) | \(22.1\pm 0.9\) |

Table 1: Comparison of the different methods in terms of \(\Delta MR\), WIS-based policy value, and the share of actions in common between the policy and the physicians. Standard errors are calculated over ten random seeds. The behavior policy has a return of 51.9.

Figure 2: Comparison of Pruned CQL and CQL in terms of \(\Delta MR\) and WIS-based policy value, for different degrees of overlap with the behavior policy. Dashed lines correspond to linear fits.
### Pruning Analysis
To evaluate the quality of our pruning, we investigate two key questions: 1) Does the phase-1 pruning procedure allow us to significantly reduce the size of the action space? 2) Is the resulting action space consistent with current medical practice?
Table 2 presents the average number of available actions after pruning and the corresponding recall of these action sets. As there is no ground truth best policy available, we define recall based on whether the pruned action sets contain the physician actions taken in each respective state in the test set. Since physicians are highly trained professionals and may have access to additional indicators not captured in our records, we aim to retain their decisions among the available options as much as possible. We observe that our pruning procedure significantly reduces the size of the original 25-dimensional action space. Moderate pruning with \(\beta=40\) reduces the mean action set size to less than half, while stricter pruning with \(\beta=160\) reduces the average number of available actions to almost one-sixth. However, even with such aggressive pruning, our method manages to maintain a high recall, ranging from 50% to above 90%, well beyond the chance level.
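The two statistics in Table 2 are simple to compute once the phase-1 output is available. A minimal sketch, assuming the pruned action sets are stored as a boolean mask over the 25 discrete actions (names and shapes are our own illustration):

```python
import numpy as np

def pruning_statistics(allowed_mask, physician_actions):
    """Mean pruned action-set size and recall of physician actions.

    allowed_mask      : (n_states, n_actions) boolean; True if the action
                        survived phase-1 pruning in that state.
    physician_actions : (n_states,) index of the action the physician took.
    """
    allowed_mask = np.asarray(allowed_mask, dtype=bool)
    physician_actions = np.asarray(physician_actions, dtype=int)

    mean_size = allowed_mask.sum(axis=1).mean()
    # Recall: fraction of states whose observed physician action was kept.
    kept = allowed_mask[np.arange(len(physician_actions)), physician_actions]
    recall = kept.mean()
    return mean_size, recall
```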
To investigate the pruning behavior further, Figure 4 displays the distribution of removed actions compared to the actions taken by physicians in the test set for \(\beta=40\). A similar observation holds for less or more strict pruning levels, as described in Section B.3. The figure suggests that our procedure primarily prunes extreme dosing regimes or incoherent decisions, such as a low intravenous fluid dose, while assigning a high vasopressor dose. These insights further support the idea that the pruning procedure reduces the complexity of the RL problem while retaining relevant actions for the second learning stage.
| \(\beta\) | Num. of available actions after pruning | Recall (%) |
| --- | --- | --- |
| 20 | \(19.7\pm 0.3\) | \(94.7\pm 0.3\) |
| 40 | \(11.6\pm 0.7\) | \(83.3\pm 1.4\) |
| 160 | \(4.1\pm 0.3\) | \(49.4\pm 1.9\) |

Table 2: Mean action set size and recall for different pruning levels. The initial action set size was 25.
Figure 3: Survival rate by Q-value. Rate plotted for 25 equal-sized bins.
## 7 Discussion
In this study, we have introduced a novel reinforcement learning method that utilizes action space pruning to facilitate learning when the main reward signal is sparse, intermediate signals are approximate, and there is ambiguity about the weights that should be given to optimal reward components. Our work introduces new algorithms in order to effectively integrate more frequent but imprecise reward proxies into learning.
We demonstrate in the off-policy setting of the Lunar Lander environment that action-space pruning enables Q-learning from the sparse reward alone when standard Q-learning approaches fail. Furthermore, in an offline setting aiming to design vasopressor and intravenous fluid dosing policies for septic patients in the ICU, our results indicate the effectiveness of the pruning approach to reduce the size of the action space while preserving relevant actions. Our trained policy behaves safely by staying mostly consistent with the physician's policy while achieving better performance than a conservative Q-learning policy for the same level of agreement with the physician's actions. These results indicate that our learning framework facilitates the efficient incorporation of recurrent yet imprecise reward signals while preventing such signals from causing the policy to diverge from maximizing the primary sparse reward of interest.
Though motivated by the healthcare setting, our approach is applicable to further domains in which the true reward signal of interest is sparse and the available proxies are frequent but provide only imperfect indicators of the ultimate outcome.
Limitations.While our study provides valuable insights, it has some limitations. First, our results are based on data up to the year 2012, and sepsis treatment guidelines have been evolving since then. Hence, the estimated physician policies in this work may reflect a slightly outdated standard of care. Second, our method relies on the availability of intermediate reward signals that can serve as proxies for the final outcome of interest. In cases where such meaningful signals are not available, our method may not be applicable. Finally, in this study, we have utilized weighted importance sampling (WIS) as an off-policy evaluation method. Although it is a widely accepted technique for evaluating RL policies in offline settings, it is also known to have high variance and is dependent on accurate estimation of the behavior policy (Tang and Wiens, 2021). We partially addressed this issue by evaluating our policy using a related descriptive measure that highlights consistent results with the provided WIS estimates.
Figure 4: Distribution of observed physician actions and pruned actions (\(\beta\)=40). |
2307.10522 | Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language
Models | Recent studies have revealed that the widely-used Pre-trained Language Models
(PLMs) propagate societal biases from the large unmoderated pre-training
corpora. Existing solutions require debiasing training processes and datasets
for debiasing, which are resource-intensive and costly. Furthermore, these
methods hurt the PLMs' performance on downstream tasks. In this study, we
propose Gender-tuning, which debiases the PLMs through fine-tuning on
downstream tasks' datasets. For this aim, Gender-tuning integrates Masked
Language Modeling (MLM) training objectives into fine-tuning's training
process. Comprehensive experiments show that Gender-tuning outperforms the
state-of-the-art baselines in terms of average gender bias scores in PLMs while
improving PLMs' performance on downstream tasks solely using the downstream
tasks' dataset. Also, Gender-tuning is a deployable debiasing tool for any PLM
that works with original fine-tuning. | Somayeh Ghanbarzadeh, Yan Huang, Hamid Palangi, Radames Cruz Moreno, Hamed Khanpour | 2023-07-20T01:48:51Z | http://arxiv.org/abs/2307.10522v1 | # Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
###### Abstract
Recent studies have revealed that the widely-used Pre-trained Language Models (PLMs) propagate societal biases from the large un-moderated pre-training corpora. Existing solutions require debiasing training processes and datasets for debiasing, which are resource-intensive and costly. Furthermore, these methods hurt the PLMs' performance on downstream tasks. In this study, we propose _Gender-tuning_, which debiases the PLMs through fine-tuning on downstream tasks' datasets. For this aim, Gender-tuning integrates Masked Language Modeling (MLM) training objectives into fine-tuning's training process. Comprehensive experiments show that Gender-tuning outperforms the state-of-the-art baselines in terms of average gender bias scores in PLMs while improving PLMs' performance on downstream tasks solely using the downstream tasks' dataset. Also, Gender-tuning is a deployable debiasing tool for any PLM that works with original fine-tuning.
## 1 Introduction
Pre-trained Language Models (PLMs) have achieved state-of-the-art performance across various tasks in natural language processing (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020). One of the crucial reasons for this success is pre-training on large-scale corpora, which are collected from unmoderated sources such as the internet. Prior studies (Caliskan et al., 2017; Zhao et al., 2018; May et al., 2019; Kurita et al., 2019; Gehman et al., 2020) have shown that PLMs capture a significant amount of the social biases present in the pre-training corpus. For instance, they showed that PLMs learn that the word "he" is closer to the word "engineer" because of the frequent co-occurrence of this combination in the training corpora, a phenomenon known as social gender bias. Since PLMs are increasingly deployed in real-world scenarios, there is a serious concern that they propagate discriminatory predictions and unfairness.
Several solutions for mitigating these social biases have been proposed, including: using banned word lists (Raffel et al., 2020), building deliberately curated training datasets (Bender et al., 2021), balancing the biased and unbiased terms in the training dataset (Dixon et al., 2018; Bordia and Bowman, 2019), debiasing embedding spaces (Liang et al., 2020; Cheng et al., 2021), and self-debiasing in text generation (Schick et al., 2021). Although all these solutions have shown different levels of success, they tend to limit the PLMs' abilities (Meade et al., 2022). For example, the banned-word solution prevents the model from gaining knowledge of topics related to the banned words. Also, some of them hurt the PLMs' performance on downstream tasks. Furthermore, dataset curation and pre-training are two resource-intensive tasks required by most of the above solutions (Schick et al., 2021).
In this study, we address the challenges mentioned above by proposing an effective approach named _Gender-tuning_ for debiasing PLMs through fine-tuning on downstream tasks' datasets. To this end, Gender-tuning perturbs the training examples by first finding the gender-words in the training examples based on a given gender-word list. Then Gender-tuning replaces them with new words to interrupt the association between the gender-words and the other words in the training examples (Table 1). Finally, Gender-tuning classifies the examples with the replaced words according to the original training examples' ground-truth labels and computes a joint loss from perturbation and classification for training Gender-tuning.
The key advantage of our method is integrating the debiasing process into the fine-tuning that allows the debiasing and fine-tuning to perform simultaneously. Thus, Gender-tuning does not require separate pre-training or additional training data. Also, this integration makes Gender-tuning
a plug-and-play debiasing tool for any PLM that works with original fine-tuning.
To evaluate the effectiveness of our proposed method, we conducted comprehensive experiments following two state-of-the-art debiasing baselines: SENT-DEBIAS (Sent-D) (Liang et al., 2020) and FairFil (FairF) (Cheng et al., 2021). The results show that Gender-tuning outperforms both baselines in terms of the average gender-bias scores in the BERT model while improving its performance on the downstream tasks. In addition, we reported the performance of Gender-tuning applied to the RoBERTa that shows considerable improvement. Finally, our ablation studies demonstrate that all components of Gender-tuning, including two training phases and joint loss, play an essential role in achieving success.
## 2 Methodology
We propose a novel debiasing approach, named Gender-tuning (Figure 1), that performs the debiasing process and fine-tuning simultaneously on the downstream tasks' dataset. For this aim, Gender-tuning integrates two training objectives: 1) Masked Language Modeling (MLM) training objective for gender-word perturbation and 2) Fine-tuning for classification. In each training batch, Gender-tuning works as follows:
Gender-tuning uses MLM to perturb training examples by masking the existing gender-word(s). For gender-words, we use the feminine and masculine word lists created by (Zhao et al., 2018). The MLM training objective is to predict masked token(s) with a mean cross-entropy loss that we denote as perturbation-loss (\(\mathcal{L}_{perturb}\)). The training examples with predicted tokens, called _gender-perturbed examples_ (Table 1), are fed into fine-tuning to be classified according to the original examples' ground-truth label (\(y\)). Then \(p_{\theta}(y^{\prime}=y|\hat{x})\) is the fine-tuning classification function to predict the gender-perturbed example's label (\(y^{\prime}\)) based on the gender-perturbed example (\(\hat{x}\)) to compute the fine-tuning loss (\(\mathcal{L}_{fine-tuning}\)), where \(\theta\) is the PLM's parameters for the fine-tuning. A weighted aggregation of the perturbation loss and fine-tuning loss, called joint-loss (\(\mathcal{L}_{joint}\)), is used for training the Gender-tuning as follows:
\[\mathcal{L}_{joint}=\alpha\ \mathcal{L}_{perturb}+\ (1-\alpha)\mathcal{L}_{fine-tuning} \tag{1}\]
where \(\alpha\) is a weighting factor that is employed to adjust the contribution of the two training losses in computing the joint-loss.
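For illustration, a minimal PyTorch sketch of this joint objective of Eq. (1); the tensor names and shapes are assumptions made for exposition and are not taken from the authors' implementation:

```python
import torch.nn.functional as F

def gender_tuning_loss(mlm_logits, masked_labels,
                       cls_logits, original_labels, alpha=0.5):
    """Joint loss of Eq. (1): alpha * L_perturb + (1 - alpha) * L_fine-tuning.

    mlm_logits      : (batch, seq_len, vocab) MLM predictions for the
                      masked gender-word positions.
    masked_labels   : (batch, seq_len) MLM targets; -100 on unmasked tokens.
    cls_logits      : (batch, n_classes) classifier output on the
                      gender-perturbed examples.
    original_labels : (batch,) ground-truth labels of the original examples.
    """
    # Perturbation loss: mean cross-entropy over the masked positions only.
    l_perturb = F.cross_entropy(
        mlm_logits.reshape(-1, mlm_logits.size(-1)),
        masked_labels.reshape(-1),
        ignore_index=-100,
    )
    # Fine-tuning loss: the perturbed examples keep the original labels.
    l_finetune = F.cross_entropy(cls_logits, original_labels)
    return alpha * l_perturb + (1.0 - alpha) * l_finetune
```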
The Gender-tuning training objective is to minimize joint-loss to ensure that the label of the perturbed example is the same as the label of the original training example. In the following, we present how joint-loss impacts the training process of Gender-tuning in each training batch:
Suppose the MLM predicts an incorrect token. For instance, the example "the film affirms the power of the [actress]" changes to "the film affirms the power of the [trauma]". In this example, the predicted word [trauma] is not a gender-related word, which raises the perturbation-loss value (\(\mathcal{L}_{perturb}>0\)). In this case, even if fine-tuning classifies the perturbed example correctly, the joint-loss is still large enough to force Gender-tuning to continue training.

Figure 1: Illustration of the Gender-tuning training process. The MLM and PLM are trained based on the Gender-tuning loss. Examples without any gender-word are fed directly to fine-tuning.
Also, suppose Gender-tuning creates a social gender bias through the gender perturbation. For instance, the example "angry black [actor]" changes to "angry black [woman]", where "woman" and "actor" are not semantically close, which raises the perturbation-loss value (\(\mathcal{L}_{perturb}>0\)). In this case, the output of the fine-tuning might be correct (\(\mathcal{L}_{fine-tuning}\approx 0\)) due to the PLM's learned biases ("angry black woman" is a known gender/race bias). However, due to the large perturbation-loss, the joint-loss is big enough to override the fine-tuning result and forces Gender-tuning to continue training.
Moreover, we observed that the example perturbation sometimes changes the concept/label of a training example. For instance, the input "[He] is an excellent [actor] (label: positive)" changes to "[She] is a wonderful [murderer] (label: positive)", and the fine-tuning classification output is correct (\(\mathcal{L}_{fine-tuning}\approx 0\)). In this example, the predicted word [murderer] is conceptually far from the gender-related word [actor]. The perturbation loss therefore becomes significant, which creates a big joint-loss value that forces Gender-tuning to continue training. Finally, we found examples where the MLM replaces the gender-word with the [UNK] token. In these examples, the perturbation-loss is close to zero (\(\mathcal{L}_{perturb}\approx 0\)) and the output of the fine-tuning classifier is incorrect (\(\mathcal{L}_{fine-tuning}>0\)). In this case, the joint-loss is big enough to continue training and provide a new chance for the MLM to predict a meaningful token instead of [UNK]. More analysis of our perturbation strategy can be found in Section 4.1 and Table 3.
## 3 Experimental Setup
To evaluate our proposed method, we conduct experiments by following the evaluation process of the two state-of-the-art baselines (Sent-D and FairF) such as the bias evaluation metric (SEAT), applied PLMs, and downstream tasks' datasets. (Details of the baselines, bias evaluation metric, PLMs, datasets, and hyperparameters are presented in Appendix A)
We report the SEAT effect size (e-size), average absolute e-size, and classification accuracy on downstream tasks for three different setups: 1) **Origin**: fine-tuning the PLMs on the downstream task datasets using the huggingface transformers code (Wolf et al., 2020). 2) **Gender-tuning-random**: instead of replacing the gender-words in a training example, Gender-tuning-random replaces a certain percentage of the input tokens randomly (5% of each input sequence). 3) **Gender-tuning**: the proposed method. We used the same hyperparameters for all three setups for a fair comparison.
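The difference between setups 2) and 3) lies only in how the masked positions are chosen. A minimal sketch, assuming whitespace-tokenized input and a small illustrative gender-word set (the paper uses the full lists of Zhao et al. (2018)):

```python
import random

GENDER_WORDS = {"he", "she", "him", "her", "his", "hers",
                "man", "woman", "boy", "girl"}  # illustrative subset only

def choose_mask_positions(tokens, random_masking=False, p=0.05):
    """Positions to mask: gender-words for Gender-tuning, or a random
    5% of the tokens for Gender-tuning-random."""
    if random_masking:
        k = max(1, int(p * len(tokens)))
        return set(random.sample(range(len(tokens)), k))
    return {i for i, t in enumerate(tokens) if t.lower() in GENDER_WORDS}

def mask(tokens, positions, mask_token="[MASK]"):
    return [mask_token if i in positions else t for i, t in enumerate(tokens)]

toks = "he is at 22 a powerful actor".split()
print(mask(toks, choose_mask_positions(toks)))         # Gender-tuning
print(mask(toks, choose_mask_positions(toks, True)))   # random baseline
```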
## 4 Results and Discussion
Table 2 illustrates the SEAT absolute effect size (e-size; lower is better) on sentence templates of Terms/Names under the different gender domains provided by Caliskan et al. (2017), the average absolute e-size (lower is better), and the classification accuracy on the downstream tasks (higher is better) for the three experiment setups (Section 3) and the two state-of-the-art baselines. The results show that Gender-tuning outperforms the baselines regarding the average absolute effect size for both PLMs on all datasets. Also, in contrast with the baselines, Gender-tuning improves the accuracy of both PLMs on all downstream tasks. This shows that the proposed method preserves the useful semantic information of the training data after debiasing. The Gender-tuning-random results show an inconsistent effect on the bias scores. Although Gender-tuning-random improves the PLMs' accuracy on the downstream tasks, it significantly magnifies the bias score in the BERT model on SST-2 and CoLA. Also, it slightly reduces the average bias score in RoBERTa on all datasets and in BERT on QNLI.

1. Original example: "[**he**] is at 22 a powerful [**actor**]."
   * epoch 1 \(\Rightarrow\) "[**girl**] is at 22 a powerful [**UNK**]."
   * epoch 2 \(\Rightarrow\) "[**boy**] is at 22 a powerful [**actor**]."
   * epoch 3 \(\Rightarrow\) "[**She**] is at 22 a powerful [**actress**]."
2. Original example: "[**she**] beautifully chaperon the [**girls**] in the kitchen."
   * epoch 1 \(\Rightarrow\) "[**lady**] beautifully chaperon the [**women**] in the kitchen."
   * epoch 2 \(\Rightarrow\) "[**girl**] beautifully chaperon the [**boys**] in the kitchen."
   * epoch 3 \(\Rightarrow\) "[**he**] beautifully chaperon the [**men**] in the kitchen."

Table 1: Some perturbed examples generated by Gender-tuning through three training epochs.
### Perturbation Analysis
The PLMs achieved state-of-the-art performance on the downstream tasks datasets by applying the MLM for the example perturbation in pre-training phase. Thus we hypothesize that the MLM can generate realistic gender-perturbed examples that can considerably modify the gender relation between the input tokens without affecting the label. However, there is a concern that the pre-trained MLM transfers the gender bias through the perturbation process.
To address this concern, we investigate the tokens that the pre-trained MLM predicts in place of the gender-words. We randomly select 300 examples from the training dataset, including 150 examples with feminine words and 150 examples with masculine words. Based on these 300 examples, we observe five types of perturbation, as shown through the examples in Table 3:
* **Neutral**: the gender-word is replaced with a neutral word such as "people", "they", or "their".
* **Convert-gender**: the gender-word is replaced with the opposite gender, e.g., the word "he" changes to "she".
* **Same-gender**: the gender-word is replaced with a word of the same gender, e.g., the word "man" changes to "boy".
* **Deleting**: the gender-word is replaced with the unknown token ([UNK]). In the 300 examples, this only happens when several tokens are masked.
* **Identical**: the gender-word is replaced with itself. This mostly happens when there is only one gender-word.
In our investigation of the 300 examples, we observed 46% Neutral, 29% Identical, 17% Convert-gender, 7% Same-gender, and 1% Deleting perturbations.
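The categorization itself is mechanical once the gender-word lists are fixed. A minimal sketch of how one replacement can be classified, using small illustrative word sets in place of the full lists of Zhao et al. (2018):

```python
FEMININE = {"she", "her", "woman", "girl", "actress", "women", "girls"}
MASCULINE = {"he", "him", "his", "man", "boy", "actor", "men", "boys"}
# Illustrative subsets only; the paper uses the full gender-word lists.

def perturbation_type(original, predicted):
    """Classify one gender-word replacement into the five categories."""
    if predicted == "[UNK]":
        return "deleting"
    if predicted == original:
        return "identical"
    pred_fem, pred_masc = predicted in FEMININE, predicted in MASCULINE
    if not (pred_fem or pred_masc):
        return "neutral"
    if (original in FEMININE and pred_masc) or \
       (original in MASCULINE and pred_fem):
        return "convert-gender"
    return "same-gender"

print(perturbation_type("he", "she"))    # convert-gender
print(perturbation_type("man", "boy"))   # same-gender
print(perturbation_type("his", "the"))   # neutral
```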
**SST-2**

| | BERT Origin | Sent-D | FairF | GT-random | GT (ours) | RoBERTa Origin | GT-random | GT (ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Names, Career/Family | 0.03 | 0.10 | 0.21 | 0.46 | **0.03** | 0.07 | **0.08** | 0.14 |
| Terms, Career/Family | 0.01 | 0.05 | 0.37 | **0.03** | 0.16 | 0.33 | 0.44 | **0.01** |
| Terms, Math/Art | 0.21 | 0.22 | 0.26 | **0.05** | 0.39 | 1.32 | 1.25 | **0.57** |
| Names, Math/Art | 1.15 | 0.75 | **0.09** | 0.65 | 0.31 | 1.34 | 1.12 | **1.11** |
| Terms, Science/Art | 0.10 | 0.08 | 0.12 | 0.42 | **0.07** | 0.25 | **0.12** | 0.47 |
| Names, Science/Art | 0.22 | 0.04 | **0.05** | 0.38 | 0.10 | 0.47 | 0.62 | **0.47** |
| Avg. Abs. e-size | 0.291 | 0.212 | 0.182 | 0.331 | **0.176** | 0.630 | 0.605 | **0.461** |
| Accuracy | 91.97 | 89.10 | 91.60 | **92.66** | 92.10 | 93.57 | **93.92** | 93.69 |

**CoLA**

| | BERT Origin | Sent-D | FairF | GT-random | GT (ours) | RoBERTa Origin | GT-random | GT (ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Names, Career/Family | 0.09 | 0.14 | **0.03** | 0.34 | 0.09 | 0.29 | 0.15 | **0.05** |
| Terms, Career/Family | 0.19 | 0.18 | 0.11 | 0.15 | **0.03** | 0.26 | 0.08 | **0.00** |
| Terms, Math/Art | 0.26 | 0.31 | 0.09 | 0.55 | **0.05** | 0.06 | **0.02** | 0.15 |
| Names, Math/Art | 0.15 | 0.30 | **0.10** | 0.72 | 0.24 | 0.06 | 0.25 | **0.07** |
| Terms, Science/Art | 0.42 | 0.16 | 0.24 | **0.05** | 0.07 | 0.32 | **0.57** | 0.70 |
| Names, Science/Art | 0.03 | 0.19 | 0.12 | 0.28 | **0.07** | 0.27 | 0.14 | **0.03** |
| Avg. Abs. e-size | 0.181 | -2.27 | 0.120 | 0.343 | **0.096** | 0.210 | 0.201 | **0.166** |
| Accuracy | 56.51 | 55.40 | 56.50 | **56.85** | 56.60 | 57.35 | 57.55 | **58.54** |

**QNLI**

| | BERT Origin | Sent-D | FairF | GT-random | GT (ours) | RoBERTa Origin | GT-random | GT (ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Names, Career/Family | 0.26 | 0.05 | 0.10 | **0.01** | 0.02 | 0.04 | 0.38 | **0.17** |
| Terms, Career/Family | 0.15 | **0.004** | 0.20 | 0.13 | 0.04 | 0.22 | 0.10 | **0.04** |
| Terms, Math/Art | 0.58 | **0.08** | 0.32 | 0.30 | **0.08** | 0.53 | 0.16 | **0.09** |
| Names, Math/Art | 0.58 | 0.62 | 0.28 | 0.23 | **0.16** | 0.48 | 0.06 | **0.03** |
| Terms, Science/Art | 0.08 | 0.71 | 0.24 | 0.25 | **0.21** | 0.47 | 0.57 | **0.53** |
| Names, Science/Art | 0.52 | 0.44 | 0.16 | 0.15 | **0.04** | 0.36 | 0.47 | 0.52 |
| Avg. Abs. e-size | 0.365 | 0.321 | 0.222 | 0.178 | **0.091** | 0.350 | 0.290 | **0.230** |
| Accuracy | 91.30 | 90.60 | 90.80 | **91.61** | 91.32 | 92.03 | **92.51** | 92.09 |

Table 2: Comparing the debiasing performance of Gender-tuning (GT) and two state-of-the-art baselines; GT-random denotes Gender-tuning-random. In each block, the first six rows measure the binary SEAT effect size (e-size; lower is better) for sentence-level tests from (Caliskan et al., 2017), the seventh row presents the average absolute e-size, and the eighth row shows the classification accuracy on the downstream task. GT-random masks input tokens randomly (not only gender-words). Gender-tuning attains the lowest average bias in both models and on all datasets.
As illustrated in Table 3, Gender-tuning does not make a meaningful change in identical and same-gender perturbations. These examples likely conform to the gender biases in the MLM. Suppose an identical or same-gender perturbation receives the correct output from the perturbation process (\(\mathcal{L}_{perturb}\approx 0\)). In this case, the biases in the MLM can only be learned if the fine-tuning step also produces the correct output, making the joint-loss close to zero and stopping further updates of the MLM and the fine-tuning model. However, the joint-loss plays an essential role in preventing gender bias from being learned from identical and same-gender perturbations.
To clarify the role of the joint-loss in overcoming the above problem, we investigated the fine-tuning output on identical and same-gender perturbations. We observed that fine-tuning produces an incorrect output for 60% of the identical and 75% of the same-gender perturbations. These examples thus return to the training iteration because their joint-loss is large enough to update the language models and trigger a new training iteration, i.e., the examples are re-perturbed and re-fine-tuned. Therefore, training on the losses of both training steps via the joint-loss persistently prevents learning the gender biases in the MLM as well as in the PLM.
## 5 Ablation
We conduct ablation experiments to demonstrate the effectiveness of the Gender-tuning components, namely 1) the joint-training process and 2) the joint-loss, for Gender-tuning's debiasing performance (Table 4). The experiments are as follows: 1) **Gender-tuning\({}_{no-joint-training}\)**: we first used the MLM to train the PLM through gender-word perturbation on the downstream task datasets and then fine-tuned the PLM on the downstream task dataset. 2) **Gender-tuning\({}_{no-joint-loss}\)**: we train Gender-tuning based on the fine-tuning loss only.
For both PLMs, the results illustrate that Gender-tuning is more effective at reducing the average gender bias than either ablation setup. The two ablation setups magnify the bias scores noticeably, while Gender-tuning attains the smallest SEAT absolute effect size, especially in the BERT model. The results also show that the ablation setup that does not benefit from the joint-loss cannot update the MLM and PLM when the output of the fine-tuning classification is correct (\(\mathcal{L}_{fine-tuning}\approx 0\)), even though the correct output is likely based on the gender biases in the PLMs.
## 6 Conclusion
We propose a novel approach for debiasing PLMs through fine-tuning on downstream tasks' datasets. The proposed method aggregates bias-word perturbation using an MLM with fine-tuning classification. In this study, we evaluated our proposed method on gender biases and named it _Gender-tuning_. Comprehensive experiments show that Gender-tuning outperforms two state-of-the-art debiasing methods while improving the performance of the PLMs on downstream tasks. The key advantage of our approach is the use of the fine-tuning setting, which allows the training process to be carried out without additional training processes or datasets. It also makes Gender-tuning a plug-and-play debiasing tool deployable to any PLM.
| Training input | Perturbed | Type | Label |
| --- | --- | --- | --- |
| with [**his**] usual intelligence and subtlety. | with [**the**] usual intelligence and subtlety. | neutral | 1 |
| by casting an [**actress**] whose face projects that [**woman**]'s doubts and yearnings, it succeeds. | by casting an [**image**] whose face projects that [**person**]'s doubts and yearnings, it succeeds. | neutral | 1 |
| certainly has a new career ahead of [**him**] if [**he**] so chooses. | certainly has a new career ahead of [**her**] if [**she**] so chooses. | convert-gender | 1 |
| by [**men**] of marginal intelligence, with reactionary ideas. | by [**people**] of marginal intelligence, with reactionary ideas. | neutral | 0 |
| why this distinguished [**actor**] would stop so low. | why this distinguished [**man**] would stop so low. | same-gender | 0 |
| it is very awful - - and oozing with creep [**men**]. | it is very awful - - and oozing with creepy [UNK]. | deleting | 0 |
| Proves once again [**he**] hasn't lost. | Proves once again [**he**] hasn't lost. | identical | 1 |

Table 3: Illustration of the different types of perturbation outputs generated by Gender-tuning and their ground-truth labels.
## 7 Limitation
Although Gender-tuning succeeds in reducing the gender bias scores in the pre-trained language models, there are some limitations. Gender-tuning only works with a predefined list of gender-related words; it therefore cannot cover gender biases expressed through words that are absent from this list. We defer the modification of the gender-related word list to future research. All our experiments were run on English texts with English gender-word morphology.
|
2306.11515 | An implicit-explicit solver for a two-fluid single-temperature model | We present an implicit-explicit finite volume scheme for two-fluid
single-temperature flow in all Mach number regimes which is based on a
symmetric hyperbolic thermodynamically compatible description of the fluid
flow. The scheme is stable for large time steps controlled by the interface
transport and is computational efficient due to a linear implicit character.
The latter is achieved by linearizing along constant reference states given by
the asymptotic analysis of the single-temperature model. Thus, the use of a
stiffly accurate IMEX Runge Kutta time integration and the centered treatment
of pressure based quantities provably guarantee the asymptotic preserving
property of the scheme for weakly compressible Euler equations with variable
volume fraction. The properties of the first and second order scheme are
validated by several numerical test cases. | Mária Lukáčová-Medvid'ová, Ilya Peshkov, Andrea Thomann | 2023-06-20T13:04:34Z | http://arxiv.org/abs/2306.11515v3 | # An implicit-explicit solver
###### Abstract
We present an implicit-explicit finite volume scheme for two-fluid single-temperature flow in all Mach number regimes, which is based on a symmetric hyperbolic thermodynamically compatible description of the fluid flow. The scheme is stable for large time steps controlled by the interface transport and is computationally efficient due to its linear implicit character. The latter is achieved by linearizing along constant reference states given by the asymptotic analysis of the single-temperature model. Thus, the use of a stiffly accurate IMEX Runge-Kutta time integration and the centered treatment of pressure-based quantities provably guarantee the asymptotic preserving property of the scheme for the weakly compressible Euler equations with variable volume fraction. The properties of the first- and second-order schemes are validated by several numerical test cases.
Keywords: All-speed scheme, IMEX method, reference state strategy, single temperature two-fluid flow, asymptotic preserving property, symmetric hyperbolic thermodynamically compatible model
## 1 Introduction
In continuum mixture theory, the constituents of a multiphase system, also called mixture, are present at every material element even if an element represents a pure phase. This approach is applicable to model both situations - the case of miscible [26] and immiscible [33, 5] multicomponent systems. The material interfaces, if present, are zones of rapid but smooth changes of a parameter distinguishing the phases of the mixture, typically the volume or mass fraction.
Despite the fact that almost any application in science and engineering deals with multiphase systems and there is an obvious need for a consistent and reliable mathematical model to describe such multicomponent systems, the continuum mixture theory is far from being complete and no widely accepted model exists. Perhaps, the most widely used approach is based on equations for every individual constituent of the system, i.e. phase mass balance, phase momentum balance, phase energy balance, etc. The key problem here is to find a closure for this system of equations which is represented by the coupling terms describing the exchange of mass, energy,
and momenta between the mixture constituents. Note that a first-principle theory to provide such a closure is currently not available. Consequently, various heuristic and phenomenological approaches are used. The Baer-Nunziato (BN) model [1] introduced in 1986 is a representative of this class of mathematical formulations and since then an active line of research has been done to adapt it to various applications, [32, 10, 33] and recently to low Mach number flows [27].
In this paper, we deal with another class of governing equations for mixtures, here in the single-temperature simplification, which represents an attempt to build a mixture theory based on first-principle reasoning. The equations belong to the class of the so-called Symmetric Hyperbolic Thermodynamically Compatible (SHTC) equations [11, 13, 12]. The key ingredients are the variational principle and the second law of thermodynamics. The variational principle is used to deduce a reversible part of the evolution equations that is subject to entropy conservation. The second law of thermodynamics yields an irreversible part of a model and controls entropy production. In contrast to the BN class of mixture models, the governing equations of the SHTC model are formulated not directly in terms of the phase quantities but mainly in terms of mixture quantities such as mixture mass density, mixture momentum, mixture energy, etc. The SHTC equations can be rewritten in terms of the phase balance equations, i.e. in a BN form, see [30, 28]. In this way, a new term appears in the phase balance equations that is usually missing in BN-type models, see [28]. The latter can be identified as lift forces [9] acting on a rotating fluid element of one phase immersed in another one.
In this work, we are interested in a _single_-temperature SHTC mixture model [20] which is a special case of the two-velocity, two-pressure, two-entropy SHTC model of two-phase flows derived in [30, 29]. In [34] the full model has been numerically solved based on an explicit time integration. The difficulty lies in handling the two entropy balance laws of the full two-fluid SHTC model together with only one energy conservation law. On the other hand, applications such as sediment transport, granular flows or aerosol transport can be modeled with the single-temperature approach, resulting in one mixture entropy balance law associated with one energy conservation law. These applications lie within weakly compressible flow regimes, characterized by small Mach numbers. A severe difficulty in the construction of a numerical scheme applied to weakly compressible flow regimes is posed by the scale differences between acoustic and material waves. The focus of the numerical simulation usually lies on the evolution of the slower material waves following the two-fluid interface, for which a time step controlled by the local flow speed is sufficient. The time step of an explicit scheme, as proposed in [29, 31] for compressible two-phase flow in the SHTC framework, is bounded by the smallest Mach number. This leads to very restrictive time steps in the low Mach number regime and consequently to long computational times, especially when long time periods are considered. This problem can be overcome by considering implicit-explicit (IMEX) time integrators, where fast waves are treated implicitly, leading to a Courant-Friedrichs-Lewy (CFL) condition that is restricted only by the local flow velocity. This allows larger time steps while keeping the material waves well resolved. Additionally, an implicit treatment of the associated stiff pressure terms, which trigger fast acoustic waves, has the advantage that centered finite differences can be applied without loss of stability while guaranteeing a Mach number independent numerical diffusion of the scheme, see e.g. [8, 14, 18] for a discussion on upwind schemes. Indeed, the correct amount of numerical diffusion is crucial to obtain so-called asymptotic preserving (AP) schemes [15]. Since the flow regime of the two-phase flow considered here is characterized by two potentially distinct phase Mach numbers, different singular Mach number limits can be obtained depending on the constitution of the mixture. For their formal derivation we apply asymptotic expansions, as done for the (isentropic) Euler equations, see [6, 7, 8, 14, 17, 18, 25] and the references therein. We refer the reader to our recent work on the isentropic SHTC two-fluid model [21]. To obtain physically admissible solutions, especially in the weakly compressible flow regime, the numerical scheme has to yield the correct asymptotic behavior. This means a uniformly stable and consistent approximation of the limit equations as the Mach numbers tend to zero.
The profound knowledge of the structure of well-prepared initial data can be used to construct an AP scheme
by applying a reference solution (RS)-IMEX approach. This approach was successfully applied to construct AP schemes for the (isentropic) Euler equations [3, 16, 19, 35] and isentropic two-fluid flow [21]. Linearizing around the reference state leads to a stiff linear part that is treated implicitly, whereas the nonlinear higher-order terms are integrated explicitly while respecting the asymptotic behavior in the low Mach number limit. In this way, computationally costly nonlinear implicit solvers can be avoided.
The paper is structured as follows. In Section 2, we briefly recall the model and give its non-dimensional formulation. For well-prepared initial data, we analyze its singular Mach number limit towards the incompressible Euler equations with variable density. Using the knowledge of the limit reference state, we first construct a semi-discrete scheme in Section 3.2 and derive a fully discrete scheme in Section 3.3. The construction of higher-order schemes within this framework is briefly discussed, too. Further, the AP property of the scheme is proven in Section 4. Finally, in Section 5, a series of 1D and 2D test problems is presented to numerically verify the convergence of the proposed scheme and its behavior in the compressible and weakly compressible flow regimes.
## 2 Single temperature two-fluid flow
In this section we recall the SHTC two-fluid model derived in [30, 29]. We concentrate on the model in the thermal equilibrium regime [20] which is a legitimate approximation for many applications mentioned in the Introduction. Thus, we deal with the mixture of two fluids in which every material element (control volume) is characterized by the temperature \(T\) with \(T=T_{1}=T_{2}\), where the lower indices denote the respective phase \(l=1,2\). Moreover, we assume that every material element of volume \(\mathcal{V}\) and mass \(\mathcal{M}\) is occupied by both fluids, i.e. \(\mathcal{V}=\nu_{1}+\nu_{2}\) and \(\mathcal{M}=m_{1}+m_{2}\), with \(\nu_{l}\) and \(m_{l}\) being the volume and mass of the \(l\)-th phase in the control volume \(\mathcal{V}\). However, to characterize the fluid content in a control volume, it is convenient to use non-dimensional scalars: the volume fractions \(\alpha_{l}\) and mass fractions \(c_{l}\) defined as
\[\alpha_{l}=\frac{\nu_{l}}{\mathcal{V}},\qquad c_{l}=\frac{m_{l}}{\mathcal{M}} =\frac{\varrho_{l}}{\rho}=\frac{\alpha_{l}\rho_{l}}{\rho}, \tag{1}\]
where
\[\rho=\frac{\mathcal{M}}{\mathcal{V}}=\varrho_{1}+\varrho_{2}=\alpha_{1}\rho_{ 1}+\alpha_{2}\rho_{2} \tag{2}\]
is the mass density of the mixture, \(\varrho_{l}=\alpha_{l}\rho_{l}\) is the partial mass density of the \(l\)-th phase in the control volume \(\mathcal{V}\), and \(\rho_{l}\) is the mass density of the \(l\)-th phase itself. The volume and mass fractions obey the constraints
\[\alpha_{1}+\alpha_{2}=1,\qquad c_{1}+c_{2}=1. \tag{3}\]
Moreover, each phase is equipped with its own velocity field \(\mathbf{v}_{l}\in\mathbb{R}^{d}\), where \(d\) denotes the space dimension, and the mixture control volume is assumed to move with the center-of-mass velocity, i.e. the weighted average given by
\[\mathbf{v}=c_{1}\mathbf{v}_{1}+c_{2}\mathbf{v}_{2}. \tag{4}\]
The mixture momentum \(\rho\mathbf{v}=\varrho_{1}\mathbf{v}_{1}+\varrho_{2}\mathbf{v}_{2}\) is equal to the sum of the phase momenta. Additionally, one needs to characterize the relative motion of the phases which, in the SHTC theory, is done using the relative velocity field
\[\mathbf{w}=\mathbf{v}_{1}-\mathbf{v}_{2}. \tag{5}\]
For each phase, an entropy \(s_{l}\) and internal energy \(e_{l}(\rho_{l},s_{l})\) is prescribed yielding the phase pressures
\[p_{l}=\rho_{l}^{2}\frac{\partial e_{l}}{\partial\rho_{l}},\quad l=1,2. \tag{6}\]
We consider an _ideal gas_ equation of state (EOS) given in terms of the respective density and single temperature resulting in
\[s_{l}(\rho_{l},T)=c_{v,l}\log\left(\frac{T}{T_{0,l}}\left(\frac{1}{\rho_{l}} \right)^{\gamma_{l}-1}\right),\quad T_{0,l}=\frac{1}{(\gamma_{l}-1)c_{v,l}}, \quad e_{l}(\rho_{l},T)=c_{v,l}T,\quad p_{l}(\rho_{l},T)=(\gamma_{l}-1)c_{v,l }\rho_{l}T, \tag{7}\]
where \(\gamma_{l}\) denotes the ratio of specific heats and \(c_{v,l}\) the specific heat at constant volume for each phase \(l=1,2\). To conclude the definition of the mixture state variables, we have the specific mixture internal energy \(e=c_{1}e_{1}+c_{2}e_{2}\), the mixture pressure \(p=\rho^{2}\frac{\partial e}{\partial\rho}=\alpha_{1}p_{1}+\alpha_{2}p_{2}\) and the total energy density of the mixture given by
\[\rho E=\rho e+\rho\frac{\|\mathbf{v}\|^{2}}{2}+\rho c_{1}c_{2}\frac{\|\mathbf{w}\|^{2} }{2}. \tag{8}\]
The total mixture entropy reads \(S=c_{1}s_{1}+c_{2}s_{2}\). All state variables are summarized in the vector
\[\mathbf{q}=\left(\alpha_{1},\alpha_{1}\rho_{1},\alpha_{2}\rho_{2},\rho\mathbf{v},\mathbf{ w},\rho E\right)^{T}. \tag{9}\]
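For concreteness, a minimal NumPy sketch (our own illustration, not part of the paper) that assembles these mixture quantities from the phase data via the EOS (7):

```python
import numpy as np

def eos_ideal_gas(rho_l, T, gamma_l, cv_l):
    """Entropy, internal energy and pressure of one phase, Eq. (7)."""
    T0_l = 1.0 / ((gamma_l - 1.0) * cv_l)
    s_l = cv_l * np.log(T / T0_l * (1.0 / rho_l) ** (gamma_l - 1.0))
    e_l = cv_l * T
    p_l = (gamma_l - 1.0) * cv_l * rho_l * T
    return s_l, e_l, p_l

def mixture_state(alpha1, rho1, rho2, v1, v2, T, gamma, cv):
    """Mixture quantities of Eqs. (1)-(8) from the phase data."""
    alpha2 = 1.0 - alpha1                   # constraint (3)
    rho = alpha1 * rho1 + alpha2 * rho2     # Eq. (2)
    c1 = alpha1 * rho1 / rho                # Eq. (1)
    c2 = 1.0 - c1
    v = c1 * v1 + c2 * v2                   # Eq. (4)
    w = v1 - v2                             # Eq. (5)
    _, e1, p1 = eos_ideal_gas(rho1, T, gamma[0], cv[0])
    _, e2, p2 = eos_ideal_gas(rho2, T, gamma[1], cv[1])
    e = c1 * e1 + c2 * e2                   # mixture internal energy
    p = alpha1 * p1 + alpha2 * p2           # mixture pressure
    rhoE = rho * (e + 0.5 * np.dot(v, v)
                  + c1 * c2 * 0.5 * np.dot(w, w))  # Eq. (8)
    return rho, c1, v, w, p, rhoE
```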
The SHTC model with a single temperature can be written in the following way
\[\frac{\partial\alpha_{1}}{\partial t}+\mathbf{v}\cdot\nabla\alpha_{1}=-\frac{p_{1}-p_{2}}{\tau^{(\alpha)}\rho}, \tag{10a}\] \[\frac{\partial(\alpha_{1}\rho_{1})}{\partial t}+\nabla\cdot(\alpha_{1}\rho_{1}\mathbf{v}_{1})=0,\] (10b) \[\frac{\partial(\alpha_{2}\rho_{2})}{\partial t}+\nabla\cdot(\alpha_{2}\rho_{2}\mathbf{v}_{2})=0,\] (10c) \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot(\rho\mathbf{v}\otimes\mathbf{v}+p\mathbf{I}+\rho c_{1}c_{2}\mathbf{w}\otimes\mathbf{w})=0,\] (10d) \[\frac{\partial\mathbf{w}}{\partial t}+\nabla\cdot\left(\left[\mathbf{w}\cdot\mathbf{v}+\mu_{1}-\mu_{2}+(1-2c_{1})\frac{\|\mathbf{w}\|^{2}}{2}\right]\mathbf{I}\right)+(\nabla\times\mathbf{w})\times\mathbf{v}=-\frac{c_{1}c_{2}\mathbf{w}}{\tau^{(w)}},\] (10e) \[\frac{\partial(\rho E)}{\partial t}+\nabla\cdot\left(\mathbf{v}(\rho E+p)+\rho\,\left[\mathbf{w}\cdot\mathbf{v}+\mu_{1}-\mu_{2}+(1-2c_{1})\frac{\|\mathbf{w}\|^{2}}{2}\right]c_{1}c_{2}\mathbf{w}\right)=0. \tag{10f}\]
Here \(\mu_{l}=e_{l}+\frac{p_{l}}{\rho_{l}}-s_{l}T,\ l=1,2\), denote the chemical potentials. In the above formulation, the volume fraction is advected in a non-conservative way with the fluid flow \(\mathbf{v}\), balanced by a pressure relaxation source term. The mixture mass is conserved due to (10b) and (10c), the momentum due to (10d), and the total energy due to (10f). The relative velocity is not conserved; it is driven by the difference in the chemical potentials \(\mu=\mu_{1}-\mu_{2}\) and a friction source term. The relaxation parameters \(\tau^{(\alpha)}\) and \(\tau^{(\mathbf{w})}\) characterize the relaxation rates of the mixture towards pressure (\(p_{1}=p_{2}\)) and relative velocity (\(\mathbf{v}_{1}=\mathbf{v}_{2}\)) equilibrium.
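Both relaxation source terms are algebraic in the state variables; a minimal sketch (our own illustration) of their evaluation, which becomes stiff for small relaxation parameters and thus motivates the implicit-explicit treatment discussed later:

```python
def relaxation_sources(p1, p2, rho, c1, w, tau_alpha, tau_w):
    """Algebraic relaxation source terms of Eqs. (10a) and (10e):
    pressure relaxation driving p1 -> p2 and interfacial friction
    driving w -> 0. Small tau_alpha, tau_w make these terms stiff."""
    s_alpha = -(p1 - p2) / (tau_alpha * rho)  # RHS of Eq. (10a)
    s_w = -c1 * (1.0 - c1) * w / tau_w        # RHS of Eq. (10e), c2 = 1 - c1
    return s_alpha, s_w
```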
Moreover, the model is equipped with the entropy balance law
\[\frac{\partial(\rho S)}{\partial t}+\nabla\cdot(\rho S\mathbf{w})=\Pi\geq 0 \tag{11}\]
with the entropy production term
\[\Pi=\frac{1}{T\tau^{(\alpha)}\rho^{2}}(p_{1}-p_{2})^{2}+\frac{1}{T\tau^{(w)} \rho^{2}}c_{1}^{2}c_{2}^{2}\|\mathbf{w}\|^{2}. \tag{12}\]
For details on the derivation of the model and its thermodynamical properties we refer to [31, 30, 29, 28].
Since each phase is equipped with a respective pressure and density, a sound speed for each phase \(a_{l}\), as well as a mixture sound speed \(a\) can be defined by
\[(a_{l})^{2}=\frac{\partial p_{l}}{\partial\rho_{l}}=\gamma_{l}\frac{p_{l}}{\rho_{l}}\quad\text{and}\quad a^{2}=\frac{\partial p}{\partial\rho}=c_{1}\,(a_{1})^{2}+c_{2}\,(a_{2})^{2}. \tag{13}\]
Accordingly, a Mach number can be assigned to each phase. As usual it is defined by the ratio between the flow velocity \(\mathbf{v}\) and the sound speed \(a_{l}\). In the case that the flow is characterized by (at least) one small Mach number, different scales arise in the model that yield stiffness in the governing equations (10). To obtain a better understanding of the scales which are present in the model, we will rewrite system (10) in a non-dimensional form.
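A short sketch (our own illustration) of the phase and mixture sound speeds of Eq. (13) and the corresponding local phase Mach number, anticipating the definition in Eq. (16) below:

```python
import numpy as np

def sound_speeds(p1, rho1, gamma1, p2, rho2, gamma2, c1):
    """Phase and mixture sound speeds, Eq. (13)."""
    a1_sq = gamma1 * p1 / rho1
    a2_sq = gamma2 * p2 / rho2
    a_sq = c1 * a1_sq + (1.0 - c1) * a2_sq
    return np.sqrt(a1_sq), np.sqrt(a2_sq), np.sqrt(a_sq)

def local_mach(v_l, T, gamma_l, cv_l):
    """Local phase Mach number, cf. Eq. (16)."""
    a_l = np.sqrt(gamma_l * (gamma_l - 1.0) * cv_l * T)
    return np.linalg.norm(v_l) / a_l
```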
### Non-dimensional formulation of the two-fluid model
Let us denote the non-dimensional quantities by ( \(\tilde{\bullet}\) ) and the corresponding reference value by \((\bullet)_{\text{ref}}\). We assume that the convective scales are of the same order, i.e. \(\mathbf{v}_{l,\text{ref}}=\mathbf{v}_{\text{ref}}=x_{\text{ref}}/t_{\text{ref}}\). The ratio of the phase densities, however, can be large, especially when considering a mixture of a light gas and a liquid. To take this potentially large difference into account, we define two different reference densities \(\rho_{l,\text{ref}}\). Note that the volume fractions and mass fractions are already non-dimensional quantities. Further, we define two reference pressures \(p_{l,\text{ref}}\) from which we can compute the reference sound speeds \(a_{l,\text{ref}}\) and reference internal energies \(e_{l,\text{ref}}\) via the EOS (7). They are given by
\[\big{(}a_{l,\text{ref}}\big{)}^{2}=\frac{p_{l,\text{ref}}}{\rho_{l,\text{ref} }},\quad e_{l,\text{ref}}=\frac{p_{l,\text{ref}}}{\rho_{l,\text{ref}}},\quad T _{\text{ref}}=\frac{1}{\gamma_{l}(\gamma_{l}-1)c_{v,l}}\,\frac{p_{l,\text{ref} }}{\rho_{l,\text{ref}}},\quad l=1,2. \tag{14}\]
The dimensional state variables \(\mathbf{q}\) are then expressed as the product of non-dimensional quantities and reference values as follows
\[\rho_{l}=\tilde{\rho}_{l}\rho_{l,\text{ref}},\quad p_{l}=\tilde{p}_{l}p_{l,\text{ref}},\quad e_{l}=\tilde{e}_{l}\frac{p_{l,\text{ref}}}{\rho_{l,\text{ref}}},\quad\mu_{l}=\tilde{\mu}_{l}\frac{p_{l,\text{ref}}}{\rho_{l,\text{ref}}},\quad\mathbf{v}_{l}=\tilde{\mathbf{v}}_{l}\mathbf{v}_{\text{ref}},\quad\mathbf{v}=\tilde{\mathbf{v}}\mathbf{v}_{\text{ref}},\quad\mathbf{w}=\tilde{\mathbf{w}}\mathbf{v}_{\text{ref}}. \tag{15}\]
Further, a respective reference Mach number \(M_{l}\) and a local Mach number \(M_{\text{loc},l}\) are assigned to each phase
\[M_{l}=\frac{\mathbf{v}_{\text{ref}}}{a_{l,\text{ref}}},\quad M_{\text{loc},l}= \frac{|\mathbf{v}_{l}|}{\sqrt{\gamma_{l}(\gamma_{l}-1)c_{v,l}T}},\quad l=1,2. \tag{16}\]
Inserting expressions (15) into the dimensional equations (10), dropping the tilde ( \(\tilde{\bullet}\) ) and using (16), we obtain the following non-dimensional formulation
\[\frac{\partial\alpha_{1}}{\partial t}+\mathbf{v}\cdot\nabla\alpha_{1}=-\frac{1}{\tau^{(\alpha)}\rho}\left(\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{1}}{(M_{1})^{2}}-\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{2}}{(M_{2})^{2}}\right), \tag{17a}\] \[\frac{\partial(\alpha_{1}\rho_{1})}{\partial t}+\nabla\cdot(\alpha_{1}\rho_{1}\mathbf{v}_{1})=0,\] (17b) \[\frac{\partial(\alpha_{2}\rho_{2})}{\partial t}+\nabla\cdot(\alpha_{2}\rho_{2}\mathbf{v}_{2})=0,\] (17c) \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot\left(\rho\mathbf{v}\otimes\mathbf{v}+\left(\alpha_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{1}}{(M_{1})^{2}}+\alpha_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{2}}{(M_{2})^{2}}\right)\mathbf{I}+\rho c_{1}c_{2}\mathbf{w}\otimes\mathbf{w}\right)=0,\] (17d) \[\frac{\partial\mathbf{w}}{\partial t}+\nabla\cdot\left(\left[\mathbf{w}\cdot\mathbf{v}+\frac{\mu_{1}}{(M_{1})^{2}}-\frac{\mu_{2}}{(M_{2})^{2}}+(1-2c_{1})\,\frac{\|\mathbf{w}\|^{2}}{2}\right]\mathbf{I}\right)+(\nabla\times\mathbf{w})\times\mathbf{v}=-\frac{c_{1}c_{2}\mathbf{w}}{\tau^{(w)}},\] (17e) \[\frac{\partial(\rho E)}{\partial t}+\nabla\cdot\left(\mathbf{v}\left(\rho E+\alpha_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{1}}{(M_{1})^{2}}+\alpha_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{2}}{(M_{2})^{2}}\right)+\rho\,\left[\mathbf{w}\cdot\mathbf{v}+\frac{\mu_{1}}{(M_{1})^{2}}-\frac{\mu_{2}}{(M_{2})^{2}}+(1-2c_{1})\,\frac{\|\mathbf{w}\|^{2}}{2}\right]c_{1}c_{2}\mathbf{w}\right)=0 \tag{17f}\]
with the scaled total energy
\[E=c_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}\frac{e_{1}(\rho_{1},T)}{(M_ {1})^{2}}+c_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}\frac{e_{2}(\rho_{2 },T)}{(M_{2})^{2}}+\frac{\|\mathbf{v}\|^{2}}{2}+c_{1}c_{2}\frac{\|\mathbf{w}\|^{2}}{2} \tag{18}\]
and the mixture density \(\rho=\tilde{\rho}\rho_{\text{ref}}=\alpha_{1}\tilde{\rho}_{1}\rho_{1,\text{ref}}+\alpha_{2}\tilde{\rho}_{2}\rho_{2,\text{ref}}\) with \(\tilde{\rho}=\alpha_{1}\tilde{\rho}_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}+\alpha_{2}\tilde{\rho}_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}\). In the next section we introduce well-prepared initial data that will be used for the formal asymptotic analysis of (17) in the low Mach number limits.
### Well-prepared data and low Mach number limits
As we have seen from the Mach number definition (16), the difference in the flow regimes of the two phases depends mainly on the material constants \(\gamma_{l}\) and \(c_{v,l}\). In particular, from the single temperature EOS (7), we obtain
\[\frac{a_{1}^{2}}{\gamma_{1}(\gamma_{1}-1)c_{v,1}}=\frac{a_{2}^{2}}{\gamma_{2 }(\gamma_{2}-1)c_{v,2}}\Leftrightarrow a_{1}^{2}=\frac{\gamma_{1}(\gamma_{1}- 1)c_{v,1}}{\gamma_{2}(\gamma_{2}-1)c_{v,2}}a_{2}^{2} \tag{19}\]
and consequently, with \(a_{l,\text{ref}}=\sqrt{\gamma_{l}(\gamma_{l}-1)c_{v,l}T_{\text{ref}}}\), we find a direct relation between two Mach numbers
\[M_{1}=\mathcal{C}M_{2},\quad\mathcal{C}=\sqrt{\frac{\gamma_{2}(\gamma_{2}-1)c _{v,2}}{\gamma_{1}(\gamma_{1}-1)c_{v,1}}}>0. \tag{20}\]
In the following, for simplicity, we consider the case \(M_{1}=M_{2}=M\), where \(0<M\ll 1\), i.e. \(\mathcal{C}=1\). The cases \(\mathcal{C}>1\) and \(\mathcal{C}<1\) can be treated in a similar manner. For a full analysis of model (17) in the isentropic case we refer the reader to [21], where the singular limit for two different Mach numbers \(1\gg M_{1}>M_{2}>0\) and \(1\approx M_{1}\gg M_{2}>0\) are considered.
We proceed by expanding sufficiently smooth phase state variables with respect to \(M\). Note that the volume and mass fractions are non-dimensional quantities and are not expanded with respect to the Mach number.
\[\begin{split}\rho_{l}&=\rho_{l,(0)}+M\rho_{l,(1)}+ \mathcal{O}(M^{2}),\qquad l=1,2,\\ T&=T_{(0)}+MT_{(1)}+M^{2}\,T_{(2)}+\mathcal{O}(M^{3} ),\\ \mathbf{v}&=\mathbf{v}_{(0)}+M\mathbf{v}_{(1)}+\mathcal{O}(M^{2} ).\end{split} \tag{21}\]
Since the relative velocity is subject to a relaxation process, we set \(\tau^{(\mathbf{w})}=M\) leading to the desired zero background relative velocity \(\mathbf{w}_{(0)}=0\) in the limit, thus
\[\mathbf{w}=M\mathbf{w}_{(1)}+\mathcal{O}(M^{2}). \tag{22}\]
To obtain Mach number expansions also for the remaining thermodynamical quantities, we apply EOS (7) which yields
\[\begin{split} p_{l}&=c_{v,l}(\gamma_{l}-1)\rho_{l,(0)}\,T_{(0)}+Mc_{v,l}(\gamma_{l}-1)\left(\rho_{l,(0)}\,T_{(1)}+\rho_{l,(1)}\,T_{(0)}\right)+\mathcal{O}(M^{2}),\\ \mu_{l}&=T_{(0)}\left(c_{v,l}\gamma_{l}-s_{l}(\rho_{l,(0)},T_{(0)})\right)+M\left(\frac{c_{v,l}(\gamma_{l}-1)\left(\rho_{l,(0)}\,T_{(1)}+\rho_{l,(1)}\,T_{(0)}\right)}{\rho_{l,(0)}}-T_{(1)}s_{l}(\rho_{l,(0)},T_{(0)})\right)+\mathcal{O}(M^{2}),\\ \rho_{l}e_{l}&=c_{v,l}\rho_{l,(0)}\,T_{(0)}+Mc_{v,l}\left(\rho_{l,(0)}\,T_{(1)}+\rho_{l,(1)}\,T_{(0)}\right)+\mathcal{O}(M^{2})\end{split} \tag{23}\]
and imply the following expansions
\[\begin{split} p_{l}&=p_{l,(0)}+Mp_{l,(1)}+M^{2}p_{l,( 2)}+\mathcal{O}(M^{3}),\\ \mu_{l}&=\mu_{l,(0)}+M\mu_{l,(1)}+M^{2}\mu_{l,(2)}+ \mathcal{O}(M^{3}),\\ \rho_{l}e_{l}&=\left(\rho_{l}e_{l}\right)_{(0)}+M \left(\rho_{l}e_{l}\right)_{(1)}+M^{2}\left(\rho_{l}e_{l}\right)_{(2)}+ \mathcal{O}(M^{3}).\end{split} \tag{24}\]
We insert the Mach number expansions (21), (24) and \(\rho_{(0)}=\alpha_{1}\rho_{1,(0)}+\alpha_{2}\rho_{2,(0)}\) into the non-dimensional formulation (17) and collect terms of equal order in the Mach number. Terms of the order \(\mathcal{O}(M^{-2})\) and \(\mathcal{O}(M^{-1})\) arise in the relaxation source term of equation (17a), yielding

\[\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}p_{1,(0)}=\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}p_{2,(0)}\quad\text{and}\quad\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}p_{1,(1)}=\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}p_{2,(1)}, \tag{25}\]
as well as in the momentum equation (17d)
\[\nabla\left(\alpha_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ ref}}}p_{1,(0)}+\alpha_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}p_{2,(0)} \right)=0\Leftrightarrow\nabla p_{1,(0)}=0,\,\nabla p_{2,(0)}=0\quad\text{and} \tag{26}\] \[\nabla\left(\alpha_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ ref}}}p_{1,(1)}+\alpha_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}p_{2,(1)} \right)=0\Leftrightarrow\nabla p_{1,(1)}=0,\,\nabla p_{2,(1)}=0. \tag{27}\]
This implies that \(p_{l}\) and \(\rho_{l}e_{l}\) are constant in space up to the second order perturbation \(p_{l,(2)}\) and \(\left(\rho_{l}e_{l}\right)_{(2)}\). Furthermore, from the relative velocity equation (17e) we have the following conditions
\[\nabla\mu_{1,(0)}=\nabla\mu_{2,(0)},\quad\text{and}\quad\nabla\mu_{1,(1)}= \nabla\mu_{2,(1)}. \tag{28}\]
In particular, this means that the differences of the chemical potentials, \(\mu_{(0)}\) and \(\mu_{(1)}\), are constant in space. Taking these observations into account, as well as \(\mathbf{w}_{(0)}=0\), the \(\mathcal{O}(M^{-2})\) order terms in the energy equation (17f) reduce to
\[\frac{\partial\rho_{(0)}e_{(0)}}{\partial t}+\nabla\cdot(\rho_{(0)}e_{(0)}\bm {v}_{(0)})+p_{(0)}\nabla\cdot\mathbf{v}_{(0)}=0. \tag{29}\]
We further assume that the phase pressure relaxation towards a common pressure is faster than the characteristic time of pressure wave propagation, and one obtains a uniform mixture pressure \(p_{(2)}=\alpha_{1}\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}p_{1,(2)}+ \alpha_{2}\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}p_{2,(2)}=p_{2,(2)}\). This motivates the following assumption on the dynamics of the volume fraction in the low Mach number limit.
**Assumption 2.1** (Transport of interfaces).: _In the low Mach number limit we assume_
\[\frac{\partial\alpha_{1}}{\partial t}+\mathbf{v}_{(0)}\cdot\nabla\alpha_{1}=0. \tag{30}\]
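As a numerical aside, the following first-order upwind step illustrates this pure transport of the interface on a periodic 1D grid; it is only an illustration of (30), our own sketch, and not the implicit-explicit scheme constructed in this paper:

```python
import numpy as np

def advect_alpha_upwind(alpha, v, dx, dt):
    """One forward-Euler upwind step of d(alpha)/dt + v d(alpha)/dx = 0
    on a periodic 1D grid (illustration of Eq. (30) only)."""
    dadx_minus = (alpha - np.roll(alpha, 1)) / dx   # backward difference
    dadx_plus = (np.roll(alpha, -1) - alpha) / dx   # forward difference
    dadx = np.where(v > 0.0, dadx_minus, dadx_plus)
    return alpha - dt * v * dadx

# CFL restricted by the material speed only: |v| dt / dx <= 1.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
alpha = 0.5 + 0.4 * np.exp(-200.0 * (x - 0.5) ** 2)
alpha = advect_alpha_upwind(alpha, v=1.0, dx=x[1] - x[0], dt=0.002)
```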
With this assumption, we can rewrite the energy equation at the leading order (29) as follows
\[\alpha_{1}\frac{\partial(\rho_{1}e_{1})_{(0)}}{\partial t}+\alpha_{2}\frac{ \partial(\rho_{2}e_{2})_{(0)}}{\partial t}+\rho_{(0)}e_{(0)}\nabla\cdot\mathbf{v} _{(0)}+p_{(0)}\nabla\cdot\mathbf{v}_{(0)}=0. \tag{31}\]
In fact, (31) can be written as a convex combination with respect to the volume fraction
\[\alpha_{1}\left(\frac{\partial(\rho_{1,(0)}e_{1,(0)})}{\partial t}+\left(\rho _{1,(0)}e_{1,(0)}+p_{1,(0)}\right)\nabla\cdot\mathbf{v}_{(0)}\right)+\alpha_{2} \left(\frac{\partial(\rho_{2,(0)}e_{2,(0)})}{\partial t}+\left(\rho_{2,(0)}e_{ 2,(0)}+p_{2,(0)}\right)\nabla\cdot\mathbf{v}_{(0)}\right)=0. \tag{32}\]
Since the volume fraction can be arbitrary under the constraint \(0<\alpha_{l}<1,l=1,2\), (32) implies
\[\frac{\partial(\rho_{1,(0)}e_{1,(0)})}{\partial t}+\left(\rho_{1,(0)}e_{1,(0)} +p_{1,(0)}\right)\nabla\cdot\mathbf{v}_{(0)}=0\quad\text{and}\quad\frac{\partial( \rho_{2,(0)}e_{2,(0)})}{\partial t}+\left(\rho_{2,(0)}e_{2,(0)}+p_{2,(0)} \right)\nabla\cdot\mathbf{v}_{(0)}=0. \tag{33}\]
This is consistent with the limit of single phase flow of the Euler equations, see e.g. [8, 17, 18].
Then, analogously to [8] for the case of the Euler equations, we obtain from (33) that \(\rho_{l,(0)}e_{l,(0)}\) are constant in space _and_ time, and consequently we obtain from (24) that also the phase pressures at leading order \(p_{l,(0)}\) and
\(\rho_{l,(0)}\,T_{(0)}\) are constant in space and time. Furthermore, we obtain the divergence free mixture velocity constraint \(\nabla\cdot\mathbf{v}_{(0)}=0\). Summarizing, we can formally write the following expansions for the pressure and internal energy
\[p=p_{(0)}+\mathcal{O}(M^{2}), p_{(0)}=\text{constant}, \tag{34a}\] \[\rho_{l}e_{l}=\rho_{l,(0)}e_{l,(0)}+\mathcal{O}(M^{2}), \rho_{l,(0)}e_{l,(0)}=\text{constant}. \tag{34b}\]
To obtain the expansion for the temperature, we look at the constraint for the chemical potentials. First, (34b) implies that \(\rho_{l,(0)}\,T_{(0)}\) are constant. Therefore, we can define two constants \(\mathcal{E}_{l}>0\) such that
\[T_{(0)}\rho_{1,(0)}=\mathcal{E}_{1},\quad T_{(0)}\rho_{2,(0)}=\mathcal{E}_{2}. \tag{35}\]
Then it follows from \(\nabla(\mu_{1,(0)}-\mu_{2,(0)})=0\) that
\[0=\left(c_{v,2}\log\left(\left(\frac{\rho_{0,2}}{\mathcal{E}_{2}}\right)^{\gamma_{2}-1}\frac{T_{(0)}^{\gamma_{2}}}{T_{0}}\right)-c_{v,1}\log\left(\left(\frac{\rho_{0,1}}{\mathcal{E}_{1}}\right)^{\gamma_{1}-1}\frac{T_{(0)}^{\gamma_{1}}}{T_{0}}\right)\right)\nabla T_{(0)} \tag{36}\]
and consequently \(T_{(0)}\) is constant unless both phases coincide. Since we consider a general case of different phase densities, it follows from (35) that the phase densities \(\rho_{l,(0)}\) are constant as well. From relation (34a) it follows that \(p_{(1)}=0\), thus \(\rho_{l,(0)}\,T_{(1)}+\rho_{l,(1)}\,T_{(0)}=0\), which implies
\[T_{(1)}=-\frac{\rho_{l,(1)}}{\rho_{l,(0)}}\,T_{(0)}. \tag{37}\]
Relation (28) yields that \(T_{(1)}\) is constant, which together with (37) implies
\[\mu_{l}=\mu_{l,(0)}+\mathcal{O}(M^{2}),\quad\rho_{l}=\rho_{l,(0)}+\mathcal{O}(M^{2}),\quad T=T_{(0)}+\mathcal{O}(M^{2}),\quad\{\mu_{l,(0)},\,\rho_{l,(0)},\,T_{(0)}\}=\text{constant}. \tag{38}\]
Plugging these expansions into (17b) and (17c) and singling out the first order perturbations for the phase densities, it follows that \(\mathbf{v}_{(1)}=0\) and \(\mathbf{w}_{(1)}=0\). Consequently, we have for the friction source term \(\tau^{(\mathbf{w})}=\mathcal{O}(M^{2})\). We proceed by defining a set of well-prepared initial data.
**Definition 2.2** (Well-prepared initial data for variable volume fraction).: _Let \(\mathbf{q}\in\mathbb{R}^{4+2d}\) denote the state vector and let both phases be in the same Mach number regime denoted by \(M\). Let Assumption 2.1 hold. Then the set of well-prepared initial data is given as_
\[\Omega_{wp}^{M}=\bigg{\{}\,\mathbf{q}\in\mathbb{R}^{4+2d}:\quad\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}p_{1,(k)}=\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}p_{2,(k)},\ k=0,1,2;\quad\nabla p_{1,(0)}=0,\ \nabla p_{2,(0)}=0;\quad\nabla p_{1,(1)}=0,\ \nabla p_{2,(1)}=0;\\ \nabla\mu_{1,(0)}=\nabla\mu_{2,(0)},\ \nabla\mu_{1,(1)}=\nabla\mu_{2,(1)};\quad\nabla\cdot\mathbf{v}_{(0)}=0,\quad\mathbf{v}_{(1)}=0;\quad\tau^{(\mathbf{w})}=\mathcal{O}(M^{2})\,\bigg{\}} \tag{39}\]
_using the Mach number expansions (21),(24)._
For well-prepared initial data, we obtain formally for \(M\to 0\) and \(\tau^{(\mathbf{w})}=\mathcal{O}(M^{2})\) the following incompressible limit equations with variable volume fraction
\[\frac{\partial\alpha_{1}}{\partial t}+\mathbf{v}_{(0)}\cdot\nabla \alpha_{1}=0,\quad\rho_{(0)}=\alpha_{1}\rho_{1,(0)}+\alpha_{2}\rho_{2,(0)}, \tag{40a}\] \[\frac{\partial\mathbf{v}_{(0)}}{\partial t}+\mathbf{v}_{(0)}\cdot\nabla \mathbf{v}_{(0)}+\frac{\nabla p_{(2)}}{\rho_{(0)}}=0,\quad\nabla\cdot\mathbf{v}_{(0)} =0, \tag{40b}\]
where \(p_{(2)}\) is the second order perturbation of the uniform pressure given by \(-\nabla\cdot\left(\nabla p_{(2)}/\rho_{(0)}\right)=\nabla^{2}:\left(\mathbf{v}_{(0)}\otimes\mathbf{v}_{(0)}\right)\), acting as a Lagrange multiplier. Note that the limit velocity equation is derived by applying \(\partial_{t}\alpha=-\mathbf{v}_{(0)}\cdot\nabla\alpha\) in the momentum formulation (17d).
## 3 Numerical scheme
Let us write the two-fluid model (17) in the following compact form
\[\frac{\partial\mathbf{q}}{\partial t}+\nabla\cdot\mathbf{f}(\mathbf{q})+\mathbf{B}(\mathbf{q})\cdot \nabla\mathbf{q}=\mathbf{r}(\mathbf{q}), \tag{41}\]
where \(\mathbf{q}\) denotes the vector of state variables defined in (9), \(\mathbf{f}\) the flux function consisting of the conservative terms, \(\mathbf{B}(\mathbf{q})\) the matrix that contains the non-conservative contributions and \(\mathbf{r}\) the relaxation source terms acting on the volume fraction and the relative velocity.
In the following, we construct a numerical scheme for the two-fluid single-temperature model (41) which is stable independently of the Mach numbers \(M_{1}\) and \(M_{2}\). This allows us to follow the dynamics associated with the flow velocity \(\mathbf{v}\), especially the transport of the volume fraction that represents the interface between the two phases. In addition, we require the new scheme to be asymptotic preserving (AP), meaning that in the singular limit as \(M\to 0\) the numerical scheme has to be consistent with a discretization of the incompressible limit equations (40).
To achieve this goal, we use an operator splitting approach, dividing the flux \(\mathbf{f}\) into terms \(\mathbf{f}^{\rm ex}\) treated explicitly and \(\mathbf{f}^{\rm im}\) integrated implicitly. The components of the non-conservative terms \(\mathbf{B}\) only involve the velocities \(\mathbf{v}\) and \(\mathbf{w}\) and are treated explicitly. The resulting implicit system is in general nonlinear due to the nonlinearity of the flux function \(\mathbf{f}^{\rm im}\) and of the EOS (7). This, however, implies a huge computational overhead, since nonlinear solvers would be required to solve large coupled implicit systems. To reduce computational costs, we construct a linear implicit numerical scheme whose implicit part can be solved by direct or iterative linear solvers. To avoid losing the AP property during the linearization process, we use the so-called _reference solution (RS) approach_ detailed in the subsequent section, which has been successfully applied to construct schemes for the Euler equations and isentropic two-phase flows [24, 3, 16, 21].
### Reference solution approach
In the singular limit as \(M\to 0\), the stiffness in the system is mainly connected to the pressure and chemical potential terms in the momentum and relative velocity equations. Further, these terms are coupled with the evolution equation for the total energy density \(\rho E\). Therefore, to obtain a time step that is dominated by the mixture velocity \(\mathbf{v}\), these terms need to be treated implicitly. It follows from the EOS (7) that the mixture pressure \(p\) depends linearly on \(\rho E\) and we can write
\[p=\left(\phi_{p}-1\right)\left(\rho E-\rho E_{\rm kin}\right),\quad\text{with} \quad\phi_{p}(\alpha_{1},\alpha_{2},\rho_{1},\rho_{2})=\frac{\gamma_{1} \alpha_{1}\rho_{1}c_{v,1}+\gamma_{2}\alpha_{2}\rho_{2}c_{v,2}}{\alpha_{1} \rho_{1}c_{v,1}+\alpha_{2}\rho_{2}c_{v,2}}, \tag{42}\]
where \(E_{\rm kin}\) contains all kinetic energy contributions
\[E_{\rm kin}=\frac{\|\mathbf{v}\|^{2}}{2}+c_{1}c_{2}\frac{\|\mathbf{w}\|^{2}}{2}. \tag{43}\]
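To make the algebra of (42)-(43) concrete, the following minimal Python sketch (our own illustration, not part of the reference implementation; one-dimensional velocities and freely chosen material parameters are assumed) evaluates the mixture pressure from a conservative state:

```python
# Minimal sketch of the linear pressure-energy relation (42)-(43).
# Parameter defaults are illustrative assumptions, not values from the paper.
import numpy as np

def mixture_pressure(alpha1, rho1, rho2, v, w, rhoE,
                     gamma1=1.4, gamma2=2.0, cv1=1.0, cv2=1.0):
    """Evaluate p = (phi_p - 1) * (rho E - rho E_kin), cf. (42)."""
    a1r1, a2r2 = alpha1 * rho1, (1.0 - alpha1) * rho2   # partial densities
    rho = a1r1 + a2r2
    c1, c2 = a1r1 / rho, a2r2 / rho                     # mass fractions
    Ekin = 0.5 * v**2 + c1 * c2 * 0.5 * w**2            # kinetic energy (43), 1D
    phi_p = (gamma1 * a1r1 * cv1 + gamma2 * a2r2 * cv2) / (a1r1 * cv1 + a2r2 * cv2)
    return (phi_p - 1.0) * (rhoE - rho * Ekin)
```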
For the chemical potentials we have a nonlinear dependence on \(\rho E\) via the phase entropies \(s_{l}(\mathbf{q}),l=1,2\), given by
\[\mu=\frac{\mu_{1}}{M_{1}^{2}}-\frac{\mu_{2}}{M_{2}^{2}}=\phi_{\mu}(\rho E-\rho E _{\rm kin}),\quad\text{where}\quad\phi_{\mu}(\mathbf{q})=\frac{\gamma_{1}c_{v,1}- s_{1}(\mathbf{q})-\mathcal{C}^{2}(\gamma_{2}c_{v,2}-s_{2}(\mathbf{q}))}{\alpha_{1} \rho_{1}c_{v,1}+\mathcal{C}^{2}\alpha_{2}\rho_{2}c_{v,2}} \tag{44}\]
with \(\mathcal{C}\) being the ratio of Mach numbers defined in (20).
Note that for single phase flow, \(\phi_{p}=\gamma-1\) is constant and \(\phi_{\mu}=0\). Consequently, formulations (42), (44) reduce to the Euler case, studied in [4], and are consistent with single phase flow. Therefore, we linearize only the difference of the chemical potentials \(\mu\) with respect to \(\rho E\) around a reference state \(\mathbf{q}_{\rm RS}\) as follows
\[\mu=\mu_{\rm RS}+\left(\frac{\partial\mu}{\partial(\rho E)}\right)_{\rm RS} \left(\rho E-(\rho E)_{\rm RS}\right)+\mathcal{O}\left(\left(\rho E-(\rho E) _{\rm RS}\right)^{2}\right), \tag{45}\]
where
\[\frac{\partial\mu}{\partial(\rho E)}=\frac{c_{v,1}(\gamma_{1}-1)-s_{1}(\rho_{1},T)- \mathcal{C}^{2}(c_{v,2}(\gamma_{2}-1)-s_{2}(\rho_{2},T))}{\alpha_{1}\rho_{1}c_{ v,1}+\mathcal{C}^{2}\alpha_{2}\rho_{2}c_{v,2}}=\mathcal{O}(1) \tag{46}\]
for all \(\mathcal{C}=\frac{M_{1}}{M_{2}}\). The reference state is set to
\[\mathbf{q}_{\text{RS}}=(\alpha_{1},\alpha_{1}\rho_{1,(0)},\alpha_{2}\rho_{2,(0)}, \rho_{(0)}\mathbf{v},\mathbf{w},(\rho E)_{\text{RS}})^{T},\quad(\rho E)_{\text{RS}}= \rho_{(0)}e_{(0)}+\rho_{(0)}E_{\text{kin}}, \tag{47}\]
where \(\rho_{l,(0)},\rho_{l,(0)}e_{l,(0)}\) are constant leading order states from (38) and
\[\rho_{(0)}=\alpha_{1}\rho_{1,(0)}+\alpha_{2}\rho_{2,(0)},\quad\rho_{(0)}e_{(0 )}=\alpha_{1}\frac{\rho_{1,(0)}e_{1,(0)}}{M_{1}^{2}}+\alpha_{2}\frac{\rho_{2, (0)}e_{2,(0)}}{M_{2}^{2}}.\]
We split \(\mu\) into a part that is linear in \(\rho E\) given by
\[\hat{\mu}=\hat{\mu}_{\text{RS}}+\left(\frac{\partial\mu}{\partial(\rho E)} \right)_{\text{RS}}\rho E,\quad\hat{\mu}_{\text{RS}}=\mu(\mathbf{q}_{\text{RS}})- \left(\frac{\partial\mu}{\partial(\rho E)}\right)_{\text{RS}}\times(\rho E)_{ \text{RS}} \tag{48}\]
and a nonlinear part
\[\tilde{\mu}=\mu-\hat{\mu}=\mathcal{O}\left(\left(\rho E-(\rho E)_{\text{RS}} \right)^{2}\right). \tag{49}\]
Note that for well-prepared initial data, \(\tilde{\mu}\) is of order \(M_{1}^{2}\). This can be seen by multiplying \(\mu=\frac{\mu_{1}}{M_{1}^{2}}-\frac{\mu_{2}}{M_{2}^{2}}\), without loss of generality, by \(M_{1}^{2}\). Then we obtain
\[\mu_{1}-\mathcal{C}^{2}\mu_{2}=\mu_{1,\text{RS}}-\mathcal{C}^{2}\mu_{2,\text{ RS}}+\left(\frac{\partial\mu}{\partial(\rho E)}\right)_{\text{RS}}\times\left( \rho\tilde{E}-(\rho\tilde{E})_{\text{RS}}\right)+\mathcal{O}\left((\rho\tilde {E}-(\rho\tilde{E})_{\text{RS}})^{2}\right), \tag{50}\]
where
\[\rho\tilde{E}=\alpha_{1}\rho_{1}e_{1}+\mathcal{C}^{2}\alpha_{2}\rho_{2}e_{2}+ M_{1}^{2}\rho E_{\text{kin}}. \tag{51}\]
For the difference between the total energy density and its reference value we obtain
\[\rho\tilde{E}-(\rho\tilde{E})_{\text{RS}}=M_{1}^{2}\alpha_{1}\left(\rho_{1}e_ {1}\right)_{(2)}+(M_{2}\mathcal{C})^{2}\alpha_{2}\left(\rho_{2}e_{2}\right)_{( 2)}+M_{1}^{2}\rho_{(2)}E_{\text{kin}}=M_{1}^{2}\left(c_{1}e_{1,(2)}+c_{2}e_{2,(2)}+\rho_{(2)}E_{\text{kin}}\right) \tag{52}\]
with \(\rho_{(2)}=M_{1}^{2}\alpha_{1}\rho_{1,(2)}+M_{2}^{2}\alpha_{2}\rho_{2,(2)}\). Therefore, it follows
\[\tilde{\mu}=\frac{\mu_{1}-\mathcal{C}^{2}\mu_{2}}{M_{1}^{2}}-\hat{\mu}= \mathcal{O}\left(\frac{(\rho\tilde{E}-\rho\tilde{E}_{\text{RS}})^{2}}{M_{1}^{2 }}\right)=\mathcal{O}(M_{1}^{2}). \tag{53}\]
Consequently, the nonlinear term \(\tilde{\mu}\) vanishes as \(M_{1}\) tends to \(0\) and can be treated explicitly without imposing a severe time step restriction. Note that for compressible flow, \(M_{1}\approx 1\), these terms are important to obtain the correct wave speeds and cannot be neglected.
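The mechanics of this splitting can be illustrated by a generic sketch (ours; the toy nonlinearity below merely stands in for the actual chemical potential difference): any smooth function of \(\rho E\) is divided into a part that is linear in \(\rho E\), as in (48), and a quadratically small remainder, as in (49).

```python
# Schematic illustration (ours, not from the paper) of the reference solution
# (RS) splitting (45)-(49): mu(rhoE) = mu_hat + mu_tilde, where mu_hat is
# linear in rhoE (treated implicitly) and mu_tilde is O((rhoE - rhoE_RS)^2).
import numpy as np

def rs_split(mu_of, dmu_of, rhoE, rhoE_RS):
    """mu_of evaluates mu(rhoE) with all other variables frozen;
    dmu_of evaluates d mu / d(rhoE), cf. (46)."""
    slope = dmu_of(rhoE_RS)                       # (d mu / d rhoE)_RS
    mu_hat_RS = mu_of(rhoE_RS) - slope * rhoE_RS  # constant part in (48)
    mu_hat = mu_hat_RS + slope * rhoE             # linear-in-rhoE part
    mu_tilde = mu_of(rhoE) - mu_hat               # quadratic remainder (49)
    return mu_hat, mu_tilde

# Toy example mu(rhoE) = log(rhoE): the remainder shrinks quadratically
# as rhoE approaches the reference value rhoE_RS.
mu_hat, mu_tilde = rs_split(np.log, lambda x: 1.0 / x, rhoE=1.02, rhoE_RS=1.0)
```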
Taking these considerations into account, the following subsystem will be treated explicitly
\[\frac{\partial\alpha_{1}}{\partial t}+\mathbf{v}\cdot\nabla\alpha_{1}=0, \tag{54a}\] \[\frac{\partial(\alpha_{1}\rho_{1})}{\partial t}+\nabla\cdot(\alpha_{1}\rho_{1}\mathbf{v}_{1})=0,\] (54b) \[\frac{\partial(\alpha_{2}\rho_{2})}{\partial t}+\nabla\cdot(\alpha_{2}\rho_{2}\mathbf{v}_{2})=0,\] (54c) \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot\left(\rho\mathbf{v}\otimes\mathbf{v}\right)=0,\] (54d) \[\frac{\partial\mathbf{w}}{\partial t}+\nabla\cdot\left(\left[\mathbf{w}\cdot\mathbf{v}+(1-2c_{1})\,\frac{\|\mathbf{w}\|^{2}}{2}\,\right]\mathbf{I}\right)+(\nabla\times\mathbf{w})\times\mathbf{v}=0,\] (54e) \[\frac{\partial(\rho E)}{\partial t}+\nabla\cdot\left(\rho\,\left[\mathbf{w}\cdot\mathbf{v}+(1-2c_{1})\,\frac{\|\mathbf{w}\|^{2}}{2}\,\right]\,c_{1}c_{2}\mathbf{w}\right)=0. \tag{54f}\]
Written in compact notation, we have
\[\frac{\partial\mathbf{q}}{\partial t}+\nabla\cdot\mathbf{f}^{\text{ex}}(\mathbf{q})+\mathbf{B}(\mathbf{q})\cdot\nabla\mathbf{q}=0. \tag{55}\]
Subsystem (54) is weakly hyperbolic, since it lacks one linearly independent eigenvector for the characteristic speed \(\lambda_{1}^{\mathbf{v}}\). The complete list of characteristic speeds is given by
\[\lambda^{0}=0,\quad\lambda_{1}^{\mathbf{v}}=\mathbf{v}\cdot\mathbf{n}\;(8\times),\quad \lambda_{2}^{\mathbf{v}}=(\mathbf{v}+(1-2c_{1})\mathbf{w})\cdot\mathbf{n}. \tag{56}\]
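In one space dimension, these characteristic speeds translate into a simple material time step bound, sketched below (our own illustration; the arrays are assumed cell data, and \(\nu\) is the CFL number of the advection-based restriction stated later in (59)):

```python
# Sketch (ours) of the material time step based on the speeds (56).
import numpy as np

def material_dt(v, w, c1, dx, nu=0.5):
    """Time step from lambda_1 = v and lambda_2 = v + (1 - 2 c1) w
    (1D, unit normal n = 1), cf. (56) and (59)."""
    lam1 = np.abs(v)
    lam2 = np.abs(v + (1.0 - 2.0 * c1) * w)
    return nu * dx / max(lam1.max(), lam2.max())
```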
Applying, e.g., the Rusanov numerical flux, a numerical solution of the weakly hyperbolic system (54) can be obtained. Further, rewriting pressures and chemical potentials in terms of \(\rho E\) and using the decomposition
\[\mu=\hat{\mu}+\tilde{\mu}\quad\text{and}\quad\partial\mu_{\text{RS}}=\left(\frac{\partial\mu}{\partial(\rho E)}\right)_{\text{RS}},\]
the implicitly treated subsystem is given by
\[\frac{\partial\alpha_{1}}{\partial t}=-\frac{1}{\tau^{(\alpha)}\rho}\left(\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{1}}{M_{1}^{2}}-\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}\frac{p_{2}}{M_{2}^{2}}\right), \tag{57a}\] \[\frac{\partial(\alpha_{1}\rho_{1})}{\partial t}=0,\] (57b) \[\frac{\partial(\alpha_{2}\rho_{2})}{\partial t}=0,\] (57c) \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot\left((\phi_{p}-1)\left(\rho E-\rho E_{\text{kin}}\right)\mathbf{I}+\rho c_{1}c_{2}\mathbf{w}\otimes\mathbf{w}\right)=0,\] (57d) \[\frac{\partial\mathbf{w}}{\partial t}+\nabla\cdot\left(\left[\partial\mu_{\text{RS}}\;\rho E+\hat{\mu}_{\text{RS}}+\tilde{\mu}\right]\mathbf{I}\right)=-\frac{c_{1}c_{2}\mathbf{w}}{\tau^{(\mathbf{w})}},\] (57e) \[\frac{\partial(\rho E)}{\partial t}+\nabla\cdot\left(\rho\mathbf{v}\left(\frac{\rho E+p}{\rho}\right)+\mu\;\rho c_{1}c_{2}\mathbf{w}\right)=0 \tag{57f}\]
which yields the corresponding compact form
\[\frac{\partial\mathbf{q}}{\partial t}+\nabla\cdot\mathbf{f}^{\text{im}}(\mathbf{q})=\mathbf{r}(\mathbf{q}). \tag{58}\]
In the following, we construct an IMEX scheme for two subsystems (55) and (58). We start with the time semi-discrete scheme.
### Time semi-discrete scheme
Let the time interval \((0,T_{f})\) be discretized by \(t^{n}=n\Delta t\), where \(\Delta t\) denotes the time step subject to a time step restriction based on a CFL condition given by
\[\Delta t\leq\nu\frac{\Delta x}{\max_{\mathbf{x}\in\Omega}\;(|\lambda_{1}^{\mathbf{v}} (\mathbf{x},t^{n})|,|\lambda_{2}^{\mathbf{v}}(\mathbf{x},t^{n})|)}. \tag{59}\]
Therein, \(\lambda_{1}^{\mathbf{v}}\), \(\lambda_{2}^{\mathbf{v}}\) are the characteristic speeds of the explicit subsystem (56) evaluated at time level \(t^{n}\). For a first order scheme in time, we apply the forward Euler method for the explicit subsystem (55)
\[\mathbf{q}^{(1)}=\mathbf{q}^{n}-\Delta t\nabla\cdot\mathbf{f}^{ex}(\mathbf{q}^{n})-\Delta tB( \mathbf{q}^{n})\cdot\nabla\mathbf{q}^{n} \tag{60}\]
and a backward Euler method for the implicit subsystem (57). We find that there are still some nonlinear terms present, yielding a nonlinear coupled system. Extending the approach from [4] for the Euler equations, we linearize certain flux terms in time, yielding the following time discretization
\[(\alpha_{1}\rho_{1})^{n+1} =(\alpha_{1}\rho_{1})^{(1)}, \tag{61a}\] \[(\alpha_{2}\rho_{2})^{n+1} =(\alpha_{2}\rho_{2})^{(1)},\] (61b) \[(\rho\mathbf{v})^{n+1} =(\rho\mathbf{v})^{(1)}-\Delta t\nabla\left((\phi_{p}^{n}-1)(\rho E)^{n+1}+\hat{p}^{n}\right)-\Delta t\nabla\cdot\left(\left(\rho c_{1}c_{2}\right)^{n+1}\mathbf{w}^{n}\otimes\mathbf{w}^{n}\right),\] (61c) \[\mathbf{w}^{n+1} =\mathbf{w}^{(1)}-\Delta t\nabla\left(\partial\mu_{\text{RS}}^{n}(\rho E)^{n+1}+\hat{\mu}_{\text{RS}}^{n}+\tilde{\mu}^{n}\right)-\frac{\Delta t}{\tau^{(\mathbf{w})}}\left(c_{1}c_{2}\right)^{n+1}\mathbf{w}^{n+1},\] (61d) \[(\rho E)^{n+1} =(\rho E)^{(1)}-\Delta t\nabla\cdot\left(\left(\rho\mathbf{v}\right)^{n+1}\frac{\left(\rho E+p\right)^{n}}{\rho^{n+1}}+\mu^{n}(\rho c_{1}c_{2})^{n+1}\mathbf{w}^{n+1}\right), \tag{61e}\]
where \(\hat{p}^{n}=-(\phi_{p}^{n}-1)\rho E_{\text{kin}}^{n}\). Rewriting the relative velocity equation with \(\rho^{n+1}=(\alpha_{1}\rho_{1})^{n+1}+(\alpha_{2}\rho_{2})^{n+1}\) implies
\[\mathbf{w}^{n+1}=\left(\frac{\tau^{(w)}}{\tau^{(w)}+\Delta t\left(c_{1}c_{2}\right) ^{n+1}}\right)\mathbf{w}^{(1)}-\left(\frac{\Delta t\ \tau^{(w)}}{\tau^{(w)}+\Delta t\left(c_{1}c_{2}\right)^{n+1}}\right)\nabla \left(\partial\mu_{\text{RS}}^{n}(\rho E)^{n+1}+\hat{\mu}_{\text{RS}}^{n}+ \tilde{\mu}^{n}\right). \tag{62}\]
Substituting the relative velocity and momentum in the total energy equation yields a linear implicit equation for the total energy given by
\[\begin{split}&(\rho E)^{n+1}-\Delta t^{2}\nabla\cdot\left(\frac{\left(\rho E+p\right)^{n}}{\rho^{n+1}}\nabla\left((\phi_{p}^{n}-1)(\rho E)^{n+1}\right)\right)-\Delta t^{2}\nabla\cdot\left(\left(\frac{\tau^{(w)}\mu^{n}(\rho c_{1}c_{2})^{n+1}}{\tau^{(w)}+\Delta t\left(c_{1}c_{2}\right)^{n+1}}\right)\nabla\left(\partial\mu_{\text{RS}}^{n}(\rho E)^{n+1}\right)\right)\\ &=(\rho E)^{(1)}-\Delta t\nabla\cdot\left(\frac{\left(\rho E+p\right)^{n}}{\rho^{n+1}}\left((\rho\mathbf{v})^{(1)}+\Delta t\nabla\left((\phi_{p}-1)\rho E_{\text{kin}}\right)^{n}-\Delta t\nabla\cdot\left(\left(\rho c_{1}c_{2}\right)^{n+1}\mathbf{w}^{n}\otimes\mathbf{w}^{n}\right)\right)\right)\\ &\quad-\Delta t\nabla\cdot\left(\left(\frac{\tau^{(w)}\mu^{n}(\rho c_{1}c_{2})^{n+1}}{\tau^{(w)}+\Delta t\left(c_{1}c_{2}\right)^{n+1}}\right)\mathbf{w}^{(1)}-\left(\frac{\Delta t\,\tau^{(w)}\mu^{n}(\rho c_{1}c_{2})^{n+1}}{\tau^{(w)}+\Delta t\left(c_{1}c_{2}\right)^{n+1}}\right)\nabla\left(\hat{\mu}_{\text{RS}}^{n}+\tilde{\mu}^{n}\right)\right).\end{split} \tag{63}\]
Having obtained the total energy \((\rho E)^{n+1}\) we can successively update the relative velocity
\[\mathbf{w}^{n+1}=\left(\frac{\tau^{(w)}}{\tau^{(w)}+\Delta t\left(c_{1}c_{2}\right) ^{n+1}}\right)\left(\mathbf{w}^{(1)}-\Delta t\nabla\mu^{n+1}\right) \tag{64}\]
and the momentum
\[(\rho\mathbf{v})^{n+1}=(\rho\mathbf{v})^{(1)}-\Delta t\nabla p^{n+1}-\Delta t\nabla\cdot\left(\left(\rho c_{1}c_{2}\right)^{n+1}\mathbf{w}^{n+1}\otimes\mathbf{w}^{n+1}\right). \tag{65}\]
Finally, the volume fraction at the next time level is obtained from the pressure relaxation
\[\frac{\partial\alpha_{1}}{\partial t}=-\frac{1}{\tau^{(\alpha)}\rho}\left(\frac {p_{1}}{M_{1}^{2}}-\frac{p_{2}}{M_{2}^{2}}\right). \tag{66}\]
Rewriting the source term in terms of the state variables \(\mathbf{q}\), we find
\[\frac{\partial\alpha_{1}}{\partial t}=-\frac{1}{\tau^{(\alpha)}}\left(\frac{(\gamma_{1}-1)c_{v,1}c_{1}}{\alpha_{1}}-\frac{(\gamma_{2}-1)c_{v,2}c_{2}}{1-\alpha_{1}}\right)\frac{\rho E-\rho E_{\text{kin}}}{\alpha_{1}\rho_{1}c_{v,1}+\alpha_{2}\rho_{2}c_{v,2}} \tag{67}\]
which is a nonlinear ordinary differential equation in \(\alpha_{1}\). It can be solved implicitly by applying the backward Euler scheme, with a Newton algorithm for the resulting nonlinear implicit system. This concludes the time semi-discrete scheme.
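A per-cell sketch of this backward Euler/Newton update is given below (ours, assuming the notation of (67); the partial densities \(\alpha_{l}\rho_{l}\) and the internal energy \(\rho E-\rho E_{\text{kin}}\) are frozen during the relaxation step, so only \(\alpha_{1}\) varies):

```python
# Minimal per-cell sketch (ours) of the implicit pressure relaxation:
# backward Euler for the nonlinear ODE (67), solved by Newton iterations.
def relax_alpha(alpha0, a1r1, a2r2, rhoe, dt, tau,
                gamma1=1.4, gamma2=2.0, cv1=1.0, cv2=1.0,
                tol=1e-12, maxit=50):
    rho = a1r1 + a2r2
    c1, c2 = a1r1 / rho, a2r2 / rho            # mass fractions (frozen)
    denom = a1r1 * cv1 + a2r2 * cv2            # alpha_1 rho_1 c_v1 + alpha_2 rho_2 c_v2
    K = rhoe / (tau * denom)                   # rhoe = rho E - rho E_kin (frozen)

    def f(a):                                  # right-hand side of (67)
        return -K * ((gamma1 - 1) * cv1 * c1 / a - (gamma2 - 1) * cv2 * c2 / (1 - a))

    def df(a):                                 # its derivative, for Newton
        return -K * (-(gamma1 - 1) * cv1 * c1 / a**2 - (gamma2 - 1) * cv2 * c2 / (1 - a)**2)

    a = alpha0
    for _ in range(maxit):                     # solve a - alpha0 - dt*f(a) = 0
        step = (a - alpha0 - dt * f(a)) / (1.0 - dt * df(a))
        a = min(max(a - step, 1e-12), 1.0 - 1e-12)   # keep 0 < alpha_1 < 1
        if abs(step) < tol:
            break
    return a
```

We proceed with the construction of the fully discrete scheme in the next section.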
### Fully discrete scheme
In time, we set as before \(t^{n+1}=t^{n}+\Delta t\), where \(\Delta t\) obeys the CFL condition (59). In space, we consider a two-dimensional computational domain \(\Omega\) divided into cells \(\Omega_{I}=[x_{1,i-1/2},x_{1,i+1/2}]\times[x_{2,j-1/2},x_{2,j+1/2}]\) with \(\mathbf{x}=(x_{1},x_{2})^{T}\). The common edge between two neighboring cells \(\Omega_{I}\) and \(\Omega_{J}\) is denoted by \(\partial\Omega_{IJ}\), the unit normal vector pointing from cell \(\Omega_{I}\) to \(\Omega_{J}\) by \(\mathbf{n}_{IJ}\), and the set of neighbors of \(\Omega_{I}\) by \(\mathcal{N}_{I}\). We consider a uniform mesh size \(\Delta x_{1},\Delta x_{2}\) in each direction and the barycenter of \(\Omega_{I}\) is denoted by \(\mathbf{x}_{I}=(i\Delta x_{1},j\Delta x_{2})\) for \(i,j=1,\ldots,N\). We use a finite volume framework, where the solution on the cell \(\Omega_{I}\) at time \(t^{n}\) is approximated by the average given by
\[\mathbf{q}_{I}^{n}\approx\frac{1}{|\Omega_{I}|}\int_{\Omega_{I}}\mathbf{q}(\mathbf{x},t^{ n})\;d\mathbf{x}. \tag{68}\]
A fully discrete finite volume (FV) method for (55) reads
\[\mathbf{q}_{I}^{(1)}=\mathbf{q}_{I}^{n}-\Delta t\sum_{K\in\mathcal{N}_{I}}\frac{| \partial\Omega_{IK}|}{|\Omega_{I}|}\big{(}\mathbf{F}^{\text{ex}}(\mathbf{q}_{I}^{n}, \mathbf{q}_{K}^{n})\cdot\mathbf{n}_{IK}+\mathbf{D}(\mathbf{q}_{I}^{n},\mathbf{q}_{K}^{n})\cdot\bm {n}_{IK}\big{)}, \tag{69}\]
using a Rusanov numerical flux
\[\mathbf{F}^{\text{ex}}(\mathbf{q}_{I}^{n},\mathbf{q}_{K}^{n})\cdot\mathbf{n}_{IK}=\frac{1}{2}\big{(}\big{(}\mathbf{f}^{\text{ex}}(\mathbf{q}_{I}^{n})+\mathbf{f}^{\text{ex}}(\mathbf{q}_{K}^{n})\big{)}\cdot\mathbf{n}_{IK}-s_{IK}\;\mathbf{I}\;(\mathbf{q}_{K}^{n}-\mathbf{q}_{I}^{n})\big{)}, \tag{70}\]
where \(s_{IK}=\max_{k}(|\lambda_{k}(\mathbf{q}_{I})|,|\lambda_{k}(\mathbf{q}_{K})|)\) denotes the maximum eigenvalue at the interface \(\partial\Omega_{IK}\). The non-conservative product is approximated in the following way
\[\mathbf{D}(\mathbf{q}_{I}^{n},\mathbf{q}_{K}^{n})\cdot\mathbf{n}_{IK}=\frac{1}{2}\mathbf{B}\big{(}\tilde{\mathbf{q}}^{n}\big{)}\cdot(\mathbf{q}_{K}^{n}-\mathbf{q}_{I}^{n}),\quad\tilde{\mathbf{q}}=\frac{1}{2}\big{(}\mathbf{q}_{K}^{n}+\mathbf{q}_{I}^{n}\big{)}. \tag{71}\]
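For orientation, a one-dimensional sketch of the explicit update (69) with the Rusanov flux (70) and the centered non-conservative term (71) could look as follows (our own illustration with periodic boundaries; `f_ex`, `B` and `eigmax` are assumed user-supplied callables encoding the explicit flux, the non-conservative matrix and the speeds (56)):

```python
# 1D sketch (ours, not the actual implementation) of one explicit step (69).
import numpy as np

def rusanov_step(q, dx, dt, f_ex, B, eigmax):
    """q has shape (ncells, nvars); periodic boundaries via np.roll."""
    qL, qR = q, np.roll(q, -1, axis=0)             # states at each face i+1/2
    s = np.maximum(eigmax(qL), eigmax(qR))[:, None]
    F = 0.5 * (f_ex(qL) + f_ex(qR)) - 0.5 * s * (qR - qL)   # Rusanov flux (70)
    qm = 0.5 * (qL + qR)
    D = 0.5 * np.einsum('kij,kj->ki', B(qm), qR - qL)       # centered term (71)
    # flux divergence plus non-conservative fluctuations from both faces
    return q - dt / dx * (F - np.roll(F, 1, axis=0)
                          + D + np.roll(D, 1, axis=0))
```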
The implicit elliptic equation for the total energy (63) is based on centered finite difference approximation for the space discretization and can be formulated on cell \(C_{I}\) in the following way
\[\begin{split}\big{(}\rho E\big{)}_{I}^{n+1}-\Delta t^{2}\left(\mathcal{L}_{I}^{n}\big{(}\rho E\big{)}_{I}^{n+1}+\mathcal{K}_{I}^{n}\big{(}\rho E\big{)}_{I}^{n+1}\right)&=\big{(}\rho E\big{)}_{I}^{(1)}-\Delta t\sum_{K\in\mathcal{N}_{I}}\frac{|\partial\Omega_{IK}|}{|\Omega_{I}|}\mathcal{F}(\mathbf{q}_{I}^{(1)},\mathbf{q}_{K}^{(1)})\cdot\mathbf{n}_{IK}\\ &\quad\quad-\Delta t^{2}\mathcal{L}_{I}^{n}\big{(}\rho E_{\text{kin}}^{n}\big{)}-\Delta t^{2}\mathcal{K}_{I}^{n}\big{(}(\rho E)_{\text{RS}}^{n}-(\partial\mu_{\text{RS}}^{n})^{-1}\tilde{\mu}^{n}\big{)}.\end{split} \tag{72}\]
Here, the weighted Laplacians are discretized as follows
\[\mathcal{L}_{I}\big{(}\rho E\big{)}_{I}=\sum_{K\in\mathcal{N}_{I}}\frac{| \partial\Omega_{IK}|}{|\Omega_{I}|}G_{1}(\mathbf{q}_{I},\mathbf{q}_{K})\big{[}H_{1}( \rho E)\big{]}\,(\mathbf{q}_{I},\mathbf{q}_{K}),\quad\mathcal{K}_{I}\big{(}\rho E\big{)} _{I}=\sum_{K\in\mathcal{N}_{I}}\frac{|\partial\Omega_{IK}|}{|\Omega_{I}|}G_{2} (\mathbf{q}_{I},\mathbf{q}_{K})\big{[}H_{2}(\rho E)\big{]}\,(\mathbf{q}_{I},\mathbf{q}_{K}) \tag{73}\]
with
\[G_{k}(\mathbf{q}_{I},\mathbf{q}_{K})=\frac{1}{2}\big{(}g_{k}(\mathbf{q}_{I})+g_{k}(\mathbf{q}_ {K})\big{)},\quad H_{k}(\mathbf{q}_{I},\mathbf{q}_{K})=\frac{|\partial\Omega_{IK}|}{| \Omega_{I}|}\big{(}h_{k}(\mathbf{q}_{I})-h_{k}(\mathbf{q}_{K})\big{)},\quad k=1,2 \tag{74}\]
where
\[g_{1}=\frac{(\rho E+p)^{n}}{\rho^{n+1}},\quad h_{1}=\phi_{p}^{n}-1,\quad g_{2} =\frac{\tau^{(w)}\mu^{n}(\rho c_{1}c_{2})^{n+1}}{\tau^{(w)}+\Delta t\,(c_{1}c_{ 2})^{n+1}},\quad h_{2}=\partial\mu_{\text{RS}}^{n}. \tag{75}\]
The divergence terms are approximated as
\[\mathcal{F}(\mathbf{q}_{I}^{(1)},\mathbf{q}_{K}^{(1)})\cdot\mathbf{n}_{IK}=\frac{1}{2} \Big{(}g_{1}(\mathbf{q}_{I})\rho\mathbf{v}_{I}^{(1)*}+g_{1}(\mathbf{q}_{K})\rho\mathbf{v}_{K}^{ (1)*}\Big{)}\cdot\mathbf{n}_{IK}, \tag{76}\]
where \(\rho\mathbf{v}^{(1)*}\) contains the relative velocity flux component \(\big{(}(\rho c_{1}c_{2})^{n+1}\mathbf{w}^{n}\otimes\mathbf{w}^{n}\big{)}\), discretized with centered differences analogously to (76). The coefficient matrix resulting from the linear equation (72) is strictly diagonally dominant. Therefore, the linear system of equations has a unique solution independent of the Mach number regime. Numerically, it is solved by a preconditioned linear iterative GMRES solver provided by PETSc [2].
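As a simplified illustration of this linear solve (ours; a one-dimensional periodic grid and a constant coefficient \(h\) replace the full two-dimensional PETSc setup), the structure of the resulting system \((\mathbf{I}-\Delta t^{2}L)\,\rho E=\text{rhs}\) can be sketched with SciPy's GMRES:

```python
# Sketch (ours) of the implicit energy solve: assemble the 1D weighted
# Laplacian with central differences and solve the linear system by GMRES.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

def solve_energy(g, h, rhs, dx, dt):
    """Solve (I - dt^2 L) rhoE = rhs on a periodic 1D grid, where
    L rhoE ~ d/dx( g d( h rhoE )/dx ) with cellwise weights g (cf. (73)-(75))
    and, for simplicity, a constant coefficient h."""
    n = rhs.size
    gf = 0.5 * (g + np.roll(g, -1)) * h / dx**2     # face weights g_{i+1/2}
    main = 1.0 + dt**2 * (gf + np.roll(gf, 1))      # strictly dominant diagonal
    A = sp.lil_matrix((n, n))
    A.setdiag(main)
    A.setdiag(-dt**2 * gf[:-1], 1)
    A.setdiag(-dt**2 * gf[:-1], -1)
    A[0, n - 1] = -dt**2 * gf[-1]                   # periodic couplings
    A[n - 1, 0] = -dt**2 * gf[-1]
    x, info = gmres(A.tocsr(), rhs)                 # PETSc + preconditioner in practice
    assert info == 0
    return x
```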
Once the energy \(\rho E\) is computed at \(t^{n+1}\), the relative velocity (64) and momentum (65) are updated consecutively by the FV method
\[\begin{split}\mathbf{w}_{I}^{n+1}&=\mathbf{w}_{I}^{(1)}-\Delta t\sum_{K\in\mathcal{N}_{I}}\frac{|\partial\Omega_{IK}|}{|\Omega_{I}|}\mathbf{F}_{(\mathbf{w})}^{\text{im}}(\mathbf{q}_{I}^{n+1},\mathbf{q}_{K}^{n+1})\cdot\mathbf{n}_{IK}+\Delta t\,\mathbf{r}_{(\mathbf{w})}(\mathbf{q}_{I}^{n+1}),\\ \rho\mathbf{v}_{I}^{n+1}&=\rho\mathbf{v}_{I}^{(1)}-\Delta t\sum_{K\in\mathcal{N}_{I}}\frac{|\partial\Omega_{IK}|}{|\Omega_{I}|}\mathbf{F}_{(\rho\mathbf{v})}^{\text{im}}(\mathbf{q}_{I}^{n+1},\mathbf{q}_{K}^{n+1})\cdot\mathbf{n}_{IK}.\end{split} \tag{77}\]
The numerical flux \(\mathbf{F}^{\text{im}}\) is constructed by the finite difference approximation defined analogously as in (76) based on the implicit flux \(\mathbf{f}^{\text{im}}\). The update of the volume fraction is approximated by the backward Euler method
\[\alpha_{I}^{n+1}=\alpha_{I}^{(1)}+\Delta t\,\mathbf{r}_{(\mathbf{\alpha})}(\mathbf{q}_{I}^{n+1}), \tag{78}\]
where \(\mathbf{r}_{(\mathbf{\alpha})}\) denotes the pressure relaxation source term. Update (78) approximates on each cell the corresponding ordinary differential equation (ODE) independently. To solve the nonlinear implicit system arising from the backward Euler discretization of the ODE, a Newton algorithm is applied.
The above procedure fits in the framework of IMEX Runge-Kutta methods, using a forward Euler scheme for the explicit and a backward Euler scheme for the implicit subsystem. The corresponding Butcher tableaux are given in Table 1. For an \(s\) stage IMEX Runge-Kutta method, the Butcher tableaux \((A,b,d)\) for the implicit and \((\bar{A},\bar{b},\bar{d})\) for the explicit parts are given by
\[A=\begin{bmatrix}a_{11}&\cdots&0\\ \vdots&\ddots&\vdots\\ a_{s1}&\cdots&a_{ss}\end{bmatrix},\quad b=\begin{bmatrix}b_{1}\\ \vdots\\ b_{s}\end{bmatrix}\quad d=\begin{bmatrix}d_{1}\\ \vdots\\ d_{s}\end{bmatrix},\quad\bar{A}=\begin{bmatrix}\bar{a}_{11}&\cdots&0\\ \vdots&\ddots&\vdots\\ \bar{a}_{s1}&\cdots&\bar{a}_{ss}\end{bmatrix},\quad\bar{b}=\begin{bmatrix} \bar{b}_{1}\\ \vdots\\ \vdots\\ \bar{b}_{s}\end{bmatrix}\quad\bar{d}=\begin{bmatrix}\bar{d}_{1}\\ \vdots\\ \vdots\\ \bar{d}_{s}\end{bmatrix}, \tag{79}\]
Consequently, the IMEX method can be written as
\[\mathbf{q}_{I}^{n+1}=\mathbf{q}_{I}^{n}-\Delta t\sum_{k=1}^{s}\bar{b}_{k}\Big{(}\nabla\cdot\mathbf{f}^{\text{ex}}(\mathbf{q}^{(k)})+\mathbf{B}(\mathbf{q}^{(k)})\cdot\nabla\mathbf{q}^{(k)}\Big{)}+b_{k}\Big{(}\nabla\cdot\mathbf{f}^{\text{im}}(\mathbf{q}^{(k)})-\mathbf{r}(\mathbf{q}^{(k)})\Big{)}, \tag{80}\]
with the stages
\[\mathbf{q}_{I}^{(k)}=\mathbf{q}_{I}^{n}-\Delta t\sum_{i=1}^{k-1}\bar{a}_{ki}\Big{(}\nabla\cdot\mathbf{f}^{\text{ex}}(\mathbf{q}^{(i)})+\mathbf{B}(\mathbf{q}^{(i)})\cdot\nabla\mathbf{q}^{(i)}\Big{)}-\Delta t\sum_{i=1}^{k}a_{ki}\Big{(}\nabla\cdot\mathbf{f}^{\text{im}}(\mathbf{q}^{(i)})-\mathbf{r}(\mathbf{q}^{(i)})\Big{)}. \tag{81}\]
\begin{table}
\begin{tabular}{c|c|c} IMEX-RK scheme & implicit tableau \(\begin{array}{c|c}d&A\\ \hline &b^{T}\end{array}\) & explicit tableau \(\begin{array}{c|c}\bar{d}&\bar{A}\\ \hline &\bar{b}^{T}\end{array}\) \\ \hline\hline Backward/Forward Euler & \(\begin{array}{c|c}1&1\\ \hline &1\end{array}\) & \(\begin{array}{c|c}0&0\\ \hline &1\end{array}\) \\ \hline ARS(2,2,2) & \(\begin{array}{c|ccc}0&0&0&0\\ \gamma&0&\gamma&0\\ 1&0&1-\gamma&\gamma\\ \hline &0&1-\gamma&\gamma\end{array}\) & \(\begin{array}{c|ccc}0&0&0&0\\ \gamma&\gamma&0&0\\ 1&\delta&1-\delta&0\\ \hline &\delta&1-\delta&0\end{array}\) \\ \end{tabular}
\end{table}
Table 1: Butcher tableaux of the first order backward/forward Euler IMEX scheme and of the second order ARS(2,2,2) scheme, with \(\gamma=1-\frac{1}{\sqrt{2}}\) and \(\delta=1-\frac{1}{2\gamma}\).
To be consistent with the asymptotic limit, we apply globally stiffly accurate (GSA) IMEX-RK methods for the time discretization. For the first order method, the forward/backward Euler pair is applied; for the second order method, ARS(2,2,2) is used, see Table 1. Second order accuracy in space is achieved by a second order reconstruction with a minmod limiter in the Rusanov numerical flux (70). In the implicit part, central finite differences yield second order accuracy. Note that for GSA Runge-Kutta methods, the final update (80) coincides with the last computational stage (81) and thus does not need to be performed.
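A schematic driver for such a GSA IMEX-RK step with the ARS(2,2,2) tableaux is sketched below (our own abstraction; `Fex` stands for the explicit operator \(\nabla\cdot\mathbf{f}^{\text{ex}}+\mathbf{B}\cdot\nabla\mathbf{q}\), `Fim` for \(\nabla\cdot\mathbf{f}^{\text{im}}-\mathbf{r}\), and `solve_im(rhs, c)` is assumed to return the \(\mathbf{q}\) solving \(\mathbf{q}+c\,(\nabla\cdot\mathbf{f}^{\text{im}}(\mathbf{q})-\mathbf{r}(\mathbf{q}))=\text{rhs}\)):

```python
# Generic GSA IMEX-RK stepper (ours) realizing the stages (81) with the
# ARS(2,2,2) tableaux of Table 1.
import numpy as np

g = 1.0 - 1.0 / np.sqrt(2.0)        # gamma in Table 1
d = 1.0 - 1.0 / (2.0 * g)           # delta in Table 1
A_ex = np.array([[0, 0, 0], [g, 0, 0], [d, 1 - d, 0]])   # explicit tableau
A_im = np.array([[0, 0, 0], [0, g, 0], [0, 1 - g, g]])   # implicit tableau

def ars222_step(q, dt, Fex, Fim, solve_im):
    """One ARS(2,2,2) step; GSA, so the last stage equals the update (80)."""
    stages, fim = [], []
    for k in range(3):
        rhs = q.copy()
        for i in range(k):          # contributions of the earlier stages
            rhs = rhs - dt * (A_ex[k, i] * Fex(stages[i]) + A_im[k, i] * fim[i])
        qk = solve_im(rhs, dt * A_im[k, k]) if A_im[k, k] else rhs
        stages.append(qk)
        fim.append(Fim(qk))
    return stages[-1]
```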
## 4 Asymptotic preserving property
Motivated by the analysis in Section 2.2, we consider the case \(M_{1}=M_{2}=M\ll 1\). For the cases \(M_{1}\neq M_{2}\) and \(M_{1}\approx 1\geq M_{2}>0\) we refer to the study of the isentropic case performed in [21]. The principle idea is the same and the proof can be performed along the lines presented in [21] combined with the analysis for the case \(M_{1}=M_{2}\) presented here.
Applying an analogous asymptotic analysis as in Section 2.2 on the semi-discrete scheme consisting of (60), (63), (64), (65), (78) and using well-prepared initial data as defined in Definition 2.2, we can prove the asymptotic preserving (AP) property for the semi-discrete scheme.
**Theorem 4.1** (Asymptotic preserving property).: _The first order RS-IMEX FV scheme consisting of the explicit part (60), the linear implicit elliptic system (63) and the implicit updates (64), (65) and (78) is asymptotic preserving up to \(\mathcal{O}(\Delta t)\). More precisely, for well-prepared initial data \(q^{0}\in\Omega_{wp}^{M}\) the RS-IMEX FV scheme yields a consistent approximation of limit equations (40) up to \(\mathcal{O}(\Delta t)\)._
Proof.: For the proof of the AP property we refer the reader to Appendix A.
We want to point out that the \(\mathcal{O}(\Delta t)\) errors arising in the velocity equation and the divergence free constraint at leading order are due to the non-constant volume fraction in the low Mach number limit. For single phase flow, i.e. the Euler equations, or homogeneous mixtures, i.e. constant \(\alpha\), we obtain the stronger result \(\nabla\cdot\mathbf{v}^{n+1}=\mathcal{O}(M^{2})\).
**Corollary 4.2** (Asymptotic preserving property for constant volume fraction).: _For constant volume fraction or single phase flows, we have_
\[\mathbf{v}_{(0)}^{n+1}=\mathbf{v}_{(0)}^{(1)}-\Delta t\mathbf{v}_{(0)}^{n}\cdot\nabla\mathbf{ v}_{(0)}^{n}-\Delta t\frac{\nabla p_{(2)}^{n+1}}{\rho^{n+1}}. \tag{82}\]
_Moreover, the energy update_
\[(\rho e)_{(0)}^{n+1}=(\rho e)_{(0)}^{n}-\Delta t\nabla\cdot\big{(}(\rho e+p)_{(0)}^{n}\mathbf{v}_{(0)}^{n+1}\big{)} \tag{83}\]
_and \(T_{(0)}^{n+1}=T_{RS}+\mathcal{O}(M^{2})\) yield \(\nabla\cdot\mathbf{v}_{(0)}^{n+1}=\mathcal{O}(M^{2})\) since \((\rho e+p)_{(0)}^{n}\) is constant. Consequently, for well-prepared initial data, the RS-IMEX FV scheme gives a consistent approximation of the limit equations as the Mach number tends to 0 independently of \(\Delta t\)._
Proof.: The proof can be done following the lines of the proof of Theorem 4.1. Due to \(\rho^{n}\) and \(\rho^{n+1}\) being a convex combination of the states \(\rho_{1,\mathrm{RS}}\) and \(\rho_{2,\mathrm{RS}}\), they are constant. As a consequence, we obtain (82). Further, \(\nabla\cdot\mathbf{v}_{(0)}^{n+1}=\mathcal{O}(M^{2})\) since \((\rho e+p)_{(0)}^{n}\) is constant. As the initial data are well-prepared, i.e. \(q^{0}\in\Omega_{wp}^{M}\), we also obtain recursively \(q^{n}\in\Omega_{wp}^{M}\) for all successive time iterations \(n>0\).
## 5 Numerical results
In this section, we illustrate the theoretical properties of the first and second order RS-IMEX FV schemes proposed in Section 3, denoted respectively by RS-IMEX1 and RS-IMEX2, by means of numerical experiments. All test cases are performed
under the material CFL condition (59) based on the eigenvalues of the explicit subsystem (55) which are of the order of the advection scale. The initial conditions, if not mentioned otherwise, are given in dimensional form using the transformations (15), (35) and the definition of the Mach numbers (16). Whenever possible, we compare the numerical results obtained with our RS-IMEX FV schemes with an exact or reference solution of the two-fluid model with single temperature (10).
### Numerical convergence study
To verify the experimental order of convergence (EOC), we construct an exact solution of the homogeneous two-fluid single temperature model (10) given by a stationary vortex. It is obtained by considering zero radial velocities and a solution that is constant in time and in the angular direction, i.e.
\[v_{r}=0,\quad w_{r}=0,\quad\frac{\partial}{\partial t}\left(\cdot\right)=0, \quad\frac{\partial}{\partial\theta}\left(\cdot\right)=0. \tag{84}\]
In Appendix B, the two-phase model (10) without the relaxation source terms is written in polar coordinates (118). Applying (84), it reduces to
\[\frac{\partial p}{\partial r}=\frac{\alpha_{1}\rho_{1}v_{\theta 1}^{2}+\alpha_{2}\rho_{2}v_{\theta 2}^{2}}{r}, \tag{85a}\] \[\frac{\partial}{\partial r}\left(\frac{v_{\theta 1}^{2}-v_{\theta 2}^{2}}{2}+\mu_{1}-\mu_{2}\right)-v_{\theta}\left(\frac{1}{r}\frac{\partial}{\partial r}\left(r\,w_{\theta}\right)\right)=0, \tag{85b}\]
with \(p=\alpha_{1}p_{1}+\alpha_{2}p_{2},\ w_{\theta}=v_{\theta 1}-v_{\theta 2},\ v_{\theta}=c_{1}v_{\theta 1}+c_{2}v_{\theta 2}\). We set the phase velocities and the profile for the volume fraction to
\[v_{\theta l}=r\,v_{c,l}\exp\left(\nu_{v,l}(1-r^{2})\right)\quad\text{and} \quad\alpha_{1}=c_{\alpha}+\alpha_{c}\exp\left(\nu_{\alpha}(1-r^{2})\right),\]
respectively. This yields two equations for three unknowns \(\rho_{1},\ \rho_{2}\) and \(T\). To eliminate one unknown, we set \(\rho_{2}=c_{\rho}\rho_{1}\) with \(c_{\rho}\) being constant. Then, the unknowns \(\rho_{1}\) and \(T\) can be determined via the following system of ordinary differential equations
\[\begin{pmatrix}\alpha_{1}\frac{\partial p_{1}}{\partial\rho_{1}}+\alpha_{2}c_{\rho}\frac{\partial p_{2}}{\partial\rho_{2}}&\alpha_{1}\frac{\partial p_{1}}{\partial T}+\alpha_{2}\frac{\partial p_{2}}{\partial T}\\ \frac{\partial\mu_{1}}{\partial\rho_{1}}-c_{\rho}\frac{\partial\mu_{2}}{\partial\rho_{2}}&\frac{\partial\mu_{1}}{\partial T}-\frac{\partial\mu_{2}}{\partial T}\end{pmatrix}\begin{pmatrix}\frac{\partial\rho_{1}}{\partial r}\\ \frac{\partial T}{\partial r}\end{pmatrix}=\begin{pmatrix}\frac{\alpha_{1}\rho_{1}v_{\theta 1}^{2}+\alpha_{2}\rho_{2}v_{\theta 2}^{2}}{r}-p_{1}\frac{\partial\alpha_{1}}{\partial r}+p_{2}\frac{\partial\alpha_{1}}{\partial r}\\ -\frac{\partial}{\partial r}\left(\frac{v_{\theta 1}^{2}-v_{\theta 2}^{2}}{2}\right)+v_{\theta}\left(\frac{1}{r}\frac{\partial}{\partial r}\left(r\,w_{\theta}\right)\right)\end{pmatrix}. \tag{86}\]
Applying the ideal gas law yields
\[\frac{\partial p_{1}}{\partial\rho_{1}} =(\gamma_{1}-1)c_{v,1}\,T,\quad\frac{\partial p_{2}}{\partial\rho_{2}}=(\gamma_{2}-1)c_{v,2}\,T \tag{87a}\] \[\frac{\partial p_{1}}{\partial T} =(\gamma_{1}-1)c_{v,1}\rho_{1},\quad\frac{\partial p_{2}}{\partial T}=(\gamma_{2}-1)c_{v,2}\rho_{2}\] (87b) \[\frac{\partial\mu_{1}}{\partial\rho_{1}} =\frac{(\gamma_{1}-1)c_{v,1}\,T}{\rho_{1}},\quad\frac{\partial\mu_{2}}{\partial\rho_{2}}=\frac{(\gamma_{2}-1)c_{v,2}\,T}{\rho_{2}}\] (87c) \[\frac{\partial\mu_{1}}{\partial T} =(\gamma_{1}-1)c_{v,1}-s_{1},\quad\frac{\partial\mu_{2}}{\partial T}=(\gamma_{2}-1)c_{v,2}-s_{2}. \tag{87d}\]
To obtain the initial condition on the computational domain \([-1,1]^{2}\), we integrate (86) numerically with RK4, starting from the initial data \(\rho_{1}=1,\rho_{2}=1,\ T=2\). The parameters determining the velocities \(v_{\theta l}\) and the volume fraction \(\alpha_{1}\) are set as
\[c_{\alpha}=0.4,\quad\alpha_{c}=10^{-4},\quad\nu_{\alpha}=10,\quad v_{c,1}=2 \cdot 10^{-5},\quad v_{c,2}=2.5\cdot 10^{-5},\quad\nu_{v,1}=15,\quad\nu_{v,2}=14. \tag{88}\]
This setting yields two different phase velocities and consequently a non-zero relative velocity. To obtain a vortex in the compressible flow regime, we assign the following material parameters
\[\gamma_{1}=\frac{7}{5},\quad\gamma_{2}=\frac{5}{3},\quad c_{v,1}=1,\quad c_{v,2}=1. \tag{89}\]
The maximal Mach number for phase 1 is 0.62, for phase 2 it is 0.21 and the maximal mixture Mach number is 0.54. Consequently, the flow is compressible.
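The radial integration used to build this initial condition can be sketched compactly (ours; `matrix` and `vector` are assumed callables assembling the \(2\times 2\) coefficients of (86) from the partial derivatives (87)):

```python
# Sketch (ours) of the RK4 construction of the vortex initial data:
# the linear system M(r,y) y' = b(r,y) of (86), with y = (rho_1, T),
# is inverted pointwise and integrated radially.
import numpy as np

def rk4_radial(rhs, y0, r0, r1, n):
    """Integrate y' = rhs(r, y) from r0 to r1 with n classical RK4 steps."""
    h, r, y = (r1 - r0) / n, r0, np.asarray(y0, float)
    for _ in range(n):
        k1 = rhs(r, y)
        k2 = rhs(r + h / 2, y + h / 2 * k1)
        k3 = rhs(r + h / 2, y + h / 2 * k2)
        k4 = rhs(r + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += h
    return y

def make_rhs(matrix, vector):
    """Turn M(r,y) y' = b(r,y) into y' = rhs(r, y) by a 2x2 solve."""
    return lambda r, y: np.linalg.solve(matrix(r, y), vector(r, y))
```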
Since the sound speeds depend on the magnitude of the pressures, which themselves depend on \(c_{v,l}\), we scale \(c_{v,l}\) with the inverse square of the Mach number \(M\) to achieve flows in a desired Mach number regime. In the next test case, we set \(c_{v,l}/M^{2}\), which yields a maximum Mach number of 0.018 for phase 1 and 0.014 for phase 2 and a mixture Mach number of 0.016. Setting
\[\gamma_{1}=2,\quad\gamma_{2}=2.8,\quad c_{v,1}=20,\quad c_{v,2}=20, \tag{90}\]
the vortex flow is now weakly compressible. Note that, according to Definition 2.2, the initial data are ill-prepared since the phase densities are not constant. However, we see from Tables 2 and 3 that the numerical scheme RS-IMEX2 FV converges with the expected EOC of two for both Mach number regimes. The results are obtained with a material CFL condition (59) with \(\nu=0.25\).
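For reference, the EOC values reported in Tables 2 and 3 follow from the \(L^{1}\) errors on successively refined meshes as \(\text{EOC}=\log_{2}(e_{N}/e_{2N})\); a minimal helper (ours) reads:

```python
# Experimental order of convergence for factor-2 mesh refinements.
import numpy as np

def eoc(errors):
    """errors: L1 errors on meshes with N, 2N, 4N, ... cells."""
    e = np.asarray(errors, float)
    return np.log2(e[:-1] / e[1:])

# e.g. the alpha_1 column of Table 2:
print(eoc([6.31e-3, 1.15e-3, 2.20e-4, 4.69e-5]))  # approx. [2.45, 2.38, 2.23]
```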
### 1D Riemann Problems
To test the first and second order versions of the RS-IMEX FV scheme in a high Mach number regime, we consider two Riemann Problems (RPs) for the homogeneous system (10) on the domain \([0,1]\). The initial configuration is given in Table 4 and the initial jump position is located at \(x=0.5\). The first RP (RP1) consists of an initial jump in density of phase two and the temperature where the volume fraction is kept constant. The second RP (RP2) is a double rarefaction test with an initial jump in the volume fraction resulting in a discontinuous mixture density and internal energy. In Figure 1, we compare the results for the first and second order RS-IMEX FV schemes using 2000 cells and a material CFL condition \(\Delta t=\nu\Delta x\). For the RS-IMEX1 FV scheme we set \(\nu=0.8\) and for the RS-IMEX2 FV scheme \(\nu=0.4\). This results in \(\Delta t=4\cdot 10^{-4}\) and \(2\cdot 10^{-4}\), respectively. The reference solution was computed by a second order explicit SSP-RK2 FV scheme using 10000 cells resulting in \(\Delta t=7.65\cdot 10^{-6}\). Note that the CFL condition for the explicit scheme is dictated by the fastest wave speed arising in the model which depends on the
\begin{table}
\begin{tabular}{c c c c c c c c} & 16 & \multicolumn{2}{c}{32} & \multicolumn{2}{c}{64} & \multicolumn{2}{c}{128} \\ \hline \(a_{1}\) & 6.31E-03 & — & 1.15E-03 & 2.45 & 2.20E-04 & 2.38 & 4.69E-05 & 2.23 \\ \(\rho_{1}\) & 2.98E-02 & — & 9.44E-03 & 1.65 & 2.24E-03 & 2.07 & 3.94E-04 & 2.50 \\ \(\rho_{2}\) & 2.78E-02 & — & 8.21E-03 & 1.75 & 1.60E-03 & 2.35 & 2.56E-04 & 2.64 \\ \(\mathbf{v}_{1,1}\) & 5.51E-02 & — & 1.40E-02 & 1.98 & 2.44E-03 & 2.51 & 3.33E-04 & 2.87 \\ \(\mathbf{v}_{2,1}\) & 5.51E-02 & — & 1.40E-02 & 1.97 & 2.45E-03 & 2.51 & 3.41E-04 & 2.84 \\ \(\mathbf{v}_{1,2}\) & 6.85E-02 & — & 1.58E-02 & 2.11 & 2.50E-03 & 2.65 & 3.41E-04 & 2.87 \\ \(\mathbf{v}_{2,2}\) & 6.85E-02 & — & 1.58E-02 & 2.11 & 2.49E-03 & 2.66 & 3.51E-04 & 2.82 \\ \(T\) & 4.45E-02 & — & 1.84E-02 & 1.27 & 3.92E-03 & 2.23 & 6.54E-04 & 2.58 \\ \hline \end{tabular}
\end{table}
Table 2: Two-fluid stationary vortex: \(L^{1}\) error and EOC for the second order RS-IMEX FV scheme in the compressible regime with parameters given in (89).
sound speeds of the respective phases, see [31]. Moreover, a comparable time step of the explicit scheme on 2000 cells is \(3.3\cdot 10^{-5}\), which is 10 times smaller than the one used for the IMEX schemes. Since there are no shear processes present, RP1 consists of 5 waves. The wave speeds and positions produced by both RS-IMEX FV schemes are in good agreement with the reference solution, where the first order scheme is more diffusive on the fast waves than the second order scheme. The material wave, however, is captured accurately by both RS-IMEX FV schemes. This is due to the fact that the time step is oriented towards the resolution of this wave.
The wave structure of RP2 is more intricate, as can be seen in Figure 2. This is due to the initial jump in the volume fraction. It consists of the contact wave associated with \(\alpha_{1}\), three waves traveling towards the left boundary of the domain and two waves to the right. Note that due to the single temperature assumption, the wave propagation is not symmetric. We can observe that the first order RS-IMEX1 FV scheme is too diffusive to capture the complicated sequence of slower waves. On the other hand, the second order RS-IMEX2 FV scheme shows a great improvement in capturing the slower waves near the initial jump position.
### Advection of a Bubble
We consider a diagonally advected bubble initially centered at \((x_{0},y_{0})=(0.5,0.5)\) with the radius \(r_{0}=0.2\). The computational domain is set to \([0,1]\times[0,1]\) and is discretized by \(256\times 256\) rectangular mesh cells. Further we apply periodic boundary conditions. The velocity fields are given by
\[\mathbf{v}_{1}=(1,1)^{T},\mathbf{v}_{2}=(1,1)^{T}. \tag{91}\]
\begin{table}
\begin{tabular}{c c c c c c c c} Test & \(T_{f}\) & state & \(\alpha\) & \(\rho_{1}\) & \(\rho_{2}\) & \(\mathbf{v}_{1,1}\) & \(\mathbf{v}_{2,1}\) & \(T\) \\ \hline \hline RP1 & 0.2 & left & 0.3 & 2 & 1.2 & 0 & 0 & 1.2 \\ & & right & 0.3 & 2 & 2 & 0 & 0 & 1 \\ \hline \hline RP2 & 0.2 & left & 0.7 & 1 & 2 & -1 & -1 & 1 \\ & & right & 0.3 & 1 & 2 & 1 & 1 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Initial conditions for the 1D Riemann problems presented in Section 5.2 with \(\gamma_{1}=1.4\), \(\gamma_{2}=2\) and \(c_{v,1}=c_{v,2}=1\) on the domain \([0,1]\) with initial jump at \(x=0.5\).
\begin{table}
\begin{tabular}{c c c c c c c c} & 16 & 32 & & 64 & 128 & \\ \hline \(\alpha_{1}\) & 6.53E-03 & — & 1.29E-03 & 2.34 & 2.57E-04 & 2.32 & 5.42E-05 & 2.24 \\ \(\rho_{1}\) & 6.95E-03 & — & 1.70E-03 & 2.02 & 4.68E-04 & 1.86 & 1.19E-04 & 1.97 \\ \(\rho_{2}\) & 2.73E-02 & — & 1.20E-03 & 4.50 & 3.23E-04 & 1.89 & 8.17E-05 & 1.98 \\ \(\mathbf{v}_{1,1}\) & 4.11E-01 & — & 1.32E-02 & 4.95 & 2.09E-03 & 2.66 & 3.08E-04 & 2.76 \\ \(\mathbf{v}_{2,1}\) & 4.11E-01 & — & 1.33E-02 & 4.95 & 2.09E-03 & 2.66 & 3.08E-04 & 2.76 \\ \(\mathbf{v}_{1,2}\) & 4.17E-01 & — & 2.64E-02 & 3.98 & 6.14E-03 & 2.10 & 1.41E-03 & 2.12 \\ \(\mathbf{v}_{2,2}\) & 4.17E-01 & — & 2.64E-02 & 3.98 & 6.14E-03 & 2.10 & 1.41E-03 & 2.12 \\ \(T\) & 2.50E-02 & — & 3.20E-03 & 2.96 & 8.24E-04 & 1.95 & 2.01E-04 & 2.03 \\ \hline \end{tabular}
\end{table}
Table 3: Two-fluid stationary vortex: \(L^{1}\) error and EOC for the second order RS-IMEX FV scheme in the weakly-compressible regime with parameters given in (90).
Figure 1: Numerical solutions of the homogeneous Riemann problem RP1 obtained at time \(T_{f}\) = 0.2 with constant volume fraction without relaxation source terms using the new first and second order RS-IMEX FV schemes. The reference solution is computed by the explicit second order SSP-RK2 FV scheme. From top left to bottom right: Phase densities \(\rho_{1},\,\rho_{2}\), phase velocities \(\mathbf{v}_{1,1},\,\mathbf{v}_{2,1}\) and temperature \(T\).
Figure 2: Numerical solutions of the homogeneous Riemann problem RP2 obtained at time \(T_{f}=0.2\) with an initial jump in the volume fraction without relaxation source terms using the new first and second order RS-IMEX FV schemes. The reference solution is computed by the explicit second order SSP-RK2 FV scheme. From top to bottom: Phase densities \(\rho_{1}\), \(\rho_{2}\), phase velocities \(\mathbf{v}_{1,1}\), \(\mathbf{v}_{2,1}\) and temperature \(T\). Left: Computational domain \([0,1]\). Right: Zoom on material waves.
The bubble of phase 1 moves through the second phase 2, which is modeled by a change in the volume fraction depending on the radius \(r=\sqrt{(x-x_{0})^{2}+(y-y_{0})^{2}}\). The initial volume fraction is given by
\[\alpha_{1}(r,0)=(\alpha_{L}-\alpha_{R})\frac{\arctan{(-\theta\left(r-r_{0} \right))}}{\pi}+\frac{(\alpha_{L}+\alpha_{R})}{2}, \tag{92}\]
where \(\alpha_{L}=0.9,\ \alpha_{R}=0.1\) and \(\theta=2000\). The parameter \(\theta\) indicates the diffusivity of the interface in the initial data. To have a bubble that is initially in pressure equilibrium, we set \(\rho_{2}\) such that \(p_{1}=p_{2}\), i.e.
\[\rho_{2}=\frac{(\gamma_{1}-1)c_{v,1}}{(\gamma_{2}-1)c_{v,2}}\rho_{1}, \tag{93}\]
where \(\rho_{1}=2\). To ensure that the phase-pressure equilibrium holds during the simulation, we set the relaxation parameter \(\tau^{(\alpha)}=10^{-16}\), i.e. "instantaneous" pressure relaxation. Further, we set the initial temperature to \(T=2\). Finally, we set \(\gamma_{1}=1.4,\ \gamma_{2}=2\), \(c_{v,1}=1\) and \(c_{v,2}\) in accordance with (20) by
\[c_{v,2}=\frac{\gamma_{1}(\gamma_{1}-1)c_{v,1}}{\gamma_{2}(\gamma_{2}-1)}\mathcal{C}^{2}. \tag{94}\]
As before, \(\mathcal{C}\) denotes the ratio between the Mach numbers and will be used to adjust the flow regimes. Note that the relative velocity equation for initially constant chemical potentials does not reduce to pure advection, but creates perturbations in \(\mathbf{w}\). Therefore, to decrease these perturbations, which can interfere with the advection of the bubble, we assume a high friction by setting \(\tau^{(\mathbf{w})}=10^{-8}\) and \(\tau^{(\mathbf{w})}=10^{-12}\) depending on the Mach number regimes associated with \(\mathcal{C}=10\) and \(\mathcal{C}=50\), respectively. This leads to the Mach numbers \(M_{1,\text{max}}=1.336\) and \(M_{2,\text{max}}=1.336\cdot 10^{-1}\) for the first case and \(M_{1,\text{max}}=1.336\) and \(M_{2,\text{max}}=2.67\cdot 10^{-2}\) for the second case. The bubble is evolved up to the final time \(T_{f}=1\), when the bubble is back in its initial position. In Figure 3, the volume fraction \(\alpha_{1}\) together with the mixture Mach number (16) is plotted along the diagonal for the first and second order schemes. Both schemes use the material CFL condition (59) with \(\nu=0.5\) for the first order scheme and \(\nu=0.25\) for the second order scheme. The numerical solutions are in good agreement with the initial data. The RS-IMEX1 FV scheme is quite diffusive, whereas the RS-IMEX2 FV scheme captures the initial configuration well. Note further that the mixture Mach number changes rapidly from \(\approx 1.23\) inside the bubble to \(\approx 0.38\) outside of the bubble and from \(\approx 1.24\) to \(\approx 0.36\) for \(\mathcal{C}=10\) and \(\mathcal{C}=50\), respectively. Even though the phase Mach number \(M_{2}\) is significantly smaller in the second case, the mixture Mach number does not change due to the averaging with respect to the mass fraction. It is therefore not a good indicator of the individual Mach number regimes that determine the scales in the model.
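The construction of the bubble initial data can be summarized in a short sketch (ours; the default parameters mirror the values stated above, and \(c_{v,2}\) follows from the Mach number ratio \(\mathcal{C}\) via (94)):

```python
# Sketch (ours) of the bubble initial data of Section 5.3: the smoothed
# volume fraction (92), rho_2 from the pressure equilibrium (93), and
# c_v,2 from the Mach number ratio via (94).
import numpy as np

def bubble_alpha(x, y, x0=0.5, y0=0.5, r0=0.2, aL=0.9, aR=0.1, theta=2000.0):
    r = np.sqrt((x - x0)**2 + (y - y0)**2)
    return (aL - aR) * np.arctan(-theta * (r - r0)) / np.pi + 0.5 * (aL + aR)

def equilibrium_rho2(rho1, gamma1=1.4, gamma2=2.0, cv1=1.0, cv2=1.0):
    return (gamma1 - 1) * cv1 / ((gamma2 - 1) * cv2) * rho1      # (93)

def cv2_from_ratio(C, gamma1=1.4, gamma2=2.0, cv1=1.0):
    return gamma1 * (gamma1 - 1) * cv1 / (gamma2 * (gamma2 - 1)) * C**2   # (94)
```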
### Kelvin Helmholtz instability
We modify a set-up from [23, 22] for the single phase Euler equations to the single temperature two-phase model (10). It describes two phases flowing in opposite directions, which creates the Kelvin-Helmholtz instability. We apply periodic boundary conditions and set the computational domain to \([0,1]\times[0,1]\). The two fluids are characterized by \(\gamma_{1}=2\) and \(\gamma_{2}=1.4\), respectively. Further, \(\rho_{1}=1\) and \(\rho_{2}\) is set according to (93) in such a way that the initial condition is in pressure equilibrium. Furthermore, we require both fluids to have the same Mach number. We set \(c_{v,1}=1/\varepsilon^{2}\) and \(c_{v,2}\) with \(\mathcal{C}=1\) according to (94). Setting \(T=12.5\) with \(\varepsilon=1\) yields the maximal initial Mach number \(M=10^{-1}\), and choosing \(\varepsilon=0.1\) yields the maximal initial Mach number \(M=3\cdot 10^{-2}\). To ensure that the flow stays in pressure equilibrium, we set \(\tau^{(\alpha)}=10^{-16}\) and, in accordance with the well-prepared initial data, we
set \(\tau^{(\mathbf{w})}=M^{2}\). Initially, we choose the same phase velocities, defined as
\[\mathbf{v}_{1,1}=\mathbf{v}_{2,1}=\begin{cases}v_{L}-v_{m}\exp((y-0.25)/L),&\text{if}\quad 0 \leq y<0.25\\ v_{R}+v_{m}\exp(-(y-0.25)/L),&\text{if}\quad 0.25\leq y<0.5\\ v_{R}+v_{m}\exp((y-0.75)/L),&\text{if}\quad 0.5\leq y<0.75\\ v_{L}-v_{m}\exp(-(y-0.75)/L),&\text{if}\quad 0.75\leq y\leq 1\end{cases}, \tag{95}\]
where \(v_{L}=0.5\), \(v_{R}=-0.5\), \(v_{m}=(v_{L}-v_{R})/2\) and \(L=0.025\). In \(y\)-direction we apply an initial perturbation \(\mathbf{v}_{1,2}=\mathbf{v}_{2,2}=10^{-2}\sin(4\pi x)\) which yields an initial relative velocity \(\mathbf{w}=0\) and divergence free velocity field \(\nabla\cdot\mathbf{v}=0\).
The volume fraction is set as
\[\alpha_{1}=\begin{cases}\alpha_{L}-\alpha_{m}\exp((y-0.25)/L),&\text{if}\quad 0 \leq y<0.25\\ \alpha_{R}+\alpha_{m}\exp(-(y-0.25)/L),&\text{if}\quad 0.25\leq y<0.5\\ \alpha_{R}+\alpha_{m}\exp((y-0.75)/L),&\text{if}\quad 0.5\leq y<0.75\\ \alpha_{L}-\alpha_{m}\exp(-(y-0.75)/L),&\text{if}\quad 0.75\leq y\leq 1\end{cases} \tag{96}\]
where \(\alpha_{L}=0.9\), \(\alpha_{R}=0.2\) and \(\alpha_{m}=(\alpha_{R}-\alpha_{L})/8\). In Figure 4, numerical solutions computed by the second order RS-IMEX FV scheme for the passively transported volume fraction are depicted for the Mach numbers \(10^{-1}\) and \(3\cdot 10^{-2}\). Two different grids consisting of \(256\times 256\) and \(512\times 512\) mesh cells and the material CFL condition (59) with \(\nu=0.25\) were used. The final time is \(T_{f}=3\). One can observe that, although we deal with a mixture of inviscid fluids, the mesh refinement does not yield new small scale vortices. The latter are typical for Kelvin-Helmholtz instabilities in an ideal fluid. This is due to the fact that we solve numerically the non-homogeneous system (17), i.e. physical dissipation is included through the relative velocity equation. Therefore, only large vortices are present, which correspond to the frequency modes of the initial data. Moreover, since the initial data are well-prepared
Figure 3: Numerical solutions of the diagonally advected bubble obtained at time \(T_{f}=1\) by the first and second order RS-IMEX FV schemes displayed along the diagonal from \([0,0]\) to \([1,1]\). Top: Case \(\mathcal{C}=10\). Bottom: Case \(\mathcal{C}=50\). Left: Volume fraction \(\alpha_{1}\). Right: Mixture Mach number \(M_{\text{mix}}\).
in the sense of Definition 2.2, the \(L^{1}\) errors in the phase densities decrease with the Mach number. We refer to Table 5, which validates the AP property of the RS-IMEX FV scheme.
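For completeness, the four-branch shear profiles (95) and (96) share one functional form and can be sketched as follows (ours; \(v_{m}=0.5\) and \(\alpha_{m}=-0.0875\) follow from the parameters stated above):

```python
# Sketch (ours) of the piecewise-exponential layer profile used in (95)/(96).
import numpy as np

def kh_profile(y, fL, fR, fm, L=0.025):
    y = np.asarray(y, float)
    out = np.empty_like(y)
    m1, m2 = y < 0.25, (0.25 <= y) & (y < 0.5)
    m3, m4 = (0.5 <= y) & (y < 0.75), y >= 0.75
    out[m1] = fL - fm * np.exp((y[m1] - 0.25) / L)
    out[m2] = fR + fm * np.exp(-(y[m2] - 0.25) / L)
    out[m3] = fR + fm * np.exp((y[m3] - 0.75) / L)
    out[m4] = fL - fm * np.exp(-(y[m4] - 0.75) / L)
    return out

yc = np.linspace(0.0, 1.0, 256)
v_x = kh_profile(yc, fL=0.5, fR=-0.5, fm=0.5)          # velocity (95)
alpha1 = kh_profile(yc, fL=0.9, fR=0.2, fm=-0.0875)    # volume fraction (96)
```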
## 6 Conclusions
We have derived and analyzed a new implicit-explicit finite volume (RS-IMEX FV) scheme for the single-temperature SHTC model. We note that the two-fluid model allows for two velocities and two pressures. Further, it includes two dissipative mechanisms: phase pressure and velocity relaxation. In the proposed scheme these are treated differently.
Figure 4: Kelvin-Helmholtz instability: numerical solutions for the passively transported volume fraction \(\alpha_{1}\) obtained at time \(T_{f}=3\) for the two-fluid single temperature model. The numerical solution is obtained by the new second order RS-IMEX FV scheme. Top panel: \(M_{\max}=10^{-1}\). Bottom panel: \(M_{\max}=3\cdot 10^{-2}\). Left column: \(256\times 256\) grid. Right column: \(512\times 512\) grid.
\begin{table}
\begin{tabular}{c c c} \(M\) & \(\rho_{1}\) & \(\rho_{2}\) \\ \hline \(10^{-1}\) & \(1.896\cdot 10^{-3}\) & \(1.327\cdot 10^{-3}\) \\ \(3\cdot 10^{-2}\) & \(5.037\cdot 10^{-4}\) & \(3.525\cdot 10^{-4}\) \\ \end{tabular}
\end{table}
Table 5: Kelvin-Helmholtz instability: \(L^{1}\) error of the phase densities for different Mach numbers computed on a mesh with \(512\times 512\) grid cells at final time \(T_{f}=3\).
The relative velocity relaxation term is linear and is resolved as a part of the implicit sub-system, whereas the pressure relaxation is strongly nonlinear and therefore is treated separately by the Newton method.
Our RS-IMEX FV method is constructed in such a way that acoustic-type waves are linearized around a suitably chosen reference state (RS) and approximated implicitly in time and by means of central finite differences in space. The remaining advective-type waves are approximated explicitly in time and by means of the Rusanov FV method. The RS-IMEX FV scheme is suitable for all Mach number flows, but in particular it is asymptotic preserving in the low Mach number flow regimes.
Many multi-phase flows, such as granular or sediment transport flows, can be modeled within the single-temperature approximation. In turn, many of these flows are weakly compressible and therefore impose severe time step restrictions if solved with a time-explicit numerical scheme. Therefore, the proposed RS-IMEX FV scheme is suitable to model various environmental flows.
The proposed method was tested on a number of test cases for low and moderately high Mach number flows, demonstrating the capability of the scheme to properly capture both regimes. The theoretical second order accuracy of the scheme was confirmed on a stationary vortex test case. We compared the second order scheme against its first order variant and showed that the second order scheme yields a remarkably more accurate approximation of discontinuities. Finally, the asymptotic preserving property was verified by approximating the Kelvin-Helmholtz instability with well-prepared initial data.
AcknowledgmentsA.T. and M.L. have been partially supported by the Gutenberg Research College, JGU Mainz. Further, M.L. is grateful for the support of the Mainz Institute of Multiscale Modelling. I. P. is a member of the Gruppo Nazionale per il Calcolo Scientifico of the Istituto Nazionale di Alta Matematica (INdAM GNCS) and acknowledges the financial support received from the Italian Ministry of Education, University and Research (MIUR) in the frame of the Departments of Excellence Initiative 2018-2022 attributed to the Department of Civil, Environmental and Mechanical Engineering (DICAM) of the University of Trento (Grant No. L.232/2016).
## Appendix A Proof of Theorem 4.1
We will show the AP property for the first order time semi-discrete scheme (60)-(61). Indeed, to obtain a consistent approximation of the limit equations, an appropriate time discretization is essential. Thereby we will use techniques that were developed in the context of the AP proof for the Euler equations, see for instance [3], and for the isentropic two-phase subsystem, see [21]. For simplicity, we consider without loss of generality \(\frac{\rho_{1,\text{ref}}}{\rho_{\text{ref}}}=1\) and \(\frac{\rho_{2,\text{ref}}}{\rho_{\text{ref}}}=1\).
Let the initial data be well-prepared, i.e. \(\mathbf{q}^{0}\in\Omega_{wp}\) as given in Definition 2.2. We assume that at time level \(t^{n}\) we have the Mach number expansion for each phase \(l=1,2\)
\[\rho_{l}^{n}=\rho_{l,\text{RS}}+\mathcal{O}(M^{2}),\quad T^{n}=T_{\text{RS}}+\mathcal{O}(M^{2}),\quad\rho_{l,\text{RS}},T_{\text{RS}}=\text{const.}, \tag{97}\]
in pressure equilibrium up to order \(\mathcal{O}(M^{3})\), see Assumption 2.1,
\[p_{1,(k)}^{n}=p_{2,(k)}^{n},\quad k=0,2,\quad p_{1,(0)}^{n}=p_{1}(\rho_{1,\text{RS}},T_{\text{RS}}),\quad p_{2,(0)}^{n}=p_{2}(\rho_{2,\text{RS}},T_{\text{RS}})=\text{const.} \tag{98}\]
Further, for the velocities we assume in accordance with Definition 2.2 that
\[\mathbf{v}^{n}=\mathbf{v}_{(0)}^{n}+\mathcal{O}(M^{2}),\quad\nabla\cdot\mathbf{v}_{(0)}^{n}=0,\quad\mathbf{w}^{n}=\mathcal{O}(M^{2}),\quad\tau^{(\mathbf{w})}=M^{2}. \tag{99}\]
Moreover, we assume that the data at the next time level has the Mach number expansion (21) for the phase densities, temperature and mixture velocity leading to the Mach number expansions (24) for pressures, chemical
potentials and internal phase energies. Our aim is to show that the first order IMEX FV method yields a consistent approximation of the incompressible Euler system with variable volume fraction (40). To this end, we show that \(\mathbf{q}^{n+1}\in\Omega_{wp}^{M}\), where the divergence free property of the velocity field is fulfilled up to a \(\mathcal{O}(\Delta t)\) term.
Plugging the expansion (97) at level \(t^{n}\) into the explicit update (60) we directly have for the volume fraction
\[\alpha_{1}^{(1)}=\alpha_{1}^{n}-\Delta t\mathbf{v}_{(0)}^{n}\cdot\nabla\alpha_{1}^ {n}. \tag{100}\]
Rewriting equations for \((\alpha_{1}\rho_{1})^{(1)},(\alpha_{2}\rho_{2})^{(1)}\) in terms of \(\rho\mathbf{v}\) and \(\mathbf{w}\) and using (100), \(\nabla\cdot\mathbf{v}_{(0)}^{n}=0\) and \(\mathbf{w}_{(0)}^{n}=0\), we have at leading order
\[\alpha_{1}^{(1)}\rho_{1,(0)}^{(1)}=\alpha_{1}^{n}\rho_{1,\text{RS}}-\Delta t\rho_{1,\text{RS}}\mathbf{v}_{(0)}^{n}\cdot\nabla\alpha_{1}^{n}-\Delta t\nabla\cdot(\rho_{(0)}^{n}c^{1}c^{2}\mathbf{w}_{(0)}^{n})=\alpha_{1}^{(1)}\rho_{1,\text{RS}}, \tag{101a}\]
thus \(\rho_{1,(0)}^{(1)}=\rho_{1,\text{RS}}\). With the same strategy we obtain \(\rho_{1,(1)}^{(1)}=0\). Analogously, we obtain \(\rho_{2,(0)}^{(1)}=\rho_{2,\text{RS}}\) and \(\rho_{2,(1)}^{(1)}=0\). Summarizing, the phase densities satisfy the expansion (97) at the intermediate time level \(t^{(1)}\). Using \(\mathbf{w}_{(0)}^{n}=0\) and the evolution of the volume fraction (100), we obtain for the momentum and relative velocity equations
\[(\rho\mathbf{v})_{(0)}^{(1)}=(\rho\mathbf{v})_{(0)}^{n}-\Delta t\,\mathbf{v}_{(0)}^{n}\cdot\nabla(\rho\mathbf{v})_{(0)}^{n}, \tag{102a}\]
\[\mathbf{w}_{(0)}^{(1)}=0,\quad\mathbf{w}_{(1)}^{(1)}=0. \tag{102b}\]
Multiplying the energy equation in the explicit update (60) by \(M^{2}\) and using the notation (51), yields
\[(\alpha_{1}\rho_{1}e_{1})^{(1)}+(\alpha_{2}\rho_{2}e_{2})^{(1)}+M^{2}(\rho E_ {\text{kin}})^{(1)}=(\alpha_{1}\rho_{1}e_{1})^{n}+(\alpha_{2}\rho_{2}e_{2})^{ n}+M^{2}(\rho E_{\text{kin}})^{n}+\mathcal{O}(M^{2}). \tag{103}\]
For the leading order terms of the internal energy, we obtain directly
\[\big{(}\alpha_{1}(\rho_{1}e_{1})_{(0)}+\alpha_{2}(\rho_{2}e_{2})_{(0)}\big{)} ^{(1)}=\big{(}\alpha_{1}(\rho_{1}e_{1})_{(0)}+(\alpha_{2}(\rho_{2}e_{2})_{(0)} \big{)}^{n}\]
which completes the analysis of the explicit part (60).
For the implicit part, we will follow the reasoning of [3], where the AP property is shown for the Euler equations analyzing the structure of the implicit elliptic operator. Since \(\alpha_{1}\rho_{1}\) does not change during the implicit part, the expansion of the phase densities at \(t^{n+1}\) fulfills (97). Therefore, we obtain \(\rho_{1,(0)}^{n+1}=\rho_{1,\text{RS}}\) and analogously \(\rho_{2,(0)}^{n+1}=\rho_{2,\text{RS}}\) for the second phase density.
Next, we analyze the elliptic update of the total energy (63). Analogously to the fully discrete operators \(\mathcal{L}_{I}\) and \(\mathcal{K}_{I}\) in (73), we define semi-discrete operators
\[L_{h}=\nabla\cdot\left(\left(\frac{(\alpha_{1}\rho_{1}e_{1})^{n}+(\alpha_{2}\rho_{2}e_{2})^{n}+M^{2}(\rho E_{\text{kin}})^{n}+\alpha_{1}^{n}p_{1}^{n}+\alpha_{2}^{n}p_{2}^{n}}{\rho^{n+1}}\right)\nabla(\phi_{p}^{n}-1)\right), \tag{104}\]
\[K_{h}=\nabla\cdot\left(\tau^{(\mathbf{w})}\frac{(\mu_{1}^{n}-\mu_{2}^{n})(\rho c_{1}c_{2})^{n+1}}{\tau^{(\mathbf{w})}+\Delta t(c_{1}c_{2})^{n+1}}\nabla\big(\partial\mu_{\text{RS}}^{n}\big)\right). \tag{105}\]
Note that with (42) we have \(L_{h}=\mathcal{O}(1)\). From (46) and \(\tau^{(\mathbf{w})}=\mathcal{O}(M^{2})\) it follows \(K_{h}=\mathcal{O}(M^{2})\). Using the notation as in (51), we define
\[\rho\tilde{E}=\big(\alpha_{1}\rho_{1}e_{1}+\alpha_{2}\rho_{2}e_{2}+M^{2}\rho E_{\text{kin}}\big),\quad\tilde{p}=\alpha_{1}p_{1}+\alpha_{2}p_{2},\quad\tilde{\mu}=\mu_{1}-\mu_{2}. \tag{106}\]
Now taking into account the scaling of \(\mathbf{w}^{n}\) given in (99), we write the implicit update for the total energy (73) as
\[\left(\mathbf{I}-\frac{\Delta t^{2}}{M^{2}}(L_{h}+K_{h})\right)(\rho\tilde{E})^{n+1} =(\rho\tilde{E})^{n}-\Delta t\nabla_{h}\cdot\big{(}(\rho\tilde{E}+\tilde{p})^{n }\mathbf{v}^{(1)}\big{)}-\frac{\Delta t^{2}}{M^{2}}L_{h}(M^{2}\rho E_{\text{kin}}^{ n})-\frac{\Delta t^{2}}{M^{2}}K_{h}(\rho\tilde{E})_{\text{RS}}^{n}+ \mathcal{O}(M^{2}). \tag{107}\]
The operators \(L_{h}\) and \(K_{h}\) are symmetric, positive definite and the inverse of \(A=\mathbf{I}-\frac{\Delta t^{2}}{M^{2}}(L_{h}+K_{h})\) exists. Consequently, system (107) has a unique solution for any \(M>0\). Similarly to [3], we obtain that the eigenvalues of \(A^{-1}\) are \(1\) and \(\mathcal{O}(M^{2})\). Applying analogous arguments as in [3, Lem. 4.6], we derive
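The following minimal NumPy sketch (ours, not part of the scheme) illustrates this eigenvalue statement in one space dimension: assuming the discrete operator \(-L_{h}\) is represented by a symmetric positive semidefinite periodic Laplacian matrix \(K\), the constant mode of \(K\) produces the eigenvalue \(1\) of \(A^{-1}\), while all remaining eigenvalues of \(A^{-1}\) scale like \(\mathcal{O}(M^{2})\) as \(M\to 0\).

```python
import numpy as np

# 1D periodic Laplacian K ~ -L_h: symmetric positive semidefinite,
# with a single zero eigenvalue belonging to the constant mode.
n, dx, dt = 64, 1.0 / 64, 1e-2
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[0, -1] = K[-1, 0] = -1.0          # periodic closure
K /= dx**2

for M in (1e-1, 1e-2, 1e-3):        # decreasing Mach numbers
    A = np.eye(n) + (dt**2 / M**2) * K          # A = I - (dt^2/M^2) L_h with L_h = -K
    mu = np.sort(np.linalg.eigvalsh(np.linalg.inv(A)))[::-1]
    # largest eigenvalue stays exactly 1; the second largest scales like O(M^2)
    print(f"M={M:.0e}:  mu_max={mu[0]:.6f}  mu_2={mu[1]:.3e}  M^2={M**2:.0e}")
```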
\[(\rho e)^{n+1}=(\rho e)^{n}-\Delta t\nabla_{h}\cdot\left(\left(\rho e+\alpha_{ 1}p_{1}+\alpha_{2}p_{2}\right)^{n}\mathbf{v}^{n}\right)+\mathcal{O}(M^{2}). \tag{108}\]
Focusing on the leading order terms and using the evolution of the volume fraction (100), \(\nabla\mathbf{\cdot}\mathbf{v}_{(0)}^{n}=0\), see (99), \(p_{1,(0)}^{n}=p_{2,(0)}^{n}\), see (98), and EOS (7), yields for the temperature the following expansion
\[\left(\alpha_{1}^{(1)}\rho_{1,\text{RS}}c_{v,1}+\alpha_{2}^{(1)}\rho_{2,\text{RS}}c_{v,2}\right)T_{(0)}^{n+1}=\left(\alpha_{1}^{n}\rho_{1,\text{RS}}c_{v,1}+\alpha_{2}^{n}\rho_{2,\text{RS}}c_{v,2}\right)T_{\text{RS}}-\Delta t\left(\rho_{1,\text{RS}}c_{v,1}-\rho_{2,\text{RS}}c_{v,2}\right)T_{\text{RS}}\,\mathbf{v}_{(0)}^{n}\cdot\nabla\alpha_{1}^{n}+\mathcal{O}(M^{2})=\left(\alpha_{1}^{(1)}\rho_{1,\text{RS}}c_{v,1}+\alpha_{2}^{(1)}\rho_{2,\text{RS}}c_{v,2}\right)T_{\text{RS}}+\mathcal{O}(M^{2}).\]
Since the factor \(\alpha_{1}^{(1)}\rho_{1,\text{RS}}c_{v,1}+\alpha_{2}^{(1)}\rho_{2,\text{RS}}c _{v,2}\) is positive and independent of \(M\), we derive \(T_{(0)}^{n+1}=T_{\text{RS}}+\mathcal{O}(M^{2})\), thus the temperature has a correct asymptotic expansion. Moreover, in the limit as \(M\to 0\), we obtain \(T=T_{\text{RS}}\). Further, the update of the relative velocity (64) and momentum (65) yield
\[\mathbf{w}_{(0)}^{n+1}=0,\quad\nabla\mu_{(0)}^{n+1}=0,\quad\nabla\mu_ {(1)}^{n+1}=0, \tag{109}\] \[(\rho\mathbf{v})_{(0)}^{n+1}=(\rho\mathbf{v})_{(0)}^{(1)}-\Delta t\nabla p _{(2)}^{n+1},\quad\nabla p_{(0)}^{n+1}=0,\quad\nabla p_{(1)}^{n+1}=0. \tag{110}\]
Since the mass densities of the phases are not evolved at the implicit step, it holds \(\rho^{n+1}=(\alpha_{1}\rho_{1})^{(1)}+(\alpha_{2}\rho_{2})^{(1)}\). Using the volume fraction, we can rewrite the momentum equation as
\[\mathbf{v}_{(0)}^{n+1}=\mathbf{v}_{(0)}^{(1)}-\Delta t\frac{\rho^{n}}{\rho^{n+1}}\mathbf{v }_{(0)}^{n}\cdot\nabla\mathbf{v}_{(0)}^{n}-\Delta t\frac{\nabla p_{(2)}^{n+1}}{ \rho^{n+1}} \tag{111}\]
which is consistent with the low Mach number limit (40) up to a \(\mathcal{O}(\Delta t)\) term. From the energy equation (61e) we obtain
\[(\rho e)_{(0)}^{n+1}=(\rho e)_{(0)}^{n}-\Delta t\nabla\cdot\left(\left((\alpha _{1}^{n}(\rho e)_{1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2,\text{RS}})+p_{(0)} ^{n}\right)\mathbf{v}_{(0)}^{n+1}\right)+\mathcal{O}(M^{2}). \tag{112}\]
Using the definition of the internal mixture energy, we obtain
\[\alpha_{1}^{(1)}(\rho e)_{1,\text{RS}}+\alpha_{2}^{(1)}(\rho e)_{ 2,\text{RS}} =\alpha_{1}^{n}(\rho e)_{1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2, \text{RS}}\] \[-\Delta t\nabla\cdot\left(\left((\alpha_{1}^{n}(\rho e)_{1,\text {RS}}+\alpha_{2}^{n}(\rho e)_{2,\text{RS}})+p_{(0)}^{n}\right)\mathbf{v}_{(0)}^{n +1}\right)+\mathcal{O}(M^{2}).\]
Applying the evolution of the volume fraction (100) and \(\nabla p_{(0)}^{n}=0\), we obtain
\[(\alpha_{1}^{n}-\Delta t\mathbf{v}_{(0)}^{n}\cdot\nabla\alpha_{1}^{n} )(\rho e)_{1,\text{RS}}+(\alpha_{2}^{n}+\Delta t\mathbf{v}_{(0)}^{n}\cdot\nabla \alpha_{1}^{n})(\rho e)_{2,\text{RS}} =\alpha_{1}^{n}(\rho e)_{1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2, \text{RS}}\] \[\qquad-\Delta t\mathbf{v}_{(0)}^{n+1}\cdot\nabla\left((\alpha_{1}^{n} (\rho e)_{1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2,\text{RS}})\right)\] \[\qquad-\Delta t\left((\alpha_{1}^{n}(\rho e)_{1,\text{RS}}+\alpha_ {2}^{n}(\rho e)_{2,\text{RS}})+p_{(0)}^{n}\right)\nabla\cdot\mathbf{v}_{(0)}^{n+1}+ \mathcal{O}(M^{2}),\]
which reduces to
\[(\mathbf{v}_{(0)}^{n}-\mathbf{v}_{(0)}^{n+1})\cdot\nabla\left(\alpha_{1}^{n}(\rho e)_{ 1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2,\text{RS}}\right)=\left((\alpha_{1}^{n} (\rho e)_{1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2,\text{RS}})+p_{(0)}^{n}\right) \nabla\cdot\mathbf{v}_{(0)}^{n+1}+\mathcal{O}(M^{2}).\]
The left hand side is of order \(\mathcal{O}(\Delta t)\) and the factor \((\alpha_{1}^{n}(\rho e)_{1,\text{RS}}+\alpha_{2}^{n}(\rho e)_{2,\text{RS}})+p_{(0)} ^{n}\) is positive. Consequently, we obtain the result \(\nabla\cdot\mathbf{v}_{(0)}^{n+1}=\mathcal{O}(\Delta t)+\mathcal{O}(M^{2})\).
Finally, we apply the pressure relaxation on the volume fraction, and we obtain
\[p_{1}^{n+1}=p_{2}^{n+1}-\tau^{(\alpha)}\rho^{n+1}M^{2}\left(\frac{\alpha^{n+1}- \alpha^{n}}{\Delta t}\right). \tag{113}\]
Thus \(p_{1,(0)}^{n+1}=p_{2,(0)}^{n+1}\) and \(p_{1,(2)}^{n+1}=p_{2,(2)}^{n+1}\) since \(p_{1,(1)}^{n+1}=p_{2,(1)}^{n+1}=0\). Note that this holds because the phase densities and the temperature fulfill expansion (97) at the new time level \(t^{n+1}\). Moreover, \(\rho^{n+1}(\frac{\alpha^{n+1}-\alpha^{n}}{\Delta t})=\mathcal{O}(1)\) with respect to the Mach number, since the time step is independent of the Mach number as well.
This concludes the proof.
## Appendix B Polar coordinates
We consider a continuous solution of the homogeneous part of system (10) without relaxation source terms. Let the Cartesian coordinates in 2D be denoted by \(x=(x_{1},x_{2})\). We define the polar coordinates in terms of radius \(r\) and angle \(\theta\) as
\[x_{1}=r\cos(\theta),\quad x_{2}=r\sin(\theta). \tag{114}\]
The velocity based quantities are defined by
\[v_{1}=v_{r}\cos(\theta)-v_{\theta}\sin(\theta),\quad w_{1}=w_{r}\cos(\theta)-w_{\theta}\sin(\theta), \tag{115a}\]
\[v_{2}=v_{r}\sin(\theta)+v_{\theta}\cos(\theta),\quad w_{2}=w_{r}\sin(\theta)+w_{\theta}\cos(\theta). \tag{115b}\]
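The following small Python check (ours, not part of the paper) confirms that (115) is the plane rotation by \(\theta\), so the Euclidean norm of \((v_{1},v_{2})\) equals that of \((v_{r},v_{\theta})\); the test values are arbitrary.

```python
import numpy as np

# Sanity check of (115): (v1, v2) is (v_r, v_theta) rotated by theta,
# hence the Euclidean norm is preserved.  Test data are arbitrary.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi)
vr, vth = rng.normal(size=2)
v1 = vr * np.cos(theta) - vth * np.sin(theta)
v2 = vr * np.sin(theta) + vth * np.cos(theta)
assert np.isclose(v1**2 + v2**2, vr**2 + vth**2)
print("norm preserved:", v1**2 + v2**2, "==", vr**2 + vth**2)
```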
Using
\[\frac{\partial r}{\partial x_{1}}=\cos(\theta),\quad\frac{\partial\theta}{\partial x_{1}}=-\frac{\sin(\theta)}{r},\quad\frac{\partial r}{\partial x_{2}}=\sin(\theta),\quad\frac{\partial\theta}{\partial x_{2}}=\frac{\cos(\theta)}{r}, \tag{116}\]
we obtain for
\[\mathbf{q}=(\alpha^{1},\alpha^{1}\rho^{1},\alpha^{2}\rho^{2},v_{r},v_{\theta},w_{ r},w_{\theta},\rho E)^{T} \tag{117}\]
the following system in polar coordinates
\[\frac{\partial\alpha^{1}}{\partial t}+\frac{v_{r}}{r}\frac{\partial}{\partial r}\left(r\alpha^{1}\right)+\frac{v_{\theta}}{r}\frac{\partial}{\partial\theta}\alpha^{1}=0, \tag{118a}\]
\[\frac{\partial(\alpha^{1}\rho^{1})}{\partial t}+\frac{1}{r}\frac{\partial}{\partial r}\left(r\alpha^{1}\rho^{1}v_{r}^{1}\right)+\frac{1}{r}\frac{\partial}{\partial\theta}\left(\alpha^{1}\rho^{1}v_{\theta}^{1}\right)=0, \tag{118b}\]
\[\frac{\partial(\alpha^{2}\rho^{2})}{\partial t}+\frac{1}{r}\frac{\partial}{\partial r}\left(r\alpha^{2}\rho^{2}v_{r}^{2}\right)+\frac{1}{r}\frac{\partial}{\partial\theta}\left(\alpha^{2}\rho^{2}v_{\theta}^{2}\right)=0, \tag{118c}\]
\[\frac{\partial(\rho v_{r})}{\partial t}+\frac{1}{r}\frac{\partial}{\partial r}\left(r\left(\rho v_{r}v_{r}+\rho c(1-c)w_{r}w_{r}+p\right)\right)+\frac{1}{r}\frac{\partial}{\partial\theta}\left(\rho v_{r}v_{\theta}+\rho c(1-c)w_{r}w_{\theta}\right)-\frac{\rho v_{\theta}v_{\theta}+\rho c(1-c)w_{\theta}w_{\theta}+p}{r}=0, \tag{118d}\]
\[\frac{\partial(\rho v_{\theta})}{\partial t}+\frac{1}{r}\frac{\partial}{\partial r}\left(r\left(\rho v_{r}v_{\theta}+\rho c(1-c)w_{r}w_{\theta}\right)\right)+\frac{1}{r}\frac{\partial}{\partial\theta}\left(\rho v_{\theta}v_{\theta}+\rho c(1-c)w_{\theta}w_{\theta}+p\right)+\frac{\rho v_{r}v_{\theta}+\rho c(1-c)w_{r}w_{\theta}}{r}=0, \tag{118e}\]
\[\frac{\partial w_{r}}{\partial t}+\frac{\partial}{\partial r}\left(v_{r}w_{r}+v_{\theta}w_{\theta}+(1-2c)\frac{w_{r}w_{r}+w_{\theta}w_{\theta}}{2}+\mu_{1}-\mu_{2}\right)+v_{\theta}\left(\frac{1}{r}\frac{\partial}{\partial\theta}w_{r}-\frac{1}{r}\frac{\partial}{\partial r}\left(r\,w_{\theta}\right)\right)=0, \tag{118f}\]
\[\frac{\partial w_{\theta}}{\partial t}+\frac{1}{r}\frac{\partial}{\partial\theta}\left(v_{r}w_{r}+v_{\theta}w_{\theta}+(1-2c)\frac{w_{r}w_{r}+w_{\theta}w_{\theta}}{2}+\mu_{1}-\mu_{2}\right)+v_{r}\left(\frac{1}{r}\frac{\partial}{\partial r}\left(rw_{\theta}\right)-\frac{1}{r}\frac{\partial}{\partial\theta}w_{r}\right)=0, \tag{118g}\]
\[\frac{\partial(\rho E)}{\partial t}+\frac{1}{r}\frac{\partial}{\partial r}\left(r\left(v_{r}(\rho E+p)+\rho\left[v_{r}w_{r}+v_{\theta}w_{\theta}+(1-2c^{1})\frac{w_{r}^{2}+w_{\theta}^{2}}{2}\right]c^{1}c^{2}w_{r}\right)\right)+\frac{1}{r}\frac{\partial}{\partial\theta}\left(v_{\theta}(\rho E+p)+\rho\left[v_{r}w_{r}+v_{\theta}w_{\theta}+(1-2c^{1})\frac{w_{r}^{2}+w_{\theta}^{2}}{2}\right]c^{1}c^{2}w_{\theta}\right)=0. \tag{118h}\]
|
2305.08514 | Generative Adversarial Networks for Spatio-Spectral Compression of
Hyperspectral Images | The development of deep learning-based models for the compression of
hyperspectral images (HSIs) has recently attracted great attention in remote
sensing due to the sharp growing of hyperspectral data archives. Most of the
existing models achieve either spectral or spatial compression, and do not
jointly consider the spatio-spectral redundancies present in HSIs. To address
this problem, in this paper we focus our attention on the High Fidelity
Compression (HiFiC) model (which is proven to be highly effective for spatial
compression problems) and adapt it to perform spatio-spectral compression of
HSIs. In detail, we introduce two new models: i) HiFiC using Squeeze and
Excitation (SE) blocks (denoted as HiFiC$_{SE}$); and ii) HiFiC with 3D
convolutions (denoted as HiFiC$_{3D}$) in the framework of compression of HSIs.
We analyze the effectiveness of HiFiC$_{SE}$ and HiFiC$_{3D}$ in compressing
the spatio-spectral redundancies with channel attention and inter-dependency
analysis. Experimental results show the efficacy of the proposed models in
performing spatio-spectral compression, while reconstructing images at reduced
bitrates with higher reconstruction quality. The code of the proposed models is
publicly available at https://git.tu-berlin.de/rsim/HSI-SSC . | Martin Hermann Paul Fuchs, Akshara Preethy Byju, Alisa Walda, Behnood Rasti, Begüm Demir | 2023-05-15T10:23:14Z | http://arxiv.org/abs/2305.08514v3 | # Generative Adversarial Networks for Spatio-Spectral Compression of Hyperspectral Images
###### Abstract
Deep learning-based image compression methods have led to high rate-distortion performances compared to traditional codecs. Recently, Generative Adversarial Network (GAN)-based compression models, e.g., High Fidelity Compression (HiFiC), have attracted great attention in the computer vision community. However, most of these works aim for spatial compression only and do not consider the spatio-spectral redundancies observed in hyperspectral images (HSIs). To address this problem, in this paper, we adapt the HiFiC spatial compression model to perform spatio-spectral compression of HSIs. To this end, we introduce two new models: i) HiFiC using Squeeze and Excitation (SE) blocks (denoted as HiFiC\({}_{SE}\)); and ii) HiFiC with 3D convolutions (denoted as HiFiC\({}_{3D}\)). We analyze the effectiveness of HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) in exploiting the spatio-spectral redundancies with channel attention and inter-dependency analysis. Experimental results show the efficacy of the proposed models in performing spatio-spectral compression and reconstruction at reduced bitrates and higher reconstruction quality when compared to JPEG 2000 and the standard HiFiC spatial compression model. The code of the proposed models is publicly available at [https://git.tu-berlin.de/rsim/HSI-SSC](https://git.tu-berlin.de/rsim/HSI-SSC).
Spatio-spectral image compression, generative adversarial networks, deep learning, hyperspectral images.
## I Introduction
Advancements in hyperspectral imaging technologies have led to a significant increase in the volume of hyperspectral data archives. The dense spectral information provided by hyperspectral imagery yields a very high capability for the identification and discrimination of the materials in a given scene. However, the storage and transmission of such high-volume data hinder the exploitation of these archives. This necessitates efficient compression techniques that encode the data into fewer bits with minimal loss of information.
Hyperspectral image compression has been a long studied area with various methods developed so far. Generally, they can be divided into two categories: i) traditional approaches; and ii) learning-based approaches. Traditional approaches mostly rely on transform coding. As an example, in [1] HSIs are compressed by applying the JPEG 2000 [2] algorithm in combination with a principal component analysis (PCA) that is responsible for spectral decorrelation as well as spectral dimensionality reduction. Lim et al. [3] apply a three-dimensional wavelet transform to compress the HSIs. Abousleman et al. [4] use differential pulse code modulation (DPCM) to spectrally decorrelate the data, while a 2D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation.
On the other hand, the current state of the art in learning-based hyperspectral image compression is formed by convolutional autoencoders (CAEs) [5, 6, 7, 8] that reduce the dimensionality of the latent space by sequentially applying convolutions and downsampling operations. The 1D-CAE [5, 6] stacks multiple 1D convolutions combined with two pooling layers and LeakyReLU activation functions and thus only compresses in the spectral dimension. The spectral signal compressor network (SSCNet) [7] incorporates spatial compression by utilizing 2D convolutions, PReLUs and 2D max poolings for encoding, and strided 2D transposed convolutions for decoding. The 3D-CAE [8] combines spatial and spectral compression; the network is built from strided 3D convolutions, LeakyReLU, 3D batch normalization, residual blocks and upsampling layers in the decoder. However, due to the fixed compression ratio of CAEs and their relatively high bitrates, there is a need for developing more effective approaches.
The development of learning-based compression methods is much more advanced in the computer vision (CV) community. In detail, generative adversarial network (GAN)-based generative compression models [9, 10] have recently gained popularity in CV. GANs are capable of producing visually convincing results that are perceptually similar to the input, and thus GAN-based compression displays improved performance in spatial compression even at extremely low bitrates while keeping the perceptual quality of the reconstructed image intact. However, the applicability of GANs to the spatio-spectral compression of HSIs has not been studied yet.
In this paper, we focus our attention on GANs and explore their effectiveness in achieving lower bitrates with good perceptual reconstruction capability for spatio-spectral compression of HSIs. We select the state-of-the-art high performance GAN-based High Fidelity Compression (HiFiC) [9] as our base model. It consists of blocks with 2D convolutions and thus treats each band separately to perform band-by-band spatial compression. To effectively compress the spectral and spatial information content of HSIs, we introduce two new
models: i) HiFiC\({}_{SE}\) using Squeeze and Excitation (SE) blocks [11]; and ii) HiFiC\({}_{3D}\) using 3D convolutions (convs) that can exploit both spatial and spectral redundancies. The SE blocks reduce feature redundancies using channel attention, while 3D convs use 3D kernels to learn feature dependencies across the channels. Thus, the proposed models are capable of learning to compress spatial and spectral redundancies to achieve better compression performances for HSIs.
## II Proposed Models for Spatio-Spectral HSI Compression
Let \(\textbf{X}=\{x_{i}\}_{i=1}^{H}\) be a set of \(H\) uncompressed HSIs, where \(x_{i}\) is the \(i^{th}\) image. The aim of spatio-spectral compression is to produce spatially and spectrally decorrelated latent representations \(\textbf{Y}=\{y_{i}\}_{i=1}^{H}\) that can be entropy coded and stored with minimal number of bits. At the same time, the learned representations should retain all the necessary information for reconstructing back \(\textbf{X}^{\prime}=\{x_{i}^{\prime}\}_{i=1}^{H}\) with the least possible distortion.
The proposed models HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) for spatio-spectral hyperspectral image compression are defined in the framework of the HiFiC model [9]. Figure 1(a) shows the base HiFiC model with four main blocks: i) encoder \(E\); ii) probability model \(P\); iii) generator \(G\); and iv) discriminator \(D\). The proposed HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) models follow the base HiFiC but include modifications on the architectures of \(E\) and \(G\) using SE blocks and 3D convs, respectively (see Figures 1(b) and 1(c)). We introduce these modifications to the architectures to specifically exploit spatio-spectral redundancies. The HiFiC\({}_{SE}\) consists of SE blocks prior to the regular 2D convs that enable it to attend to the spectral channel information. The SE block consists of a global pooling layer and a Fully Connected Multi-Layer Perceptron (FC-MLP) with ReLU activation, where the number of neurons is decreased by the reduction ratio \(r_{r}\), followed by another FC-MLP with Sigmoid activation which outputs weights for each channel (see Figure 2). The SE block investigates the relationship between channels by explicitly modelling the inter-dependencies between them. We use \(l_{1}\) regularization at the FC-MLP layers for generating sparse weights indicating the channel importance. These weights are then applied to the feature maps to generate the output of the SE block, which can be fed directly into subsequent layers of the network. The HiFiC\({}_{3D}\) is built by replacing the first 2D conv of the Encoder (E) and the last 2D conv of the Generator (G) with 3D convs, whose 3D kernels can evaluate spectral redundancies (see Figures 1(b) and 1(c)). It is inspired by the video compression work in [12] that uses 3D convs to remove temporal redundancies.
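As an illustration, the forward pass of such an SE block can be sketched in a few lines of NumPy. This is our own toy version (the authors' implementation is in TensorFlow); the weight matrices are random stand-ins for trained parameters, and the \(l_{1}\) regularization acts only during training, so it does not appear here.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """SE block forward pass on a feature map x of shape (H, W, C)."""
    z = x.mean(axis=(0, 1))                    # squeeze: global average pooling -> (C,)
    s = np.maximum(z @ w1 + b1, 0.0)           # FC-MLP + ReLU, channels reduced by r_r
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))   # FC-MLP + Sigmoid -> per-channel weights
    return x * s                               # excite: rescale every channel

# toy usage: C = 8 channels, reduction ratio r_r = 2 as in the paper
H, W, C, r_r = 4, 4, 8, 2
rng = np.random.default_rng(1)
w1, b1 = rng.normal(size=(C, C // r_r)), np.zeros(C // r_r)
w2, b2 = rng.normal(size=(C // r_r, C)), np.zeros(C)
y = se_block(rng.normal(size=(H, W, C)), w1, b1, w2, b2)
assert y.shape == (H, W, C)
```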
Fig. 1: Proposed spatio-spectral hyperspectral image compression models. (a) The overall GAN-based HiFiC model architecture [9]. (b), (c) and (d) The Encoder (E), Generator (G) and Discriminator (D) architectures of HiFiC\({}_{opt}\), HiFiC\({}_{SE}\) with added SE blocks and HiFiC\({}_{3D}\) with two Conv3D layers. Q: quantization, P: probability model, AE: arithmetic encoder, AD: arithmetic decoder, Conv(3D)(T)-N-KxK(xK): 2D/3D (transposed) convolution layer with N output channels and kernel size K, \(\downarrow_{2}\)/\(\uparrow_{2}\): stride 2, CNorm: channel normalization layer, NN-\(\uparrow_{16}\): nearest neighbor upsampling.
Given an image \(x_{i}\in\mathbf{X}\), it is encoded to a spatially and spectrally decorrelated latent representation \(y_{i}\) and then decoded back to \(x_{i}^{\prime}\) as:
\[y_{i}=Q(E(x_{i})),\;x_{i}^{\prime}=G(y_{i}). \tag{1}\]
where \(Q\) is a quantizer, modeled as a rounding operation during inference and as a straight-through estimator [13] during training. In the bottleneck of the network, the latents \(y_{i}\) are entropy coded into bitstreams using arithmetic encoding (AE) with minimum bitrate \(r(y_{i})=-\log(P(y_{i}))\), where \(P\) follows the hyper-prior model in [14] with side information. The bitstreams are further decoded back to \(y_{i}\) using arithmetic decoding (AD). Finally, a single-scale discriminator \(D\) with the conditional information (\(y_{i}\)) decides if the reconstructed image is real or generated. Each of the modules \(E\), \(P\), \(G\), and \(D\) is parameterized by convolutional neural networks and optimized together to obtain the minimum rate-distortion (RD) trade-off. The loss functions \(L\) for the optimization of each of the four blocks are:
\[L_{E,G,P}=\mathbb{E}_{x_{i}\sim r_{\mathbf{X}}}[\lambda r(y_{i})+d(x_{i},x_{i} ^{\prime})-\beta\log D(x_{i}^{\prime},y_{i})], \tag{2}\]
\[L_{D}=\mathbb{E}_{x_{i}\sim r_{\mathbf{X}}}[-\log(1-D(x_{i}^{\prime},y_{i}))] +\mathbb{E}_{x_{i}\sim r_{\mathbf{X}}}[-\log D(x_{i},y_{i})]. \tag{3}\]
where \(r(y_{i})\) is the rate, \(d(x_{i},x_{i}^{\prime})\) is the distortion loss, \(\lambda\) is the hyperparameter controlling the RD trade-off, and \(\log(D(x_{i}^{\prime},y_{i}))\) is the conditional discriminator loss with \(\beta\) controlling the discriminator loss effect. The distortion loss is modelled as:
\[d(x_{i},x_{i}^{\prime})=\theta_{1}\cdot\text{MSE}+\theta_{2}\cdot(1-\text{ SSIM})+\theta_{3}\cdot\text{LPIPS}. \tag{4}\]
where \(\theta\)'s are hyperparameters that control the effect of mean square error (MSE), structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS) losses in the total distortion loss calculation. The LPIPS loss mimics the human visual system and measures reconstruction quality in the feature space, while the SSIM loss is added as an improvement to HSI compression as it enhances reconstruction by preserving the structural information relevant in the scene.
Since \(r(y_{i})\) is at odds with \(d(x_{i},x_{i}^{\prime})\) and \(-\log(D(x_{i}^{\prime},y_{i}))\) in (2), controlling the RD trade-off is difficult (as for a fixed \(\lambda\), different \(\theta\)'s and \(\beta\) would result in models with different bitrates). Hence, we use target bitrates \(r_{t}\) as in [9] and control \(\lambda\) using \(\lambda^{(a)}\) and \(\lambda^{(b)}\) with the following rule:
\[\lambda=\begin{cases}\lambda^{(a)},&\text{if }r(y_{i})>r_{t}.\\ \lambda^{(b)},&\text{otherwise}.\end{cases} \tag{5}\]
With \(\lambda^{(a)}\gg\lambda^{(b)}\) the model learns bitrates closer to \(r_{t}\). By keeping \(\lambda^{(b)}\) fixed and changing only \(r_{t}\) and \(\lambda^{(a)}\), it is possible to achieve different bitrates.
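For concreteness, the rule (5) can be written as a two-line controller; the sketch below (ours) uses the Table I pair \(r_{t}=0.6\), \(\lambda^{(a)}=2^{-1}\) and a placeholder value for the fixed \(\lambda^{(b)}\), which is not given explicitly in this paper.

```python
def rd_lambda(rate, r_t, lam_a, lam_b):
    """Rate controller (5): penalize the rate term hard (lam_a >> lam_b)
    whenever the current bitrate r(y_i) exceeds the target r_t."""
    return lam_a if rate > r_t else lam_b

# Table I setting r_t = 0.6, lam_a = 2**-1; lam_b = 2**-4 is a placeholder.
assert rd_lambda(0.9, 0.6, 2**-1, 2**-4) == 2**-1   # above target -> strong penalty
assert rd_lambda(0.4, 0.6, 2**-1, 2**-4) == 2**-4   # below target -> weak penalty
```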
## III Dataset Description and Design of Experiments
To evaluate the efficacy of our proposed models, we have used a hyperspectral dataset consisting of 3,840 image patches with 369 spectral bands covering the visible and near-infrared (VNIR) portion of the electromagnetic spectrum in the wavelength range between 400 - 1,000 nm. Each patch has a size of 96 × 96 pixels with a spatial resolution of 28 cm. We also normalized the data to the range 0 - 1, as required by our compression models. The patches are then split in the ratio of 80%, 10% and 10% to form the train, validation and test sets, respectively.
All the experiments were carried out on an NVIDIA Tesla V100 GPU with 32 GB of memory. The code for the compression model is implemented with TensorFlow v1.15.2 and built on TensorFlow Compression v1.3 [15] and HiFiC [9]. We trained all the models with the Adam optimizer with a batch size of 8. We use a baseline model trained without GAN (\(\beta=0\) in (2)) as an initializer for each of our models. For the distortion loss in (4) we used \(\theta_{1}=0.15\cdot 2^{-5}\), \(\theta_{2}=0.075\cdot 2^{-3}\) and \(\theta_{3}=1\). These values were determined through multiple experiments. The discriminator loss effect \(\beta\) was set to 0.15, while the reduction ratio in the SE block \(r_{r}\) was set to 2. As mentioned in Section II, different bitrates can be achieved by changing two hyperparameters: i) the target rate \(r_{t}\); and ii) \(\lambda^{(a)}\). The target rate \(r_{t}\) was varied over [0.2, 1] with an interval of 0.2, with its associated \(\lambda^{(a)}\) values set accordingly. In total, we configured five settings for each model with different \(r_{t}\) and \(\lambda^{(a)}\) combinations (see Table I). We compared the proposed spatio-spectral compression models (HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\)) with: i) the adapted (with the loss in (4)) and optimized spatial compression HiFiC model (termed HiFiC\({}_{opt}\)); and ii) the traditional JPEG 2000. The comparison was performed at different bitrates, and the compression quality was analysed using the reconstruction performance measured by the peak signal-to-noise ratio (PSNR) and the compression rate achieved in terms of bitrate measured in bits per pixel (bpp). A higher PSNR and a lower bpp indicate a better compression performance.
## IV Experimental Results
### _Ablation Study_
In this subsection, we present an ablation study to understand the impact of the placement of the SE blocks and the 3D convs in the HiFiC framework. The performance was evaluated in terms of reconstruction quality, measured with the average PSNR, and runtime, measured as the average number of iterations completed per second (iters/s: the higher the value, the faster the model), to decide the best placement. The metrics obtained by placing the SE and 3D blocks at different layers in the model are provided in Table II. The SE blocks were placed in the initial layer of \(E\) alone, then in the initial layer and after every normalization layer in \(E\), and finally we extended the SE blocks to cover all layers in \(E\) and \(G\) after every normalization layer. We found that the best PSNR performance, with a runtime comparable to the other placements, was achieved when the SE block was incorporated in both \(E\) and \(G\) after the normalization layer. Similarly, the impact of replacing 2D convs with 3D convs is also analysed.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(r_{t}\) & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\ \hline \(\lambda^{(a)}\) & \(2^{1}\) & \(2^{0}\) & \(2^{-1}\) & \(2^{-2}\) & \(2^{-3}\) \\ \hline \end{tabular}
\end{table} TABLE I: List of target rates \(r_{t}\) and their associated \(\lambda^{(a)}\) values.

Fig. 2: The basic structure of an SE block [11].
We found that replacing only the initial layer of \(E\) and the final layer of \(G\) works equally well when compared to replacing all 2D convs, with an added advantage in runtime and computational complexity.
### _Spatio-Spectral HSI Compression_
In this subsection, we evaluate the efficacy of our proposed spatio-spectral hyperspectral image compression models. As a first step, we analyse the importance of the 3D conv layers in GANs for reconstruction. Figure 3 shows the reconstructed images with HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) using GAN. The reconstruction without the 3D conv layer in Figure 3b fails to generate the textures (such as grass) and does not retain visible objects (such as roads). With the integrated 3D conv layer in Figure 3c, the perceptual quality is improved and the textures in the grass areas are recovered. In addition, high quality compressed images are obtained with reduced perceptible artifacts. This shows the importance of the 3D convs in GAN-based models for HSI compression. Figure 4 shows the rate-distortion curves for the spatio-spectral compression methods HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) compared with the optimized spatial compression method HiFiC\({}_{opt}\) and the traditional JPEG 2000. We can observe that all the end-to-end deep compression methods consistently outperform JPEG 2000 at these small bitrates. Amongst the spatial and spatio-spectral compression methods, HiFiC\({}_{3D}\) achieves higher PSNR values than HiFiC\({}_{SE}\) and HiFiC\({}_{opt}\) throughout most bitrates. The qualitative results of reconstruction at \(r_{t}=0.6\) with the proposed spatio-spectral compression models in comparison with HiFiC\({}_{opt}\) are also shown in Figure 5. Although the qualitative results of the spatial and spatio-spectral compression methods look similar, on closer inspection we notice a slight improvement in edge details and sharpness, which results in a better PSNR at lower bpp with the spatio-spectral compression models HiFiC\({}_{SE}\) and/or HiFiC\({}_{3D}\) when compared to HiFiC\({}_{opt}\).
The improvement in visual quality with increasing bpp is depicted in Figure 6. As the bitrate increases, the reconstruction results improve in visual quality as well as PSNR. The figure also shows that at lower bpp HiFiC\({}_{3D}\) (first row in Figure 6) performs better than HiFiC\({}_{SE}\): unlike HiFiC\({}_{SE}\), which distorts the image appearance with artifacts at lower bpp, HiFiC\({}_{3D}\) demonstrates better reconstruction.
## V Conclusion
In this paper, we have studied the effectiveness of generative adversarial network (GAN)-based models for the spatio-spectral compression of hyperspectral images (HSIs). To this end, we have proposed two models: i) HiFiC\({}_{SE}\) made with Squeeze and Excitation (SE) blocks; and ii) HiFiC\({}_{3D}\) made with 3D convolutions. Experimental results show that the proposed models remarkably outperform JPEG 2000 by reducing the redundancy in the latent representations. In addition, compared to the spatial compression model HiFiC\({}_{opt}\), the proposed HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) provide higher PSNR values for the same bitrate, producing perceptually better reconstructions with richer details. As a future work, we plan to explore the proposed models in the framework of scene classification and content-based image retrieval within the 3D compressed domain.
\begin{table}
\begin{tabular}{|l|c|c|} \hline SE block placement & PSNR & Runtime \\ & (in dB) & (iters/s) \\ \hline HiFiC\({}_{SE}\) - \(E\): initial layer & 29.80 & 7.84 \\ \hline HiFiC\({}_{SE}\) - \(E\): initial+after Norm & 30.12 & 7.49 \\ \hline HiFiC\({}_{SE}\) - \(E\) and \(G\): after Norm & 30.89 & 7.01 \\ \hline HiFiC\({}_{3D}\) - \(E\) and \(G\): all places & 30.12 & 7.31 \\ \hline HiFiC\({}_{3D}\) - \(E\): initial and \(G\): final & 29.24 & 9.25 \\ \hline \end{tabular}
\end{table} TABLE II: Performance analysis with placement of SE and 3D Conv Blocks in the HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) models, respectively.
Fig. 4: Rate-distortion performance of our proposed HiFiC\({}_{SE}\) and HiFiC\({}_{3D}\) compared to HiFiC\({}_{opt}\) and JPEG2000.
Fig. 3: Impact of the 3D conv layers. (a) Original image, (b) and (c) show reconstructed images without and with 3D convs, respectively. Bottom row: Close-ups of the top image parts.
In addition, we plan to consider strategies that integrate lossless compression only for particular areas of interest within the HSIs.
## Acknowledgment
This work is funded by the European Research Council (ERC) through the ERC-2017-STG BigEarth Project under Grant 759764. The authors would like to thank Nimisha Thekke Madam for the joint discussions during the initial phase of this study.
|
2307.09073 | 3-spherical twin buildings | We classify thick irreducible 3-spherical twin buildings of rank at least 3
in which every panel contains at least 6 chambers. Together with the Main
result of [11] we obtain a classification of thick irreducible 3-spherical twin
buildings. | Sebastian Bischof | 2023-07-18T08:47:30Z | http://arxiv.org/abs/2307.09073v1 | # 3-spherical twin buildings
###### Abstract
We classify thick irreducible 3-spherical twin buildings of rank at least 3 in which every panel contains at least 6 chambers. Together with the Main result of [11] we obtain a classification of thick irreducible 3-spherical twin buildings.
**Keywords** Groups of Kac-Moody type, 3-spherical RGD-systems, Twin buildings
**Mathematics Subject Classification** 51E24, 20E42
## 1 Introduction
In [20] Tits gave a complete classification of all thick irreducible spherical buildings of rank at least 3. The decisive step in this classification is the extension theorem for isometries (Theorem 4.1.2 in loc. cit.). It implies that a thick spherical building is uniquely determined by its local structure. Inspired by the paper [21] on Kac-Moody groups over fields, Ronan and Tits introduced twin buildings. These combinatorial objects appear to be natural generalisations of spherical buildings, as they come equipped with an opposition relation which shares many important properties with the opposition relation in a spherical building. In [15] Muhlherr and Ronan gave a proof of the extension theorem for 2-spherical twin buildings satisfying an additional condition they call (co). Condition (co) turns out to be rather mild (see [15, Introduction]). In order to classify such twin buildings it suffices to determine all possible local structures which appear in twin buildings.
The local structure of a building mentioned before is essentially the union of all rank 2 residues of a chamber. In [18] Ronan and Tits introduced the notion of a _foundation_ which is designed to axiomatize geometric structures that may occur as local structures of a building. Roughly speaking, it is a union of rank 2 buildings which are glued along certain rank 1 residues (for the precise definition we refer to Section 3). A foundation is called _integrable_, if it is the local structure of a twin building. One can associate with each twin building of 2-spherical type a foundation and all such are _Moufang foundation_, i.e. the irreducible rank 2 buildings are Moufang and the glueings are compatible with the Moufang structures induced on the rank 1 residues.
Inspired by [18], Tits conjectured that a Moufang foundation is integrable if and only if each of its spherical rank 3 restrictions is integrable (cf. [22, Conjecture 2]). It turned out that this conjecture was too optimistic and he gave a reformulation by omitting the word "spherical" in his earlier conjecture (cf. [23]). A proof of this conjecture would reduce the classification of 2-spherical twin buildings to the rank 3 case. In [12] and [13] a strategy
for the classification of 2-spherical twin buildings is outlined that does not make use of this conjecture. Several important steps in this classification programme have been carried out in the meantime (cf. [11], [25], [26]). The results obtained up until now suggest that the conjecture will be a consequence of the classification once it will be accomplished. However, it would be most desirable to have a proof of this conjecture that is independent of the classification. Our main result is a conceptual proof of this conjecture in the 3-spherical case. It turned out that we only need the integrability for the restriction to irreducible types (cf. Corollary (5.3)):
**Theorem A:** Let \(\mathcal{F}\) be a Moufang foundation of irreducible 3-spherical type and of rank at least 3 such that every panel contains at least 6 chambers. Then the following are equivalent:
1. \(\mathcal{F}\) is integrable.
2. Each irreducible rank 3 restriction of \(\mathcal{F}\) is integrable.
A consequence of Theorem A together with [11, 4] is the classification of thick irreducible 3-spherical twin buildings (cf. Theorem (5.4)):
**Corollary B:** Let \(\Delta\) be a thick irreducible 3-spherical twin building of rank at least 3. Then \(\Delta\) is known.
Let \((W,S)\) be a Coxeter system and let \(\Phi\) be the associated set of roots (viewed as half-spaces). An _RGD-system of type_\((W,S)\) is a pair \((G,(U_{\alpha})_{\alpha\in\Phi})\) consisting of a group \(G\) together with a family of subgroups \(U_{\alpha}\) indexed by the set of roots satisfying a few axioms (for the precise definition see Section 2). Let \(\mathcal{F}\) be a Moufang foundation of type \((W,S)\) satisfying a certain Condition (lco) (e.g. this is satisfied if every panel contains at least 5 chambers). In Section 3 we construct for all \(s\neq t\in S\) RGD-systems \(X_{s,t}\) acting on the corresponding building of the foundation. Moreover, we construct RGD-systems \(X_{s}\) and canonical homomorphisms \(X_{s}\to X_{s,t}\). Let \(G\) be the direct limit of the groups \(X_{s}\) and \(X_{s,t}\). Then there is a natural way of defining a family of _root groups_\((U_{\alpha})_{\alpha\in\Phi}\) inside \(G\). Let \(\mathcal{D}_{\mathcal{F}}:=(G,(U_{\alpha})_{\alpha\in\Phi})\). We prove the following result (cf. Corollary (3.13)):
**Theorem C:** Let \(\mathcal{F}\) be a Moufang foundation of 2-spherical type satisfying Condition (lco). If the canonical mappings \(U_{\pm\alpha_{s}}\to G\) are injective and if \(\mathcal{D}_{\mathcal{F}}\) satisfies (RGD3), then \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system and \(\mathcal{F}\) is integrated in the twin building associated to \(\mathcal{D}_{\mathcal{F}}\).
We use Theorem C to prove Theorem A. Our strategy is to let \(G\) act on a building and deduce that the hypotheses of Theorem C are satisfied. Thus we have a twin building \(\Delta(\mathcal{D}_{\mathcal{F}})\) in which \(\mathcal{F}\) is integrated. In particular, we have the following corollary (cf. Theorem (5.2)):
**Corollary D:** Let \(\mathcal{F}\) be an irreducible Moufang foundation of 3-spherical type such that every panel contains at least 6 chambers. If each irreducible rank 3 restriction of \(\mathcal{F}\) is integrable, then \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system.
**Remarks:**
1. In [11] Muhlherr accomplished the classification of thick locally finite twin buildings of irreducible 2-spherical type and \(m_{st}<8\) without residues associated with one of the groups \(B_{2}(2)\), \(G_{2}(2)\), \(G_{2}(3)\), \({}^{2}F_{4}(2)\). In particular, the thick locally finite twin buildings of irreducible 3-spherical type without residues associated with \(B_{2}(2)\) are already known. As we will see in the proof of Theorem (5.4), the assumption about the residues can be dropped in the 3-spherical case.
2. By Corollary B we have a classification of 3-spherical simply-laced (i.e. \(m_{st}\in\{2,3\}\)) twin buildings. We note that in this case the integrability of those Moufang foundations is
already established by different methods: In a 3-spherical simply-laced Moufang foundation, the Moufang triangles are parametrised by skew-fields and all such are isomorphic. If there exists \(s\in S\) with three neighbours in the Coxeter diagram, then there is a \(D_{4}\) subdiagram (as there is no \(\tilde{A}_{2}\) subdiagram) and hence the Moufang triangles are parametrised by a field. The existence of a twin building with the prescribed foundation follows now from [11]. If each \(s\in S\) has at most 2 neighbours, the diagram is either \(A_{n}\) or \(\tilde{A}_{n}\). In the first case the existence follows from projective geometry and in the second case from Kac-Moody theory.
### Acknowledgement
I am very grateful to Bernhard Muhlherr for proposing this project to me, as well as for many helpful comments and suggestions.
## 2 Preliminaries
### Direct limits
This subsection is based on [19].
Let \(I\) be a set and let \((G_{i})_{i\in I}\) be a family of groups. Furthermore, let \(F_{i,j}\) be a set of homomorphisms of \(G_{i}\) into \(G_{j}\). Then a group \(G:=\underset{\longrightarrow}{\lim}G_{i}\) together with a family of homomorphisms \(f_{i}:G_{i}\to G\) such that \(f_{j}\circ f=f_{i}\) holds for all \(f\in F_{i,j}\) is called _direct limit_ of the groups \(G_{i}\), relative to the \(F_{i,j}\) if it satisfies the following condition:
If \(H\) is a group and if \(h_{i}:G_{i}\to H\) is a family of homomorphisms such that \(h_{j}\circ f=h_{i}\) holds for all \(f\in F_{i,j}\), then there exists exactly one homomorphism \(h:G\to H\) such that \(h_{i}=h\circ f_{i}\).
In this paper we only consider the case where \(X_{i}\) and \(X_{i,j}:=\langle X_{i},X_{j}\rangle\) are groups for \(i\neq j\in I\) and \(f:X_{i}\to X_{i,j}\) are the canonical homomorphisms. We call this direct limit the \(2\)_-amalgam of the groups \(X_{i}\)_.
### Coxeter systems
Let \((W,S)\) be a Coxeter system and let \(\ell\) denote the corresponding length function. For \(s,t\in S\) we denote the order of \(st\) in \(W\) by \(m_{st}\). The _rank_ of a Coxeter system is the cardinality of the set \(S\). The _Coxeter diagram_ corresponding to \((W,S)\) is the labeled graph \((S,E(S))\), where \(E(S)=\{\{s,t\}\ |\ m_{st}>2\}\) and where each edge \(\{s,t\}\) is labeled by \(m_{st}\) for all \(s,t\in S\). We call a Coxeter system _irreducible_, if the underlying graph is connected; otherwise we call it _reducible_. It is well-known that the pair \((\langle J\rangle,J)\) is a Coxeter system (cf. [5, Ch. IV, SS1 Theorem 2]). A subset \(J\subseteq S\) is called _spherical_ if \(\langle J\rangle\) is finite; for \(k\in\mathbb{N}\) the Coxeter system is called \(k\)_-spherical_, if \(\langle J\rangle\) is spherical for each subset \(J\) of \(S\) with \(|J|\leq k\). Given a spherical subset \(J\) of \(S\), there exists a unique element of maximal length in \(\langle J\rangle\), which we denote by \(r_{J}\) (cf. [2, Corollary 2.19]). For \(s_{1},\ldots,s_{k}\in S\) we call \(s_{1}\cdots s_{k}\)_reduced_, if \(\ell(s_{1}\cdots s_{k})=k\). Similar as in [16, Ch. 2.2] we define for distinct \(s,t\in S\) with \(m_{st}<\infty\) the element \(p(s,t)\) to mean \(stst\ldots\) with \(\ell(p(s,t))=m_{st}\); e.g. if \(m_{st}=3\), we have \(p(s,t)=sts\). It is well-known that for \(w\in W,s\in S\) with \(\ell(sw)<\ell(w)\), there exists \(w^{\prime}\in W\) such that \(\ell(w^{\prime})=\ell(w)-1\) and \(w=sw^{\prime}\) (cf. [2, exchange condition on p.79]).
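For a concrete toy case (ours, not from the paper), take \((W,S)=(\mathrm{Sym}(3),\{s,t\})\) with \(s,t\) the two adjacent transpositions, so that \(m_{st}=3\); the length function is then the inversion number, and the sketch below checks that \(p(s,t)=sts=tst\) is the longest element and that the simple root \(\alpha_{s}\) contains exactly half of \(W\).

```python
from itertools import permutations

def mult(u, v):                      # group product u*v, permutations as tuples
    return tuple(u[v[i]] for i in range(3))

def length(w):                       # Coxeter length = inversion number in Sym(3)
    return sum(w[i] > w[j] for i in range(3) for j in range(i + 1, 3))

s, t = (1, 0, 2), (0, 2, 1)          # the two adjacent transpositions
p_st = mult(mult(s, t), s)           # p(s,t) = sts, since m_st = 3
assert length(p_st) == 3             # sts is the longest element of Sym(3)
assert p_st == mult(mult(t, s), t)   # braid relation: sts = tst

alpha_s = {w for w in permutations(range(3)) if length(mult(s, w)) > length(w)}
assert len(alpha_s) == 3             # the root alpha_s is half of |W| = 6
```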
**(2.1) Convention.** For the rest of this paper we let \((W,S)\) be a Coxeter system.
### Chamber systems
Let \(I\) be a set. A _chamber system over \(I\)_ is a pair \((\mathcal{C},(\sim_{i})_{i\in I})\) consisting of a non-empty set \(\mathcal{C}\) whose elements are called _chambers_ and where \(\sim_{i}\) is an equivalence relation on the set of
chambers for each \(i\in I\). Given \(i\in I\) and \(c,d\in\mathcal{C}\), then \(c\) is called _\(i\)-adjacent_ to \(d\) if \(c\sim_{i}d\). The chambers \(c,d\) are called _adjacent_, if they are \(i\)-adjacent for some \(i\in I\). If we restrict \(\sim_{i}\) to \(\emptyset\neq\mathcal{X}\subseteq\mathcal{C}\) for each \(i\in I\), then \((\mathcal{X},(\sim_{i})_{i\in I})\) is a chamber system over \(I\) and we call it the _induced_ chamber system.
A _gallery_ in \((\mathcal{C},(\sim_{i})_{i\in I})\) is a sequence \((c_{0},\ldots,c_{k})\) such that \(c_{\mu}\in\mathcal{C}\) for all \(0\leq\mu\leq k\) and \(c_{\mu-1},c_{\mu}\) are adjacent for all \(1\leq\mu\leq k\). Given a gallery \(G=(c_{0},\ldots,c_{k})\), then we put \(\beta(G):=c_{0}\) and \(\varepsilon(G):=c_{k}\). If \(G\) is a gallery and if \(c,d\in\mathcal{C}\) are such that \(\beta(G)=c\) and \(\varepsilon(G)=d\) then we say that \(G\) is a _gallery from \(c\) to \(d\)_ or \(G\)_joins \(c\) and \(d\)_. The chamber system \((\mathcal{C},(\sim_{i})_{i\in I})\) is said to be _connected_, if for any two chambers there exists a gallery joining them. A gallery \(G\) will be called _closed_ if \(\beta(G)=\varepsilon(G)\). Given two galleries \(G=(c_{0},\ldots,c_{k})\) and \(H=(d_{0},\ldots,d_{l})\) such that \(\varepsilon(G)=\beta(H)\), then \(GH\) denotes the gallery \((c_{0},\ldots,c_{k}=d_{0},\ldots,d_{l})\).
Let \(J\) be a subset of \(I\). A _\(J\)-gallery_ is a gallery \((c_{0},\ldots,c_{k})\) such that for each \(1\leq\mu\leq k\) there exists an index \(j\in J\) with \(c_{\mu-1}\sim_{j}c_{\mu}\).
### Homotopy of galleries and simple connectedness
In the context of chamber systems there is the notion of \(m\)-homotopy and \(m\)-simple connectedness for each \(m\in\mathbb{N}\). In this paper we are only concerned with the case \(m=2\). Therefore our definitions are always to be understood as a specialisation of the general theory to the case \(m=2\).
Let \((\mathcal{C},(\sim_{i})_{i\in I})\) be a chamber system over a set \(I\). Two galleries \(G\) and \(H\) are said to be _elementary homotopic_ if there exist two galleries \(X,Y\) and two \(J\)-galleries \(G^{\prime},H^{\prime}\) for some \(J\subseteq I\) of cardinality at most \(2\) such that \(G=XG^{\prime}Y,H=XH^{\prime}Y\). Two galleries \(G,H\) are said to be _homotopic_ if there exists a finite sequence \(G_{0},\ldots,G_{l}\) of galleries such that \(G_{0}=G,G_{l}=H\) and such that \(G_{\mu-1}\) is elementary homotopic to \(G_{\mu}\) for all \(1\leq\mu\leq l\).
If two galleries \(G,H\) are homotopic, then it follows by definition that \(\beta(G)=\beta(H)\) and \(\varepsilon(G)=\varepsilon(H)\). A closed gallery \(G\) is said to be _null-homotopic_ if it is homotopic to the gallery \((\beta(G))\). The chamber system \((\mathcal{C},(\sim_{i})_{i\in I})\) is called _simply connected_ if it is connected and if each closed gallery is null-homotopic.
### Buildings
A _building of type \((W,S)\)_ is a pair \(\Delta=(\mathcal{C},\delta)\) where \(\mathcal{C}\) is a non-empty set and where \(\delta:\mathcal{C}\times\mathcal{C}\to W\) is a _distance function_ satisfying the following axioms, where \(x,y\in\mathcal{C}\) and \(w=\delta(x,y)\):
1. \(w=1_{W}\) if and only if \(x=y\).
2. if \(z\in\mathcal{C}\) satisfies \(s:=\delta(y,z)\in S\), then \(\delta(x,z)\in\{w,ws\}\), and if, furthermore, \(\ell(ws)=\ell(w)+1\), then \(\delta(x,z)=ws\).
3. if \(s\in S\), there exists \(z\in\mathcal{C}\) such that \(\delta(y,z)=s\) and \(\delta(x,z)=ws\).
The _rank_ of \(\Delta\) is the rank of the underlying Coxeter system. The elements of \(\mathcal{C}\) are called _chambers_. Given \(s\in S\) and \(x,y\in\mathcal{C}\), then \(x\) is called _\(s\)-adjacent_ to \(y\), if \(\delta(x,y)=s\). The chambers \(x,y\) are called _adjacent_, if they are \(s\)-adjacent for some \(s\in S\). A _gallery_ from \(x\) to \(y\) is a sequence \((x=x_{0},\ldots,x_{k}=y)\) such that \(x_{l-1}\) and \(x_{l}\) are adjacent for every \(1\leq l\leq k\); the number \(k\) is called the _length_ of the gallery. Let \((x_{0},\ldots,x_{k})\) be a gallery and suppose \(s_{i}\in S\) such that \(\delta(x_{i-1},x_{i})=s_{i}\). Then \((s_{1},\ldots,s_{k})\) is called the _type_ of the gallery. A gallery from \(x\) to \(y\) of length \(k\) is called _minimal_ if there is no gallery from \(x\) to \(y\) of length \(<k\).
Given a subset \(J\subseteq S\) and \(x\in\mathcal{C}\), the _\(J\)-residue of \(x\)_ is the set \(R_{J}(x):=\{y\in\mathcal{C}\mid\delta(x,y)\in\langle J\rangle\}\). A _residue_ is a subset \(R\) of \(\mathcal{C}\) such that there exist \(J\subseteq S\) and \(x\in\mathcal{C}\) with \(R=R_{J}(x)\). Since the subset \(J\) is uniquely determined by \(R\), the set \(J\) is called the _type_ of \(R\) and the
_rank_ of \(R\) is defined to be the cardinality of \(J\). Each \(J\)-residue is a building of type \((\langle J\rangle,J)\) with the distance function induced by \(\delta\) (cf. [2, Corollary 5.30]). Given \(x\in\mathcal{C}\) and a \(J\)-residue \(R\subseteq\mathcal{C}\), then there exists a unique chamber \(z\in R\) such that \(\ell(\delta(x,y))=\ell(\delta(x,z))+\ell(\delta(z,y))\) holds for every \(y\in R\) (cf. [2, Proposition 5.34]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_ and is denoted by \(\operatorname{proj}_{R}x\). Moreover, if \(z=\operatorname{proj}_{R}x\) we have \(\delta(x,y)=\delta(x,z)\delta(z,y)\) for each \(y\in R\). A residue is called _spherical_ if its type is a spherical subset of \(S\). A building is _spherical_ if its type is spherical; for \(k\in\mathbb{N}\) it is called \(k\)_-spherical_ if \((W,S)\) is \(k\)-spherical. Let \(R\) be a spherical \(J\)-residue. Then \(x,y\in R\) are called _opposite in \(R\)_ if \(\delta(x,y)=r_{J}\). A _panel_ is a residue of rank \(1\). An \(s\)_-panel_ is a panel of type \(\{s\}\) for \(s\in S\). For \(c\in\mathcal{C}\) and \(s\in S\) we denote the \(s\)-panel containing \(c\) by \(\mathcal{P}_{s}(c)\). The building \(\Delta\) is called _thick_, if each panel of \(\Delta\) contains at least three chambers. For an \(s\)-panel \(P\) we will also write \(P_{s}\) instead of \(P\) to underline the type of \(P\). We denote the set of all panels in a given building \(\Delta\) by \(\mathcal{P}_{\Delta}\). For \(c\in\mathcal{C}\) and \(k\in\mathbb{N}\) we denote the union of all residues of rank at most \(k\) containing \(c\) by \(E_{k}(c)\). Let \(\Delta=(\mathcal{C},\delta),\Delta^{\prime}=(\mathcal{C}^{\prime},\delta^{ \prime})\) be two buildings of type \((W,S)\) and let \(X\subseteq\mathcal{C},X^{\prime}\subseteq\mathcal{C}^{\prime}\). Then a map \(\varphi:X\to X^{\prime}\) is called _isomorphism_ if it is bijective and preserves the distance functions. In this case we will write \(X\cong X^{\prime}\). We denote the set of all isomorphisms from a building \(\Delta\) to itself by \(\operatorname{Aut}(\Delta)\).
_(2.2) Remark._ Our definition of a building agrees with the definition given in [16, Ch. 3] (cf. [2, Proposition 5.23]).
### Coxeter buildings
**(2.3) Example.**: We define \(\delta:W\times W\to W,(x,y)\mapsto x^{-1}y\). Then \(\Sigma(W,S):=(W,\delta)\) is a building of type \((W,S)\). Moreover, \(W\) acts on \(\Sigma(W,S)\) by left-multiplication.
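A brute-force verification of the three building axioms for \(\Sigma(W,S)\) in the toy case \(W=\mathrm{Sym}(3)\) takes only a few lines; this is our own check, not part of the paper.

```python
from itertools import permutations

def mult(u, v):                              # group product, permutations as tuples
    return tuple(u[v[i]] for i in range(3))

def inv(u):                                  # inverse permutation
    w = [0, 0, 0]
    for i, ui in enumerate(u):
        w[ui] = i
    return tuple(w)

def length(w):                               # Coxeter length = inversion number
    return sum(w[i] > w[j] for i in range(3) for j in range(i + 1, 3))

W = list(permutations(range(3)))
S = [(1, 0, 2), (0, 2, 1)]
e = (0, 1, 2)
delta = lambda x, y: mult(inv(x), y)         # delta(x, y) = x^{-1} y

for x in W:
    for y in W:
        w = delta(x, y)
        assert (w == e) == (x == y)          # axiom 1
        for z in W:
            if (s := delta(y, z)) in S:      # axiom 2
                assert delta(x, z) in (w, mult(w, s))
                if length(mult(w, s)) == length(w) + 1:
                    assert delta(x, z) == mult(w, s)
        for s in S:                          # axiom 3
            assert any(delta(y, z) == s and delta(x, z) == mult(w, s) for z in W)
print("all building axioms hold for Sigma(Sym(3), S)")
```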
A _reflection_ is an element of \(W\) that is conjugate to an element of \(S\). For \(s\in S\) we let \(\alpha_{s}:=\{w\in W\mid\ell(sw)>\ell(w)\}\) be the _simple root_ corresponding to \(s\). A _root_ is a subset \(\alpha\subseteq W\) such that \(\alpha=v\alpha_{s}\) for some \(v\in W\) and \(s\in S\). We denote the set of all roots by \(\Phi(W,S)\). The set \(\Phi(W,S)_{+}=\{\alpha\in\Phi(W,S)\mid 1_{W}\in\alpha\}\) is the set of all _positive roots_ and \(\Phi(W,S)_{-}=\{\alpha\in\Phi(W,S)\mid 1_{W}\notin\alpha\}\) is the set of all _negative roots_. For \(J\subseteq S\) we put \(\Phi(W,S)^{J}:=\Phi(\langle J\rangle,J)\) (resp. \(\Phi(W,S)_{+}^{J},\Phi(W,S)_{-}^{J}\)). For each root \(\alpha\in\Phi(W,S)\) we denote the _opposite_ root by \(-\alpha\) and we let \(r_{\alpha}\) be the unique reflection which interchanges these two roots. A pair of distinct roots \(\{\alpha,\beta\}\subseteq\Phi(W,S)\) is called _prenilpotent_ if both \(\alpha\cap\beta\) and \((-\alpha)\cap(-\beta)\) are non-empty sets. Given such a pair \(\{\alpha,\beta\}\) we will write \([\alpha,\beta]:=\{\gamma\in\Phi(W,S)\mid\alpha\cap\beta\subseteq\gamma\text{ and }(-\alpha)\cap(-\beta)\subseteq-\gamma\}\) and \((\alpha,\beta):=[\alpha,\beta]\setminus\{\alpha,\beta\}\). For a pair \(\{\alpha,\beta\}\subseteq\Phi\) of prenilpotent roots there are two possibilities: either \(o(r_{\alpha}r_{\beta})<\infty\) or \(o(r_{\alpha}r_{\beta})=\infty\). The second case implies \(\alpha\subsetneq\beta\) or \(\beta\subsetneq\alpha\).
**(2.4) Convention.** For the rest of this paper we let \((W,S)\) be a Coxeter system of finite rank and \(\Phi:=\Phi(W,S)\) (resp. \(\Phi_{+},\Phi_{-}\)) be the set of roots (resp. positive roots, negative roots).
For \(\alpha\in\Phi\) we denote by \(\partial\alpha\) (resp. \(\partial^{2}\alpha\)) the set of all panels (resp. spherical residues of rank \(2\)) stabilized by \(r_{\alpha}\). The set \(\partial\alpha\) is called the _wall_ associated to \(\alpha\). For a gallery \(G=(c_{0},\ldots,c_{k})\) we say that \(G\)_crosses the wall_\(\partial\alpha\) if \(\{c_{i-1},c_{i}\}\in\partial\alpha\) for some \(1\leq i\leq k\).
**(2.5) Lemma.**: _Let \(\alpha\in\Phi\) and let \(P,Q\in\partial\alpha\). Then there exist a sequence \(P_{0}=P,\ldots,P_{n}=Q\) of panels in \(\partial\alpha\) and a sequence \(R_{1},\ldots,R_{n}\) of spherical rank \(2\) residues in \(\partial^{2}\alpha\) such that \(P_{i-1}\) and \(P_{i}\) are distinct and contained in \(R_{i}\) for all \(1\leq i\leq n\)._
Proof.: This is a consequence of [7, Proposition 2.7]. The fact that \(P_{0},\ldots,P_{n}\in\partial\alpha\) follows from the implication (iii)\(\Rightarrow\)(ii) in loc. cit. Since \(P_{i}\subseteq R_{i}\), we infer \(R_{i}\in\partial^{2}\alpha\).
Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\). A subset \(\Sigma\subseteq\mathcal{C}\) is called _apartment_ if it is isomorphic to \(W\). A subset \(\alpha\subseteq\mathcal{C}\) is called a _root_ if it is isomorphic to \(\alpha_{s}\) for some \(s\in S\).
### Two conditions for buildings
**(2.6) Example**.: Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\). We define \(x\sim_{s}y\) if \(\delta(x,y)\in\langle s\rangle\). Then \(\sim_{s}\) is an equivalence relation and \((\mathcal{C},(\sim_{s})_{s\in S})\) is a chamber system.
We now introduce two conditions for a building:
1. A building \(\Delta\) satisfies Condition (lco) if it is 2-spherical and if \(R\) is a rank 2 residue of \(\Delta\) containing a chamber \(c\), then the induced chamber system defined by the set of chambers opposite \(c\) inside \(R\) is connected.
2. A building \(\Delta\) satisfies Condition (lsco) if it is 3-spherical and if \(R\) is a rank 3 residue of \(\Delta\) containing a chamber \(c\), then the induced chamber system defined by the set of chambers opposite \(c\) inside \(R\) is simply connected.
Buildings which satisfy both conditions are discussed in [9, Chapter 9] and [10, Introduction]. In [9] it is stated that a 3-spherical building of simply-laced type (i.e. \(m_{st}\in\{2,3\}\) for all \(s,t\in S\)) in which every panel contains at least 4 chambers satisfies both conditions (lco) and (lsco). Moreover, (following [9] and [10]) a (general) 3-spherical building satisfies the Conditions (lco) and (lsco) if every panel contains at least 6 chambers.
### Spherical Moufang buildings
Let \(\Delta=(\mathcal{C},\delta)\) be a thick irreducible spherical building of type \((W,S)\) and of rank at least 2. For a root \(\alpha\) of \(\Delta\) we define the _root group_ \(U_{\alpha}\) as the set of all automorphisms fixing \(\alpha\) pointwise and fixing pointwise every panel \(P\) with \(|P\cap\alpha|=2\). The building \(\Delta\) is called _Moufang_ if for every root \(\alpha\) of \(\Delta\) the root group \(U_{\alpha}\) acts simply transitively on the set of apartments containing \(\alpha\).
Let \(\Delta=(\mathcal{C},\delta)\) be a Moufang building of type \((W,S)\), let \(\Sigma\) be an apartment of \(\Delta\) and let \(c\in\Sigma\). Identify the set of roots in \(\Sigma\) with \(\Phi\) and let \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\). Let \(H=\operatorname{Fix}_{G}(\Sigma)\) and \(B_{+}=\operatorname{Stab}_{G}(c)=H\langle U_{\alpha}\mid c\in\alpha\in\Phi\rangle\). Then \(\Delta(G,B_{+})=(G/B_{+},\delta)\) is a building of type \((W,S)\), where \(\delta:G/B_{+}\times G/B_{+}\to W,(gB_{+},hB_{+})\mapsto w\) with \(g^{-1}h\in B_{+}wB_{+}\) (for more information we refer to [2, Section 6, 7]). Moreover, the buildings \(\Delta\) and \(\Delta(G,B_{+})\) are isomorphic. By [2, Corollary 7.68] and [16, Lemma (7.4)] every chamber \(gB_{+}\) can be written in the form \(u_{1}n_{s_{1}}\cdots u_{k}n_{s_{k}}B_{+}\) with \(u_{i}\in U_{\alpha_{s_{i}}}\), where \(s_{1}\cdots s_{k}\) is a reduced expression of \(\delta(B_{+},gB_{+})\). If we fix the _type_ \((s_{1},\ldots,s_{k})\) of \(u_{1}n_{s_{1}}\cdots u_{k}n_{s_{k}}B_{+}\), then the elements \(u_{i}\in U_{\alpha_{s_{i}}}\) are uniquely determined.
### Twin buildings
Let \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{ -})\) be two buildings of the same type \((W,S)\). A _codistance_ (or _twinning_) between \(\Delta_{+}\) and \(\Delta_{-}\) is a mapping \(\delta_{*}:(\mathcal{C}_{+}\times\mathcal{C}_{-})\cup(\mathcal{C}_{-}\times \mathcal{C}_{+})\to W\) satisfying the following axioms, where \(\varepsilon\in\{+,-\},x\in\mathcal{C}_{\varepsilon},y\in\mathcal{C}_{-\varepsilon}\) and \(w=\delta_{*}(x,y)\):
1. \(\delta_{*}(y,x)=w^{-1}\);
2. if \(z\in\mathcal{C}_{-\varepsilon}\) is such that \(s:=\delta_{-\varepsilon}(y,z)\in S\) and \(\ell(ws)=\ell(w)-1\), then \(\delta_{*}(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}_{-\varepsilon}\) such that \(\delta_{-\varepsilon}(y,z)=s\) and \(\delta_{*}(x,z)=ws\).
A _twin building of type_ \((W,S)\) is a triple \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\), where \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{-})\) are buildings of type \((W,S)\) and where \(\delta_{*}\) is a twinning between \(\Delta_{+}\) and \(\Delta_{-}\). The twin building is called _thick_ if \(\Delta_{+}\) and \(\Delta_{-}\) are thick. The twin building \(\Delta\) satisfies Condition (lco) if both buildings \(\Delta_{+},\Delta_{-}\) satisfy Condition (lco). For \(\varepsilon\in\{+,-\}\) and \(c\in\mathcal{C}_{\varepsilon}\) we define \(c^{\mathrm{op}}=\{d\in\mathcal{C}_{-\varepsilon}\mid\delta_{*}(c,d)=1_{W}\}\). Let \(\Sigma_{+}\subseteq\mathcal{C}_{+}\) and \(\Sigma_{-}\subseteq\mathcal{C}_{-}\) be apartments of \(\Delta_{+}\) and \(\Delta_{-}\), respectively. Then the set \(\Sigma:=\Sigma_{+}\cup\Sigma_{-}\) is called a _twin apartment_ if for all \(\varepsilon\in\{+,-\}\) and \(x\in\Sigma_{\varepsilon}\) there exists exactly one \(y\in\Sigma_{-\varepsilon}\) with \(\delta_{*}(x,y)=1_{W}\). Moreover, for any \(c_{+}\in\mathcal{C}_{+},c_{-}\in\mathcal{C}_{-}\) with \(\delta_{*}(c_{+},c_{-})=1_{W}\) there exists a unique twin apartment \(\Sigma\) containing \(c_{+}\) and \(c_{-}\) (cf. [2, Proposition 5.179(1)]). Let \(\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be another twin building of type \((W,S)\). A mapping \(\varphi:\mathcal{C}_{+}\cup\mathcal{C}_{-}\to\mathcal{C}^{\prime}_{+}\cup\mathcal{C}^{\prime}_{-}\) is called an _isomorphism_ if it preserves the sign, the distance and the codistance.
**(2.7) Lemma**.: _Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a twin building of type \((W,S)\) and let \(c\) be a chamber of \(\Delta\) such that \(|\mathcal{P}_{s}(c)|\geq 3\) holds for all \(s\in S\). Then \(\Delta\) is thick._
Proof.: Let \(\varepsilon\in\{+,-\}\) be such that \(c\in\mathcal{C}_{\varepsilon}\). For each \(d\in c^{\mathrm{op}}\) and each \(s\in S\) the \(s\)-panel containing \(d\) contains at least three elements by [2, Corollary 5.153]. Let \(c^{\prime}\in\mathcal{C}_{\varepsilon}\) and \(d\in c^{\mathrm{op}}\). If \(c^{\prime}\in d^{\mathrm{op}}\), then the \(s\)-panel containing \(c^{\prime}\) contains at least three chambers by [2, Corollary 5.153]. Otherwise, there exists a chamber \(d^{\prime}\in E_{1}(d)\) such that \(d^{\prime}\in c^{\mathrm{op}}\) and \(\ell(\delta(c^{\prime},d^{\prime}))=\ell(\delta(c^{\prime},d))-1\). Using induction we obtain a chamber \(d^{\prime\prime}\in\mathcal{C}_{-\varepsilon}\) such that \(d^{\prime\prime}\in c^{\mathrm{op}}\cap(c^{\prime})^{\mathrm{op}}\). Applying [2, Corollary 5.153] twice, we obtain that the \(s\)-panel containing \(c^{\prime}\) contains at least three chambers for each \(s\in S\). Thus \(\Delta_{\varepsilon}\) is a thick building. The thickness of \(\Delta_{-\varepsilon}\) follows similarly.
**(2.8) Lemma**.: _Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a thick \(2\)-spherical twin building of type \((W,S)\), let \(\varepsilon\in\{+,-\},c\in\mathcal{C}_{\varepsilon}\) and for each \(J\subseteq S\) with \(|J|=2\) we let \(\Sigma_{J}\) be an apartment of \(R_{J}(c)\) containing \(c\) and such that \(\Sigma_{\{r,s\}}\cap\mathcal{P}_{s}(c)=\Sigma_{\{s,t\}}\cap\mathcal{P}_{s}(c)\) for all \(r\neq s\neq t\in S\). Then there exists a twin apartment \(\Sigma\) such that \(\Sigma_{J}\subseteq\Sigma\)._
Proof.: This follows from [26, Theorem 6.3.6].
### RGD-systems
An _RGD-system of type_\((W,S)\) is a pair \(\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\) consisting of a group \(G\) together with a family of subgroups \(U_{\alpha}\) (called _root groups_) indexed by the set of roots \(\Phi\), which satisfies the following axioms, where \(H:=\bigcap_{\alpha\in\Phi}N_{G}(U_{\alpha})\) and \(U_{\varepsilon}:=\langle U_{\alpha}\mid\alpha\in\Phi_{\varepsilon}\rangle\) for \(\varepsilon\in\{+,-\}\):
(RGD0) For each \(\alpha\in\Phi\), we have \(U_{\alpha}\neq\{1\}\).

(RGD1) For each prenilpotent pair \(\{\alpha,\beta\}\subseteq\Phi\), the commutator group \([U_{\alpha},U_{\beta}]\) is contained in the group \(U_{(\alpha,\beta)}:=\langle U_{\gamma}\mid\gamma\in(\alpha,\beta)\rangle\).

(RGD2) For each \(s\in S\) and each \(u\in U_{\alpha_{s}}\backslash\{1\}\), there exist \(u^{\prime},u^{\prime\prime}\in U_{-\alpha_{s}}\) such that the product \(m(u):=u^{\prime}uu^{\prime\prime}\) conjugates \(U_{\beta}\) onto \(U_{s\beta}\) for each \(\beta\in\Phi\).

(RGD3) For each \(s\in S\), the group \(U_{-\alpha_{s}}\) is not contained in \(U_{+}\).

(RGD4) \(G=H\langle U_{\alpha}\mid\alpha\in\Phi\rangle\).
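A standard example illustrating these axioms (routine to verify by matrix computations): let \(G=\mathrm{SL}_{3}(k)\) for a field \(k\), let \((W,S)\) be of type \(A_{2}\) and put \(E_{ij}(x):=1+xe_{ij}\). Taking \(U_{\alpha_{s}}=\{E_{12}(x)\mid x\in k\}\), \(U_{\alpha_{t}}=\{E_{23}(x)\mid x\in k\}\), \(U_{s\alpha_{t}}=\{E_{13}(x)\mid x\in k\}\) and the transposed groups for the negative roots, one obtains an RGD-system with \(H\) the diagonal torus. Axiom (RGD1) reduces to the elementary matrix identity
\[[E_{12}(x),E_{23}(y)]=E_{13}(xy)\in U_{s\alpha_{t}},\]
which is in accordance with \((\alpha_{s},\alpha_{t})=\{s\alpha_{t}\}\) in type \(A_{2}\).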
Let \(\mathcal{D}=(G,(U_{\alpha})_{\alpha\in\Phi})\) be an RGD-system of type \((W,S)\) and let \(H:=\bigcap_{\alpha\in\Phi}N_{G}(U_{\alpha})\) and \(B_{\varepsilon}:=H\langle U_{\alpha}\mid\alpha\in\Phi_{\varepsilon}\rangle\) for \(\varepsilon\in\{+,-\}\). It follows from [2, Theorem 8.80] that there exists an _associated_ twin building \(\Delta(\mathcal{D})=(\Delta_{+},\Delta_{-},\delta_{*})\) of type \((W,S)\) such that \(\Delta_{+}=(G/B_{+},\delta_{+})\) and \(\Delta_{-}=(G/B_{-},\delta_{-})\) on which \(G\) acts via left multiplication.
We define \(X_{s}:=\langle U_{\alpha_{s}}\cup U_{-\alpha_{s}}\rangle\) and \(X_{s,t}:=\langle X_{s}\cup X_{t}\rangle\) for all \(s\neq t\in S\). If \((W,S)\) is \(2\)-spherical, \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\) and \(\Delta(\mathcal{D})\) satisfies Condition (lco), it follows by the Main
result of [3] that \(G\) is isomorphic to the direct limit of the inductive system formed by the groups \(L_{s}:=HX_{s}\) and \(L_{s,t}:=HX_{s,t}\) for all \(s\neq t\in S\). Furthermore, the direct limit of the inductive system formed by the groups \(X_{s}\) and \(X_{s,t}\) (\(s\neq t\in S\)) can be naturally endowed with an RGD-system and is a central extension of \(G\) (cf. [6, Theorem 3.7]).
For \(s\in S\) we define \(H_{s}:=\langle m(u)m(v)\mid u,v\in U_{\alpha_{s}}\backslash\{1\}\rangle\). By [6, Lemma 3.3] we have \(H=\prod_{s\in S}H_{s}\), if \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\) and if \((W,S)\) is \(2\)-spherical. By [2, Consequence (6) on p.415] we have \(H_{t}^{m(u)}\subseteq H_{s}H_{t}\) for all \(1\neq u\in U_{\alpha_{s}}\). For the rest of this subsection we fix \(1\neq e_{s}\in U_{\alpha_{s}}\) and put \(n_{s}:=m(e_{s})\). By [2, Consequence (11) on p.416] there exist for every \(1\neq u_{s}\in U_{\alpha_{s}}\) elements \(\overline{u}_{s}\in U_{\alpha_{s}}\) and \(b(u_{s})\in\langle U_{\alpha_{s}}\cup H_{s}\rangle\) such that \(n_{s}u_{s}n_{s}=\overline{u}_{s}n_{s}b(u_{s})\) (note that \(\langle U_{\alpha_{s}}\cup H_{s}\rangle\cap U_{-\alpha_{s}}=\{1\}\)). As in the case of a Coxeter system, we define \(p(n_{s},n_{t})\) to mean \(n_{s}n_{t}n_{s}n_{t}\ldots\), where \(n_{s},n_{t}\) appear \(m_{st}\) times in total, e.g. if \(m_{st}=3\), we have \(p(n_{s},n_{t})=n_{s}n_{t}n_{s}\). By [16, Lemma 7.3] we have \(p(n_{s},n_{t})=p(n_{t},n_{s})\).
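In the smallest case these elements can be made completely explicit (a routine matrix computation): let \(G=\mathrm{SL}_{2}(k)\), let \(U_{\alpha_{s}}=\{E(x):=\bigl(\begin{smallmatrix}1&x\\0&1\end{smallmatrix}\bigr)\mid x\in k\}\) and let \(U_{-\alpha_{s}}\) be the transposed group. For \(0\neq x\in k\) one computes
\[m(E(x))=\begin{pmatrix}1&0\\-x^{-1}&1\end{pmatrix}\begin{pmatrix}1&x\\0&1\end{pmatrix}\begin{pmatrix}1&0\\-x^{-1}&1\end{pmatrix}=\begin{pmatrix}0&x\\-x^{-1}&0\end{pmatrix},\qquad m(E(x))m(E(y))=\begin{pmatrix}-xy^{-1}&0\\0&-x^{-1}y\end{pmatrix},\]
so \(H_{s}\) is the diagonal torus. With \(n_{s}:=m(E(1))\) one has \(n_{s}E(x)n_{s}=E(-x^{-1})\,n_{s}\,b(E(x))\) with \(b(E(x))=\bigl(\begin{smallmatrix}-x&1\\0&-x^{-1}\end{smallmatrix}\bigr)\in\langle U_{\alpha_{s}}\cup H_{s}\rangle\), which exhibits \(\overline{E(x)}=E(-x^{-1})\) explicitly.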
**(2.9) Theorem**.: _Let \((G,(U_{\alpha})_{\alpha\in\Phi})\) be an RGD-system of spherical type \((W,S)\) such that \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\). Let \(B:=\langle U_{+},H\rangle\). Then \(G\) has the following presentation: as generators we have \(\bigcup_{s\in S}H_{s}\cup\bigcup_{\alpha\in\Phi_{+}}U_{\alpha}\) and \(\{n_{s}\mid s\in S\}\), and as relations we have all relations in \(B\) and, for \(s,t\in S,\alpha_{s}\neq\alpha\in\Phi_{+},h\in H_{t},u_{s}\in U_{\alpha_{s}}\), the relations_
\[n_{s}hn_{s}=n_{s}^{2}h^{n_{s}},\ \ \ \ n_{s}u_{\alpha}n_{s}=n_{s}^{2}u_{\alpha}^{ n_{s}},\ \ \ \ \ n_{s}u_{s}n_{s}=\overline{u}_{s}n_{s}b(u_{s}),\ \ \ \ \ p(n_{s},n_{t})=p(n_{t},n_{s}),\]
_where \(u_{\alpha}^{n_{s}}\in U_{s\alpha},n_{s}^{2}\in H_{s},h^{n_{s}}\in H_{s}H_{t}\)._
Proof.: All these relations are relations in \(G\). Thus it suffices to show that all relations in [20, Corollary (13.4)] can be deduced from the relations in the statement. A case distinction yields the result.
**(2.10) Example**.: Let \(\Delta\) be an irreducible spherical Moufang building of rank at least \(2\) and let \(\Sigma\) be an apartment of \(\Delta\). Identifying the set of roots \(\Phi\) with the set of roots in \(\Sigma\), we deduce that \(\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\) with \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\) is an RGD-system.
**(2.11) Lemma**.: _Let \(\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\) be an RGD-system of \(2\)-spherical type \((W,S)\) such that \(\Delta(\mathcal{D})\) satisfies Condition (lco) (e.g. each root group contains at least four elements). Then we have \(\langle U_{\gamma}\mid\gamma\in[\alpha_{s},\alpha_{t}]\rangle=\langle U_{ \alpha_{s}}\cup U_{\alpha_{t}}\rangle\) or equivalently \([U_{\alpha_{s}},U_{\alpha_{t}}]=U_{(\alpha_{s},\alpha_{t})}\) for all \(s\neq t\in S\)._
Proof.: This is a consequence of [1, Lemma 18 and Proposition 7].
_(2.12) Remark_.: Let \(\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\,,\big{(}H,(V_{\alpha})_{\alpha \in\Phi}\big{)}\) be two RGD-systems of the same irreducible spherical type \((W,S)\) and of rank at least \(2\). Assume that \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\), the root groups \(U_{\alpha}\) are nilpotent and that both twin buildings associated with the RGD-systems satisfy Condition (lco). Then \(G,H\) are perfect by the previous lemma. Assume that there exists a homomorphism \(\varphi:G\to H\) such that \(\varphi(U_{\alpha})=V_{\alpha}\). As \(\ker\varphi\neq G\), we have \(\ker\varphi\leq Z(G)\) by [2, Proposition 7.132].
**(2.13) Lemma**.: _Let \(\mathcal{D}=\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\) be an RGD-system of \(3\)-spherical type \((W,S)\) such that \(\Delta(\mathcal{D})_{\pm}\) satisfy the Conditions (lco) and (lsco) (e.g. every root group contains at least five elements). Then \(B_{+}\) is the \(2\)-amalgam of the groups \(HU_{\alpha_{s}}\)._
Proof.: This is a consequence of [8, Corollary 1.2].
**(2.14) Lemma**.: _Let \((G,(U_{\alpha})_{\alpha\in\Phi})\) be an RGD-system of type \((W,S)\) and let \(X_{s,t}=\langle U_{\alpha}\mid\alpha\in\Phi^{\{s,t\}}\rangle\) and \(B_{\{s,t\}}=\langle H_{s}\cup H_{t}\cup U_{\alpha}\mid\alpha\in\Phi^{\{s,t\}}_{+}\rangle\). Then \((X_{s,t},(U_{\alpha})_{\alpha\in\Phi^{\{s,t\}}})\) is an RGD-system and \(X_{s,t}/B_{\{s,t\}}\to R_{\{s,t\}}(B_{+}),\ gB_{\{s,t\}}\mapsto gB_{+}\) is an isomorphism of buildings._
Proof.: As \(X_{s,t}\) stabilizes \(R_{\{s,t\}}(B_{+})\), we deduce \(\{gB_{+}\mid g\in X_{s,t}\}\subseteq R_{\{s,t\}}(B_{+})\). Let \(g\in G\) be such that \(w:=\delta_{+}(B_{+},gB_{+})\in\langle s,t\rangle\). By [2, Proposition 8.59(2)] there exists \(h\in U_{w}\) such that \(hwB_{+}=gB_{+}\) and hence \(gB_{+}\in\{hB_{+}\mid h\in X_{s,t}\}\). Thus \(R_{\{s,t\}}(B_{+})=\{gB_{+}\mid g\in X_{s,t}\}\). Since \(B_{\{s,t\}}\subseteq B_{+}\), the map is well-defined. Suppose \(g,h\in X_{s,t}\) are such that \(gB_{+}=hB_{+}\). Then \(g^{-1}h\in B_{+}\cap X_{s,t}=B_{\{s,t\}}\), as \(B_{+}=\mathrm{Stab}_{G}(B_{+})\) and \(B_{\{s,t\}}=\mathrm{Stab}_{X_{s,t}}(B_{+})\), and thus the mapping is injective. Since \(R_{\{s,t\}}(B_{+})=\{gB_{+}\mid g\in X_{s,t}\}\), the mapping is also surjective. Furthermore, it preserves adjacency. The claim follows now from [2, Lemma 5.61].
### Blueprints
This subsection is based on [16, Ch. 7].
A _parameter system_ will mean a family of disjoint _parameter sets_\((U^{\prime}_{s})_{s\in S}\), each having a distinguished element \(\infty_{s}\in U^{\prime}_{s}\). We shall write \(U_{s}:=U^{\prime}_{s}\backslash\{\infty_{s}\}\).
Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\) and let \((U^{\prime}_{s})_{s\in S}\) be a parameter system. A _labelling of \(\Delta\) of type \((U^{\prime}_{s})_{s\in S}\)_, _based at \(c\in\mathcal{C}\)_, is a family \(\left(\varphi_{P_{s}}:U^{\prime}_{s}\to P_{s}\right)_{P_{s}\in\mathcal{P}_{ \Delta}}\) of bijections such that \(\varphi_{P_{s}}(\infty_{s})=\mathrm{proj}_{P_{s}}c\). We call \(\varphi_{P_{s}}^{-1}(x)\in U^{\prime}_{s}\) the _\(s\)-label_ of \(x\).
**(2.15) Example**.: Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \(I_{2}(m)\), \(m\geq 2\), with a labelling of type \((U^{\prime}_{s},U^{\prime}_{t})\), where \(S=\{s,t\}\), based at some chamber \(c\in\mathcal{C}\). Given any chamber \(x\in\mathcal{C}\) at distance \(d\) from \(c\) one has a minimal gallery \((c=c_{0},\ldots,c_{d}=x)\). Let \(u_{i}\) be the label attached to \(c_{i}\) in the unique panel containing \(c_{i-1}\) and \(c_{i}\). The gallery thus determines the sequence \((u_{1},\ldots,u_{d})\) where the \(u_{i}\) lie alternately in \(U_{s}\) and \(U_{t}\), and any such sequence obviously determines a unique gallery starting at \(c\), and hence a unique chamber at the end of this gallery. If two sequences determine the same chamber they are called _equivalent_.
A _blueprint of type_\((W,S)\) is a tuple \(\left(\left(U^{\prime}_{s}\right)_{s\in S},\left(\Delta_{st}\right)_{s\neq t \in S}\right)\) consisting of a parameter system \((U^{\prime}_{s})_{s\in S}\) and buildings \(\Delta_{st}=(\mathcal{C}_{st},\delta_{st})\) of type \(I_{2}(m_{st})\) for each \(s\neq t\in S\), with a given labelling of type \((U^{\prime}_{s},U^{\prime}_{t})\), based at some chamber \(\infty_{st}\in\mathcal{C}_{st}\).
A building \(\Delta\) of type \((W,S)\) will be said to _conform_ to the blueprint \(\left(\left(U^{\prime}_{s}\right)_{s\in S},\left(\Delta_{st}\right)_{s\neq t \in S}\right)\) if there exists a labelling of \(\Delta\) of type \((U^{\prime}_{s})_{s\in S}\), based at some chamber \(c\in\mathcal{C}\), such that for each \(\{s,t\}\)-residue \(R\) of \(\Delta\) there is an isomorphism \(\varphi_{R}:\Delta_{st}\to R\) with the property that \(x\) and \(\varphi_{R}(x)\) have the same \(s\)- and \(t\)-labels for each chamber \(x\) of \(\Delta_{st}\). We call a blueprint \(\mathcal{B}\)_realisable_ if there exists a building which conforms to it.
### A realisability criterion
We want to construct a building which conforms to a given blueprint. Let \(\mathcal{B}=\left(\left(U^{\prime}_{s}\right)_{s\in S},\left(\Delta_{st} \right)_{s\neq t\in S}\right)\) be a blueprint of type \((W,S)\). As a first step we construct a chamber system \(\mathbf{C}(\mathcal{B})\) as follows: The chambers are sequences \(\bar{u}:=(u_{1},\ldots,u_{k})\), where \(u_{i}\in U_{s_{i}}\) and \(s_{1}\cdots s_{k}\) is reduced. We call \((s_{1},\ldots,s_{k})\) the _type_ of \(\bar{u}\). We define \(s\)-adjacency via
\[(u_{1},\ldots,u_{k})\sim_{s}(u_{1},\ldots,u_{k},u_{k+1})\sim_{s}(u_{1},\ldots,u_{k},u^{\prime}_{k+1}),\]
where \(u_{k+1},u^{\prime}_{k+1}\in U_{s}\); this is evidently an equivalence relation, so \(\mathbf{C}(\mathcal{B})\) is a chamber system. For \(\bar{u}=(u_{1},\ldots,u_{k})\) having type \((s_{1},\ldots,s_{k})\) and \(\bar{v}=(v_{1},\ldots,v_{n})\) having type \((t_{1},\ldots,t_{n})\) we define the sequence \(\bar{u}\bar{v}:=(u_{1},\ldots,u_{k},v_{1},\ldots,v_{n})\) if \(s_{1}\cdots s_{k}t_{1}\cdots t_{n}\) is reduced. We now define an _elementary equivalence_ to be an alteration from a sequence \(\bar{u}_{1}\bar{u}\bar{u}_{2}\) of type \((f_{1},p(s,t),f_{2})\) to \(\bar{u}_{1}\bar{u}^{\prime}\bar{u}_{2}\) of type \((f_{1},p(t,s),f_{2})\) where \(\bar{u}\) and \(\bar{u}^{\prime}\) are equivalent in \(\Delta_{st}\). Two sequences \(\bar{u}\) and \(\bar{v}\) are called _equivalent_, written \(\bar{u}\simeq\bar{v}\), if one can be transformed to the other by a finite sequence of elementary equivalences. We now consider \(\mathbf{C}(\mathcal{B})\) modulo the equivalence relation.
Notice that \([\bar{u}]\) determines a unique element \(w\in W\) where \((s_{1},\ldots,s_{k})\) is the type of \(\bar{u}\) and \(w=s_{1}\cdots s_{k}\). We define \(x\sim_{s}y\) if \(x=[\bar{u}],y=[\bar{v}]\) with \(\bar{u}\sim_{s}\bar{v}\).
**(2.16) Theorem**.: _Assume that for any two sequences \(\bar{u},\bar{v}\) of the same reduced type, \(\bar{u}\simeq\bar{v}\) implies \(\bar{u}=\bar{v}\). Then \(\mathbf{C}_{\mathcal{B}}:=(\{[\bar{u}]\mid\bar{u}\text{ a sequence as above}\},(\sim_{s})_{s\in S})\) is a chamber system. The chambers are equivalence classes of sequences \(\bar{u}:=(u_{1},\ldots,u_{k})\), denoted by \([\bar{u}]\) or \([u_{1},\ldots,u_{k}]\). Moreover, \(\mathbf{C}_{\mathcal{B}}\) is a building which conforms to \(\mathcal{B}\)._
Proof.: This follows from the proof of [16, Theorem (7.1)].
**(2.17) Corollary**.: _A blueprint is realisable if and only if for any two sequences \(\bar{u},\bar{v}\) of the same reduced type, \(\bar{u}\simeq\bar{v}\) implies \(\bar{u}=\bar{v}\). In particular, a blueprint is realisable if its restriction to each spherical rank \(3\) subdiagram is realisable._
Proof.: The first assertion is [16, Theorem (7.2)]; the second assertion follows from the first and [16, Step 1 of Theorem (7.1)].
### Blueprints and Moufang buildings
This subsection is based on [16].
Let \(\Delta=(\mathcal{C},\delta)\) be a spherical Moufang building of type \((W,S)\) and let \(c\in\mathcal{C}\). For each \(s\in S\) we fix \(1\neq e_{s}\in U_{\alpha_{s}}\) and put \(n_{s}:=m(e_{s})\). Then any chamber of \(\Delta\) can be written uniquely in the form \(u_{1}n_{s_{1}}\cdots u_{k}n_{s_{k}}B_{+}\) with \(u_{i}\in U_{\alpha_{s_{i}}}\) and \(s_{1}\cdots s_{k}\) reduced, once we fix the type \((s_{1},\ldots,s_{k})\) of \(u_{1}n_{s_{1}}\cdots u_{k}n_{s_{k}}B_{+}\). This yields a _natural labelling_ of the building \(\Delta\). More precisely, let \(P\) be any \(s\)-panel of \(\Delta\), and let \(\operatorname{proj}_{P}c=d\) and \(w=\delta(c,d)\). As cosets of \(B_{+}\) the chambers of \(P\) may be written \(u_{1}n_{s_{1}}\cdots u_{k}n_{s_{k}}B_{+}\) (this is \(d\)) and \(u_{1}n_{s_{1}}\cdots u_{k}n_{s_{k}}vn_{s}B_{+}\), where \(u_{i}\in U_{\alpha_{s_{i}}}\) and \(v\in U_{\alpha_{s}}\). We assign them the \(s\)-labels \(\infty_{s}\) and \(v\), using \(U^{\prime}_{\alpha_{s}}=U_{\alpha_{s}}\cup\{\infty_{s}\}\). If we let \(R_{st}\) be the \(\{s,t\}\)-residue containing \(c\), then \(R_{st}\) acquires a labelling and we have a blueprint given by the \((e_{s})_{s\in S}\), namely \(\big{(}(U^{\prime}_{\alpha_{s}})_{s\in S},(R_{st})_{s\neq t\in S}\big{)}\).
**(2.18) Proposition**.: _Let \(\Delta\) be a spherical Moufang building. Then \(\Delta\) conforms to the blueprint given by the restriction to \(E_{2}(c)\), i.e. \(\big{(}(U^{\prime}_{\alpha_{s}})_{s\in S},(R_{st})_{s\neq t\in S}\big{)}\), and the natural labelling of \(\Delta\) as above._
Proof.: This is [16, (7.5) Proposition].
**(2.19) Corollary**.: _Let \(\Delta\) be a spherical Moufang building and \(\mathcal{B}\) be the blueprint given by \(E_{2}(c)\). Then \((u_{1},\ldots,u_{k})\) and \((v_{1},\ldots,v_{k})\) are equivalent if and only if \(u_{1}n_{1}\cdots u_{k}n_{k}B_{+}=v_{1}n^{\prime}_{1}\cdots v_{k}n^{\prime}_{k} B_{+}\). In particular, the map \(\varphi:\mathbf{C}_{\mathcal{B}}\to\Delta,[(u_{1},\ldots,u_{k})]\mapsto u_{1}n_{1} \cdots u_{k}n_{k}B_{+}\) is an isomorphism of buildings._
Proof.: If \((u_{1},\ldots,u_{k})\) and \((v_{1},\ldots,v_{k})\) are elementary equivalent, then \(u_{1}n_{1}\cdots u_{k}n_{k}B_{+}=v_{1}n^{\prime}_{1}\cdots v_{k}n^{\prime}_{k} B_{+}\). Thus \(\varphi\) is well-defined. Clearly, \(\varphi\) is surjective. Let \([(u_{1},\ldots,u_{k})],[(v_{1},\ldots,v_{k})]\in\mathbf{C}_{\mathcal{B}}\) be such that \(u_{1}n_{1}\cdots u_{k}n_{k}B_{+}=\varphi([u_{1},\ldots,u_{k}])=\varphi([v_{1}, \ldots,v_{k}])=v_{1}n^{\prime}_{1}\cdots v_{k}n^{\prime}_{k}B_{+}\). Let \((u_{1},\ldots,u_{k})\) (resp. \((v_{1},\ldots,v_{k})\)) be of type \((s_{1},\ldots,s_{k})\) (resp. \((t_{1},\ldots,t_{k})\)). Then \(s_{1}\cdots s_{k}=t_{1}\cdots t_{k}\). Thus there exists a sequence \((w_{1},\ldots,w_{k})\in[v_{1},\ldots,v_{k}]\) of type \((s_{1},\ldots,s_{k})\) and we have \(u_{1}n_{1}\cdots u_{k}n_{k}B_{+}=w_{1}n_{1}\cdots w_{k}n_{k}B_{+}\). The uniqueness of the decomposition in [16, Lemma (7.4)] implies \(u_{i}=w_{i}\) and hence \([u_{1},\ldots,u_{k}]=[w_{1},\ldots,w_{k}]\). Thus \(\varphi\) is a bijection. Clearly, \(\varphi\) preserves \(s\)-adjacency. Now the claim follows from [2, Lemma 5.61].
We now extend the concept of a natural labelling to (arbitrary) buildings of type \(A_{1}\times A_{1}\), by defining a labelling of type \((U^{\prime}_{1},U^{\prime}_{2})\) to be _natural_ if \((u_{1},u_{2})\) is equivalent to \((u_{2},u_{1})\) for all \(u_{1}\in U_{1},u_{2}\in U_{2}\). If \(\Delta\) is a spherical Moufang building with a natural labelling given by \(1\neq e_{s}\in U_{\alpha_{s}}\) then any \(A_{1}\times A_{1}\) residue acquires a natural labelling in this sense (because the appropriate root groups commute).
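For instance (an elementary count), let \(\mathcal{B}\) be a blueprint of type \(A_{1}\times A_{1}\) whose labelling is natural. The chambers of \(\mathbf{C}_{\mathcal{B}}\) are \([()]\), \([u]\) and \([v]\) with \(u\in U_{s},v\in U_{t}\), and \([u,v]=[v,u]\), so \(\mathbf{C}_{\mathcal{B}}\) has \(1+|U_{s}|+|U_{t}|+|U_{s}|\cdot|U_{t}|=|U^{\prime}_{s}|\cdot|U^{\prime}_{t}|\) chambers, as expected for a building of type \(A_{1}\times A_{1}\) whose panels have sizes \(|U^{\prime}_{s}|\) and \(|U^{\prime}_{t}|\).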
**(2.20) Lemma**.: _Let \((W,S)\) be a reducible \(2\)-spherical Coxeter system of rank \(3\). Then a blueprint of type \((W,S)\) is realisable if the labelling of the restriction to any \(A_{1}\times A_{1}\) residue is natural._
Proof.: Let \(\mathcal{B}=((U^{\prime}_{s})_{s\in S},(\Delta_{st})_{s\neq t\in S})\) be a blueprint of type \((W,S)\). Let \(S=\{r,s,t\}\) and assume \(m_{sr}=2=m_{tr}\). By Corollary (2.17) it suffices to show that for any two sequences \(\bar{u},\bar{v}\) of the same reduced type, \(\bar{u}\simeq\bar{v}\) implies \(\bar{u}=\bar{v}\). Therefore let \(\bar{u}=(u_{1},\ldots,u_{k})\) and \(\bar{v}=(v_{1},\ldots,v_{k})\) be two sequences of the same reduced type \((s_{1},\ldots,s_{k})\) such that \(\bar{u}\simeq\bar{v}\) (note that \(k\leq m_{st}+1\)). If \((W,S)\) is of type \(A_{1}\times A_{1}\times A_{1}\) the claim follows, because an elementary equivalence is just a permutation of the sequence and for each \(s\in S\) there is at most one element of \(U_{s}\) in such a sequence. Thus we can assume that \((W,S)\) is not of type \(A_{1}\times A_{1}\times A_{1}\) and hence \(m_{st}\geq 3\). Since \(m_{sr}=2=m_{tr}\) we know that \(r\) occurs at most once in the reduced type. If \(u_{i}\) is in \(U_{r}\), then the sequence with \(u_{i},u_{i-1}\) (resp. \(u_{i},u_{i+1}\)) reversed is equivalent to \((u_{1},\ldots,u_{k})\), since \(u_{i}\) is the only element in \(U_{r}\) in the sequence \(\bar{u}\). We also note that we can do an elementary equivalence in \(\Delta_{st}\) only if \(u_{i}\) is at position \(1\) or \(k\) in the sequence. If we do an elementary equivalence in \(\Delta_{st}\) we have to do this twice (because of the type) and get the same sequence as we started with. Thus the claim follows.
### The action via left multiplication
Let \(\Delta=(\mathcal{C},\delta)\) be a spherical Moufang building of type \((W,S)\). For each \(s\in S\) we fix \(1\neq e_{s}\in U_{\alpha_{s}}\) and put \(n_{s}:=m(e_{s})\). As we have already mentioned, every chamber of \(\Delta\) can be written in the form \(u_{1}n_{1}\cdots u_{k}n_{k}B_{+}\), where \(u_{i}\in U_{\alpha_{s_{i}}}\), and this decomposition is unique if one fixes the type \((s_{1},\ldots,s_{k})\). Since left multiplication of \(G\) is an action on \(\Delta\), we want to describe the chamber \(g.u_{1}n_{1}\cdots u_{k}n_{k}B_{+}\). Assume that \(\Delta\) satisfies Condition (lco). Since \(U_{+}=\langle U_{\alpha_{s}}\mid s\in S\rangle\) as a consequence of Lemma (2.11) and \(U_{-\alpha_{s}}^{n_{s}}=U_{\alpha_{s}}\), it suffices to consider this action for \(U_{\alpha_{s}},H_{s}\) and \(n_{s}^{\pm 1}\) for each \(s\in S\). For \(s_{1},\ldots,s_{k}\in S\) and \(u_{i}\in U_{\alpha_{s_{i}}}\) we denote the chamber \(u_{1}n_{1}\cdots u_{k}n_{k}B_{+}\) by \((u_{1},\ldots,u_{k})\). We remark that we do not consider all chambers \(g.(u_{1},\ldots,u_{k})\) for \(g\in G\).
**(2.21) Theorem**.: _Let \(t,s_{1},\ldots,s_{k}\in S\) and \(u_{i}\in U_{\alpha_{s_{i}}}\). Let \(\omega:G\times\Delta\to\Delta\) be the left multiplication of \(G\) on \(\Delta\). Then we have \(\omega(g,())=()\) for \(g\in B_{+}\) and \(\omega(g,(u_{1},\ldots,u_{k}))\) is given by the following:_
\[\begin{aligned}
&(u_{1}^{g^{-1}},\omega(g^{n_{s_{1}}},(u_{2},\ldots,u_{k})))&&\text{if }g\in H_{s}\text{ for some }s\in S,\\
&(gu_{1},u_{2},\ldots,u_{k})&&\text{if }g\in U_{\alpha_{s_{1}}},\\
&(u_{1},\omega(g^{n_{s_{1}}}[g,u_{1}]^{n_{s_{1}}},(u_{2},\ldots,u_{k})))&&\text{if }g\in U_{\alpha_{t}},\,s_{1}\neq t\in S,\\
&\omega(n_{s},\omega(n_{s}^{-2},(u_{1},\ldots,u_{k})))&&\text{if }g=n_{s}^{-1}\text{ for some }s\in S,\\
&\omega(n_{s_{1}}^{2},(u_{2},\ldots,u_{k}))&&\text{if }g=n_{s_{1}},\,u_{1}=1,\\
&(\overline{u}_{1},\omega(b(u_{1}),(u_{2},\ldots,u_{k})))&&\text{if }g=n_{s_{1}},\,u_{1}\neq 1,\\
&(1_{U_{t}},u_{1},\ldots,u_{k})&&\text{if }g=n_{t},\,\ell(ts_{1}\cdots s_{k})=k+1.
\end{aligned}\]
Proof.: This is a straightforward computation.
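As an illustration in the rank-one case (a direct matrix check, with \(E(x)=\bigl(\begin{smallmatrix}1&x\\0&1\end{smallmatrix}\bigr)\) and \(n_{s}=\bigl(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\bigr)\) in \(G=\mathrm{SL}_{2}(k)\)): the chambers are \(()=B_{+}\) and \((E(x))=E(x)n_{s}B_{+}\) for \(x\in k\). The theorem yields \(\omega(n_{s},())=(E(0))\), \(\omega(n_{s},(E(0)))=\omega(n_{s}^{2},())=()\) (as \(n_{s}^{2}=-1\in B_{+}\)) and \(\omega(n_{s},(E(x)))=(E(-x^{-1}))\) for \(x\neq 0\), in agreement with the action \(z\mapsto-z^{-1}\) of \(n_{s}\) on the projective line \(k\cup\{\infty\}\).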
## 3 Foundations and enveloping groups
This section is based on [26, Ch. 11] and [11].
### Foundations
A _foundation of type_\((W,S)\) is a triple \(\mathcal{F}:=((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s \},\{s,t\}\in E(S)})\) such that the following hold:
1. \(\Delta_{J}=(\mathcal{C}_{J},\delta_{J})\) is a building of type \((\langle J\rangle,J)\) with \(c_{J}\in\mathcal{C}_{J}\) for each \(J\in E(S)\).
2. Each _glueing_\(\varphi_{rst}:\mathcal{P}_{s}(c_{\{r,s\}})\to\mathcal{P}_{s}(c_{\{s,t\}})\) is a base-point preserving bijection.
3. The \(\varphi_{rst}\) satisfy the cocycle condition \(\varphi_{tsu}\circ\varphi_{rst}=\varphi_{rsu}\).
It follows from the definition that \(\varphi_{rsr}=\mathrm{id}\) (take \(t=r\) in the cocycle condition) and that \(\varphi_{rst}=\varphi_{tsr}^{-1}\) (take \(u=r\)). We say that the foundation \(\mathcal{F}\) satisfies Condition (lco) if \(\Delta_{J}\) satisfies Condition (lco) for each \(J\in E(S)\). For each \(J\subseteq S\) the \(J\)_-residue_ of \(\mathcal{F}\) is the foundation \(\mathcal{F}_{J}:=((\Delta_{I})_{I\in E(J)},(c_{I})_{I\in E(J)},(\varphi_{rst})_{\{r,s\},\{s,t\}\in E(J)})\) of type \((\langle J\rangle,J)\). Two foundations \(\mathcal{F}=((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s\},\{s,t\}\in E(S)})\) and \(\mathcal{F}^{\prime}=((\Delta^{\prime}_{J})_{J\in E(S)},(c^{\prime}_{J})_{J\in E(S)},(\varphi^{\prime}_{rst})_{\{r,s\},\{s,t\}\in E(S)})\) of the same type \((W,S)\) are called _isomorphic_ if there exist isomorphisms \(\alpha_{J}:\Delta_{J}\to\Delta^{\prime}_{J}\) for all \(J\in E(S)\) such that \(\alpha_{J}(c_{J})=c^{\prime}_{J}\) and for all \(r,s,t\in S\) with \(\{r,s\},\{s,t\}\in E(S)\) we have \(\varphi^{\prime}_{rst}\circ\alpha_{\{r,s\}}=\alpha_{\{s,t\}}\circ\varphi_{rst}\).
_(3.1) Remark._ We remark that there is a notion of more general isomorphisms between foundations, which allow isomorphisms of the Coxeter system. In that sense our isomorphisms are called _special_.
Let \(\mathcal{F}=((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s\},\{s,t\}\in E(S)})\) be a foundation of type \((W,S)\). An _apartment_ of \(\mathcal{F}\) is a tuple \((\Sigma_{J})_{J\in E(S)}\) such that the following hold:
1. \(\Sigma_{J}\) is an apartment of \(\Delta_{J}\) containing \(c_{J}\) for each \(J\in E(S)\).
2. Given \(\{r,s\},\{s,t\}\in E(S)\), then \(\varphi_{rst}(\Sigma_{\{r,s\}}\cap\mathcal{P}_{s}(c_{\{r,s\}}))=\Sigma_{\{s,t\} }\cap\mathcal{P}_{s}(c_{\{s,t\}})\).
### Moufang sets
A _Moufang set_ is a pair \((X,(U_{x})_{x\in X})\), where \(X\) is a set with \(|X|\geq 3\) and for each \(x\in X\), \(U_{x}\) is a subgroup of \(\mathrm{Sym}(X)\) (we compose from right to left) such that the following hold:
1. For each \(x\in X\), \(U_{x}\) fixes \(x\) and acts simply transitively on \(X\backslash\{x\}\).
2. For all \(x,y\in X\) and each \(g\in U_{x}\), \(g\circ U_{y}\circ g^{-1}=U_{g(y)}\).
The groups \(U_{x}\) for \(x\in X\) are called the _root groups_ of the Moufang set. Let \((X,(U_{x})_{x\in X})\) and \((X^{\prime},(U_{x^{\prime}})_{x^{\prime}\in X^{\prime}})\) be two Moufang sets and let \(\varphi:X\to X^{\prime}\) be a map. Then the Moufang sets are called \(\varphi\)_-isomorphic_, if \(\varphi\) is bijective and for all \(x\in X\) we have \(\varphi\circ U_{x}\circ\varphi^{-1}=U_{\varphi(x)}\).
**(3.2) Example.** Let \(\Delta\) be a thick, irreducible, spherical Moufang building of rank at least \(2\). Let \(P\) be a panel, let \(p\in P\) and let \(\Sigma\) be an apartment in \(\Delta\) with \(p\in\Sigma\). Let \(\alpha\) denote the unique root in \(\Sigma\) containing \(p\) but not \(P\cap\Sigma\). Let \(U_{p}:=\{g|_{P}\mid g\in U_{\alpha}\}\). Then the group \(U_{p}\) is independent of the choice of the apartment \(\Sigma\) and \(\mathbb{M}(\Delta,P):=(P,(U_{p})_{p\in P})\) is a Moufang set (cf. [14, Notation 1.19]).
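Concretely (with formulas which are easily verified by hand), the projective line \(X=k\cup\{\infty\}\) over a field \(k\) carries a Moufang set with \(U_{\infty}=\{x\mapsto x+a\mid a\in k\}\), \(U_{0}=\{x\mapsto x/(ax+1)\mid a\in k\}\) and the remaining root groups obtained by conjugation: \(U_{\infty}\) fixes \(\infty\) and acts simply transitively on \(k\), and one checks \(g\circ U_{y}\circ g^{-1}=U_{g(y)}\) directly. Up to isomorphism this is the Moufang set \(\mathbb{M}(\Delta,P)\) for a panel \(P\) of the building \(\Delta\) of \(\mathrm{SL}_{3}(k)\).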
### Moufang foundations
A foundation \(\mathcal{F}=((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s\},\{s,t\}\in E(S)})\) of \(2\)-spherical type \((W,S)\) is called a _Moufang foundation_ if the following hold:

(MF1) \(\Delta_{J}\) is a Moufang building of type \((\langle J\rangle,J)\) for each \(J\in E(S)\).

(MF2) Given \(\{r,s\},\{s,t\}\in E(S)\), the Moufang sets \(\mathbb{M}(\Delta_{\{r,s\}},\mathcal{P}_{s}(c_{\{r,s\}}))\) and \(\mathbb{M}(\Delta_{\{s,t\}},\mathcal{P}_{s}(c_{\{s,t\}}))\) are \(\varphi_{rst}\)-isomorphic.
**(3.3) Example**.: Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a thick twin building of irreducible \(2\)-spherical type \((W,S)\) and of rank at least \(3\). Let \(\varepsilon\in\{+,-\}\) and \(c\in\mathcal{C}_{\varepsilon}\). By [17, (8.3) Theorem 4] the residue \(R_{J}(c)\) with the restriction of the distance function is a Moufang building for each \(J\in E(S)\) and one can verify that \(((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s\},\{s,t\}\in E(S)})\) is a Moufang foundation of type \((W,S)\), where \(\Delta_{J}=(R_{J}(c),\delta_{\varepsilon}),c_{J}:=c\) and \(\varphi_{rst}=\mathrm{id}\). We will denote this Moufang foundation by \(\mathcal{F}(\Delta,c)\). It is a (non-trivial) fact that for any \(\varepsilon\in\{+,-\}\) and \(c,c^{\prime}\in\mathcal{C}_{\varepsilon}\) we have \(\mathcal{F}(\Delta,c)\cong\mathcal{F}(\Delta,c^{\prime})\).
A foundation \(\mathcal{F}\) is called _integrable_, if there exists a twin building \(\Delta\) of type \((W,S)\) and a chamber \(c\) of \(\Delta\) such that \(\mathcal{F}\cong\mathcal{F}(\Delta,c)\). A foundation satisfies Condition (lsco) if it is \(3\)-spherical and if there exists a twin building \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) and a chamber \(c\) of \(\Delta\) such that \(\mathcal{F}\cong\mathcal{F}(\Delta,c)\) and both buildings \(\Delta_{+},\Delta_{-}\) satisfy Condition (lsco). In particular, if a foundation satisfies Condition (lsco), it is integrable.
Let \(\mathcal{F}\) be an integrable Moufang foundation. Then every panel contains at least \(3\) chambers. Since \(\mathcal{F}\) is integrable, there exists a twin building \(\Delta\) and a chamber \(c\) of \(\Delta\) such that \(\mathcal{F}\cong\mathcal{F}(\Delta,c)\). By Lemma (2.7) the twin building \(\Delta\) is thick. Moreover, every irreducible integrable Moufang foundation satisfying Condition (lco) determines the isomorphism class of the corresponding twin building:
**(3.4) Proposition**.: _Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*}),\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be two thick irreducible \(2\)-spherical twin buildings of type \((W,S)\) and of rank at least \(3\) satisfying Condition (lco). Suppose \(c\in\Delta_{+},c^{\prime}\in\Delta^{\prime}_{+}\) such that \(\mathcal{F}(\Delta,c)\cong\mathcal{F}(\Delta^{\prime},c^{\prime})\). Then \(\Delta\cong\Delta^{\prime}\)._
Proof.: This is a consequence of [26, Theorem 11.1.12] and [15, Theorem 1.5].
### The Steinberg group associated with an RGD-system
Let \(\mathcal{D}=(G,(U_{\alpha})_{\alpha\in\Phi})\) be an RGD-system of irreducible spherical type \((W,S)\) and rank at least \(2\). Following [21], the _Steinberg group_ associated with \(\mathcal{D}\) is the group \(\hat{G}\) which is the direct limit of the inductive system formed by the groups \(U_{\alpha}\) and \(U_{[\alpha,\beta]}:=\langle U_{\gamma}\mid\gamma\in[\alpha,\beta]\rangle\) for all prenilpotent pairs \(\{\alpha,\beta\}\subseteq\Phi\). Then \(U_{\alpha},U_{[\alpha,\beta]}\to\hat{G}\) are injective and \((\hat{G},(\hat{U}_{\alpha})_{\alpha\in\Phi})\) is an RGD-system of type \((W,S)\), where \(\hat{U}_{\alpha}\) denotes the image of \(U_{\alpha}\) in \(\hat{G}\). Let \(\Delta\) be the associated spherical building, i.e. \(\Delta=\Delta(\mathcal{D})_{+}\), where \(\Delta(\mathcal{D})=(\Delta(\mathcal{D})_{+},\Delta(\mathcal{D})_{-},\delta_{ *})\) is the associated twin building. Then the kernel of \(\hat{G}\to\mathrm{Aut}(\Delta)\) is equal to the center of \(\hat{G}\) by [2, Proposition 7.127(2)].
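For instance, for the RGD-system of \(\mathrm{SL}_{n}(k)\) (\(n\geq 3\)) with root groups the elementary root subgroups \(\{E_{ij}(a)\mid a\in k\}\), the group \(\hat{G}\) is (a version of) the classical Steinberg group \(\mathrm{St}_{n}(k)\): the generators \(x_{ij}(a)\) are subject only to the relations holding inside the groups \(U_{[\alpha,\beta]}\), namely
\[x_{ij}(a)x_{ij}(b)=x_{ij}(a+b),\qquad[x_{ij}(a),x_{kl}(b)]=\begin{cases}x_{il}(ab)&\text{if }j=k,\ i\neq l,\\1&\text{if }j\neq k,\ i\neq l,\end{cases}\]
and the canonical map \(\mathrm{St}_{n}(k)\to\mathrm{SL}_{n}(k)\) is a central extension.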
### The direct limit of a foundation
In this subsection we let \(\mathcal{F}=((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s \},\{s,t\}\in E(S)})\) be a Moufang foundation of type \((W,S)\) satisfying Condition (lco) and let \((\Sigma_{J})_{J\in E(S)}\) be an apartment of \(\mathcal{F}\). As \(\mathcal{F}\) is a Moufang foundation, the buildings \(\Delta_{J}\) are Moufang buildings. We identify the roots in \(\Sigma_{J}\) with \(\Phi^{J}\). For \(\alpha\in\Phi^{J}\) we let \(U^{J}_{\alpha}\) be the root group associated with \(\alpha\subseteq\Sigma_{J}\) and we let \(H_{J}=\langle U^{J}_{\alpha}\mid\alpha\in\Phi^{J}\rangle\leq\mathrm{Aut}( \Delta_{J})\). As \(J\in E(S)\), it follows from [2, Remark 7.107\((a)\)] that the root groups \(U^{J}_{\alpha}\) are nilpotent.
We note that for each \(\{s,t\}\in E(S)\) the restriction \(U^{\{s,t\}}_{\pm\alpha_{s}}\to U^{\{s,t\}}_{\pm\alpha_{s}}|_{\mathcal{P}_{s}(c _{\{s,t\}})}\) is an isomorphism. For each \(s\in S\) we fix \(s_{0}\in S\) such that \(m_{ss_{0}}>2\) and we define \(U_{\pm s}:=U^{\{s,s_{0}\}}_{\pm\alpha_{s}}|_{\mathcal{P}_{s}(c_{\{s,s_{0}\}})}\). Using (MF2) we know that for each \(t\in S\) with \(\{s,t\}\in E(S)\) the mapping \(U_{\pm s}\to U^{\{s,t\}}_{\pm\alpha_{s}}|_{\mathcal{P}_{s}(c_{\{s,t\}})},g\mapsto \varphi_{s_{0}st}\circ g\circ\varphi_{s_{0}st}^{-1}\) is an isomorphism. Thus we have canonical isomorphisms \(U_{\pm s}\to U^{\{s,t\}}_{\pm\alpha_{s}}\).
Let \(J=\{s,s_{0}\}\in E(S)\). For each \(s\in K\in E(S)\) we denote the image of \(u\in U_{\pm s}\) in \(\operatorname{Aut}(\Delta_{K})\) by \(u_{K}\). Then for every \(1\neq u\in U_{s}\) there exist \(u^{\prime},u^{\prime\prime}\in U_{-s}\) such that \((u^{\prime})_{J}u_{J}(u^{\prime\prime})_{J}\) stabilises \(\Sigma_{J}\) and acts on \(\Sigma_{J}\) as the reflection \(r_{\alpha_{s}}\). By [2, Consequence (3) on p.415]\(u^{\prime},u^{\prime\prime}\) are unique. Let \(s\in K\in E(S)\). By construction of \(U_{\pm s}\to\operatorname{Aut}(\Delta_{K})\) we know that \((u^{\prime})_{K}u_{K}(u^{\prime\prime})_{K}\) interchanges the elements in \(\Sigma_{K}\cap\mathcal{P}_{s}(c_{K})\) and stabilizes every panel \(P\) with \(|P\cap\Sigma_{K}|=2\). As \(K\) is spherical, this implies that \((u^{\prime})_{K}u_{K}(u^{\prime\prime})_{K}\) stabilizes \(\Sigma_{K}\) and acts on \(\Sigma_{K}\) as the reflection \(r_{\alpha_{s}}\) (cf. [2, proof of Lemma 7.5]). This implies that for \(u\in U_{s}\) the elements \(u^{\prime},u^{\prime\prime}\in U_{-s}\) do not depend on the choice of \(s\in K\in E(S)\). We define for every \(1\neq u\in U_{s}\) the element \(m(u):=u^{\prime}uu^{\prime\prime}\), where \(u^{\prime},u^{\prime\prime}\in U_{-s}\) are as above.
**(3.5) Lemma**.: _Let \(\pi_{s}:U_{s}\star U_{-s}\to\prod_{s\in J\in E(S)}\operatorname{Aut}(\Delta_{J})\) be the canonical homomorphism and let \(K_{s}:=\ker\pi_{s}\). Let \(X_{s}:=\left(U_{s}\star U_{-s}\right)/K_{s}\). Then the canonical mappings \(U_{\pm s}\to X_{s}\) are injective and \((X_{s},(U_{\pm s}))\) is an RGD-system of type \(A_{1}\), where we identify \(U_{\pm s}\) with their images in \(X_{s}\)._
Proof.: Note that \(U_{\pm s}\to\operatorname{Aut}(\Delta_{J})\) are injective for every \(s\in J\in E(S)\) and \(U_{-\alpha_{s}}^{J}\not\leq U_{\alpha_{s}}^{J}\). Thus \(U_{\pm s}\to X_{s}\) are injective and \(U_{-s}\not\leq U_{s}\) in \(X_{s}\). It suffices to show (RGD2). We show that \(m(u)=u^{\prime}uu^{\prime\prime}\) conjugates \(U_{\pm s}\) to \(U_{\mp s}\) for every \(1\neq u\in U_{s}\). Let \(J=\{s,s_{0}\}\) and \(x\in U_{s}\). Then \(m(u)_{J}^{-1}x_{J}m(u)_{J}\in U_{-\alpha_{s}}^{J}\) and hence \(m(u)_{J}^{-1}x_{J}m(u)_{J}=(x^{\prime})_{J}\) for some \(x^{\prime}\in U_{-s}\). By construction of \(U_{\pm s}\to\operatorname{Aut}(\Delta_{K})\) we infer \(m(u)_{K}^{-1}x_{K}m(u)_{K}=(x^{\prime})_{K}\) for every \(s\in K\in E(S)\). Thus \((x^{\prime})^{-1}m(u)^{-1}xm(u)\in K_{s}\) for every \(x\in U_{s}\) and hence \(m(u)^{-1}xm(u)K_{s}=x^{\prime}K_{s}\) in \(X_{s}\). This implies that \((X_{s},(U_{\pm s}))\) is an RGD-system of type \(A_{1}\).
**(3.6) Example**.: Let \(\Delta\) be an irreducible twin building satisfying Condition (lco). Using [15, Theorem 1.5] and [2, Theorem 8.27], \(\Delta\) is a so-called _Moufang twin building_. Let \(c_{+}\in\Delta_{+},c_{-}\in\Delta_{-}\) and let \(\Sigma\) be the twin apartment containing \(c_{+}\) and \(c_{-}\). Let \(U_{\alpha}\) be the corresponding root groups and let \(G=\langle U_{\alpha}\mid\alpha\in\Phi\rangle\leq\operatorname{Aut}(\Delta)\). Then \((G,(U_{\alpha})_{\alpha\in\Phi})\) is an RGD-system by [2, 8.47(\(a\))].
Let \(\mathcal{F}=\mathcal{F}(\Delta,c_{+})\) and \(\Sigma_{J}:=R_{J}(c_{+})\cap\Sigma\). Then \((\Sigma_{J})_{J\in E(S)}\) is an apartment of \(\mathcal{F}\). We consider the canonical homomorphism \(\varphi:U_{s}\star U_{-s}\to\operatorname{Aut}(\Delta)\). Let \(g\in\ker\varphi\). Then \(g\) acts trivially on every rank \(2\) residue and hence \(g\in\ker\pi_{s}\). Now let \(g\in\ker\pi_{s}\). Then \(\varphi(g)\in\langle U_{\alpha_{s}}\cup U_{-\alpha_{s}}\rangle\). As \(\varphi(g)\) fixes \(\mathcal{P}_{s}(c_{+})\), we deduce \(\varphi(g)\in H_{\alpha_{s}}\) and hence \(\varphi(g)\) fixes \(\Sigma\). If \(m_{st}>2\), then \(\varphi(g)\) fixes \(R_{\{s,t\}}(c_{+})\) by assumption. If \(m_{st}=2\), then \(\varphi(g)\) fixes \(\mathcal{P}_{t}(c_{+})\), as the corresponding root groups commute. In particular, \(\varphi(g)\) fixes a twin apartment and all neighbours of \(c_{+}\). Thus \(\varphi(g)=1\) by [17, Theorem 1] and we have \(\ker\pi_{s}=\ker\varphi\). In particular, \(X_{s}\to\operatorname{Aut}(\Delta)\) is injective, where \(X_{s}\) is as in the previous lemma.
Let \(J=\{s,t\}\in E(S)\) and let \(\hat{H}_{J}\) be the Steinberg group associated with \((H_{J},(U_{\alpha}^{J})_{\alpha\in\Phi^{J}})\). Let \(\pi_{J}^{\operatorname{St}}:\hat{H}_{J}\to\operatorname{Aut}(\Delta_{J}),\pi_{s,J}:U_{s}\star U_{-s}\to\operatorname{Aut}(\Delta_{J}),\varphi_{s}:U_{s}\star U_{-s}\to\hat{H}_{J}\) be the canonical homomorphisms. As \(\pi_{s,J}\) is one of the components of \(\pi_{s}:U_{s}\star U_{-s}\to\prod_{s\in K\in E(S)}\operatorname{Aut}(\Delta_{K})\), we deduce \(\ker\pi_{s}\leq\ker\pi_{s,J}\). As \(\pi_{s,J}=\pi_{J}^{\operatorname{St}}\circ\varphi_{s}\), we have \(\varphi_{s}(\ker\pi_{s})\leq\varphi_{s}(\ker\pi_{s,J})\leq\ker\pi_{J}^{\operatorname{St}}\). Since \(\ker\pi_{J}^{\operatorname{St}}=Z(\hat{H}_{J})\), we deduce \(K_{st}:=\varphi_{s}(\ker\pi_{s})\varphi_{t}(\ker\pi_{t})\trianglelefteq\hat{H}_{J}\). Now \(\hat{H}_{J}/K_{st}\) is again an RGD-system by [2, 7.131]. Thus we let \(\mathcal{D}_{J}=(\hat{H}_{J}/K_{st},(\hat{U}_{\alpha}K_{st}/K_{st})_{\alpha\in\Phi^{J}})\) and write \(X_{s,t}:=\hat{H}_{J}/K_{st}\). Let \(\psi_{st}:\hat{H}_{J}\to X_{s,t}\) be the canonical homomorphism. Then \((\psi_{st}\circ\varphi_{s})\left(\ker\pi_{s}\right)\leq\psi_{st}(K_{st})=1\) and hence \(\psi_{st}\circ\varphi_{s}\) factors through \(X_{s}\to\hat{H}_{J}/K_{st}\).
If \(s\neq t\in S\) are such that \(m_{st}=2\), we define \(X_{s,t}:=X_{s}\times X_{t}\). We define \(G\) to be the direct limit of the inductive system formed by \(X_{s},X_{s,t}\) for all \(s\neq t\in S\). Note that \(X_{s,t}\) is generated by the canonical image of \(X_{s},X_{t}\) in \(X_{s,t}\) and hence \(G=\langle X_{s}\mid s\in S\rangle\). We note that it is not clear whether \(X_{s}\to G\) is injective. If we write \(1\neq u\in U_{s}\) we simply mean that \(u\neq 1\) in the group \(U_{s}\).
**(3.7) Lemma**.: _Let \(s_{1},\ldots,s_{k},s\in S\) and let \(u_{i},v_{i}\in U_{s_{i}}\backslash\{1\}\). Then \(U_{s}^{m(u_{1})\cdots m(u_{k})}=U_{s}^{m(v_{1})\cdots m(v_{k})}\)._
Proof.: We prove the claim by induction on \(k\). For \(k=1\) we have \(m(v_{1})^{-1}m(u_{1})\in H_{s_{1}}\leq N_{X_{s_{1},s}}(U_{s})\). Thus we assume \(k>1\). Using [2, Consequences (4) and (5) on p.415] we deduce \(tm(u_{i})t^{-1}=m(tu_{i}t^{-1})\) and \(tm(u_{i})^{-1}t^{-1}=m(tu_{i}t^{-1})^{-1}\) for each \(t\in H_{s_{k}}\) and \(u_{i}\in U_{s_{i}}\). Note that \(tu_{i}t^{-1}\in U_{s_{i}}\). Using induction, this implies for \(t:=m(v_{k})m(u_{k})^{-1}\in H_{s_{k}}\):
\[tm(u_{k-1})^{-1}\cdots m(u_{1})^{-1}U_{s}m(u_{1})\cdots m(u_{k-1} )t^{-1}\] \[\qquad=m(tu_{k-1}t^{-1})^{-1}\cdots m(tu_{1}t^{-1})^{-1}tU_{s}t^{ -1}m(tu_{1}t^{-1})\cdots m(tu_{k-1}t^{-1})\] \[\qquad=m(tu_{k-1}t^{-1})^{-1}\cdots m(tu_{1}t^{-1})^{-1}U_{s}m(tu_ {1}t^{-1})\cdots m(tu_{k-1}t^{-1})\] \[\qquad=m(v_{k-1})^{-1}\cdots m(v_{1})^{-1}U_{s}m(v_{1})\cdots m(v _{k-1})\qed\]
**(3.8) Lemma**.: _Suppose \(s,t,s_{1},\ldots,s_{k},t_{1},\ldots,t_{l}\in S\) such that \(s_{1}\cdots s_{k}\alpha_{s}=t_{1}\cdots t_{l}\alpha_{t}\). Let \(1\neq u_{i}\in U_{s_{i}},1\neq v_{i}\in U_{t_{i}}\). Then \(U_{s}^{m(u_{k})\cdots m(u_{1})}=U_{t}^{m(v_{l})\cdots m(v_{1})}\)._
Proof.: First we show that we have an action of \(W\) on the set of conjugates of the groups \(U_{s}\) (\(s\in S\)), where \(s\in S\) acts on every conjugate by conjugation with \(m(u)\) for some \(1\neq u\in U_{s}\). By the previous lemma, conjugation with \(m(u)\) does not depend on the choice of \(1\neq u\in U_{s}\), and \(m(u)^{2}\) acts trivially on every conjugate. Moreover, \((st)^{m_{st}}\) acts as \((m(u_{s})m(u_{t}))^{m_{st}}\in\langle H_{s}\cup H_{t}\rangle\) (cf. [6, Lemma 3.3]) and hence trivially.
We are now in the position to prove the claim. If \(s\neq t\), there exists \(w\in\langle s,t\rangle\) such that \(w\alpha_{s}=\alpha_{t}\) and \(w^{-1}U_{s}w=U_{t}\) holds in \(X_{s,t}\). Thus we can assume \(s=t\). It suffices to show that for \(w\in W\) with \(w\alpha_{s}=\alpha_{s}\) we have \(w^{-1}U_{s}w=U_{s}\). Thus let \(w\in W\) be such that \(w\alpha_{s}=\alpha_{s}\). Then \(ws\notin\alpha_{s}\) and hence \(\{w,ws\}\in\partial\alpha_{s}\). By Lemma (2.5) there exist a sequence of panels \(P_{0}=\{1,s\},\ldots,P_{n}=\{w,ws\}\) contained in \(\partial\alpha_{s}\) and a sequence of spherical rank \(2\) residues \(R_{1},\ldots,R_{n}\) contained in \(\partial^{2}\alpha_{s}\) such that \(P_{i-1},P_{i}\) are distinct and contained in \(R_{i}\). We show the claim by induction on \(n\). For \(n=0\) there is nothing to show. Let \(n>0\) and let \(x\in P_{n-1}\cap\alpha_{s}\). Then \(x^{-1}w\in\langle J\rangle\) for some \(J\subseteq S\) with \(|J|=2\). Using induction, we deduce \(w^{-1}U_{s}w=(x^{-1}w)^{-1}x^{-1}U_{s}x(x^{-1}w)=(x^{-1}w)^{-1}U_{s}(x^{-1}w)=U_{s}\). This finishes the proof.
For \(s\in S\) we define \(U_{\alpha_{s}}:=U_{s}\) and for \(\alpha\in\Phi\) we define \(U_{\alpha}\) as a conjugate of \(U_{s}\) as in (RGD2). This is well-defined by the previous lemma. We put \(\mathcal{D}_{\mathcal{F}}:=(G,(U_{\alpha})_{\alpha\in\Phi})\).
**(3.9) Lemma**.: _The system \(\mathcal{D}_{\mathcal{F}}\) satisfies (RGD2) and (RGD4). Furthermore, (RGD1) holds for each pair \(\{\alpha,\beta\}\) of prenilpotent roots with \(o(r_{\alpha}r_{\beta})<\infty\)._
Proof.: It follows by the definition of \(G\) and \(U_{\alpha}\) that (RGD2) and (RGD4) are satisfied. Let \(\{\alpha,\beta\}\) be a prenilpotent pair with \(o(r_{\alpha}r_{\beta})<\infty\). Then there exists \(R\in\partial^{2}\alpha\cap\partial^{2}\beta\). Let \(w=\operatorname{proj}_{R}1_{W}\). Then \(w^{-1}\alpha,w^{-1}\beta\in\Phi^{\{s,t\}}\) for some \(s\neq t\in S\). As \([U_{w^{-1}\alpha},U_{w^{-1}\beta}]\leq U_{(w^{-1}\alpha,w^{-1}\beta)}\) holds in \(X_{s,t}\), it also holds in \(G\). This implies \([U_{\alpha},U_{\beta}]^{w}=[U_{w^{-1}\alpha},U_{w^{-1}\beta}]\leq U_{(w^{-1} \alpha,w^{-1}\beta)}\) and hence \([U_{\alpha},U_{\beta}]\leq U_{(w^{-1}\alpha,w^{-1}\beta)}^{w^{-1}}=U_{(ww^{-1} \alpha,ww^{-1}\beta)}=U_{(\alpha,\beta)}\).
**(3.10) Lemma**.: _Let \((c_{0},\ldots,c_{k})\) be a minimal gallery and let \((\alpha_{1},\ldots,\alpha_{k})\) be the sequence of roots which are crossed by that minimal gallery. Then we have \([U_{\alpha_{1}},U_{\alpha_{k}}]\leq\langle U_{\alpha_{2}},\ldots,U_{\alpha_{k-1}}\rangle\). In particular, \(U_{\alpha_{1}}\cdots U_{\alpha_{k}}\) is a nilpotent group._
Proof.: We can assume \(\alpha_{1}\subseteq\alpha_{k}\). In particular, \(k\geq 3\) and \(c_{0},\ldots,c_{k}\) are not contained in a rank \(2\) residue. Let \(R\) be the residue containing \(c_{k-2},c_{k-1}\) and \(c_{k}\). Then there exists a minimal gallery \((d_{0}=c_{0},\ldots,d_{k}=c_{k})\) with \(d_{i}=\operatorname{proj}_{R}c_{0}\) for some \(0\leq i\leq k\). We note that the set of roots crossed by \((d_{0},\ldots,d_{k})\) coincides with the set of roots crossed by \((c_{0},\ldots,c_{k})\). Let \((\beta_{1},\ldots,\beta_{k})\) be the set of roots crossed by \((d_{0},\ldots,d_{k})\) and let \(1\leq j\leq k\) be such that \(\beta_{j}=\alpha_{1}\). Since \(\alpha_{1}\subseteq\alpha_{k}\), we have \(j<i\). Let \(\gamma\neq\beta_{i}\) be the root containing \(d_{i}\) but not some neighbour of \(d_{i}\) in \(R\). Since \(\{\beta_{j+1},\ldots,\beta_{i-1}\}\subseteq\{\alpha_{2},\ldots,\alpha_{k-1}\}\) it suffices to show \([U_{\alpha_{1}},U_{\alpha_{k}}]\leq\langle U_{\beta_{j+1}},\ldots,U_{\beta_{i -1}}\rangle\).
If \(\alpha_{k}\in\{\gamma,\beta_{i}\}\), then we have \([U_{\alpha_{1}},U_{\alpha_{k}}]\leq\langle U_{\beta_{j+1}},\ldots,U_{\beta_{i -1}}\rangle\) by induction. Thus we assume \(\alpha_{k}\notin\{\gamma,\beta_{i}\}\). By Lemma (2.11) and (RGD2) we deduce \(U_{\alpha_{k}}\leq[U_{\gamma},U_{\beta_{i}}]\). Then for each \(u_{k}\in U_{\alpha_{k}}\) there exist \(a_{1},\ldots,a_{n}\in U_{\gamma},b_{1},\ldots,b_{n}\in U_{\beta_{i}}\) such that \(u_{k}=a_{1}b_{1}\cdots a_{n}b_{n}\). Using induction, we know that \([U_{\alpha_{1}},U_{\gamma}],[U_{\alpha_{1}},U_{\beta_{i}}]\leq\langle U_{\beta _{j+1}},\ldots,U_{\beta_{i-1}}\rangle\) as well as \([U_{\beta_{l}},U_{\gamma}],[U_{\beta_{l}},U_{\beta_{i}}]\leq\langle U_{\beta _{l+1}},\ldots,U_{\beta_{i-1}}\rangle\) for every \(j+1\leq l\leq i-1\). Let \(u_{1}\in U_{\alpha_{1}}\). Using induction, we deduce
\[[u_{1},u_{k}]=[u_{1},a_{1}b_{1}\cdots a_{n}b_{n}]=[u_{1},b_{n}][u_{1},a_{1}b_{1 }\cdots a_{n}]^{b_{n}}\in\langle U_{\beta_{j+1}},\ldots,U_{\beta_{i-1}}\rangle\]
Let \(V:=U_{\alpha_{1}}\cdots U_{\alpha_{k-1}}\). Using induction, \(V\leq G\). Since \([U_{\alpha_{i}},U_{\alpha_{k}}]\leq\langle U_{\alpha_{i+1}},\ldots,U_{\alpha_{ k-1}}\rangle\leq V\) for \(1\leq i\leq k-1\), we have \(VU_{\alpha_{k}}=U_{\alpha_{k}}V\). It follows that \(VU_{\alpha_{k}}=U_{\alpha_{1}}\cdots U_{\alpha_{k}}\) is a subgroup of \(G\). In particular, the subgroup is nilpotent because of the commutator relations and the fact that the groups \(U_{\delta}\) are nilpotent.
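(For instance, in \(\mathrm{SL}_{3}(k)\) the minimal gallery of type \((s,t,s)\) from \(B_{+}\) to the opposite chamber crosses the roots \(\alpha_{s},s\alpha_{t},\alpha_{t}\), and \(U_{\alpha_{s}}U_{s\alpha_{t}}U_{\alpha_{t}}\) is the group of all upper unitriangular matrices, which is nilpotent of class \(2\).)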
**(3.11) Lemma**.: _Let \((c_{0},\ldots,c_{k})\) be a minimal gallery and let \((\alpha_{1},\ldots,\alpha_{k})\) be the sequence of roots which are crossed by that minimal gallery. Then the product map \(U_{\alpha_{1}}\times\cdots\times U_{\alpha_{k}}\to\langle U_{\alpha_{i}}\mid 1 \leq i\leq k\rangle\) is a bijection, if \(\mathcal{D}_{\mathcal{F}}\) satisfies (RGD3)._
Proof.: As in the definition of an RGD-system we let \(U_{+}=\langle U_{\alpha}\mid\alpha\in\Phi_{+}\rangle\). First assume that \(U_{-\alpha_{s}}\cap U_{+}\neq\{1\}\). Then [2, Consequence (1) on p.414] would imply \(U_{-\alpha_{s}}\leq U_{+}\), which is a contradiction to (RGD3). Thus \(U_{-\alpha_{s}}\cap U_{+}=\{1\}\).

By the previous lemma we have \(\langle U_{\alpha_{1}},\ldots,U_{\alpha_{k}}\rangle=U_{\alpha_{1}}\cdots U_{\alpha_{k}}\) and hence the product map is surjective. We prove by induction on \(k\) that it is injective. If \(k=1\) there is nothing to show. Thus let \(k\geq 2\) and let \(u_{i},v_{i}\in U_{\alpha_{i}}\) be such that \(u_{1}\cdots u_{k}=v_{1}\cdots v_{k}\). Then we have \(v_{1}^{-1}u_{1}=v_{2}\cdots v_{k}(u_{2}\cdots u_{k})^{-1}\). Using (RGD2) we can arrange that \(\alpha_{1}=-\alpha_{s}\in\Phi_{-}\) for some \(s\in S\) and \(\alpha_{i}\in\Phi_{+}\) for every \(2\leq i\leq k\). Since \(U_{-\alpha_{s}}\cap U_{+}=1\) we obtain \(u_{1}=v_{1}\) and \(u_{2}\cdots u_{k}=v_{2}\cdots v_{k}\). Using induction the claim follows.
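(In the example above this is the familiar unique factorization of unitriangular matrices: in \(\mathrm{SL}_{3}(k)\) one has \(E_{12}(x)E_{13}(z)E_{23}(y)=\bigl(\begin{smallmatrix}1&x&z+xy\\0&1&y\\0&0&1\end{smallmatrix}\bigr)\), so the entries \(x,y,z\) are uniquely determined by the product.)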
**(3.12) Theorem**.: _If \(\mathcal{D}_{\mathcal{F}}\) satisfies (RGD0) and (RGD3), then \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system._
Proof.: By assumption and Lemma (3.9) it suffices to show (RGD1) for all roots \(\alpha\subsetneq\beta\in\Phi\). Let \(\alpha\subsetneq\beta\), let \(u_{\alpha}\in U_{\alpha},u_{\beta}\in U_{\beta}\), let \(\Gamma=(c_{0},\ldots,c_{k})\) be a minimal gallery and let \((\alpha_{1}=\alpha,\ldots,\alpha_{k}=\beta)\) be the sequence of roots which are crossed by \(\Gamma\). Then \(P:=\{c_{k-1},c_{k}\}\in\partial\beta\). Using Lemma (3.11) we obtain a unique set \(I\subseteq\{2,\ldots,k-1\}\) and unique elements \(1\neq u_{\alpha_{i}}\in U_{\alpha_{i}}\) such that \([u_{\alpha},u_{\beta}]=\prod_{i\in I}u_{\alpha_{i}}\). Let \(Q\in\partial\beta\). Using Lemma (2.5) there exist a sequence \(P_{0}=P,\ldots,P_{n}=Q\) of panels in \(\partial\beta\) and a sequence \(R_{1},\ldots,R_{n}\) of spherical rank \(2\) residues in \(\partial^{2}\beta\) such that \(P_{i-1},P_{i}\) are distinct and contained in \(R_{i}\). Let \((d_{0}=c_{0},\ldots,d_{m})\) be a minimal gallery such that \(Q=\{d_{m-1},d_{m}\}\). We will show by induction on \(n\) that for every \(i\in I\) the root \(\alpha_{i}\) is crossed by the gallery \((d_{0},\ldots,d_{m})\). For \(n=0\) there is nothing to show, as two minimal galleries from one chamber to another chamber cross the same roots. Thus we assume \(n>0\). Note that by [2, Lemma 3.69] every minimal gallery from \(c_{0}\) to a chamber which is not contained in \(\beta\) has to cross \(\partial\alpha\) and \(\partial\beta\). Let \((e_{0}=c_{0},\ldots,e_{l})\) be a minimal gallery such that \(P_{n-1}=\{e_{l-1},e_{l}\}\) and \(e_{z}=\operatorname{proj}_{R_{n}}e_{0}\) for some \(z\). Moreover, we let \((\beta_{1},\ldots,\beta_{l}=\beta)\) be the sequence of roots which are crossed by \((e_{0},\ldots,e_{l})\). Using induction \(\alpha_{i}\) is crossed by \((e_{0},\ldots,e_{l})\) for every \(i\in I\).
Using Lemma (3.11) we obtain a unique set \(J\subseteq\{2,\ldots,l-1\}\) and unique elements \(1\neq v_{\beta_{j}}\in U_{\beta_{j}}\) such that \([u_{\alpha},u_{\beta}]=\prod_{j\in J}v_{\beta_{j}}\). Note that \(\alpha\neq\beta_{1}\) in general. Similarly, if \((\delta_{1},\ldots,\delta_{m}=\beta)\) is the sequence of roots crossed by \((e_{0},\ldots,e_{z}=\operatorname{proj}_{R_{n}}e_{0},\ldots,d_{m-1},d_{m})\), there exist a unique set \(F\subseteq\{2,\ldots,m-1\}\) and unique elements \(1\neq x_{\delta_{f}}\in U_{\delta_{f}}\) such that \([u_{\alpha},u_{\beta}]=\prod_{f\in F}x_{\delta_{f}}\). Extending both galleries to \(p\), where \(e_{z},p\) are opposite in \(R_{n}\), Lemma (3.11) implies that \(J,F\subseteq\{2,\ldots,z-1\}\), as \(\langle U_{\beta_{x}}\mid z<x<l\rangle\cap\langle U_{\delta_{x}}\mid z<x<m\rangle=\{1\}\).
We have \(\prod_{i\in I}u_{\alpha_{i}}=\prod_{j\in J}v_{\beta_{j}}\). Assume \(\beta_{l-1}\in\{\alpha_{i}\mid i\in I\}\). Then \(u_{\beta_{l-1}}\in\langle U_{\beta_{j}}\mid 2\leq j\leq l-2\rangle\) and the previous lemma yields a contradiction. Thus \(\beta_{l-1}\notin\{\alpha_{i}\mid i\in I\}\) and \(\alpha_{i}\) is crossed by the gallery \((e_{0},\ldots,e_{l-1})\) for every \(i\in I\). Repeating that argument we deduce that for \(i\in I\) the root \(\alpha_{i}\) is crossed by \((e_{0},\ldots,e_{z})\) and hence by \((e_{0},\ldots,e_{z},\ldots,d_{m})\). Using induction again, we deduce that \(\alpha_{i}\) is crossed by \((d_{0},\ldots,d_{m})\) for every \(i\in I\) and the claim follows.
Now assume that for \(i\in I\) we have \(o(r_{\beta}r_{\alpha_{i}})<\infty\). Then there exists \(R\in\partial^{2}\beta\cap\partial^{2}\alpha_{i}\). Suppose \(Q\in\partial\beta\) such that \(Q\subseteq\alpha_{i}\cap R\) and let \(\operatorname{proj}_{Q}c_{0}\neq q\in Q\). Then a minimal gallery from \(c_{0}\) to \(q\) does not cross the root \(\alpha_{i}\). This yields a contradiction and we infer \(\alpha_{i}\subsetneq\beta\) for every \(i\in I\). Using similar arguments, we deduce \(\alpha\subsetneq\alpha_{i}\) for every \(i\in I\). Since \(\gamma\in(\alpha,\beta)\) if and only if \(\alpha\subsetneq\gamma\subsetneq\beta\), the claim follows.
**(3.13) Corollary**.: _If the canonical mappings \(U_{\pm s}\to G\) are injective and if \(\mathcal{D}_{\mathcal{F}}\) satisfies (RGD3), then \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system and we have \(\mathcal{F}\cong\mathcal{F}(\Delta(\mathcal{D}_{\mathcal{F}}),B_{+})\)._
Proof.: By Theorem (3.12) \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system. Thus it suffices to show that \(\mathcal{F}\cong\mathcal{F}(\Delta(\mathcal{D}_{\mathcal{F}}),B_{+})\). Since \(U_{\pm s}\to G\) are injective, we do not distinguish between them and their images in \(G\).
Each \(\Delta_{\{s,t\}}\) is a spherical Moufang building for \(s\neq t\) with \(\{s,t\}\in E(S)\). Thus there exists an isomorphism \(\beta_{\{s,t\}}:X_{s,t}/B_{\{s,t\}}\to\Delta_{\{s,t\}},gB_{\{s,t\}}\mapsto g(c_ {\{s,t\}})\) by [2, Lemma 7.28], where \(B_{\{s,t\}}:=\langle H_{s}\cup H_{t}\cup U_{\alpha}\mid\alpha\in\Phi_{+}^{\{s, t\}}\rangle\).
By Remark (2.12) and Lemma (2.14), we know that \(\alpha_{\{s,t\}}:X_{s,t}/B_{\{s,t\}}\to R_{\{s,t\}}(B_{+}),gB_{\{s,t\}}\to gB_{+}\) is an isomorphism for every \(\{s,t\}\in E(S)\). Thus we have an isomorphism \(\gamma_{\{s,t\}}:=\alpha_{\{s,t\}}\circ\beta_{\{s,t\}}^{-1}:\Delta_{\{s,t\}} \to R_{\{s,t\}}(B_{+}),d=g(c_{\{s,t\}})\mapsto gB_{+}\).
It remains to show that \(\gamma_{\{r,s\}}(d)=\left(\gamma_{\{s,t\}}\circ\varphi_{rst}\right)(d)\) for every \(d\in\mathcal{P}_{s}(c_{\{r,s\}})\). Let \(d=g(c_{\{r,s\}})\) for some \(g\in\langle U_{\alpha_{s}}^{\{r,s\}}\cup U_{-\alpha_{s}}^{\{r,s\}}\rangle\). Let \(g_{i}\in U_{\pm\alpha_{s}}^{\{r,s\}}\) be such that \(g=g_{1}\cdots g_{n}\) and let \(g^{\prime}_{i}\in U_{\pm\alpha_{s}}^{\{s,t\}}\) be the unique element such that \(g^{\prime}_{i}|_{\mathcal{P}_{s}(c_{\{s,t\}})}=\varphi_{rst}\circ g_{i}|_{\mathcal{P}_{s}(c_{\{r,s\}})}\circ\varphi_{rst}^{-1}\). Let \(g^{\prime}:=g^{\prime}_{1}\cdots g^{\prime}_{n}\). Then
\[\varphi_{rst}(d)=\varphi_{rst}(g(c_{\{r,s\}}))=\varphi_{rst}\left(g|_{\mathcal{ P}_{s}(c_{\{r,s\}})}\left(\varphi_{rst}^{-1}(c_{\{s,t\}})\right)\right)=g^{ \prime}_{1}\cdots g^{\prime}_{n}|_{\mathcal{P}_{s}(c_{\{s,t\}})}(c_{\{s,t\}})=g ^{\prime}(c_{\{s,t\}})\]
Since \(g^{\prime}_{i}=g_{i}\) in \(G\), we deduce \(g^{\prime}=g\) in \(G\). As \(\gamma_{\{r,s\}}(d)=\gamma_{\{r,s\}}(g(c_{\{r,s\}}))=gB_{+}\), we have
\[\left(\gamma_{\{s,t\}}\circ\varphi_{rst}\right)(d)=\gamma_{\{s,t\}}\left(\varphi_ {rst}(d)\right)=\gamma_{\{s,t\}}\left(g^{\prime}(c_{\{s,t\}})\right)=g^{\prime} B_{+}=gB_{+}=\gamma_{\{r,s\}}(d)\qed\]
For \(J\subseteq S\) we define \(G_{J}\) to be the direct limit of the groups \(X_{s}\) and \(X_{s,t}\) with the inclusions as above, where \(s\neq t\in J\). It follows directly from the definition of the direct limit that we have a homomorphism \(G_{J}\to G\) extending \(X_{s},X_{s,t}\to G\). We define \((\mathcal{D}_{\mathcal{F}})_{J}=(G_{J},(U_{\alpha})_{\alpha\in\Phi^{J}})\) and note that we have \(\mathcal{D}_{(\mathcal{F}_{J})}\neq(\mathcal{D}_{\mathcal{F}})_{J}\) in general, as the group \(X_{s}\) depends on \(\mathcal{F}\).
**(3.14) Lemma**.: _Let \(\mathcal{F}\) be an irreducible Moufang foundation of type \((W,S)\) satisfying Condition_ (lco)_. If \(J\subseteq S\) is irreducible such that \(|J|\geq 3\) and \(\mathcal{F}_{J}\) is integrable, then the following hold:_
1. _Let_ \(\Delta\) _be a twin building of type_ \((W,S)\) _and let_ \(c\) _be a chamber of_ \(\Delta\) _such that_ \(\mathcal{F}\cong\mathcal{F}(\Delta,c)\)_. Then we have a canonical homomorphism_ \(G_{J}\to\operatorname{Aut}(\Delta)\)_._
2. _The homomorphisms_ \(U_{\pm s}\to G_{J}\) _are injective for every_ \(s\in J\) _and_ \((\mathcal{D}_{\mathcal{F}})_{J}\) _is an RGD-system._
Proof.: Since \(\mathcal{F}_{J}\) is integrable, there exists a thick twin building \(\Delta\) of type \((W,S)\) and a chamber \(c\) of \(\Delta\) such that \(\mathcal{F}_{J}\cong\mathcal{F}(\Delta,c)\). Using [15, Theorem 1.5] and [2, Theorem 8.27], \(\Delta\) is a so-called _Moufang twin building_. Let \(\varepsilon\in\{+,-\}\) be such that \(c\in\mathcal{C}_{\varepsilon}\). By Lemma (2.8) there exists a twin apartment \(\Sigma\) containing the image of the apartments \(\Sigma_{J}\) of the foundation. As we have seen in Example (3.6), \(K_{s}\) acts trivially on \(\Delta\) and we have a homomorphism \(X_{s}\to\operatorname{Aut}(\Delta)\). Clearly, \(U_{\pm s}\to\operatorname{Aut}(\Delta)\) are injective. Let \(\{\alpha,\beta\}\subseteq\Phi^{\{s,t\}}\) be a prenilpotent pair. Note that the restriction of an automorphism of \(\Delta\) to \(R_{\{s,t\}}(c)\) is an epimorphism. Using [2, Corollary 7.66] it is an isomorphism from \(U_{[\alpha,\beta]}\) to its image in \(\operatorname{Aut}(R_{\{s,t\}}(c))\). Thus we have a homomorphism \(\hat{H}_{\{s,t\}}\to\operatorname{Aut}(\Delta)\) for \(m_{st}>2\). As \(K_{s}\) is contained in the kernel of this map, we obtain a homomorphism \(X_{s,t}=\hat{H}_{\{s,t\}}/K_{st}\to\operatorname{Aut}(\Delta)\) for \(m_{st}>2\). As \(U_{\pm\alpha_{s}}\) commutes with \(U_{\pm\alpha_{t}}\) in \(\operatorname{Aut}(\Delta)\) if \(m_{st}=2\), we also have a homomorphism \(X_{s,t}=X_{s}\times X_{t}\to\operatorname{Aut}(\Delta)\). We conclude that there exists a homomorphism \(G_{J}\to\operatorname{Aut}(\Delta)\) mapping \(U_{\pm s}\) onto \(U_{\pm\alpha_{s}}\). Note that \(U_{\alpha}\leq\langle U_{\alpha_{s}}\mid s\in S\rangle\) fixes \(c\) for every \(\alpha\in\Phi_{+}^{J}\), whereas \(U_{-\alpha_{s}}\) does not. Thus (RGD3) holds and the claim follows from Theorem (3.12).
**(3.15) Lemma**.: _Let \(\mathcal{F}\) be a Moufang foundation and let \(J\subseteq S\) be reducible such that \(|J|=3\). Then \(U_{\pm s}\to G_{J}\) are injective and \((\mathcal{D}_{\mathcal{F}})_{J}\) is an RGD-system._
Proof.: Let \(J=\{r,s,t\}\) and assume \(m_{rs}=2=m_{rt}\). If \((\langle J\rangle,J)\) is of type \(A_{1}\times A_{1}\times A_{1}\), then \(G_{\{r,s,t\}}=X_{r}\times X_{s}\times X_{t}\) is an RGD-system. Otherwise, \(G_{\{r,s,t\}}=X_{r}\times X_{s,t}\) is an RGD-system.
**(3.16) Lemma**.: _Let \(\mathcal{F}\) be an irreducible Moufang foundation of type \((W,S)\). Then \(\mathcal{F}\) is integrable if and only if \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system and \(U_{\pm s}\to G\) are injective._
Proof.: One implication is Corollary (3.13); the other follows from Lemma (3.14) applied to \(J=S\).
**(3.17) Lemma**.: _Assume that for every irreducible \(J\subseteq S\) with \(|J|=3\) the \(J\)-residue \(\mathcal{F}_{J}\) is integrable. Then \(X_{s}\to X_{s,t}\) is injective for all \(s\neq t\in S\)._
Proof.: Let \(s\neq t\in S\). If \(m_{st}=2\) there is nothing to show, as \(X_{s,t}=X_{s}\times X_{t}\). Thus we assume \(m_{st}\neq 2\) and let \(1\neq g\in\ker(X_{s}\to X_{s,t})\). Since \(\ker(X_{s}\to\prod_{s\in J\in E(S)}\operatorname{Aut}(\Delta_{J}))=\{1\}\), there exists \(s\in K\in E(S)\) such that \(g\notin\ker(X_{s}\to\operatorname{Aut}(\Delta_{K}))\). As \(g\in\ker(X_{s}\to X_{s,t})\), we have \(g\in\ker(X_{s}\to\operatorname{Aut}(\Delta_{\{s,t\}}))\) and hence \(K\neq\{s,t\}\). Let \(J=K\cup\{t\}\). Then \(J\) is irreducible. As \(\mathcal{F}_{J}\) is integrable, there exists a twin building \(\Delta\) and a chamber \(c\) of \(\Delta\) such that \(\mathcal{F}_{J}\cong\mathcal{F}(\Delta,c)\). By Lemma (3.14) we obtain that \(G_{J}\) acts on \(\Delta\). As \(g\) is trivial in \(G_{J}\) but not in \(\operatorname{Aut}(\Delta)\), this yields a contradiction and hence \(X_{s}\to X_{s,t}\) is injective for all \(s\neq t\in S\).
## 4 The \(3\)-spherical case
In this section we let \(\mathcal{F}\) be a Moufang foundation of irreducible \(3\)-spherical type \((W,S)\) and of rank at least \(3\) satisfying Condition (lco). Moreover, we assume that for every irreducible \(J\subseteq S\) with \(|J|=3\) the \(J\)-residue \(\mathcal{F}_{J}\) is integrable and satisfies Condition (lsco). Let \((\Sigma_{J})_{J\in E(S)}\) be an apartment of \(\mathcal{F}\) and let \(U_{\pm s},X_{s},X_{s,t}\) and \(\mathcal{D}_{\mathcal{F}}=(G,(U_{\alpha})_{\alpha\in\Phi})\) be as before.
Our goal is to show that \(\mathcal{F}\) is integrable. By Corollary (3.13) it suffices to show that the canonical mappings \(U_{\pm s}\to G\) are injective and that \(\mathcal{D}_{\mathcal{F}}=(G,(U_{\alpha})_{\alpha\in\Phi})\) satisfies (RGD3). We will show that \(G\) acts non-trivially on a building and deduce both hypotheses from the action. For all \(s\in S\) we fix \(1\neq e_{s}\in U_{s}\) and let \(n_{s}:=m(e_{s})\).
For \(s\neq t\in S\) with \(m_{st}=2\) we define \(\Delta_{\{s,t\}}\) to be the spherical building associated with \(X_{s}\times X_{t}\) (cf. [2, Proposition 7.116]). Using [2, Proposition 7.116, Corollary 7.68 and Remark 7.69], we get (similarly to Moufang buildings) a natural labelling. In particular, \(\big{(}(U_{s}\cup\{\infty_{s}\})_{s\in S},(\Delta_{\{s,t\}})_{s\neq t\in S}\big{)}\) is a blueprint. We will denote it by \(\mathcal{B}_{\mathcal{F}}\) and remark that the restriction of \(\mathcal{B}_{\mathcal{F}}\) to any \(A_{1}\times A_{1}\) residue is natural.
**(4.1) Lemma**.: _Let \(J\subseteq S\) be irreducible and of rank \(3\). Then the blueprint \(\mathcal{B}_{\mathcal{F}_{J}}\) is realisable._
Proof.: Since \(\mathcal{F}_{J}\) is an integrable Moufang foundation, there exists a thick twin building \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) and a chamber \(c\) of the twin building \(\Delta\) such that \(\mathcal{F}_{J}\cong\mathcal{F}(\Delta,c)\). Without loss of generality let \(c\in\mathcal{C}_{+}\). Let \(\Sigma\) be a twin apartment containing the images of the apartments \((\Sigma_{K})_{K\in E(J)}\) of the foundation. We deduce from [15, Theorem 1.5] and [2, Theorem 8.27 and Proposition 8.21] that \(\Delta_{+}\) is a spherical Moufang building. By Proposition (2.18) the building \(\Delta_{+}\) conforms to the blueprint given by its restriction to \(E_{2}(c)\) and the natural labelling of \(\Delta_{+}\), i.e. \(\big{(}(U_{\alpha_{s}}\cup\{\infty_{s}\})_{s\in J},(R_{\{s,t\}}(c))_{s\neq t\in J}\big{)}\), where the \(U_{\pm\alpha_{s}}\) are the root groups corresponding to the roots in \(\Sigma\cap\Delta_{+}\). We will show that \(\Delta_{+}\) conforms to \(\mathcal{B}_{\mathcal{F}_{J}}\). As we have a labelling of \(\Delta\) of type \((U_{\alpha_{s}}\cup\{\infty_{s}\})_{s\in J}\) and \(U_{s}\to U_{\alpha_{s}}\) are isomorphisms, we also have a labelling of type \((U_{s}\cup\{\infty_{s}\})_{s\in J}\). Let \(R\) be an \(\{s,t\}\)-residue of \(\Delta_{+}\). Then there exists an isomorphism \(\varphi_{R}:R_{\{s,t\}}(c)\to R\) such that \(x,\varphi_{R}(x)\) have the same \(s\)- and \(t\)-labels for each \(x\in R_{\{s,t\}}(c)\). As \(\mathcal{F}_{J}\cong\mathcal{F}(\Delta,c)\), there exist isomorphisms \(\alpha_{K}:\Delta_{K}\to R_{K}(c)\) for each \(K\in E(J)\). Then \(x\) and \(\alpha_{\{s,t\}}(x)\) have the same \(s\)- and \(t\)-labels and hence \(\Delta_{+}\) conforms to the blueprint \(\mathcal{B}_{\mathcal{F}_{J}}\). In particular, \(\mathcal{B}_{\mathcal{F}_{J}}\) is realisable.
**(4.2) Theorem**.: _The blueprint \(\mathcal{B}_{\mathcal{F}}\) is realisable._
Proof.: Let \(J\subseteq S\) be of rank \(3\). Then \(J\) is spherical by assumption. If \(J\) is irreducible, then \(\mathcal{F}_{J}\) is integrable and hence \(\mathcal{B}_{\mathcal{F}_{J}}\) is realisable by Lemma (4.1). If \(J\) is reducible, then \(\mathcal{B}_{\mathcal{F}_{J}}\) is realisable by Lemma (2.20). Thus each restriction to a spherical rank \(3\) subdiagram is realisable and hence the claim follows from Corollary (2.17).
Recall that \(H_{s}=\langle m(u)m(v)\mid u,v\in U_{s}\setminus\{1\}\rangle\) and \(B_{J}=\langle H_{s},U_{s}\mid s\in J\rangle\) for \(J\subseteq S\). Then we have \(u_{t}^{h^{-1}}\in U_{t}\) for \(u_{t}\in U_{t},h\in H_{s}\) and \(u_{s}^{n_{t}},[u_{s},u_{t}]^{n_{t}}\in B_{\{s,t\}}\) for \(s\neq t\in S,u_{s}\in U_{s},u_{t}\in U_{t}\).
### An action associated with left multiplication
**(4.3) Theorem**.: _Let \(s\neq t\in S\). Then \(B_{\{s,t\}}\) acts on \(\mathbf{C}(\mathcal{B}_{\mathcal{F}})\) as follows: Let \(s_{1},\ldots,s_{k}\in S\) be such that \(s_{1}\cdots s_{k}\) is reduced and let \(u_{i}\in U_{s_{i}}\). Let \(r\in\{s,t\}\) and \(g\in U_{r}\cup H_{r}\). Then \(\omega_{B}(g,()):=()\) and for \(k>0\) we have_
\[\omega_{B}(g,(u_{1},\ldots,u_{k})):=\begin{cases}(u_{1}^{g^{-1}},\omega_{B}(g^{n_{s_{1}}},(u_{2},\ldots,u_{k})))&\text{if }g\in H_{r},\\ (gu_{1},u_{2},\ldots,u_{k})&\text{if }r=s_{1}\text{ and }g\in U_{r},\\ (u_{1},\omega_{B}(g^{n_{s_{1}}}[g,u_{1}]^{n_{s_{1}}},(u_{2},\ldots,u_{k})))&\text{if }r\neq s_{1}\text{ and }g\in U_{r}.\end{cases}\]
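The three cases above form an explicitly recursive prescription, so the action can be read as an algorithm on labelled sequences. The following Python fragment is a purely schematic sketch of that recursion (not of any concrete group): the callables `mul` and `inv`, the dictionary `n` of fixed elements \(n_{s}\), and the flag `in_U` classifying \(g\) are hypothetical stand-ins supplied by the caller, and no attempt is made to track in which subgroup the element of the recursive call lives.

```python
def conj(g, x, mul, inv):
    # g^x := x^{-1} g x
    return mul(inv(x), mul(g, x))

def comm(g, u, mul, inv):
    # [g, u] := g^{-1} u^{-1} g u  (one standard convention; a stand-in here)
    return mul(inv(g), mul(inv(u), mul(g, u)))

def omega_B(g, r, in_U, seq, types, n, mul, inv):
    """Schematic recursion of Theorem (4.3).

    seq   -- the sequence (u_1, ..., u_k)
    types -- the reduced word (s_1, ..., s_k) with u_i in U_{s_i}
    in_U  -- True if g is in U_r, False if g is in H_r
    """
    if not seq:
        return ()
    u1, s1 = seq[0], types[0]
    if not in_U:                                   # g in H_r
        head = conj(u1, inv(g), mul, inv)          # u_1^{g^{-1}}
        g2 = conj(g, n[s1], mul, inv)              # g^{n_{s_1}}
        return (head,) + omega_B(g2, r, False, seq[1:], types[1:], n, mul, inv)
    if r == s1:                                    # g in U_{s_1}
        return (mul(g, u1),) + tuple(seq[1:])
    # g in U_r with r != s_1; in general g2 below is a product of root-group
    # elements, and the theorem extends the action to such products by
    # composition -- a bookkeeping step this sketch deliberately omits.
    g2 = mul(conj(g, n[s1], mul, inv),
             conj(comm(g, u1, mul, inv), n[s1], mul, inv))
    return (u1,) + omega_B(g2, r, True, seq[1:], types[1:], n, mul, inv)
```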
Proof.: We have to show that every relation in \(B_{\{s,t\}}\) acts trivially on \((u_{1},\ldots,u_{k})\). We prove the claim by induction on \(k\). For \(k=0\) there is nothing to show. Thus we assume \(k\geq 1\). We consider a relation in \(B_{\{s,t\}}=\langle H_{s}\cup H_{t}\rangle\ltimes\langle U_{s}\cup U_{t}\rangle\) (note that \(H_{s},H_{t}\) normalise \(U_{s},U_{t}\) and their intersection is trivial by [2, Lemma 7.62]). Let \(h_{1},\ldots,h_{m}\in\langle H_{s}\cup H_{t}\rangle\) be such that \(h_{1}\cdots h_{m}=1\) in \(\langle H_{s}\cup H_{t}\rangle\). Then \(\omega_{B}(h_{1}\cdots h_{m},(u_{1},\ldots,u_{k}))=(u_{1}^{(h_{1}\cdots h_{m})^{-1}},\omega_{B}(\prod_{i=1}^{m}h_{i}^{n_{s_{1}}},(u_{2},\ldots,u_{k})))\). We have \(u_{1}^{(h_{1}\cdots h_{m})^{-1}}=u_{1}\) and \(\prod_{i=1}^{m}h_{i}^{n_{s_{1}}}=(h_{1}\cdots h_{m})^{n_{s_{1}}}\) is a relation in \(G_{\{s,t,s_{1}\}}\). As the
product is contained in \(B_{\{s,t,s_{1}\}}\), it is a relation in \(B_{\{s,t,s_{1}\}}\). Note that \(G_{\{s,t,s_{1}\}}\) is an RGD-system by Lemma (3.14) or Lemma (3.15). Using Lemma (2.13) we know that \(B_{\{s,t,s_{1}\}}\) is the 2-amalgam of the groups \(B_{\{r\}}\) with \(r\in\{s,t,s_{1}\}\). By the universal property of direct limits and induction, we deduce that \(B_{\{s,t,s_{1}\}}\) acts on sequences of length at most \(k-1\) and hence we can write
\[\prod_{i=1}^{m}h_{i}^{n_{s_{1}}}=\prod_{i=1}^{n}g_{i}^{-1}r_{i}g_{i}\]
for some \(g_{i}\in B_{\{s,t,s_{1}\}}\) and relations \(r_{i}\in B_{\{s,t\}}\cup B_{\{s,s_{1}\}}\cup B_{\{t,s_{1}\}}\). The claim follows now by induction. Let \(v_{1},\ldots,v_{m}\in U_{s}\) and \(w_{1},\ldots,w_{m}\in U_{t}\) be such that \(\prod_{i=1}^{m}v_{i}w_{i}=1\) in \(\langle U_{s}\cup U_{t}\rangle\).
We distinguish the following cases:
1. \(s_{1}\notin\{s,t\}\): Then \(\omega_{B}(\prod_{i=1}^{m}v_{i}w_{i},(u_{1},\ldots,u_{k}))=(u_{1},\omega_{B}( b,(u_{2},\ldots,u_{k})))\), where \(b=\prod_{i=1}^{m}v_{i}^{n_{s_{1}}}[v_{i},u_{1}]^{n_{s_{1}}}w_{i}^{n_{s_{1}}}[ w_{i},u_{1}]^{n_{s_{1}}}\), and \(b=(\prod_{i=1}^{m}v_{i}w_{i})^{u_{1}n_{s_{1}}}\) is a relation in \(G_{\{s,t,s_{1}\}}\) and hence in \(B_{\{s,t,s_{1}\}}\). As before, the claim follows by induction.
2. \(s_{1}\in\{s,t\}\): Without loss of generality we can assume \(s_{1}=t\). Then we compute \(\omega_{B}(\prod_{i=1}^{m}v_{i}w_{i},(u_{1},\ldots,u_{k}))=(w_{1}\cdots w_{m} u_{1},\omega_{B}(\prod_{i=1}^{m}v_{i}^{n_{s_{1}}}[v_{i},w_{i}\cdots w_{m}u_{1}]^{n_{ s_{1}}},(u_{2},\ldots,u_{k})))\). Since \(\prod_{i=1}^{m}v_{i}w_{i}\) is a relation in \(\langle U_{s}\cup U_{t}\rangle\) and \(U_{s}\) acts trivial on the \(t\)-panel containing a fundamental chamber, the element \(w_{1}\cdots w_{m}\in U_{t}\) acts also trivial on this \(t\)-panel. Since the action is simply transitive, we deduce \(w_{1}\cdots w_{m}=1\) and \[\prod_{i=1}^{m}v_{i}^{n_{s_{1}}}[v_{i},w_{i}\cdots w_{m}u_{1}]^{n_{ s_{1}}} =\left(\prod_{i=1}^{m}(w_{i}\cdots w_{m}u_{1})^{-1}v_{i}w_{i}\cdots w _{m}u_{1}\right)^{n_{s_{1}}}\] \[=\left(\prod_{i=1}^{m}(w_{i}\cdots w_{m})^{-1}v_{i}w_{i}\cdots w_{m} \right)^{u_{1}n_{s_{1}}}\] \[=\left((w_{1}\cdots w_{m})^{-1}\prod_{i=1}^{m}v_{i}w_{i}\right)^{u _{1}n_{s_{1}}}\] is a relation in \(G_{\{s,t\}}\) and hence in \(B_{\{s,t\}}\). The claim follows by induction. It remains to show that for all \(h\in H_{t},v\in U_{s},v^{\prime}:=h^{-1}v^{-1}h=(h^{-1}vh)^{-1}\in U_{s}\) the element \(h^{-1}vhv^{\prime}\) acts trivial on \((u_{1},\ldots,u_{k})\) (the case where \(s,t\) are interchanged follows similarly).
We distinguish the following cases:
1. \(s=s_{1}\): Then \(\omega_{B}(h^{-1}vhv^{\prime},(u_{1},\ldots,u_{k}))=\Big{(}(v(v^{\prime}u_{1}) ^{h^{-1}})^{h},\omega_{B}((h^{-1})^{n_{s_{1}}}h^{n_{s_{1}}},(u_{2},\ldots,u_{k}) )\Big{)}\). As \((v(v^{\prime}u_{1})^{h^{-1}})^{h}=v^{h}v^{\prime}u_{1}=u_{1}\) in \(U_{s}\) and \((h^{-1})^{n_{s_{1}}}h^{n_{s_{1}}}\) is a relation in \(G_{\{s\}}\) and hence in \(B_{\{s\}}\), the claim follows by induction.
2. \(s\neq s_{1}\): Then \(\omega_{B}(h^{-1}vhv^{\prime},(u_{1},\ldots,u_{k}))=((u_{1}^{h^{-1}})^{h}, \omega_{B}(b,(u_{2},\ldots,u_{k})))\), where \(b=(h^{-1})^{n_{s_{1}}}v^{n_{s_{1}}}[v,u_{1}^{h^{-1}}]^{n_{s_{1}}}h^{n_{s_{1}}}( v^{\prime})^{n_{s_{1}}}[v^{\prime},u_{1}]^{n_{s_{1}}}\), and \(b=(h^{-1}vhv^{\prime})^{u_{1}n_{s_{1}}}\) is a relation in \(G_{\{s,t,s_{1}\}}\) and hence in \(B_{\{s,t,s_{1}\}}\). As before, the claim follows by induction.
**(4.4) Lemma**.: _Let \(J\subseteq S\) be such that \(|J|\leq 3\). Let \(u_{i},u_{i}^{\prime}\in\bigcup_{j\in J}U_{j}\)._
1. _Let_ \(g\in B_{J}\) _be such that_ \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(u_{1}^{\prime},\ldots,u_{k}^{\prime})\)_. Then_ \(g\cdot u_{1}n_{1}\cdots u_{k}n_{k}B_{J}=u_{1}^{\prime}n_{1}\cdots u_{k}^{ \prime}n_{k}B_{J}\)_._
2. _Let_ \(g_{1},\ldots,g_{m}\in\bigcup_{j\in J}U_{j}\) _be and let_ \(g=g_{1}\cdots g_{m}\)_. We assume that_ \(\omega_{B}(g_{1}\cdots g_{m},(u_{1},\ldots,u_{k}))=(u_{1}^{\prime},\ldots,u_{r}^{ \prime},\omega_{B}(g^{\prime},(u_{r+1},\ldots,u_{k})))\)_, i.e._ \(\omega_{B}\) _does not stop after_ \(r\) _steps. Then we have_ \(g^{\prime}=(u_{1}^{\prime}n_{1}\cdots u_{r}^{\prime}n_{r})^{-1}g(u_{1}n_{1} \cdots u_{r}n_{r})\) _in_ \(G_{J}\)
Proof.: We prove Assertion \((a)\) by induction on \(k\). For \(k=0\) the claim follows directly, as \(\omega_{B}(g,())=()\) and \(g\cdot B_{J}=B_{J}\). Thus we assume \(k\geq 1\). It suffices to show the claim for \(g\in U_{s}\cup H_{s}\) with \(s\in J\). If \(g\in H_{s}\), we have \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(u_{1}^{g^{-1}},\omega_{B}(g^{n_{1}},(u_{2},\ldots,u_{k})))\). Let \(\omega_{B}(g^{n_{1}},(u_{2},\ldots,u_{k}))=(u_{2}^{\prime},\ldots,u_{k}^{ \prime})\). Using induction we have \(g^{n_{1}}\cdot u_{2}n_{2}\cdots u_{k}n_{k}B_{J}=u_{2}^{\prime}n_{2}\cdots u_{ k}^{\prime}n_{k}B_{J}\). This implies
\[g\cdot u_{1}n_{1}\cdots u_{k}n_{k}B_{J}=u_{1}^{g^{-1}}n_{1}g^{n_{1}}u_{2}n_{2}\cdots u_{k}n_{k}B_{J}=u_{1}^{g^{-1}}n_{1}u_{2}^{\prime}n_{2}\cdots u_{k}^{\prime}n_{k}B_{J}\]
As \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(u_{1}^{g^{-1}},u_{2}^{\prime},\ldots,u_{k }^{\prime})\), the claim follows. Assume \(u_{1}\in U_{s}\) and \(g\in U_{s}\). Then \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(gu_{1},u_{2},\ldots,u_{k})\) and \(g\cdot u_{1}n_{1}\cdots u_{k}n_{k}B_{J}=(gu_{1})n_{1}u_{2}n_{2}\cdots u_{k}n_{ k}B_{J}\).
Now we assume \(u_{1}\in U_{t}\) for some \(s\neq t\in S\). Then \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(u_{1},\omega_{B}(g^{n_{1}}[g,u_{1}]^{n_{1} },(u_{2},\ldots,u_{k})))\). Let \(\omega_{B}(g^{n_{1}}[g,u_{1}]^{n_{1}},(u_{2},\ldots,u_{k}))=(u_{2}^{\prime}, \ldots,u_{k}^{\prime})\). Using induction we have \(g^{n_{1}}[g,u_{1}]^{n_{1}}\cdot u_{2}n_{2}\cdots u_{k}n_{k}B_{J}=u_{2}^{\prime }n_{2}\cdots u_{k}^{\prime}n_{k}B_{J}\). This implies
\[gu_{1}n_{1}\cdots u_{k}n_{k}B_{J}=u_{1}n_{1}g^{n_{1}}[g,u_{1}]^{n_{1}}u_{2}n_{ 2}\cdots u_{k}n_{k}B_{J}=u_{1}n_{1}u_{2}^{\prime}n_{2}\cdots u_{k}^{\prime}n_{ k}B_{J}\]
As \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(u_{1},u_{2}^{\prime},\ldots,u_{k}^{\prime})\), the claim follows. We prove Assertion \((b)\) by induction on \(r\). Let \(j\in J\) be such that \(u_{1}\in U_{j}\). Let \(i_{1}<\ldots<i_{n}\) be all indices such that \(g_{i_{1}},\ldots,g_{i_{n}}\in U_{j}\). Then we have the following:
\[\omega_{B}(g,(u_{1},\ldots,u_{k}))=(g_{i_{1}}\cdots g_{i_{n}}u_{1},\omega_{B} (g^{\prime},(u_{2},\ldots,u_{k})))\]
where \(g^{\prime}=\prod_{x=0}^{n}\prod_{y=i_{x}+1}^{i_{x+1}-1}g_{y}^{n_{j}}[g_{y},g_{i_{x+1}}\cdots g_{i_{n}}u_{1}]^{n_{j}}\) and \(i_{0}:=0,i_{n+1}:=m+1\). We compute the following:
\[g^{\prime} =\prod_{x=0}^{n}\prod_{y=i_{x}+1}^{i_{x+1}-1}g_{y}^{n_{j}}[g_{y},g_{i_{x+1}}\cdots g_{i_{n}}u_{1}]^{n_{j}}\] \[=\prod_{x=0}^{n}\prod_{y=i_{x}+1}^{i_{x+1}-1}\left((g_{i_{x+1}}\cdots g_{i_{n}}u_{1})^{-1}g_{y}(g_{i_{x+1}}\cdots g_{i_{n}}u_{1})\right)^{n_{j}}\] \[=\prod_{x=0}^{n}\left((g_{i_{x+1}}\cdots g_{i_{n}}u_{1})^{-1}\left(\prod_{y=i_{x}+1}^{i_{x+1}-1}g_{y}\right)(g_{i_{x+1}}\cdots g_{i_{n}}u_{1})\right)^{n_{j}}\] \[=\left((g_{i_{1}}\cdots g_{i_{n}}u_{1})^{-1}\left(\prod_{y=1}^{m}g_{y}\right)u_{1}\right)^{n_{j}}\] \[=\left(g_{i_{1}}\cdots g_{i_{n}}u_{1}n_{j}\right)^{-1}gu_{1}n_{j}\]
As \(g^{\prime}\in\langle U_{j}\mid j\in J\rangle\), the claim follows by induction.
**(4.5) Lemma**.: _The action in Theorem (4.3) maps equivalent sequences to equivalent sequences. In particular, \(\omega_{B}\) extends to an action on the building \(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}}\)._
Proof.: In this proof we use the notation \(U_{s_{1}\cdots s_{k}}\) for \(U_{s_{1}}\times\cdots\times U_{s_{k}}\). It suffices to show that every element in \(\bigcup_{r\in S}U_{r}\cup H_{r}\) maps two elementary equivalent sequences to elementary equivalent sequences. Let \(s\neq t\in S\), let \(\bar{u}=\bar{u}_{1}\bar{u}_{0}\bar{u}_{2}\) be of type \((f_{1},p(s,t),f_{2})\) and \(\bar{v}=\bar{u}_{1}\bar{v}_{0}\bar{u}_{2}\) be of type \((f_{1},p(t,s),f_{2})\), where \(\bar{u}_{0}\) and \(\bar{v}_{0}\) are equivalent in \(\Delta_{\{s,t\}}\). It suffices to show the claim for the sequences \(\bar{u}_{0}\bar{u}_{2}\) of type \((p(s,t),f_{2})\) and \(\bar{v}_{0}\bar{u}_{2}\) of type \((p(t,s),f_{2})\). Let \(w\in W\) and \(s\neq t\in S\) be such that \(\ell(sw)=\ell(w)+1=\ell(tw)\). Let \((u_{1},\ldots,u_{m})\in U_{p(s,t)},(v_{1},\ldots,v_{m})\in U_{p(t,s)}\) be equivalent in \(\Delta_{\{s,t\}}\), let \(s_{1},\ldots,s_{k}\in S\) be such that \(w=s_{1}\cdots s_{k}\) is reduced and
let \((w_{1},\ldots,w_{k})\in U_{s_{1}\cdots s_{k}}\). Then \((u_{1},\ldots,u_{m},w_{1},\ldots,w_{k})\) and \((v_{1},\ldots,v_{m},w_{1},\ldots,w_{k})\) are elementary equivalent. First we assume that \(g\in H_{r}\) for some \(r\in S\). Then we have the following:
\[\omega_{B}(g,(u_{1},\ldots,u_{m},w_{1},\ldots,w_{k})) =(u^{\prime}_{1},\ldots,u^{\prime}_{m},\omega_{B}(g^{p(n_{s},n_{t} )},(w_{1},\ldots,w_{k})))\] \[\omega_{B}(g,(v_{1},\ldots,v_{m},w_{1},\ldots,w_{k})) =(v^{\prime}_{1},\ldots,v^{\prime}_{m},\omega_{B}(g^{p(n_{t},n_{s} )},(w_{1},\ldots,w_{k})))\]
Let \(J:=\{r,s,t\}\). We deduce from Lemma (4.4)\((a)\) and Corollary (2.19) that
\[u^{\prime}_{1}n_{1}\cdots u^{\prime}_{m}n_{m}B_{J}=g\cdot u_{1}n_{1}\cdots u_{ m}n_{m}B_{J}=g\cdot v_{1}n^{\prime}_{1}\cdots v_{m}n^{\prime}_{m}B_{J}=v^{ \prime}_{1}n^{\prime}_{1}\cdots v^{\prime}_{m}n^{\prime}_{m}B_{J}\]
Using Corollary (2.19) again, we deduce that \((u^{\prime}_{1},\ldots,u^{\prime}_{m})\simeq(v^{\prime}_{1},\ldots,v^{\prime} _{m})\). Since \(p(n_{s},n_{t})=p(n_{t},n_{s})\) is a relation in \(G_{\{r,s,t\}}\), we obtain \(g^{p(n_{s},n_{t})}=g^{p(n_{t},n_{s})}\) in \(B_{\{r,s,t\}}\) and hence they act equally on \((w_{1},\ldots,w_{k})\).
Now let \(g\in U_{r}\) for some \(r\in S\). We note that \((u_{1},\ldots,u_{k})\simeq(v_{1},\ldots v_{k})\) for \(u_{i},v_{i}\in U_{s}\cup U_{t}\) implies \(u_{1}n_{1}\cdots u_{k}n_{k}=v_{1}n^{\prime}_{1}\cdots v_{k}n^{\prime}_{k}\) (cf. Corollary (2.19) and [16, p. 87]). We consider the following two cases:
1. \(r\in\{s,t\}\): W.l.o.g. we assume that \(r=s\). Then \[\omega_{B}(g,(u_{1},\ldots,u_{m},w_{1},\ldots,w_{k}))=(gu_{1},u_{2},\ldots,u_{ m},w_{1},\ldots,w_{k}).\] Let \(v^{\prime}_{1},\ldots,v^{\prime}_{m}\) be such that \(\omega_{B}(g,(v_{1},\ldots,v_{k}))=(v^{\prime}_{1},\ldots,v^{\prime}_{k})\). By Lemma (4.4)\((a)\) and Corollary (2.19) we deduce \((gu_{1},u_{2},\ldots,u_{k})\simeq(v^{\prime}_{1},\ldots,v^{\prime}_{k})\). Now there are two possibilities: 1. \(\omega_{B}(g,(v_{1},\ldots,v_{m},w_{1},\ldots,w_{k}))=(v^{\prime}_{1},\ldots,v^ {\prime}_{m},w_{1},\ldots,w_{k})\), i.e. the action stops after at most \(m\) steps. Then the claim follows directly. 2. \(\omega_{B}(g,(v_{1},\ldots,v_{k},w_{1},\ldots,w_{k}))=(v^{\prime}_{1},\ldots, v^{\prime}_{m},\omega_{B}(g^{\prime},w_{1},\ldots,w_{k}))\): Then by Lemma (4.4)\((b)\) we deduce \(g^{\prime}=(v^{\prime}_{1}n^{\prime}_{1}\cdots v^{\prime}_{m}n^{\prime}_{m})^{ -1}g(v_{1}n^{\prime}_{1}\cdots v_{m}n^{\prime}_{m})\). But then we infer the following in \(G_{\{s,t\}}\): \[g^{\prime} =(v^{\prime}_{1}n^{\prime}_{1}\cdots v^{\prime}_{m}n^{\prime}_{ m})^{-1}g(v_{1}n^{\prime}_{1}\cdots v_{m}n^{\prime}_{m})\] \[=(gu_{1}n_{1}u_{2}n_{2}\cdots u_{m}n_{m})^{-1}g(u_{1}n_{1}\cdots u _{k}n_{k})\] \[=1\] Thus \(\omega_{B}(g^{\prime},(w_{1},\ldots,w_{k}))=(w_{1},\ldots,w_{k})\) and the claim follows.
2. \(r\notin\{s,t\}\): Let \(\omega_{B}(g,(u_{1},\ldots,u_{k}))=(u^{\prime}_{1},\ldots,u^{\prime}_{k})\) and \(\omega_{B}(g,(v_{1},\ldots,v_{k}))=(v^{\prime}_{1},\ldots,v^{\prime}_{k})\) be. Then Lemma (4.4)\((a)\) implies \[g\cdot u_{1}n_{1}\ldots u_{m}n_{m}B_{\{r,s,t\}} =u^{\prime}_{1}n_{1}\cdots u^{\prime}_{n}n_{m}B_{\{r,s,t\}}\] \[g\cdot v_{1}n^{\prime}_{1}\cdots v_{m}n^{\prime}_{m}B_{\{r,s,t\}} =v^{\prime}_{1}n^{\prime}_{1}\cdots v^{\prime}_{m}n^{\prime}_{m}B_{ \{r,s,t\}}\] In particular, we deduce that \[g^{\prime} :=\left(u^{\prime}_{1}n_{1}\cdots u^{\prime}_{n}n_{m}\right)^{-1} gu_{1}n_{1}\ldots u_{m}n_{m}\in B_{\{r,s,t\}}\] \[g^{\prime\prime} :=\left(v^{\prime}_{1}n^{\prime}_{1}\cdots v^{\prime}_{m}n^{ \prime}_{m}\right)^{-1}gv_{1}n^{\prime}_{1}\cdots v_{m}n^{\prime}_{m}\in B_{\{r,s, t\}}\] Assume that \(\omega_{B}(g,(u_{1},\ldots,u_{m},w_{1},\ldots,w_{k}))=(u^{\prime}_{1},\ldots,u^{ \prime}_{m},w_{1},\ldots,w_{k})\), i.e. the action stops after at most \(m\) steps. Then \(gu_{1}n_{1}\cdots u_{m}n_{m}=u^{\prime}_{1}n_{1}\cdots u^{\prime}_{m}n_{m}\) and \(g^{\prime}\) is a relation in \(G_{\{r,s,t\}}\). We will show that this is a contradiction. Let \(u,u^{\prime}\in\langle U_{\alpha_{s}}\cup U_{\alpha_{t}}\rangle\) be such
that \(u_{1}n_{1}\cdots u_{m}n_{m}=un(w)\) and \(u^{\prime}_{1}n_{1}\cdots u^{\prime}_{m}n_{m}=u^{\prime}n(w)\) as in [16, Lemma (7.4)]. Then \((u^{\prime})^{-1}gu\) is a relation as well and we deduce \(U_{\alpha_{r}}\cap\langle U_{\alpha_{s}}\cup U_{\alpha_{t}}\rangle\neq\{1\}\), which is a contradiction. Thus \(g^{\prime}\) cannot be a relation. Moreover, we have \(u_{1}n_{1}\cdots u_{m}n_{m}=v_{1}n^{\prime}_{1}\cdots v_{m}n^{\prime}_{m}\), as \((u_{1},\ldots,u_{m})\simeq(v_{1},\ldots,v_{m})\), and \(u^{\prime}_{1}n_{1}\cdots u^{\prime}_{m}n_{m}=v^{\prime}_{1}n^{\prime}_{1} \cdots v^{\prime}_{m}n^{\prime}_{m}\), as \((u^{\prime}_{1},\ldots,u^{\prime}_{m})\simeq(v^{\prime}_{1},\ldots,v^{\prime} _{m})\) by Corollary (2.19). In particular, we have \(g^{\prime}=g^{\prime\prime}\) in \(B_{\{r,s,t\}}\) and \(g^{\prime\prime}\) is no relation as well. In particular, the action does not stop after at most \(m\) steps. Together with Theorem (4.3) we obtain the following:
\[\omega_{B}(g,(u_{1},\ldots,u_{m},w_{1},\ldots,w_{k})) =(u^{\prime}_{1},\ldots,u^{\prime}_{m},\omega_{B}(g^{\prime},(w_{ 1},\ldots,w_{m})))\] \[\simeq(v^{\prime}_{1},\ldots,v^{\prime}_{m},\omega_{B}(g^{\prime \prime},(w_{1},\ldots,w_{m})))\] \[=\omega_{B}(g,(v_{1},\ldots,v_{m},w_{1},\ldots,w_{k})).\qed\]
**(4.6) Lemma**.: _Let \(s_{1},\ldots,s_{k}\in S\) be such that \(s_{1}\cdots s_{k}\) is reduced and let \(u_{i}\in U_{s_{i}}\). Then we have for each \(s\in S\) a well-defined mapping_
\[\omega(n_{s},\cdot):\mathbf{C}(\mathcal{B}_{\mathcal{F}})\to\mathbf{C}_{\mathcal{B}_{\mathcal{F}}},(u_{1},\ldots,u_{k})\mapsto\begin{cases}\omega_{B}(n_{s_{1}}^{2},[u_{2},\ldots,u_{k}])&\text{if }s=s_{1},u_{1}=1,\\ [\overline{u}_{1},\omega_{B}(b(u_{1}),[u_{2},\ldots,u_{k}])]&\text{if }s=s_{1},u_{1}\neq 1,\\ [1_{U_{s}},u_{1},\ldots,u_{k}]&\text{if }\ell(ss_{1}\cdots s_{k})=k+1,\\ \omega(n_{s},(v_{1},\ldots,v_{k}))&\text{else,}\end{cases}\]
_where \((v_{1},\ldots,v_{k})\) is of type \((s,t_{2},\ldots,t_{k}),st_{2}\cdots t_{k}=s_{1}\cdots s_{k}\) and \((u_{1},\ldots,u_{k})\simeq(v_{1},\ldots,v_{k})\). Moreover, this mapping extends to \(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}}\). More precisely, for any two equivalent sequences \((u_{1},\ldots,u_{k})\simeq(v_{1},\ldots,v_{k})\) we have \(\omega(n_{s},(u_{1},\ldots,u_{k}))=\omega(n_{s},(v_{1},\ldots,v_{k}))\)._
Proof.: To show that \(\omega(n_{s},\cdot)\) is well-defined, we have to show that the case \(\ell(ss_{1}\cdots s_{k})=k-1\) and \(s\neq s_{1}\) is independent of the choice of \((v_{1},\ldots,v_{k})\). Therefore, we let \(r_{2},\ldots,r_{k}\in S,w_{1}\in U_{s}\) and \(w_{i}\in U_{r_{i}}\) be such that \(st_{2}\cdots t_{k}=s_{1}\cdots s_{k}=sr_{2}\cdots r_{k}\) and
\[(v_{1},\ldots,v_{k})\simeq(u_{1},\ldots,u_{k})\simeq(w_{1},\ldots,w_{k})\]
As \(t_{2}\cdots t_{k}=r_{2}\cdots r_{k}\), there exist \(v^{\prime}_{i}\in U_{t_{i}}\) such that \((w_{2},\ldots,w_{k})\simeq(v^{\prime}_{2},\ldots,v^{\prime}_{k})\) and hence \((w_{1},\ldots,w_{k})\simeq(w_{1},v^{\prime}_{2},\ldots,v^{\prime}_{k})\). By Corollary (2.17) we have \(v_{1}=w_{1},v_{i}=v^{\prime}_{i}\) and hence \((v_{2},\ldots,v_{k})\simeq(w_{2},\ldots,w_{k})\). Using Lemma (4.5) we deduce
\[\omega(n_{s},(v_{1},\ldots,v_{k})) =\begin{cases}\omega_{B}(n_{s_{1}}^{2},[v_{2},\ldots,v_{k}])&\text{ if }v_{1}=1\\ [\overline{v}_{1},\omega_{B}(b(v_{1}),[v_{2},\ldots,v_{k}])]&\text{if }v_{1} \neq 1\end{cases}\] \[=\begin{cases}\omega_{B}(n_{s_{1}}^{2},[w_{2},\ldots,w_{k}])& \text{if }w_{1}=1\\ [\overline{w}_{1},\omega_{B}(b(w_{1}),[w_{2},\ldots,w_{k}])]&\text{if }w_{1} \neq 1\end{cases}\] \[=\omega(n_{s},(w_{1},\ldots,w_{k}))\]
Thus the mapping is well-defined. Suppose \(s_{1},\ldots,s_{k},t_{1},\ldots,t_{k}\in S\) such that \(s_{1}\cdots s_{k}=t_{1}\cdots t_{k}\) is reduced. Let \(u_{i}\in U_{s_{i}},v_{i}\in U_{t_{i}}\) be such that \((u_{1},\ldots,u_{k})\simeq(v_{1},\ldots,v_{k})\). If \(\ell(ss_{1}\cdots s_{k})=k+1\), we have \((1,u_{1},\ldots,u_{k})\simeq(1,v_{1},\ldots,v_{k})\) and hence \([1,u_{1},\ldots,u_{k}]=[1,v_{1},\ldots,v_{k}]\). Now we assume \(\ell(ss_{1}\cdots s_{k})=k-1\). Then there exist \(s^{\prime}_{2},\ldots,s^{\prime}_{k},t^{\prime}_{2},\ldots,t^{\prime}_{k}\in S\) such that \(s_{1}\cdots s_{k}=ss^{\prime}_{2}\cdots s^{\prime}_{k}\) and \(t_{1}\cdots t_{k}=st^{\prime}_{2}\cdots t^{\prime}_{k}\). Let \(u^{\prime}_{1},v^{\prime}_{1}\in U_{s},u^{\prime}_{i}\in U_{s^{\prime}_{i}},v^{ \prime}_{i}\in U_{t^{\prime}_{i}}\) be such that \((u^{\prime}_{1},\ldots,u^{\prime}_{k})\simeq(u_{1},\ldots,u_{k})\simeq(v_{1}, \ldots,v_{k})\simeq(v^{\prime}_{1},\ldots,v^{\prime}_{k})\). As before we deduce \(u^{\prime}_{1}=v^{\prime}_{1}\) and \((u^{\prime}_{2},\ldots,u^{\prime}_{k})\simeq(v^{\prime}_{2},\ldots,v^{\prime}_{k})\). The claim follows now from Lemma (4.5).
**(4.7) Lemma**.: _For each \(s\in S\) we have \(n_{s}\in\operatorname{Aut}(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}})\) via_
\[\omega(g,[u_{1},\ldots,u_{k}]):=\begin{cases}\omega(n_{s},\omega_{B}(n_{s}^{-2}, [u_{1},\ldots,u_{k}]))&\text{if }g=n_{s}^{-1}\\ \omega(n_{s},[u_{1},\ldots,u_{k}])&\text{if }g=n_{s}\end{cases}\]
Proof.: It suffices to show that \(n_{s}^{-1}n_{s}\) acts trivially on every sequence. Let \(s_{1},\ldots,s_{k}\in S\) be such that \(s_{1}\cdots s_{k}\) is reduced and let \(u_{i}\in U_{s_{i}}\). We distinguish the following cases:
1. \(s=s_{1}\) and \(u_{1}=1\): Then \(\omega(n_{s}^{-1}n_{s},[u_{1},\ldots,u_{k}])=[1,\omega_{B}(n_{s}^{-2}n_{s}^{2},[ u_{2},\ldots,u_{k}])]=[u_{1},\ldots,u_{k}]\).
2. \(s=s_{1}\) and \(u_{1}\neq 1\): Then \[\omega(n_{s}^{-1}n_{s},[u_{1},\ldots,u_{k}]) =\omega(n_{s}^{-1},[\overline{u}_{1},\omega_{B}(b(u_{1}),[u_{2},\ldots,u_{k}])])\] \[=\omega(n_{s},[\overline{u}_{1}^{n_{s}^{2}},\omega_{B}((n_{s}^{-2})^{n_{s}}b(u_{1}),[u_{2},\ldots,u_{k}])])\] \[=[\overline{\overline{u}_{1}^{n_{s}^{2}}},\omega_{B}(b(\overline{u}_{1}^{n_{s}^{2}})(n_{s}^{-2})^{n_{s}}b(u_{1}),[u_{2},\ldots,u_{k}])]\] Note that \(u_{1}n_{s}B_{\{s\}}=n_{s}^{-1}n_{s}u_{1}n_{s}B_{\{s\}}=n_{s}n_{s}^{-2}\overline{u}_{1}n_{s}b(u_{1})B_{\{s\}}=n_{s}\overline{u}_{1}^{n_{s}^{2}}n_{s}B_{\{s\}}=\overline{\overline{u}_{1}^{n_{s}^{2}}}n_{s}B_{\{s\}}\). Thus \(u_{1}=\overline{\overline{u}_{1}^{n_{s}^{2}}}\) and \(b(\overline{u}_{1}^{n_{s}^{2}})(n_{s}^{-2})^{n_{s}}b(u_{1})\) is a relation in \(B_{\{s\}}\). The claim follows now from Theorem (4.3).
3. \(\ell(ss_{1}\cdots s_{k})=k+1\): Then \(\omega(n_{s}^{-1}n_{s},[u_{1},\ldots,u_{k}])=\omega_{B}(n_{s}^{2}(n_{s}^{-2}) ^{n_{s}},[u_{1},\ldots,u_{k}])=[u_{1},\ldots,u_{k}]\).
4. \(\ell(ss_{1}\cdots s_{k})=k-1\) and \(s\neq s_{1}\): Let \((v_{1},\ldots,v_{k})\in[u_{1},\ldots,u_{k}]\) with \(v_{1}\in U_{s}\). Then \(\omega(n_{s}^{-1}n_{s},[v_{1},\ldots,v_{k}])=[v_{1},\ldots,v_{k}]\) as before and hence \(\omega(n_{s}^{-1}n_{s},[u_{1},\ldots,u_{k}])=[v_{1},\ldots,v_{k}]=[u_{1}, \ldots,u_{k}]\).
### Some relations in \(\mathrm{Aut}(\textbf{C}_{\mathcal{B}_{\mathcal{F}}})\)
In this subsection we will show that the relations in Theorem (2.9) act trivially on \(\textbf{C}_{\mathcal{B}_{\mathcal{F}}}\). This will imply that \(G\) acts on the building \(\textbf{C}_{\mathcal{B}_{\mathcal{F}}}\).
**(4.8) Lemma**.: _For \(s\in S\) and \(1\neq u_{s}\in U_{s}\) we have \(n_{s}u_{s}n_{s}=\overline{u}_{s}n_{s}b(u_{s})\) in \(\mathrm{Aut}(\textbf{C}_{\mathcal{B}_{\mathcal{F}}})\)._
Proof.: Let \((u_{1},\ldots,u_{k})\) be a sequence of type \((s_{1},\ldots,s_{k})\). We distinguish the following two cases:
1. \(\ell(ss_{1}\cdots s_{k})=k+1\). Then we have \[\omega(n_{s}u_{s}n_{s},[u_{1},\ldots,u_{k}])=[\overline{u}_{s},\omega_{B}(b(u_ {s}),[u_{1},\ldots,u_{k}])]=\omega(\overline{u}_{s}n_{s}b(u_{s}),[u_{1},\ldots, u_{k}])\]
2. \(\ell(ss_{1}\cdots s_{k})=k-1\): W.l.o.g. we assume \(s=s_{1}\). We let \(b(u_{s})=v_{s}h_{s}\) and distinguish the following cases: 1. \(u_{1}=1\): Then we compute the following: \[\omega(n_{s}u_{s}n_{s},[1,u_{2},\ldots,u_{k}]) =[1,\omega_{B}(u_{s}n_{s}^{2},[u_{2},\ldots,u_{k}])]\] \[\omega(\overline{u}_{s}n_{s}b(u_{s}),[1,u_{2},\ldots,u_{k}]) =[\overline{u}_{s}\overline{v}_{s},\omega_{B}(b(v_{s})h_{s}^{n_{s }},[u_{2},\ldots,u_{k}])]\] Comparing the cosets in the Moufang building we obtain \(\overline{u}_{s}\overline{v}_{s}=1\). Moreover, we have the following: \[b(v_{s})h_{s}^{n_{s}}=n_{s}^{-1}\overline{v}_{s}^{-1}\overline{v}_{s}n_{s}b(v_{s })n_{s}^{-1}h_{s}n_{s}=n_{s}^{-1}\overline{v}_{s}^{-1}\overline{u}_{s}^{-1} \overline{u}_{s}n_{s}v_{s}n_{s}n_{s}^{-1}h_{s}n_{s}=n_{s}^{-1}n_{s}u_{s}n_{s}n_{s }=u_{s}n_{s}^{2}\] Thus \((u_{s}n_{s}^{2})^{-1}b(v_{s})h_{s}^{n_{s}}\) is a relation in \(B_{\{s\}}\) and the claim follows.
2. \(u_{1}\neq 1\): Then we compute the following: \[\omega(n_{s}u_{s}n_{s},[u_{1},\ldots,u_{k}]) =\begin{cases}\omega_{B}(n_{s}^{2}b(u_{1}),[u_{2},\ldots,u_{k}])& \text{if }u_{s}\overline{u}_{1}=1\\ \left[\overline{u_{s}\overline{u}_{1}},\omega_{B}(b(u_{s}\overline{u}_{1})b( u_{1}),[u_{2},\ldots,u_{k}])\right]&\text{if }u_{s}\overline{u}_{1}\neq 1\\ \omega(\overline{u}_{s}n_{s}b(u_{s}),[u_{1},\ldots,u_{k}])&=\begin{cases} \omega_{B}(\overline{u}_{s}n_{s}^{2}h_{s}^{n_{s}},[u_{2},\ldots,u_{k}])&\text{ if }v_{s}u_{1}^{h_{s}^{-1}}=1\\ \left[\overline{u}_{s}\overline{v_{s}u_{1}^{h_{s}^{-1}}},\omega_{B}(b(v_{s}u_ {1}^{h_{s}^{-1}})h_{s}^{n_{s}},[u_{2},\ldots,u_{k}])\right]&\text{if }v_{s}u_{1}^{h_{s}^{-1}} \neq 1\end{cases}\] Note that in \(X_{s}\) we have \(n_{s}u_{s}\overline{u}_{1}n_{s}b(u_{1})=n_{s}u_{s}n_{s}u_{1}n_{s}=\overline{u }_{s}n_{s}b(u_{s})u_{1}n_{s}=\overline{u}_{s}n_{s}v_{s}u_{1}^{h_{s}^{-1}}n_{s} h_{s}^{n_{s}}\). If \(u_{s}\overline{u}_{1}=1\), then the left hand side is contained in \(B_{\{s\}}\). If \(v_{s}u_{1}^{h_{s}^{-1}}\neq 1\), then \(U_{-\alpha_{s}}\cap B_{\{s\}}\neq\{1\}\), which is a contradiction. Thus \(u_{s}\overline{u}_{1}=1\) implies \(v_{s}u_{1}^{h_{s}^{-1}}=1\). On the other hand if \(v_{s}u_{1}^{h_{s}^{-1}}=1\), then the right hand side is contained in \(B_{\{s\}}\). If \(u_{s}\overline{u}_{1}\neq 1\), then again \(U_{-\alpha_{s}}\cap B_{\{s\}}\neq\{1\}\), which is a contradiction. Thus \(v_{s}u_{1}^{h_{s}^{-1}}=1\) implies \(u_{s}\overline{u}_{1}=1\). Assume that \(u_{s}\overline{u}_{1}=1=v_{s}u_{1}^{h_{s}^{-1}}\). Then \(n_{s}^{2}b(u_{1})=\overline{u}_{s}n_{s}^{2}h_{s}^{n_{s}}\) in \(X_{s}\) and hence in \(B_{\{s\}}\). Thus we are done. Now we assume that \(u_{s}\overline{u}_{1}\neq 1\neq v_{s}u_{1}^{h_{s}^{-1}}\). Note that in \(X_{s}\) we have the following: \[\overline{u_{s}\overline{u}_{1}}n_{s}b(u_{s}\overline{u}_{1})b(u_{1}) =n_{s}u_{s}\overline{u}_{1}n_{s}b(u_{1})\] \[=n_{s}u_{s}n_{s}u_{1}n_{s}\] \[=\overline{u}_{s}n_{s}b(u_{s})u_{1}n_{s}\] \[=\overline{u}_{s}n_{s}v_{s}u_{1}^{h_{s}^{-1}}n_{s}s_{s}^{h_{s}}\] \[=\overline{u}_{s}v_{s}u_{1}^{h_{s}^{-1}}n_{s}b(v_{s}u_{1}^{h_{s}^{ -1}})h_{s}^{n_{s}}\] This implies \(\overline{u_{s}\overline{u}_{1}}=\overline{u}_{s}\overline{v_{s}}u_{1}^{h_{s}^ {-1}}\) and hence \(b(u_{s}\overline{u}_{1})b(u_{1})=b(v_{s}u_{1}^{h_{s}^{-1}})h_{s}^{n_{s}}\). Thus we are done.
**(4.9) Lemma**.: _For \(s,t\in S\) and \(1\neq h\in H_{t}\) we have \(n_{s}hn_{s}=n_{s}^{2}h^{n_{s}}\) in \(\operatorname{Aut}(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}})\)._
Proof.: Let \((u_{1},\ldots,u_{k})\) be a sequence of type \((s_{1},\ldots,s_{k})\). We distinguish the following cases:
1. \(\ell(ss_{1}\cdots s_{k})=k+1\): Then we have \(\omega(n_{s}hn_{s},[u_{1},\ldots,u_{k}])=\omega_{B}(n_{s}^{2}h^{n_{s}},[u_{1},\ldots,u_{k}])\).
2. \(\ell(ss_{1}\cdots s_{k})=k-1\): Again we can assume \(s=s_{1}\). We distinguish the cases \(u_{1}=1\) and \(u_{1}\neq 1\) and compute the following: \[\omega(n_{s}hn_{s},[1,u_{2}\ldots,u_{k}]) =[1,\omega_{B}(hn_{s}^{2},[u_{2},\ldots,u_{k}])]\] \[\omega(n_{s}^{2}h^{n_{s}},[1,u_{2},\ldots,u_{k}]) =[1,\omega_{B}((n_{s}^{2})^{n_{s}}h^{n_{s}^{2}},[u_{2},\ldots,u_{ k}])]\] \[\omega(n_{s}hn_{s},[u_{1},\ldots,u_{k}]) =[\overline{u}_{1}^{\overline{h}^{-1}},\omega_{B}(b(\overline{u }_{1}^{h^{-1}})h^{n_{s}}b(u_{1}),[u_{2},\ldots,u_{k}])]\] \[\omega(n_{s}^{2}h^{n_{s}},[u_{1},\ldots,u_{k}]) =[(u_{1}^{(h^{n_{s}})^{-1}})^{n_{s}^{-2}},\omega_{B}((n_{s}^{2})^ {n_{s}}h^{n_{s}^{2}},[u_{2},\ldots,u_{k}])]\] In the case \(u_{1}=1\) we have \(hn_{s}^{2}=n_{s}^{2}n_{s}^{-2}hn_{s}^{2}=n_{s}^{2}h^{n_{s}^{2}}_{s}\) and hence we are done. In the case \(u_{1}\neq 1\) we have \(\overline{u}_{1}^{\overline{h}^{-1}}=(u_{1}^{(h^{n_{s}})^{-1}})^{n_{s}^{-2}}\), as \(\overline{\overline{u}_{1}^{h^{-1}}}n_{s}B_{\{s\}}=(u_{1}^{(h^{n_{s}})^{-1}})^{ n_{s}^{-2}}n_{s}B_{\{s\}}\) and hence \(b(\overline{u}_{1}^{h^{-1}})h^{n_{s}}b(u_{1})=(n_{s}^{2})^{n_{s}}h^{n_{s}^{2}}\). The claim follows now from Theorem (4.3).
**(4.10) Lemma**.: _For \(s\neq t\in S\) and \(\alpha_{s}\neq\alpha\in\Phi_{+}^{\{s,t\}}\) we have \(n_{s}u_{\alpha}n_{s}=n_{s}^{2}u_{\alpha}^{n_{s}}\) in \(\operatorname{Aut}(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}})\)._
Proof.: Let \(\alpha_{s}\neq\alpha\in\Phi_{+}^{\{s,t\}}\). By Lemma (2.11) there exist \(g_{i}\in U_{t},h_{i}\in U_{s}\) such that \(u_{\alpha}=\prod_{i=1}^{n}g_{i}h_{i}\). Note that \(h_{1}\cdots h_{n}=1\) as in the proof of Theorem (4.3). We distinguish the following cases:
1. \(\ell(ss_{1}\cdots s_{k})=k+1\): We deduce the following \[\omega(n_{s}u_{\alpha}n_{s},[u_{1},\dots,u_{k}])=\omega(n_{s},[h_{1}\cdots h_ {n},\omega_{B}(b,[u_{1},\dots,u_{k}])])=\omega_{B}(n_{s}^{2}b,[u_{1},\dots,u_{k }])\] where \(b=\prod_{i=1}^{n}g_{i}^{n_{s}}[g_{i},h_{i}\cdots h_{n}]^{n_{s}}\). Note that \(b=\big{(}(h_{1}\cdots h_{n})^{-1}\prod_{i=1}^{n}g_{i}h_{i}\big{)}^{n_{s}}=u_{ \alpha}^{n_{s}}\) and hence the claim follows by Lemma (4.5).
2. \(\ell(ss_{1}\cdots s_{k})=k-1\). W.l.o.g. we assume \(s=s_{1}\). Let \(g_{i}^{\prime}\in U_{t},h_{i}^{\prime}\in U_{s}\) such that \(u_{\alpha}^{n_{s}}=\prod_{i=1}^{m}g_{i}^{\prime}h_{i}^{\prime}\). Again, \(h_{1}^{\prime}\cdots h_{m}^{\prime}=1\). We deduce the following: 1. If \(u_{1}=1\), we have: \[\omega(n_{s}u_{\alpha}n_{s},[u_{1},\dots,u_{k}]) =[1,\omega_{B}(u_{\alpha}n_{s}^{2},[u_{2},\dots,u_{k}])]\] \[\omega(n_{s}^{2}u_{\alpha},[u_{1},\dots,u_{k}]) =\omega(n_{s}^{2},[h_{1}^{\prime}\cdots h_{m}^{\prime},\omega_{B }(b,[u_{2},\dots,u_{k}])])\] \[=[1,\omega_{B}((n_{s}^{2})^{n_{s}}b,[u_{2},\dots,u_{k}])]\] where \(b=\prod_{i=1}^{m}(g_{i}^{\prime})^{n_{s}}[g_{i}^{\prime},h_{i}^{\prime}\cdots h _{m}^{\prime}]^{n_{s}}\). Note that \(b=((h_{1}^{\prime}\cdots h_{m}^{\prime})^{-1}\prod_{i=1}^{m}g_{i}^{\prime}h_{ i}^{\prime})^{n_{s}}=(u_{\alpha}^{n_{s}})^{n_{s}}=n_{s}^{-2}u_{\alpha}n_{s}^{2}\). Thus \((n_{s}^{2})^{n_{s}}b=u_{\alpha}n_{s}^{2}\) in \(B_{\{s,t\}}\) and the claim follows from Lemma (4.5).
3. If \(u_{1}\neq 1\), we compute the following: \[\omega(n_{s}u_{\alpha}n_{s},[u_{1},\dots,u_{k}]) =\omega(n_{s}u_{\alpha},[\overline{u}_{1},\omega_{B}(b(u_{1}),[u_ {2},\dots,u_{k}])])\] \[=\omega(n_{s},[h_{1}\cdots h_{n}\overline{u}_{1},\omega_{B}(b \cdot b(u_{1}),[u_{2},\dots,u_{k}])])\] \[=[\overline{u}_{1},\omega_{B}(b(\overline{u}_{1})\cdot b\cdot b(u _{1}),[u_{2},\dots,u_{k}])]\] \[\omega(n_{s}^{2}u_{s\alpha},[u_{1},\dots,u_{k}]) =\omega(n_{s}^{2},[h_{1}^{\prime}\cdots h_{m}^{\prime}u_{1},\omega _{B}(b^{\prime},[u_{2},\dots,u_{k}])])\] \[=[u_{1}^{n_{s}^{-2}},\omega_{B}((n_{s}^{2})^{n_{s}}b^{\prime},[u_ {2},\dots,u_{k}])]\] where \(b=\prod_{i=1}^{n}g_{i}^{n_{s}}[g_{i},h_{i}\cdots h_{m}u_{1}]^{n_{s}}\) and \(b^{\prime}=\prod_{i=1}^{m}g_{i}^{\prime n_{s}}[g_{i}^{\prime},h_{i}^{\prime} \cdots h_{m}^{\prime}u_{1}]^{n_{s}}\). As before, we have \(\overline{u}_{1}=u_{1}^{n_{s}^{-2}}\) and \(b(\overline{u}_{1})\cdot b\cdot b(u_{1})=(n_{s}^{2})^{n_{s}}b^{\prime}\) in \(B_{\{s\}}\). Now the claim follows from Lemma (4.5).
**(4.11) Lemma**.: _For all \(s\neq t\in S\) we have \(p(n_{s},n_{t})=p(n_{t},n_{s})\) in \(\operatorname{Aut}(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}})\)._
Proof.: At first we transform some elements by only using relations which are already known to hold in \(\operatorname{Aut}(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}})\), i.e. using Lemmas (4.8) - (4.10). We compute the following:
1. \(u=1\) and \(n_{1}=n_{t}\): Then \[n_{s}^{-1}n_{t}^{-1}n_{s}n_{t}n_{t} =n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}(n_{t}^{-2})^{n_{s}}n_{t}^{2}\] \[n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}n_{t} =n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}^{-1}n_{s}n_{t}n_{s}(n_{s}^{-2})^{n_ {t}n_{s}}n_{t}^{2}\] \[n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}^{-1}n_{s}n_{t}n_{s}n_{t}n_{t} =n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}n_{s}(n_ {t}^{-2})^{n_{s}n_{t}n_{s}}n_{t}^{2}\]
2. \(U_{t}\ni u_{t}\neq 1\) and \(n_{1}=n_{t}\): Then \[n_{s}^{-1}n_{t}^{-1}n_{s}n_{t}u_{t}n_{t} =n_{s}^{-1}n_{t}^{-1}n_{s}\overline{u}_{t}n_{s}^{-1}n_{s}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}n_{t}^{-2}u_{t}^{\prime}n_{s}n_{t}b_{t}\]
\[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}n_{s}^{-1}n_{t}^{-1}n_{s}n _{t}n_{s}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}n_{s}^{-1}n_{t}n_{s}n _{t}n_{s}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}n_{s}n_{s}^{-1}n_{t} ^{-1}n_{s}n_{t}n_{s}n_{t}b_{t}\] \[=u_{t}^{\prime\prime\prime}n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}b_{t }^{\prime\prime}n_{s}^{-1}n_{t}^{-1}n_{s}n_{t}b_{t}\] \[n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}u_{t}n_{t}n_{t} =n_{s}^{-1}n_{t}^{-1}n_{t}n_{s}\overline{u_{t}}n_{s}^{-1}n_{t}n_{s }n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}n_{s}^{-2}u_{t}^{\prime}n_{s}n_{s}^{-1} n_{t}n_{s}n_{t}b_{t}\qquad\qquad(\text{note: }u_{t}^{\prime}\in U_{s})\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}u_{t}^{\prime\prime}n_{s}n_{s}^{-2}n_{s }^{-1}n_{t}n_{s}n_{t}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}\overline{u_{t}^{\prime\prime}}n_{t}n_{s}n_{ s}^{-1}n_{t}^{-1}n_{s}b_{s}n_{s}^{-2}n_{s}^{-1}n_{t}n_{s}n_{t}b_{t}\] \[=u_{t}^{\prime\prime\prime}n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}^{-1}n_{ s}n_{t}n_{s}n_{s}^{-1}n_{t}^{-1}n_{s}^{-2}n_{t}n_{s}n_{s}^{-1}n_{t}^{-1}n_{s}n_{t}n _{s}n_{t}b_{t}\] \[=u_{t}^{\prime\prime\prime}n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}^{-1}n_{ s}n_{t}n_{s}^{\prime}n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{s}n_{t}b_{t}\] \[(n_{s}^{-1}n_{t}^{-1})^{2}(n_{s}n_{t})^{2}u_{t}n_{t} =(n_{s}^{-1}n_{t}^{-1})^{2}n_{s}n_{t}n_{s}\overline{u_{t}}n_{s}^{ -1}n_{t}^{-1}n_{s}^{-1}n_{s}n_{t}n_{s}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}n_{t}^{-2}u_{t}^{\prime}n_{s} n_{t}n_{s}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}n_{t}u_{t}^{\prime\prime}n_{t}n_{t }n_{t}^{-2}n_{t}^{-1}n_{s}n_{t}n_{s}n_{t}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}\overline{u_{t}^{\prime\prime}}n_{ t}n_{s}n_{s}n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}b_{t}^{\prime}n_{t}^{-2}n_{s}n_{t}n_{s}(n_{s }^{-1}n_{t}^{-1})^{2}(n_{s}n_{t})^{2}b_{t}\] \[=n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}\overline{u_{t}^{\prime\prime}}n_{ s}n_{t}n_{s}n_{s}^{-1}n_{t}^{-1}n_{s}^{-1}(n_{t}n_{s})^{2}b_{t}^{\prime\prime}(n_{s }^{-1}n_{t}^{-1})^{2}(n_{s}n_{t})^{2}b_{t}\] \[=u_{t}^{\prime\prime\prime}n_{t}n_{t}^{-1}n_{s}^{-1}n_{t}^{-1}n_{ s}^{-1}(n_{t}n_{s})^{2}b_{t}^{\prime\prime}(n_{s}^{-1}n_{t}^{-1})^{2}(n_{s}n_{t})^{2}b_{t}\] \[=u_{t}^{\prime\prime\prime}n_{t}(n_{t}^{-1}n_{s}^{-1})^{2}(n_{t}n _{s})^{2}b_{t}^{\prime\prime}(n_{s}^{-1}n_{t}^{-1})^{2}(n_{s}n_{t})^{2}b_{t}\]
Now we prove the claim. Let \(s\neq t\in S,s_{1},\ldots,s_{k}\in S,w\in\langle s,t\rangle\) be such that \(s_{1}\cdots s_{k},ws_{1}\cdots s_{k}\) are reduced and let \(v_{i}\in U_{s_{i}},u_{i}\in U_{s}\cup U_{t}\). We show by induction on \(\ell(w)\) that \(p(n_{s},n_{t})^{-1}p(n_{t},n_{s})\) fixes the equivalence class \([u_{1},\ldots,u_{n},v_{1},\ldots,v_{k}]\), where \((u_{1},\ldots,u_{n},v_{1},\ldots,v_{k})\) is of type \((w,s_{1},\ldots,s_{k})\). For \(\ell(w)=0\) we have \(\ell(p(s,t)s_{1}\cdots s_{k})=m_{st}+k\). Since \((1,\ldots,1)\) of type \((p(s,t))\) is equivalent to \((1,\ldots,1)\) of type \((p(t,s))\), it follows that
\[\omega(p(n_{s},n_{t}),[v_{1},\ldots,v_{k}])=[1,\ldots,1,v_{1},\ldots,v_{k}]= \omega(p(n_{t},n_{s}),[v_{1},\ldots,v_{k}])\]
and hence \(\omega(p(n_{s},n_{t})^{-1}p(n_{t},n_{s}),[v_{1},\ldots,v_{k}])=[v_{1},\ldots,v_{k}]\). Now let \(\ell(w)>0\). Note that it suffices to show that one of \(p(n_{s},n_{t})^{-1}p(n_{t},n_{s})\) and \(p(n_{t},n_{s})^{-1}p(n_{s},n_{t})\) acts trivially, as the product of these two elements acts trivially. Using the previous computations and induction, we infer:
\[\omega(p(n_{s},n_{t})^{-1}p(n_{t},n_{s}),[(u_{1},\ldots,u_{n},v_{1 },\ldots,u_{k})]) =\omega(p(n_{s},n_{t})^{-1}p(n_{t},n_{s})u_{1}n_{1},[(u_{2}, \ldots,u_{n},v_{1},\ldots,u_{k})])\] \[=[u_{1},\ldots,u_{n},v_{1},\ldots,v_{k}]\]
For example we consider the case \(1\neq u_{1}\in U_{s},n_{1}=n_{s}\) and \(m_{st}=4\) explicitly. We have the following:
\[\omega((n_{s}n_{t})^{-2}(n_{t}n_{s})^{2},[u_{1},\ldots,u_{k}]) =\omega((n_{s}n_{t})^{-2}(n_{t}n_{s})^{2}u_{1}n_{s},[u_{2}, \ldots,u_{k}])\] \[=\omega(u_{s}^{\prime\prime\prime}n_{s}(n_{s}^{-1}n_{t}^{-1})^{2}(n _{s}n_{t})^{2}b_{s}^{\prime\prime}(n_{t}^{-1}n_{s}^{-1})^{2}(n_{t}n_{s})^{2}b_{s}, [u_{2},\ldots,u_{k}])\] \[=\omega(u_{s}^{\prime\prime\prime}n_{s}b_{s}^{\prime\prime}b_{s}, [u_{2},\ldots,u_{k}])\]
As before, we have \(u_{s}^{\prime\prime\prime}=u_{1}\) and \(b_{s}^{\prime\prime}b_{s}=1\) in \(X_{s,t}\). This completes the proof.
**(4.12) Theorem**.: _The mapping \(G\times\mathbf{C}_{\mathcal{B}_{\mathcal{F}}}\to\mathbf{C}_{\mathcal{B}_{\mathcal{F}}},(g,[(u_{1},\ldots,u_{k})])\mapsto\omega(g,[u_{1},\ldots,u_{k}])\) defines an action of \(G\) on the building \(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}}\)._
## 5 Main results
_(5.1) Remark_.: Let \(\mathcal{F}:=((\Delta_{J})_{J\in E(S)},(c_{J})_{J\in E(S)},(\varphi_{rst})_{\{r,s\},\{s,t\}\in E(S)})\) be an irreducible \(3\)-spherical Moufang foundation such that every panel contains at least \(6\) chambers and such that each residue \(\mathcal{F}_{J}\) of rank \(3\) is integrable. Then \(\mathcal{F}\) satisfies Condition (lco) and for each \(J\subseteq S\) with \(|J|=3\) the \(J\)-residue \(\mathcal{F}_{J}\) satisfies the Conditions (lco) and (lsco).
**(5.2) Theorem**.: _Let \(\mathcal{F}\) be an irreducible, \(3\)-spherical Moufang foundation of rank at least \(3\) such that every panel contains at least \(6\) chambers. Assume that for each irreducible \(J\subseteq S\) with \(|J|=3\) the \(J\)-residue \(\mathcal{F}_{J}\) is integrable. Then \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system and \(\mathcal{F}\cong\mathcal{F}(\Delta(\mathcal{D}_{\mathcal{F}}),B_{+})\). In particular, \(\mathcal{F}\) is integrable._
Proof.: By Theorem (4.12) we have an action of \(G\) on \(\mathbf{C}_{\mathcal{B}_{\mathcal{F}}}\). Clearly, \(U_{s}\to G\) is injective. Let \(1\neq u^{\prime}\in U_{-s}\). Then \(1\neq u:=n_{s}^{-1}u^{\prime}n_{s}\in U_{s}\) and \(\omega(n_{s}un_{s}^{-1},[()])=\omega(n_{s},[(u)])=[\overline{u}]\). Since \(\overline{u}=1\) if and only if \(u=1\), the element \(u^{\prime}\) acts non-trivially on the building and hence \(U_{-s}\to G\) is injective as well. Since for each \(\alpha\in\Phi_{+}\) we have \(U_{\alpha}\leq\langle U_{\alpha_{s}}\mid s\in S\rangle\) and \(U_{\alpha_{s}}\) acts trivially on \([()]\), but \(U_{-\alpha_{s}}\) does not fix \([()]\), \(\mathcal{D}_{\mathcal{F}}\) satisfies (RGD3). Thus \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system of type \((W,S)\) and the claim follows from Corollary (3.13).
**(5.3) Corollary**.: _Let \(\mathcal{F}\) be an irreducible, \(3\)-spherical Moufang foundation of rank at least \(3\) such that every panel contains at least \(6\) chambers. Then the following are equivalent:_
1. \(\mathcal{F}\) _is integrable._
2. _For each irreducible_ \(J\subseteq S\) _with_ \(|J|=3\) _the_ \(J\)_-residue_ \(\mathcal{F}_{J}\) _is integrable._
Proof.: One implication follows directly by restricting the twin building to the twin building \((R_{J}(c_{+}),R_{J}(c_{-}),\delta_{*})\). The other follows from the previous theorem.
**(5.4) Theorem**.: _Let \(\Delta\) be a thick irreducible \(3\)-spherical twin building of rank at least \(3\). Then \(\Delta\) is known._
Proof.: At first we assume that every panel of \(\Delta\) contains at least \(6\) chambers. Then \(\Delta\) satisfies the Conditions (lco) and (lsco). Let \(c\) be a chamber of \(\Delta\) and let \(\mathcal{F}:=\mathcal{F}(\Delta,c)\). As in the previous corollary, each rank \(3\)-residue of \(\mathcal{F}\) is integrable. Theorem (5.2) implies that \(\mathcal{D}_{\mathcal{F}}\) is an RGD-system and \(\mathcal{F}\cong\mathcal{F}(\Delta(\mathcal{D}_{\mathcal{F}}),B_{+})\). Using Proposition (3.4), we deduce \(\Delta\cong\Delta(\mathcal{D}_{\mathcal{F}})\).
Now we assume that there exists a panel containing at most \(5\) chambers. Then [24, (34.5)] implies that every panel contains only finitely many chambers. But then \(\Delta\) is known by the Main result of [11]. We note that the Main result of [11] uses the fact that no rank \(2\) residue is associated with \(B_{2}(2)\) in order to use the extension theorem of [15]. But as shown in [4, Corollary 6.4], the extension theorem holds for arbitrary thick \(3\)-spherical twin buildings. Thus every irreducible locally finite \(3\)-spherical twin building is known by [4] and [11].
|
2304.10072 | Universal Slow Dynamics of Chemical Reaction Networks | Understanding the emergent behavior of chemical reaction networks (CRNs) is a
fundamental aspect of biology and its origin from inanimate matter. A closed
CRN monotonically tends to thermal equilibrium, but when it is opened to
external reservoirs, a range of behaviors is possible, including transition to
a new equilibrium state, a non-equilibrium state, or indefinite growth. This
study shows that slowly driven CRNs are governed by the conserved quantities of
the closed system, which are generally far fewer in number than the species.
Considering both deterministic and stochastic dynamics, a universal slow
dynamics equation is derived with singular perturbation methods, and is shown
to be thermodynamically consistent. The slow dynamics is highly robust against
microscopic details of the network, which may be unknown in practical
situations. In particular, non-equilibrium states of realistic large CRNs can
be sought without knowledge of bulk reaction rates. The framework is
successfully tested against a suite of networks of increasing complexity and
argued to be relevant in the treatment of open CRNs as chemical machines. | Masanari Shimada, Pegah Behrad, Eric De Giuli | 2023-04-20T03:57:27Z | http://arxiv.org/abs/2304.10072v3 | # Universal Slow Dynamics of Chemical Reaction Networks
###### Abstract
Understanding the emergent behavior of chemical reaction networks (CRNs) is a fundamental aspect of biology and its origin from inanimate matter. A closed CRN monotonically tends to thermal equilibrium, but when it is opened to external reservoirs, a range of behaviors is possible, including transition to a new equilibrium state, a non-equilibrium state, or indefinite growth. This study shows that slowly driven CRNs are governed by the conserved quantities of the closed system, which are generally far fewer in number than the species. Considering both deterministic and stochastic dynamics, a universal slow dynamics equation is derived with singular perturbation methods. The slow dynamics is highly robust against microscopic details of the network, which may be unknown in practical situations. The framework is successfully tested against a suite of networks of increasing complexity and argued to be relevant in the treatment of open CRNs as chemical machines.
_Keywords_: Hydrodynamics \(|\) Reaction networks \(|\) Stoichiometry \(|\)
The goal of theory for complex systems is often to reduce the number of degrees of freedom from a large intractable number down to something manageable, whose dynamics can then be understood intuitively. Ideally, such a reduction should be principled, mathematically well-controlled, and lead to a description in terms of universal effective variables. This challenge is especially acute in the biosciences where dizzying complexity is the norm. We address it for chemical reaction networks (CRN), which provide the substrate for biochemistry and hence biology.
Although model reduction for CRNs has a long history [1, 2, 3], most approaches work at the level of the rate equations, ignoring stochastic effects known to be important in biochemistry. Moreover, existing theories employ different reductions for each particular CRN, thus not leading to any universal description. This may be sufficient for detailed analysis of a particular system, but makes cross-system analysis difficult and hinders unification of diverse phenomena. Here we take a different route, grounded in hydrodynamics, a branch of condensed matter physics [4].
The starting point for hydrodynamics is the observation that molecular timescales, on the order of picoseconds, are minuscule compared to macroscopic forcing timescales. Thus most degrees of freedom relax very rapidly to a state of local thermodynamic equilibrium. However, over macroscopic distances, forcing conditions can differ, thus leading to different local equilibria. In such a _hydrodynamic limit_ in which forcing is slow in time and gradual in space, only a subset of degrees of freedom, dictated by symmetries and conservation laws, is important. Indeed, conserved quantities are precisely those whose densities need to be tracked, while the other degrees of freedom relax quickly and can be neglected 2. Conventional continuum theories for fluids, elastic solids, liquid crystals, and others are all of this type, differing only in the assumed symmetries [4].
Footnote 2: If a continuous symmetry is broken, then one also needs to track the density of its associated elastic variable, like the displacement field in an elastic solid. This phenomenon will play no role here.
This vantage is natural for chemical reaction networks, since individual reactions conserve the number of each element, leading to a large number of conservation laws. However, to our knowledge, in previous work these were not exploited to obtain an effective hydrodynamic theory. Here we consider slowly driven well-mixed physical CRNs, and show that they are governed by conservation of elements, similar to hydrodynamic theories. We derive a universal slow dynamics equation, (25) below, that can be applied to generic slowly-driven CRNs. For a CRN with \(N\) species and \(E\) elements, the reduced theory generally involves only \(E\) variables. We work at the large-deviations level of the particle number distribution, thus incorporating the leading stochastic effects.
This article is organized as follows. First, we define our CRN, emphasizing the role of microscopic reversibility. Then we analyze the rate equation in the slowly-driven setting, showing that a naive perturbation expansion breaks down. This is cured by a singular perturbation theory, which leads to the slow dynamics equation. We then extend the theory to include stochastic effects, and test our theory with numerical simulations, showing its broad utility. Finally we show how the theory can be extended to initial states that are far from equilibrium.
Species are indexed with \(i,j,\ldots\) while reactions are indexed with \(\alpha,\beta,\ldots\). We use vector notation whenever possible. For example, stoichiometric coefficients \(p_{i\alpha}\) and \(q_{i\alpha}\) are also written as \(\vec{p}_{\alpha}\) and \(\vec{q}_{\alpha}\). All contractions are explicitly indicated by dots. In CRNs, many functions appear that act component-wise on different species. We write \([\vec{f}(\vec{n})\vec{g}(\vec{n})\cdots]\) for the vector in species space whose components are \(f_{j}(\vec{n})g_{j}(\vec{n})\cdots\). For example, \([\vec{n}^{eq}e^{\vec{c}}]\) has components \(n_{j}^{eq}e^{c_{j}}\), etc. When such expressions are considered as diagonal matrices, we double the brackets, i.e. \([[\vec{f}]]\) is the matrix with
elements \(\delta_{ij}f_{i}\). To sum or multiply over all species we write \(\sum[\vec{f}]\) and \(\prod[\vec{f}]\), respectively. We also apply this notation to component-wise vectors over reactions.
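Since this bracket notation is exactly elementwise array arithmetic, a minimal numpy illustration may help fix the conventions (all numerical values here are arbitrary toy data):

```python
import numpy as np

# Toy data for N = 3 species: mole numbers n and equilibrium values n_eq.
n = np.array([2.0, 1.0, 4.0])
n_eq = np.array([1.0, 2.0, 2.0])
p_alpha = np.array([1, 0, 2])   # reactant coefficients of a single reaction

ratio = n / n_eq                # [n / n_eq] : componentwise vector over species
F = np.diag(ratio)              # [[n / n_eq]] : the same data as a diagonal matrix
total = np.sum(ratio)           # sum[ n / n_eq ]
monomial = np.prod(ratio ** p_alpha)   # prod[ (n/n_eq)^{p_alpha} ]
print(ratio, total, monomial)   # [2.  0.5 2. ] 4.5 8.0
```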
### Slowly driven chemical reaction networks
We define a physical CRN as follows. We have \(N\) species \(X_{i}\), interacting with \(M\) reactions \(\alpha\), split into the core and the boundary interactions. Write a general core reaction as
\[\sum_{i}p_{i\alpha}X_{i}\rightleftharpoons\sum_{i}q_{i\alpha}X_{i}, \tag{1}\]
where \(p_{i\alpha}\) and \(q_{i\alpha}\) are the stoichiometric coefficients for the molecular species as reactant and product, respectively.
The numbers of moles of all species are collected in a vector \(\vec{n}\). Quantum mechanics requires that if a reaction \(\alpha\) occurs with rate \(k_{\alpha}^{+}\), then its corresponding backward reaction must occur, with rate \(k_{\alpha}^{-}\). As a condition for the existence of thermal equilibrium, these rates are not independent but constrained in ratio to satisfy
\[\frac{k_{\alpha}^{+}(\vec{n})}{k_{\alpha}^{-}(\vec{n})}=e^{-\frac{1}{RT}( \Delta G)_{\alpha}}, \tag{2}\]
where \((\Delta G)_{\alpha}=(\vec{q}_{\alpha}-\vec{p}_{\alpha})\cdot\vec{G}(\vec{n})\) is the difference in molar Gibbs free energy between products and reactants. (2) is known as local detailed balance [5, 6, 7, 8], or microscopic reversibility [9]. Importantly, it does not require or imply that the CRN be in thermal equilibrium or close to it. In a physical CRN, we require that all reactions in the core of the system satisfy (2). For the boundary interactions, which force the system, we consider intake and degradation pairs
\[X_{i}^{\mathscr{C}}\rightleftharpoons X_{i}, \tag{3}\]
with rates \(\epsilon r_{i}^{+}\) and \(\epsilon r_{i}^{-}\), respectively, where \(\epsilon\) is a dimensionless constant. The \(\mathscr{C}\) decoration denotes a chemostat.
The CRN is slowly driven when \(\epsilon\ll 1\), meaning that all reservoir interactions are slow compared to internal reactions. As a consequence, after an initial relaxation the system will be close to some thermal equilibrium state. However, over long timescales it can have a non-trivial dynamics near evolving equilibria, just as a fluid that is stirred or poured will transition through a series of near-equilibria, described in that case by the Navier-Stokes equations. Our main result is an evolution equation for the slow degrees of freedom, which, as we show, correspond to conserved quantities, in precise analogy with hydrodynamics. For a generic large CRN, the conserved quantities are the number of each element (H,C,O,..) and number of free electrons, if ions are present.
The local detailed balance condition (2) can also be derived microscopically. Indeed, modern transition rate theory [10] predicts from first principles
\[k_{\alpha}^{+}(\vec{n})=ke^{-\frac{1}{RT}(\delta G)_{\alpha}(\vec{n})}, \tag{4}\]
where \((\delta G)_{\alpha}(\vec{n})=G_{A_{\alpha}}-\sum_{i}p_{i\alpha}G_{i}(\vec{n})\) is the difference in molar Gibbs free energy between the activated complex and the reactants, and \(k=\Omega c^{\circ}k_{0}\) in terms of \(k_{0}=1/(2\pi\beta\hbar)\approx 6\times 10^{12}\) Hz at 300K. Here \(\Omega\) is the system volume, assumed to be dominated by the solvent, and \(c^{\circ}\) is standard
concentration 1 mol/L. (4) holds for both forward and backward reactions, _mutatis mutandis_, so that (2) is satisfied.
For ideal dilute solutions, the free energies (chemical potentials) for each species take the form
\[G_{i}(\vec{n})=\mu_{i}^{\circ}+RT\log(n_{i}/c^{\circ}\Omega)=RT\log(n_{i}/n_{i}^ {eq}) \tag{5}\]
where \(\mu_{i}^{\circ}\) is the chemical potential in standard conditions, and \(n_{i}^{eq}=\Omega c^{\circ}e^{-\mu_{i}^{\circ}/RT}\). The forward reaction rates are thus proportional to \(\prod_{i}(n_{i}/c^{\circ}\Omega)^{p_{i\alpha}}\), which is the law of mass action. The flux of reaction \(\alpha\) is
\[J_{\alpha}(\vec{n}) = k_{\alpha}^{+}(\vec{n})-k_{\alpha}^{-}(\vec{n}) \tag{6}\] \[= ke^{-G_{A_{\alpha}}/RT}\left(\prod\left[(\vec{n}/\vec{n}^{eq})^{ \vec{p}_{\alpha}}\right]-\prod\left[(\vec{n}/\vec{n}^{eq})^{\vec{q}_{\alpha}} \right]\right)\]
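In code, the mass-action fluxes (6) take only a few lines. The following sketch is our illustration, not code from the paper: the stoichiometric coefficients are stored as \(N\times M\) integer arrays, and the prefactors \(ke^{-G_{A_{\alpha}}/RT}\) are lumped into a single constant per reaction.

```python
import numpy as np

def core_fluxes(n, n_eq, P, Q, k_pref):
    """Net core fluxes J_alpha of Eq. (6).

    n, n_eq : (N,) current and equilibrium mole numbers
    P, Q    : (N, M) stoichiometric coefficients of reactants and products
    k_pref  : (M,) prefactors k * exp(-G_A/RT), one per reaction
    """
    x = n / n_eq                              # activities n_j / n_j^eq
    fwd = np.prod(x[:, None] ** P, axis=0)    # prod_j x_j^{p_{j alpha}}
    bwd = np.prod(x[:, None] ** Q, axis=0)    # prod_j x_j^{q_{j alpha}}
    return k_pref * (fwd - bwd)               # J_alpha = k_alpha^+ - k_alpha^-
```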
We can write the rates of reservoir interactions in the form
\[r_{i}^{+}=r_{i}z_{i}^{\mathscr{C}},\quad r_{i}^{-}(n_{i})=r_{i}n_{i}/\Omega, \tag{7}\]
where \(z_{i}^{\mathscr{C}}\) is the molar concentration of species \(i\) in its reservoir; in general this can be time-dependent. The numbers \(r_{i}\) have units of rate per mole times volume, while the factors \(z_{i}^{\mathscr{C}}\) and \(n_{i}\) account for the law of mass action.
Note that precisely distinguishing energy from entropy, and hence unambiguously identifying heat flows, requires a microscopic Hamiltonian [11]. However, by comparing (2) to our rate parametrization for reservoir interactions, we can write
\[\frac{r_{i}^{+}}{r_{i}^{-}(n_{i})}=\frac{\epsilon r_{i}z_{i}^{\mathscr{C}}}{ \epsilon r_{i}n_{i}/\Omega}=e^{-\frac{1}{RT}(G_{i}(\vec{n})-W_{i})}, \tag{8}\]
where \(W_{i}\) is the work done by the reservoir in one intake reaction. This identifies \(W_{i}=\mu_{i}^{\circ}+RT\log\bigl{(}z_{i}^{\mathscr{C}}/c^{\circ}\bigr{)}\), which is just the chemical potential of species \(i\) at the reservoir concentration, as expected.
To quantify how far a CRN is from equilibrium, we measure the entropy production rate
\[T\dot{S}=TR\sum\left[(\vec{k}^{+}-\vec{k}^{-})\log\frac{\vec{k}^{+}}{\vec{k} ^{-}}\right]\geq 0 \tag{9}\]
Define the N by M stoichiometric matrix \(\mathbb{S}=[\mathbb{S}^{0}\ \mathbb{S}^{\mathscr{C}}]\) where \(\mathbb{S}^{0}_{i\alpha}=q_{i\alpha}-p_{i\alpha}\) are the stoichiometric coefficients for the core reactions, and \(\mathbb{S}^{\mathscr{C}}_{i\alpha}=+1\) when \(\alpha\) corresponds to a reservoir of species \(i\), and \(0\) otherwise. Using local detailed balance, the entropy production can be rewritten as
\[T\dot{S} = -\sum\left[(\vec{k}^{+}-\vec{k}^{-})\left(\mathbb{S}^{T}\cdot \vec{G}(\vec{n})-(\mathbb{S}^{\mathscr{C}})^{T}\cdot\vec{W}\right)\right] \tag{10}\] \[= -\vec{G}(\vec{n})\cdot\mathbb{S}\cdot(\vec{k}^{+}-\vec{k}^{-})+ \vec{W}\cdot(\vec{r}^{+}-\vec{r}^{-}),\]
which will be useful below.
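As a numerical sanity check on this rewriting (our illustration, not from the paper), one can verify that (9) and (10) agree at an arbitrary state. The snippet below uses the ABC model defined in Eqs. (35)-(39) as a test case, with hypothetical parameters and \(RT=\Omega=k=1\):

```python
import numpy as np

rng = np.random.default_rng(0)

n_eq = np.array([1.0, 2.0, 3.0])
P = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])   # reactants of A<->B, A+B<->2B, B<->C
Q = np.array([[0, 0, 0], [1, 2, 0], [0, 0, 1]])   # products
res = np.array([0, 2])                            # chemostatted species (A and C)
r, zC, eps = np.array([1.0, 1.0]), np.array([2.0, 4.0]), 0.1

n = rng.uniform(0.5, 3.0, size=3)                 # an arbitrary non-equilibrium state
x = n / n_eq
kp = np.prod(x[:, None] ** P, axis=0)             # forward core rates
km = np.prod(x[:, None] ** Q, axis=0)             # backward core rates
rp, rm = eps * r * zC, eps * r * n[res]           # reservoir intake / degradation rates

G = np.log(x)                                     # G_i / RT, Eq. (5)
W = np.log(zC / n_eq[res])                        # W_i / RT, identified below Eq. (8)

TS_def = np.sum((kp - km) * np.log(kp / km)) + np.sum((rp - rm) * np.log(rp / rm))
TS_alt = -G @ ((Q - P) @ (kp - km)) - G[res] @ (rp - rm) + W @ (rp - rm)
assert np.isclose(TS_def, TS_alt)                 # Eq. (9) equals Eq. (10)
```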
### Deterministic analysis
To illustrate our approach, we first consider the rate equations for our model; later we will generalize our results to include stochastic effects. The rate equations are
\[\partial_{t}\vec{n}=\mathbb{S}\cdot\vec{J}(\vec{n}) \tag{11}\]
where \(\vec{J}\) is the vector of reaction fluxes. Separating the reactions into core and boundary, this becomes
\[\partial_{t}\vec{n}=\mathbb{S}^{0}\cdot\vec{J}^{\,0}(\vec{n})+\epsilon\,[\vec{r}\,(\vec{z}^{\mathscr{C}}-\vec{n}/\Omega)], \tag{12}\]
where \(\vec{J}^{\,0}(\vec{n})\) is the reaction flux vector for core reactions, and where \(r_{j}=0\) if there is no reservoir for species \(j\). (12) suggests a perturbative solution in \(\epsilon:\vec{n}=\vec{n}^{0}+\epsilon\vec{n}^{1}+\ldots\). At leading order \(\partial_{t}\vec{n}^{0}=\mathbb{S}^{0}\cdot\vec{J}^{0}(\vec{n}^{0})\), which describes a closed system. The system monotonically tends to thermal equilibrium, described by \(\vec{J}^{0}(\vec{n}^{0})=0\). The general steady-state solution is
\[\vec{n}^{0}=[\vec{n}^{eq}e^{\vec{c}}]=\Omega c^{\circ}[e^{-\vec{\mu}^{\circ}/ RT}e^{\vec{c}}] \tag{13}\]
where we must have \((\mathbb{S}^{0})^{T}\cdot\vec{c}=0\). Such vectors \(\vec{c}\) are the _conserved quantities_ of the closed CRN. To see this more explicitly, let \(\vec{Y}=(e^{-},H,C,O,\ldots)\) be a vector of elements that appear in the CRN, and write each species as an abstract sum of elements
\[X_{j}=\sum_{e}\zeta_{je}Y_{e}, \tag{14}\]
defining the _atomic matrix_\(\zeta\). Let there be \(E\) elements so that \(\zeta\) is an N by E matrix. The condition for conservation of element \(e\) at reaction \(\alpha\) is
\[0=\sum_{j}q_{j\alpha}\zeta_{je}-\sum_{j}p_{j\alpha}\zeta_{je}=\left((\mathbb{ S}^{0})^{T}\cdot\zeta\right)_{\alpha e}, \tag{15}\]
showing that \(\vec{\zeta}_{e}\) is a conserved quantity. If \(\zeta\) is a basis for this space, then we can write
\[\vec{c}=\zeta\cdot\vec{\eta}, \tag{16}\]
where \(\vec{\eta}\) is a vector in element space. Comparing with (13) we see that nonzero \(\vec{c}\) is equivalent to shifting chemical potentials by
\[\vec{\mu}^{\circ}\rightarrow\vec{\mu}^{\circ}-RT\zeta\cdot\vec{\eta}. \tag{17}\]
Therefore \(RT\vec{\eta}\) corresponds to a shift in the chemical potential of elements. (In our rate parametrization a full transformation would also require shifting the activation energies by \(G_{A_{\alpha}}\rightarrow G_{A_{\alpha}}-RT\vec{p}_{\alpha}\cdot\zeta\cdot\vec{\eta}\).) It acts as a 'tilt' in the free energy landscape, which fixes the number of each element as
\[\vec{y}=\zeta^{T}\cdot\vec{n}=\zeta^{T}\cdot[\vec{n}^{eq}e^{\zeta\cdot\vec{ \eta}}]+\mathscr{O}(\epsilon) \tag{18}\]
Although this gives \(E\) equations in \(E\) unknowns, they cannot in general be solved for \(\vec{\eta}\) in closed form.
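They are, however, straightforward to solve numerically: the Jacobian of the map \(\vec{\eta}\mapsto\zeta^{T}\cdot[\vec{n}^{eq}e^{\zeta\cdot\vec{\eta}}]\) is positive definite whenever \(\zeta\) has full column rank, so Newton's method behaves well. A minimal sketch (ours; all names illustrative):

```python
import numpy as np

def solve_eta(y, n_eq, zeta, tol=1e-12, itmax=100):
    """Solve zeta^T . [n_eq * exp(zeta . eta)] = y for eta, i.e. Eq. (18)."""
    eta = np.zeros(zeta.shape[1])
    for _ in range(itmax):
        n0 = n_eq * np.exp(zeta @ eta)        # candidate equilibrium, Eq. (13)
        resid = zeta.T @ n0 - y               # element-number mismatch
        if np.max(np.abs(resid)) < tol:
            return eta
        M = zeta.T @ (n0[:, None] * zeta)     # Jacobian of the residual in eta
        eta -= np.linalg.solve(M, resid)
    raise RuntimeError("Newton iteration did not converge")
```

The Jacobian computed here is exactly the matrix \(M(\vec{\eta})\) that appears in the slow dynamics equation (25) below.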
At the next order we have
\[\partial_{t}\vec{n}^{1}=(\mathbb{S}^{0})\cdot\left.\left(\partial\vec{J}^{\,0}/ \partial\vec{n}\right)\right|_{\vec{n}^{0}}\cdot\vec{n}^{1}+\left.\left[\vec{r} \left(\vec{z}^{\mathscr{C}}-\vec{n}/\Omega\right)\right]\right|_{\vec{n}^{0}} \tag{19}\]
If we multiply by \(\zeta^{T}\) we get
\[\partial_{t}\zeta^{T}\cdot\vec{n}^{1}=\zeta^{T}\cdot[\vec{r}(\vec{z}^{ \mathscr{C}}-\vec{n}^{0}/\Omega)], \tag{20}\]
which expresses element balance. This equation can be directly integrated. For simplicity assume that the reservoirs are independent of time. Then after an initial relaxation we will have
\[\zeta^{T}\cdot\vec{n}^{1}(t)\sim t\zeta^{T}\cdot[\vec{r}(\vec{z}^{\mathscr{C}} -\vec{n}^{0}(\infty)/\Omega)], \tag{21}\]
which diverges at large \(t\). After a time \(t\sim 1/\epsilon\) we will have \(\epsilon\vec{n}^{1}\sim\vec{n}^{0}\) and the perturbation series breaks down. Thus the slow driving limit \(\epsilon\to 0\) is _singular_. This is not specific to our boundary conditions or simplifying assumptions, but is completely generic.
This mathematical singularity has a simple physical interpretation: when the system is open to reservoirs, elements can be exchanged with the environment. Over a long time scale \(t\sim 1/\epsilon\), the relevant thermal equilibrium state can be completely different from its initial value. Fortunately, this suggests a cure to the long-time divergence: we need to consider a multiple-scales asymptotic analysis [12, 13]. We introduce the slow time \(\tau=t\epsilon\), so-named because \(\tau\sim\mathscr{O}(1)\) when \(t\sim 1/\epsilon\), and replace the time derivative by
\[\frac{\partial}{\partial t}\rightarrow\frac{\partial}{\partial t}+\epsilon \frac{\partial}{\partial\tau}\]
The leading order solution remains the same, except that the coefficients \(\vec{\eta}\) get promoted to functions of the slow time, capturing their evolution on long time scales. The \(\mathscr{O}(\epsilon)\) equation becomes
\[\partial_{\tau}\vec{n}^{0}+\partial_{t}\vec{n}^{1}=(\mathbb{S}^{0})\cdot\left. \left(\partial\vec{J}^{\,0}/\partial\vec{n}\right)\right|_{\vec{n}^{0}}\cdot \vec{n}^{1}+[\vec{r}(\vec{z}^{\mathscr{C}}-\vec{n}^{0}/\Omega)]\]
Multiplying by \(\zeta^{T}\) we have
\[\zeta^{T}\cdot\partial_{\tau}\vec{n}^{0}+\zeta^{T}\cdot\partial_{t}\vec{n}^{ 1}=\zeta^{T}\cdot[\vec{r}(\vec{z}^{\mathscr{C}}-\vec{n}^{0}/\Omega)] \tag{22}\]
At this order, the divergences are cured if we impose
\[\zeta^{T}\cdot\partial_{\tau}\vec{n}^{0}=\zeta^{T}\cdot[\vec{r}(\vec{z}^{ \mathscr{C}}-\vec{n}^{0}/\Omega)] \tag{23}\]
which are the slow dynamics equations. As shown below, these same equations will remain valid for stochastic dynamics. We have \(E\) equations in \(E\) DOF. More explicitly,
\[\sum_{j,e^{\prime}}\zeta_{je}\,n^{eq}_{j}e^{\sum_{e^{\prime\prime}}\zeta_{je^{\prime\prime}}\eta_{e^{\prime\prime}}}\zeta_{je^{\prime}}\,\partial_{\tau}\eta_{e^{\prime}}=\sum_{j}\zeta_{je}\,r_{j}\Big{(}z^{\mathscr{C}}_{j}-n^{eq}_{j}e^{\sum_{e^{\prime\prime}}\zeta_{je^{\prime\prime}}\eta_{e^{\prime\prime}}}/\Omega\Big{)} \tag{24}\]
Defining the matrix \(M(\vec{\eta})=\zeta^{T}\cdot[[\vec{n}^{eq}e^{\zeta\cdot\vec{\eta}}]]\cdot\zeta\) and the external flux \(\vec{J}^{\mathscr{C}}(\vec{\eta})=[\vec{r}(\vec{z}^{\mathscr{C}}-\vec{n}^{eq }e^{\zeta\cdot\vec{\eta}}/\Omega)]\) we can write this as
\[M\cdot\partial_{\tau}\vec{\eta}=\zeta^{T}\cdot\vec{J}^{\mathscr{C}}, \tag{25}\]
which is our main result. (25) is a strongly nonlinear system of equations governing the evolution of near-equilibrium states in a slowly driven CRN. The core CRN can be completely arbitrary, as long as it is detailed balanced. The reservoirs can be forced arbitrarily on the slow time scale; that is, \(\vec{r}\) and \(\vec{z}^{\mathscr{C}}\) can be arbitrary functions of \(\tau\).
Note that \(\zeta^{T}\cdot\vec{n}\) is simply the number of moles of each element; thus from (23) the slow dynamics equation is simply the conservation law for elements, which under slowly-driven conditions gives a closed set of equations. Remarkably, the number of degrees of freedom is reduced from \(N\) down to \(E\). Moreover, these slow DOF are not arbitrary but are easily interpretable and universal across different CRNs.
If some elements appear in the CRN but not in any reservoirs, then their concentrations are clearly conserved at their initial values; in this case the corresponding entries of the slow dynamics equations can be immediately solved, leading to a further reduction in the number of evolving DOF.
(25) is also remarkably universal in form: it does not depend on any reaction rates in the core, except insofar as they are fast compared to the reservoir interactions. It can thus be applied to poorly characterized CRNs where only the stoichiometry, chemical potentials, and reservoir interactions are known.
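Concretely, a generic integrator for (25) needs only \(\vec{n}^{eq}\), the atomic matrix \(\zeta\), and the reservoir data. The following sketch is our illustration (scipy assumed; the reservoir data are passed as functions of \(\tau\), with zero entries for species without a reservoir):

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_slow_dynamics(tau_span, eta0, n_eq, zeta, r_of_tau, zC_of_tau, Omega=1.0):
    """Integrate M(eta) d(eta)/d(tau) = zeta^T . J_C(eta), i.e. Eq. (25)."""
    def rhs(tau, eta):
        n0 = n_eq * np.exp(zeta @ eta)                      # near-equilibrium state (13)
        JC = r_of_tau(tau) * (zC_of_tau(tau) - n0 / Omega)  # external flux J^C
        M = zeta.T @ (n0[:, None] * zeta)                   # E x E matrix M(eta)
        return np.linalg.solve(M, zeta.T @ JC)
    return solve_ivp(rhs, tau_span, eta0, dense_output=True, rtol=1e-10, atol=1e-12)
```

Note that no bulk rate constant appears anywhere in this integrator; for the ABC model below, \(\zeta\) is the \(3\times 1\) matrix of ones, and a single call reproduces the dashed curves of Fig. 1a.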
The slow dynamics equation describes the dissipative dynamics through near-equilibrium states. In SI, we show that at leading order the dissipation depends only on the slow dynamics, and not also on \(\vec{n}^{1}\) as might naively be expected. In particular it takes a simple form
\[T\dot{S}=\vec{W}\cdot(\vec{r}^{\,+}-\vec{r}^{\,-})+\mathscr{O}(\epsilon^{2}) \tag{26}\]
which can be evaluated on the solution of (25).
### Stochastic analysis
We now extend our results to include stochastic effects; in a first reading, this section can be omitted. We begin from the Doi-Peliti path integral formulation [14, 15], which is an exact rewriting of the chemical master equation for the full counting statistics \(\mathbb{P}(\vec{n},t)\). For a self-contained review see [16]. The CRN is specified by the quasi-Hamiltonian (or Liouvillian) \(H=H_{0}+\epsilon H_{\mathscr{C}}\) with
\[H_{0}(\vec{n},\vec{\nu}) =\sum_{\alpha}k_{\alpha}\left\{\left(e^{\vec{\nu}\cdot(\vec{q}_{ \alpha}-\vec{p}_{\alpha})}-1\right)\prod\left[\left(\frac{\vec{n}}{\vec{n}^{eq }}\right)^{\vec{p}_{\alpha}}\right]\right. \tag{27}\] \[\left.+\big{(}e^{-\vec{\nu}\cdot(\vec{q}_{\alpha}-\vec{p}_{ \alpha})}-1\big{)}\prod\left[\left(\frac{\vec{n}}{\vec{n}^{eq}}\right)^{\vec{ q}_{\alpha}}\right]\right\}\] \[H_{\mathscr{C}}(\vec{n},\vec{\nu}) =\sum\left[\vec{r}(e^{-\vec{\nu}}-1)\vec{n}/\Omega+\vec{r}(e^{ \vec{\nu}}-1)\vec{z}^{\mathscr{C}}\right]. \tag{28}\]
The \(\vec{\nu}\) variables act as a per-species bias. As explained in SI, in the macroscopic regime where particle numbers are large the leading behavior of \(\rho\equiv\log\mathbb{P}\) satisfies a Hamilton-Jacobi equation [17, 18, 19]
\[\frac{\partial\rho(\vec{n},t)}{\partial t}=H(\vec{n},-\nabla_{\vec{n}}\rho( \vec{n},t)). \tag{29}\]
(29) goes beyond the Gaussian approximation as it includes rare trajectories between different attractors, if they exist. In the slowly driven case, we require that \(\Omega c^{\circ}\epsilon\gg 1\), which ensures that the slow driving is more relevant than finite-size fluctuations from the bulk of the system.
Let \(\{\vec{c}_{x}\}_{x=1}^{E}\) be a basis of the left kernel (or cokernel) \(\mathscr{K}=\mathrm{coker}\,\mathbb{S}^{0}\) of the core stoichiometric matrix, i.e., the closed system has \(E\) conserved quantities. We solve (29) under the initial condition
\[\rho(\vec{n},t=0) =\sum\left[\vec{n}\log\vec{\lambda}^{(0)}-\vec{\lambda}^{(0)}- \vec{n}\log\frac{\vec{n}}{e}\right], \tag{30}\] \[\vec{\lambda}^{(0)} =\vec{n}^{eq}e^{\vec{c}^{(0)}}, \tag{31}\]
where \(\vec{c}^{(0)}\) is an arbitrary vector in \(\mathscr{K}\). This is a large deviation function of a Poisson distribution in the limit \(n_{i}\to\infty\)
\[\mathbb{P}(\vec{n},0)=\prod\left[\frac{\left(\vec{\lambda}^{(0)}\right)^{\vec {n}}e^{-\vec{\lambda}^{(0)}}}{(\vec{n}/e)^{\vec{n}}}\right]\sim\prod\left[ \frac{\left(\vec{\lambda}^{(0)}\right)^{\vec{n}}e^{-\vec{\lambda}^{(0)}}}{ \vec{n}!}\right]\]
with means \(\vec{\lambda}^{(0)}\).
Introducing the two time variables \(t\) and \(\tau=\epsilon t\) as in the previous subsection, (29) is rewritten as
\[\bigg{(}\frac{\partial}{\partial t}+\epsilon\frac{\partial}{\partial\tau} \bigg{)}\rho(\vec{n},t,\tau)=H(\vec{n},-\nabla_{\vec{n}}\rho(\vec{n},t,\tau))\]
with
\[\rho(\vec{n},t=0,\tau=0)=\sum_{i}\Big{[}n_{i}\log\lambda_{i}^{(0)}-\lambda_{i }^{(0)}-n_{i}\log\frac{n_{i}}{e}\Big{]}.\]
We expand the solution of this equation as \(\rho(\vec{n},t,\tau)=\rho_{0}(\vec{n},t,\tau)+\epsilon\rho_{1}(\vec{n},t,\tau)+\cdots\). The leading equation is
\[\frac{\partial}{\partial t}\rho_{0}(\vec{n},t,\tau)=H_{0}(\vec{n},-\nabla_{ \vec{n}}\rho_{0}(\vec{n},t,\tau)),\]
which is already solved by the initial condition. We cannot determine the \(\tau\)-dependence of the vector \(\vec{c}(\tau)\in\mathscr{K}\) at this order.
At the next order we have
\[\frac{\partial}{\partial t}\rho_{1}+\frac{\partial}{\partial\tau}\rho_{0}=-\nabla_{n}\rho_{1}\cdot\nabla_{\nu}H_{0}(\vec{n},-\nabla_{n}\rho_{0})+H_{\mathscr{C}}(\vec{n},-\nabla_{n}\rho_{0}),\]
which can be written
\[\frac{\partial\rho_{1}}{\partial t}+\nabla_{n}\rho_{1}\cdot\nabla_{\nu}H_{0}(\vec{n},-\nabla_{n}\rho_{0})=\sum\Big{[}(1-\vec{n}/\vec{\lambda})(\partial_{\tau}\vec{\lambda}-\vec{J}^{\mathscr{C}})\Big{]} \tag{32}\]
Now we note that, deterministically, the system is always close to some equilibrium state (i.e. \(|\vec{n}-\vec{n}^{eq}|\sim\epsilon\)) where \(\vec{n}^{eq}\) is of the form (13). Further deviations are exponentially suppressed in probability when \(\Omega c^{\circ}\epsilon\gg 1\), as assumed. Consider a general equilibrium \(\vec{n}=[\vec{n}^{eq}e^{\zeta\cdot\vec{\eta}^{\prime}}]\) where \(\vec{\eta}^{\prime}\neq\vec{\eta}\). On any such state, we have \(\nabla_{\nu}H_{0}=0\), so that this equation can be directly integrated:
\[\rho_{1}|_{\vec{n}=[\vec{n}^{eq}e^{\zeta\cdot\vec{\eta}^{\prime}}]}=\int^{t}dt^{\prime}\sum\left[(1-e^{\zeta\cdot(\vec{\eta}^{\prime}-\vec{\eta})})(\partial_{\tau}\vec{\lambda}-\vec{J}^{\mathscr{C}})\right], \tag{33}\]
which may lead to secular divergences. We demand that for all nearby equilibria \(|\vec{\eta}^{\prime}-\vec{\eta}|\ll 1\), the right-hand side vanishes. We thus expand \(e^{\zeta\cdot(\vec{\eta}^{\prime}-\vec{\eta})}\approx 1+\zeta\cdot(\vec{\eta}^{\prime}-\vec{\eta})\) and impose
\[0=\zeta^{T}\cdot(\partial_{\tau}\vec{\lambda}-\vec{J}^{\mathscr{C}}), \tag{34}\]
which is equivalent to (23), with \(\vec{n}^{0}\) replaced by \(\vec{\lambda}\). We thus recover the slow dynamics equation in the stochastic approach, as the leading equation necessary to prevent long-time divergences in the singular perturbation expansion.
In this approximation, the particle distribution remains Poissonian at leading order, with mean \(\vec{\lambda}\) that corresponds to \(\vec{n}^{0}\) in the rate equations. Note that the first correction to Poissonian distributions is given by the solution to (32). Since this is a linear PDE for \(\rho_{1}\), it can be solved by the method of characteristics. This solution will depend on a trajectory of the closed system.
It is clear that (34) only prevents the leading divergences, and there are not enough DOF in \(\vec{\lambda}(\tau)\) to prevent further ones: this implies that the full distribution must be non-Poissonian. To go beyond (34), it is easiest to use a cumulant generating function representation, as discussed in SI. This analysis shows that, once (34) is solved, all higher-order divergences are tamed by solving a series of linear tensorial ODEs. These ODEs all involve the same matrix \(M\) that appears in (25), indicating its central role for slow dynamics in CRNs.
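These predictions can be probed directly with exact stochastic simulation. The sketch below (our illustration, with hypothetical parameters) runs a Gillespie algorithm for the ABC model introduced in the next section, with propensities chosen to match the deterministic rates at \(\Omega=1\); the sample mean should track \(\vec{\lambda}(\tau)\) and the Fano factors should stay near 1, consistent with a near-Poissonian distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

scale = 200                                   # copy-number scale
n_eq = np.array([1.0, 2.0, 3.0]) * scale
zC_A, zC_C = 2.0 * scale, 4.0 * scale         # reservoir concentrations (in counts)
eps, tau_end = 1e-4, 0.5

delta = np.array([(-1, 1, 0), (1, -1, 0),     # A <-> B
                  (-1, 1, 0), (1, -1, 0),     # A + B <-> 2B
                  (0, -1, 1), (0, 1, -1),     # B <-> C
                  (1, 0, 0), (-1, 0, 0),      # A reservoir (intake, degradation)
                  (0, 0, 1), (0, 0, -1)])     # C reservoir (intake, degradation)

def run_once():
    n, t = n_eq.copy(), 0.0
    while t < tau_end / eps:
        xa, xb, xc = n / n_eq
        a = np.array([xa, xb, xa * xb, xb * xb, xb, xc,
                      eps * zC_A, eps * n[0], eps * zC_C, eps * n[2]])
        t += rng.exponential(1.0 / a.sum())
        n = n + delta[rng.choice(10, p=a / a.sum())]
    return n

samples = np.array([run_once() for _ in range(20)])

# slow-dynamics prediction lambda(tau) = n_eq * exp(eta(tau)); cf. Eq. (60) in SI
a_rate = (n_eq[0] + n_eq[2]) / n_eq.sum()                # = 2/3 for r_A = r_C = 1
e_eta = np.exp(-a_rate * tau_end) + 1.5 * (1 - np.exp(-a_rate * tau_end))
print("prediction:", n_eq * e_eta)
print("SSA mean  :", samples.mean(axis=0))
print("Fano      :", samples.var(axis=0) / samples.mean(axis=0))  # ~ 1 if Poissonian
```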
### Numerical validation
We illustrate our theory with a series of models of increasing complexity. Initially we consider models with a small range of reaction rates, corresponding to physical systems at high temperature.
Figure 1: The slow dynamics equation tracks solutions to the full rate equations, even through non-equilibrium processes. In (a) the ABC model is shown with numerical solutions (solid), and the slow dynamics equation (dashed) for a range of \(\epsilon\). In (b,c), the methane combustion model is shown for numerical solutions with \(\epsilon=0.3\) (dashed) and \(\epsilon=0.01\) (solid, colored) and compared to the result from the slow dynamics equation (solid, black). These results use parameters \(k_{1}=k_{2}=k_{3}=r_{A}=r_{C}=1\), \(n_{A}(t=0)=n_{A}^{eq}=1\), \(n_{B}(t=0)=n_{B}^{eq}=2\), \(n_{C}(t=0)=n_{C}^{eq}=3\), \(n_{A}^{\mathscr{C}}=2\), \(n_{C}^{\mathscr{C}}=4\) for the ABC model and \(k_{i}\to 1\), \(n_{i}^{eq}=1\), \(r_{i}=1\), \(n_{i}^{\mathscr{C}}=2\) for the methane combustion model.
### ABC Model
The first model, which we dub the ABC model, has 3 internal reactions
\[A \stackrel{{ k_{1}}}{{\rightleftharpoons}}B, \tag{35}\] \[A+B \stackrel{{ k_{2}}}{{\rightleftharpoons}}2B,\] (36) \[B \stackrel{{ k_{3}}}{{\rightleftharpoons}}C \tag{37}\]
and two reservoirs
\[A \stackrel{{ r_{A}}}{{\rightleftharpoons}}A^{ \mathscr{C}}, \tag{38}\] \[C \stackrel{{ r_{C}}}{{\rightleftharpoons}}C^{ \mathscr{C}}. \tag{39}\]
It is stoichiometrically trivial, but simple enough that the slow dynamics equation can be analytically solved, as detailed in SI. This solution, giving \(\vec{n}^{0}(\tau)\), is compared with illustrative numerical results from the full rate equations, shown in Fig. 1a. The leading order analytical solution \(\vec{n}^{0}(\tau)\) differs from numerical results by an amount of order \(\epsilon\), as expected.
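A comparison like Fig. 1a takes only a few lines. The following sketch is ours (scipy assumed): it integrates the full rate equations (12) with the parameters of the Fig. 1 caption and measures the deviation from the leading-order slow solution, which should be \(\mathscr{O}(\epsilon)\).

```python
import numpy as np
from scipy.integrate import solve_ivp

n_eq = np.array([1.0, 2.0, 3.0])
r = np.array([1.0, 0.0, 1.0])                 # reservoirs on A and C only
zC = np.array([2.0, 0.0, 4.0])
S0 = np.array([[-1, -1, 0], [1, 1, -1], [0, 0, 1]])
eps = 0.01

def rate_eq(t, n):
    xa, xb, xc = n / n_eq
    J = np.array([xa - xb, xa * xb - xb**2, xb - xc])   # core fluxes, Eq. (6) with k = 1
    return S0 @ J + eps * r * (zC - n)                  # Eq. (12), Omega = 1

tau = np.linspace(0.0, 3.0, 31)
sol = solve_ivp(rate_eq, (0.0, tau[-1] / eps), n_eq, t_eval=tau / eps,
                method="BDF", rtol=1e-10, atol=1e-12)

# leading-order slow solution n^0(tau) = n_eq * exp(eta(tau)); Eqs. (59)-(60) in SI
a = (r[0] * n_eq[0] + r[2] * n_eq[2]) / n_eq.sum()
e_eta = np.exp(-a * tau) + 1.5 * (1.0 - np.exp(-a * tau))   # 1.5 = (r_A z_A + r_C z_C)/(r_A n_A^eq + r_C n_C^eq)
print(np.max(np.abs(sol.y - n_eq[:, None] * e_eta)))        # deviation of order eps
```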
### Methane combustion
We now consider a version of methane combustion (see Methods):
\[\mathrm{CH}_{4}+3\,\mathrm{CO}_{2} \stackrel{{ k_{1}}}{{\rightleftharpoons}}2\, \mathrm{H}_{2}\mathrm{O}+4\,\mathrm{CO}, \tag{40}\] \[\mathrm{O}_{2}+2\,\mathrm{CO} \stackrel{{ k_{2}}}{{\rightleftharpoons}}2\, \mathrm{CO}_{2},\] (41) \[\mathrm{H}_{2}+\mathrm{CO}_{2} \stackrel{{ k_{3}}}{{\rightleftharpoons}}\mathrm{H}_{2} \mathrm{O}+\mathrm{CO},\] (42) \[\mathrm{H}_{2}\mathrm{O} \stackrel{{ r_{3}}}{{\rightleftharpoons}}\mathrm{H}_{2} \mathrm{O}^{\mathscr{C}},\] (43) \[\mathrm{O}_{2} \stackrel{{ r_{5}}}{{\rightleftharpoons}}\mathrm{O}_{2} \,^{\mathscr{C}},\] (44) \[\mathrm{H}_{2} \stackrel{{ r_{6}}}{{\rightleftharpoons}}\mathrm{H}_{2} \,^{\mathscr{C}}. \tag{45}\]
Although small, this CRN has features typical of large physical networks: the stoichiometric analysis (see SI) shows that there are 3 conserved quantities, corresponding to the concentrations of C, H, and O, as expected. We consider it in the high temperature limit \(T\to\infty\) where all bulk rates are equal \(k_{i}\to 1\) (in appropriate units), and we furthermore set \(n_{i}^{eq}=1,r_{i}=1,n_{i}^{\mathscr{C}}=2\) for simplicity. Example numerical time evolutions are shown in Fig. 1b (colored) and compared with the result from the slow dynamics equation (black). At \(\epsilon=0.01\) the results are indistinguishable while even at \(\epsilon=0.3\) (dashed) the slow dynamics result captures all qualitative features of the dynamics, and provides a quantitative approximation with relative errors smaller than \(\epsilon\).

The entropy production rate is shown in Fig. 1c (colored). As predicted by our analysis, the entropy production is well-captured by the contribution from slow dynamics (black). Thus, even though the system is always close to some thermal equilibrium state, it is nevertheless out-of-equilibrium and constantly producing entropy.
### Broad spectrum of reaction rates
In realistic CRNs at room temperature, the reaction rates span a wide range of scales. This presents challenges both for numerical simulations and for the basis of our theory, which requires a time-scale separation between the bulk and the reservoir interactions. Nevertheless, at high enough temperature such a separation can be found and the theory applied. Here we consider an 'early Earth' CRN modelling a submarine hydrothermal system containing formaldehyde, ammonia, and water, of interest for the origin of life as submarine hydrothermal systems are a potential source of abiotic amino acids [20, 21]. Using Reaction Mechanism Generator (see Methods) we generate the species reachable from the initial pool, along with the corresponding reactions and their rates; the full list of 13 species and 40 reactions is in SI.
This CRN conserves the concentrations of C, H, O, and N, so its slow dynamics is governed by only 4 DOF, a large reduction from the initial 13 DOF.
We solved the rate equations with reservoirs of \(\rm CH_{2}O,H,CH_{4}O\), and \(\rm C_{2}H_{6}O_{2}\) at \(T=1000\) K. For this choice of reservoirs, N is still conserved, as is the difference of C and O concentrations (since all reservoirs have an equal number of C and O atoms). Thus only 2 nontrivial DOF are needed to understand its slow dynamics. As shown in Figure 2, despite the range of reaction rates spanning 30 orders of magnitude and initial concentrations spanning more than 5 orders of magnitude, the slow dynamics equation quantitatively predicts the evolution of all species, and the entropy production.
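The surviving combinations can be found mechanically: a combination \(\vec{\lambda}\) of elements stays conserved under driving precisely when \(\zeta\cdot\vec{\lambda}\) vanishes on every chemostatted species. A small check (our illustration), with element order (C, H, O, N):

```python
import numpy as np
from scipy.linalg import null_space

# atomic rows (C, H, O, N) of the chemostatted species CH2O, H, CH4O, C2H6O2
zeta_res = np.array([[1, 2, 1, 0],
                     [0, 1, 0, 0],
                     [1, 4, 1, 0],
                     [2, 6, 2, 0]])
# lambdas with zeta_res @ lambda = 0 label element combinations untouched by the drive
print(null_space(zeta_res))   # 2D null space, spanned by (1, 0, -1, 0) and (0, 0, 0, 1)
```

The two basis vectors correspond to \(\mathrm{C}-\mathrm{O}\) and \(\mathrm{N}\), as stated.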
### Exponential growth
The slow dynamics equation is not limited to description of transients between equilibrium states; the theory also applies when a steady state is not reached. With the early Earth CRN, if coupled to reservoirs of \(\rm CH_{2}O,NH_{3},H,CHO,H_{2}N\), then the system grows on the slow time scale \(\tau\), as shown in SI. Again the slow dynamics equation captures this behavior.
Figure 2: The slow dynamics equation can be applied to CRNs with a broad range of reaction rates. In (a) the histogram of reaction rates is shown (including both forward and reverse reactions), along with the chosen value of \(\epsilon\) in simulations. Numerical integration of the rate equations (b, colors) is shown along with the prediction from slow dynamics (b, black). The entropy production (c, colors) is well predicted by the slow dynamics contribution (c, black).
### Extension to far-from-equilibrium states
Although above we considered initial states that were near equilibrium before coupling to external reservoirs, this is not essential to obtain a reduction to conserved quantities. Suppose instead that there is a leading order coupling to reservoirs, which we assume is stationary. Write \(\vec{r}\rightarrow\frac{1}{\epsilon}\vec{r}^{0}+\vec{r}\) so that the rate equation, in the two-time _ansatz_, is
\[\partial_{t}\vec{n}+\epsilon\partial_{\tau}\vec{n}=\underbrace{(\mathbb{S}^{0})\cdot\vec{J}^{\,0}(\vec{n})+[\vec{r}^{0}(\vec{n}^{\mathscr{C}}-\vec{n})]}_{\tilde{\mathbb{S}}\cdot\tilde{\vec{J}}^{\,0}(\vec{n})}+\epsilon[\vec{r}(\vec{n}^{\mathscr{C}}-\vec{n})]\]
Expanding \(\vec{n}=\vec{n}^{0}+\epsilon\vec{n}^{1}+\ldots\), at leading order \(\partial_{t}\vec{n}^{0}=\tilde{\mathbb{S}}\cdot\tilde{\vec{J}}^{\,0}(\vec{n}^ {0})\). With time-independent coupling to reservoirs, this will either describe relaxation to a non-equilibrium steady state (NESS), or blow-up. Assume that we reach a NESS. Then at the next order we have an equation of the form (22), with \(\mathbb{S}\) replaced by \(\tilde{\mathbb{S}}\). This will generally have long-time divergences unless \(\vec{n}^{0}\) depends on the slow time \(\tau\). Let \(\zeta\) be a basis of the conserved quantities of \(\tilde{\mathbb{S}}\), i.e. \(\tilde{\mathbb{S}}^{T}\cdot\zeta=0\). Then the slow dynamics equation is again (23). The differences with the previous analysis are that now (i) the conserved quantities do not necessarily correspond to elements; and (ii) \(\vec{n}^{0}\) is not a known function of the slow DOF \(\vec{\eta}\).
### Discussion & Conclusions
We have shown that slowly driven CRNs are governed by the conserved quantities of the corresponding closed system. The latter are generally the element concentrations, giving a huge reduction in DOF in large CRNs. The natural dynamical variables of the slow dynamics are chemical potentials, \(\vec{\eta}\), which evolve according to (25). From the solution of this equation, which does not involve the bulk reaction rates, one can obtain the full dynamics, at leading order in driving.
This framework may be useful to understand free energy transduction in open CRNs [8]. Indeed, open CRNs can be considered as chemical machines that interconvert species between the reservoirs. For example, the early Earth CRN considered above, when coupled to reservoirs of \(\mathrm{CH_{2}O,H,CH_{4}O}\), and \(\mathrm{C_{2}H_{6}O_{2}}\), has effective reactions
\[\mathrm{CH}_{2}\mathrm{O}+2\,\mathrm{H} \rightleftharpoons\mathrm{CH}_{4}\mathrm{O} \tag{46}\] \[\mathrm{CH}_{2}\mathrm{O}+\mathrm{CH}_{4}\mathrm{O} \rightleftharpoons\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}, \tag{47}\]
known as _emergent cycles_; these can be obtained algebraically from the stoichiometric matrix [6, 7, 8]. The free energy change in an emergent cycle is obtained straightforwardly from the chemical potentials of the species, but to understand the efficiency of the chemical machine, one requires the flux through the cycle, except in special cases [8]. Generally this necessitates the entire suite of reaction rates and a numerical solution of the rate equations. A limiting factor in the analysis is then that many rates are not known for biochemical networks of interest.
Our analysis provides an alternative. For any given forcing, one can solve the slow dynamics equation, _without knowledge of any bulk reaction rates_. With this solution in hand, one can evaluate the reaction fluxes and then the efficiency. This solution is guaranteed to work in the limit of slow driving, and can provide a benchmark value at finite-rate driving.
We note that an alternative weak-driving theory has been obtained in [22], also using the Hamilton-Jacobi equation. This theory is based upon the log-probability correction \(\rho_{1}\), but without
the multiple time scale analysis. This is sufficient for non-equilibrium steady states as considered in [22], but in dynamical problems it will generally suffer from long-time divergences.
Our reduction of complex dynamics to that of the conserved quantities is reminiscent of hydrodynamics. There are, however, some differences. First, in hydrodynamics one assumes that the system is coupled to other systems that differ only weakly from it. Here instead we do not assume that the reservoirs are near the system: their concentrations can be arbitrarily far from the corresponding concentration in the system. We only assume that they react slowly with the system. Second, in hydrodynamics one considers systems that interact spatially, whereas our system is well-mixed and interacts with external reservoirs without any explicit spatial coupling. The extension of our results to include spatial effects will be presented in a future publication.
## Methods

For constructing the methane combustion model, the toolbox "Stoichiometry Tools" in MATLAB was used [23]. The input \(\{\mathrm{CH}_{4},\mathrm{CO}_{2},\mathrm{H}_{2}\mathrm{O},\mathrm{CO},\mathrm{O}_{2},\mathrm{H}_{2}\}\) led to the core reactions (40)-(42).
For larger models, we used Reaction Mechanism Generator (RMG) [24, 25]. Given an input pool of species, RMG iteratively finds possible reactions between the species and new species that can be produced. Rates, enthalpies, and entropies are either looked up in a database, or estimated using additivity methods. For the early Earth model, we fed the input set \(\{\mathrm{CH}_{2}\mathrm{O},\mathrm{NH}_{3},\mathrm{H}_{2}\mathrm{O}\}\), with \(\mathrm{H}_{2}\mathrm{O}\) as a solvent, into RMG. This results in the 40 reactions and 10 new species shown in SI.
We note that RMG is not guaranteed to find all possible reactions among species. In testing against known CRNs relevant to the origin of life, we found that RMG sometimes failed to find reactions known to be possible. Thus we use it as a method to benchmark our framework against CRNs with valid stoichiometry and a broad range of realistic rates.
All ordinary differential equations were integrated in MATLAB using the ode15s solver.
We are grateful to Mark Persic for his preliminary work on this project. This work was funded by NSERC Discovery Grant RGPIN-2020-04762 (to E. De Giuli).
## References
* [1] Okino M S and Mavrovouniotis M L 1998 _Chemical reviews_**98** 391-408
* [2] Radulescu O, Gorban A N, Zinovyev A and Noel V 2012 _Frontiers in genetics_**3** 131
* [3] Snowden T J, van der Graaf P H and Tindall M J 2017 _Bulletin of mathematical biology_**79** 1449-1486
* [4] Chaikin P M and Lubensky T C 2000 _Principles of Condensed Matter Physics_ (Cambridge, U.K.: Cambridge University Press) ISBN 9780521794503
* [5] Van den Broeck C 2013 _Stochastic thermodynamics: A brief introduction_ vol 184 (IOS Press Amsterdam) pp 155-193
* [6] Polettini M and Esposito M 2014 _The Journal of chemical physics_**141** 024117
* [7] Rao R and Esposito M 2016 _Physical Review X_**6** 041064
* [8] Wachtel A, Rao R and Esposito M 2022 _The Journal of Chemical Physics_**157** 024109
* [9] Astumian R D 2018 _Chemical Communications_**54** 427-444
* [10] Gilbert R G and Smith S C 1990 _Theory of unimolecular and recombination reactions_ (Publishers' Business Services [distributor])
* [11] Schmiedl T and Seifert U 2007 _The Journal of chemical physics_**126** 044101
* [12] Bender C M and Orszag S A 1999 _Advanced Mathematical Methods for Scientists and Engineers_ vol I (Springer)
* [13] Hinch E J 1991 _Matched Asymptotic Expansions_ 1st ed (Cambridge, U.K.: Cambridge)
* [14] Doi M 1976 _Journal of Physics A: Mathematical and General_**9** 1465
* [15] Peliti L 1985 _Journal de Physique_**46** 1469-1483
* [16] De Giuli E and Scalliet C 2022 _Journal of Physics A: Mathematical and Theoretical_**55** 474002
* [17] Kubo R, Matsuo K and Kitahara K 1973 _Journal of Statistical Physics_**9** 51-96
* [18] Kitahara K 1975
* [19] Smith E 2020 _Entropy_**22** 1137
* [20] Martin W, Baross J, Kelley D and Russell M J 2008 _Nature Reviews Microbiology_**6** 805-814 URL [https://doi.org/10.1038/nmicro1991](https://doi.org/10.1038/nmicro1991)
* [21] Smith E and Morowitz H J 2016 _The origin and nature of life on earth: the emergence of the fourth geosphere_ (Cambridge University Press)
* [22] Freitas N, Falasco G and Esposito M 2021 _New Journal of Physics_**23** 093003
* [23] Kantor J 2023 Stoichiometry tools URL [https://www.mathworks.com/matlabcentral/fileexchange/29774-stoichiometry-tools](https://www.mathworks.com/matlabcentral/fileexchange/29774-stoichiometry-tools)
* [24] Gao C W, Allen J W, Green W H and West R H 2016 _Computer Physics Communications_**203** 212-225
* [25] Liu M, Grinberg Dana A, Johnson M S, Goldman M J, Jocher A, Payne A M, Grambow C A, Han K, Yee N W and Mazeau E J 2021 _Journal of Chemical Information and Modeling_**61** 2686-2696
# Universal Slow Dynamics of Chemical Reaction Networks - Supplementary Information
**Masanari Shimada\({}^{*1}\), Pegah Behrad\({}^{*1}\), and Eric De Giuli\({}^{1}\)**
\({}^{1}\) Department of Physics, Toronto Metropolitan University\(\mathscr{S}\), M5B 2K3, Toronto, Canada
E-mail: [email protected]
Footnote 1: * These two authors contributed equally to this work
\(\mathscr{S}\)formerly Ryerson University
March 2023
## 1 Entropy production in the slow-driving limit
Consider the rate of entropy production
\[T\dot{S}=-\vec{G}(\vec{n})\cdot\mathbb{S}\cdot(\vec{k}^{+}-\vec{k}^{-})+\vec{W} \cdot(\vec{r}^{+}-\vec{r}^{-}) \tag{1}\]
The first term can be written
\[-\vec{G}(\vec{n})\cdot\mathbb{S}\cdot(\vec{k}^{+}-\vec{k}^{-}) =-\partial_{t}\big{(}\vec{G}(\vec{n})\cdot\vec{n}\big{)}+\vec{n}\cdot\partial_{t}\vec{G}(\vec{n}) \tag{2}\] \[=-\partial_{t}\big{(}\vec{G}(\vec{n})\cdot\vec{n}-RT\sum[\vec{n}]\big{)}, \tag{3}\]
which is minus the rate of change of Helmholtz free energy (the second term is pressure times volume, using the ideal gas law). Since \(\vec{W}\cdot(\vec{r}^{+}-\vec{r}^{-})\) is the rate of working on the system by the reservoirs, Eq. (1) is a formulation of the 1st law of thermodynamics for CRNs.
Moreover, we can write
\[T\dot{S} =-RTk\sum_{\alpha}e^{-G_{A_{\alpha}}/RT}\left[\sum_{j}\mathbb{S}_{j\alpha}\log\bigl{(}n_{j}/n_{j}^{eq}\bigr{)}\right]\left[\prod_{i}(n_{i}/n_{i}^{eq})^{p_{i\alpha}}-\prod_{i}(n_{i}/n_{i}^{eq})^{q_{i\alpha}}\right]+\vec{W}\cdot(\vec{r}^{+}-\vec{r}^{-}) \tag{4}\] \[=-\underbrace{RTk[\log(\vec{n}/\vec{n}^{eq})]\cdot\mathbb{S}\cdot[[e^{-\vec{G}_{A}/RT}]]\cdot\Bigl{[}\prod[(\vec{n}/\vec{n}^{eq})^{p}]-\prod[(\vec{n}/\vec{n}^{eq})^{q}]\Bigr{]}}_{\partial_{t}\Delta F}+\vec{W}\cdot(\vec{r}^{+}-\vec{r}^{-})\]
The first term can be expanded
\[\partial_{t}\Delta F =RTk\left[\log\bigl{(}\vec{n}^{0}/\vec{n}^{eq}\bigr{)}\left(1+\epsilon\frac{\vec{n}^{1}}{\vec{n}^{0}}+\ldots\right)\right]\cdot\mathbb{S}\cdot[[e^{-\vec{G}_{A}/RT}]]\cdot \tag{5}\] \[\qquad\times\left[\prod\left[(\vec{n}^{0}/\vec{n}^{eq})^{p}\left(1+p\epsilon\frac{\vec{n}^{1}}{\vec{n}^{0}}+\ldots\right)\right]-\prod\left[(\vec{n}^{0}/\vec{n}^{eq})^{q}\left(1+q\epsilon\frac{\vec{n}^{1}}{\vec{n}^{0}}+\ldots\right)\right]\right]\]
We have
\[[\log\bigl{(}\vec{n}^{0}/\vec{n}^{eq}\bigr{)}]=[\vec{c}]=\zeta\cdot\vec{\eta} \tag{6}\]
so that \(\mathbb{S}^{T}\cdot[\log(\vec{n}^{0}/\vec{n}^{eq})]=0\). We also notice that
\[\left[\prod[(\vec{n}^{0}/\vec{n}^{eq})^{p}\left(1+p\epsilon\frac{ \vec{n}^{1}}{\vec{n}^{0}}\right)]\right]-\left[\prod[(\vec{n}^{0}/\vec{n}^{eq} )^{q}\left(1+q\epsilon\frac{\vec{n}^{1}}{\vec{n}^{0}}\right)]\right]\] \[\qquad=\Bigl{[}\Bigl{[}\prod[(\vec{n}^{0}/\vec{n}^{eq})^{p}] \Bigr{]}\Bigr{]}\cdot\left(1+\epsilon p\cdot\left[\frac{\vec{n}^{1}}{\vec{n}^ {0}}\right]\right)-\Bigl{[}\Bigl{[}\prod[(\vec{n}^{0}/\vec{n}^{eq})^{q}] \Bigr{]}\Bigr{]}\cdot\left(1+\epsilon q\cdot\left[\frac{\vec{n}^{1}}{\vec{n}^ {0}}\right]\right)+\ldots\] \[\qquad=\Bigl{[}\Bigl{[}\prod[(\vec{n}^{0}/\vec{n}^{eq})^{p}] \Bigr{]}\Bigr{]}\cdot\epsilon p\cdot\left[\frac{\vec{n}^{1}}{\vec{n}^{0}} \right]-\Bigl{[}\Bigl{[}\prod[(\vec{n}^{0}/\vec{n}^{eq})^{q}]\Bigr{]}\Bigr{]} \cdot\epsilon q\cdot\left[\frac{\vec{n}^{1}}{\vec{n}^{0}}\right]+\ldots\]
Combining these facts we see that \(\partial_{t}\Delta F=\mathscr{O}(\epsilon^{2})\). More precisely:
\[\partial_{t}\Delta F=\epsilon^{2}RTk\left[\frac{\vec{n}^{1}}{\vec{n}^{0}}\log\bigl{(}\vec{n}^{0}/\vec{n}^{eq}\bigr{)}\right]\cdot\mathbb{S}\cdot[[e^{-\vec{G}_{A}/RT}]]\cdot\Bigl{(}\Bigl{[}\Bigl{[}\prod[(\vec{n}^{0}/\vec{n}^{eq})^{p}]\Bigr{]}\Bigr{]}\cdot p-\Bigl{[}\Bigl{[}\prod[(\vec{n}^{0}/\vec{n}^{eq})^{q}]\Bigr{]}\Bigr{]}\cdot q\Bigr{)}\cdot\left[\frac{\vec{n}^{1}}{\vec{n}^{0}}\right]+\ldots \tag{7}\]
The entropy production is written
\[T\dot{S}=\epsilon\;[\vec{\mu}^{\circ}+RT\log\bigl{(}\vec{z}^{\mathscr{C}}/c^{\circ}\bigr{)}]\cdot[\vec{r}(\vec{z}^{\mathscr{C}}-\vec{n}^{0}/\Omega)]+\mathscr{O}(\epsilon^{2}), \tag{8}\]
and at leading order it can be determined from the slow dynamics equation, without knowledge of \(\vec{n}^{1}\).
## 2 Derivation of the Hamilton-Jacobi equation
Our notation follows [1]. We start from the generating function
\[Z(\vec{z},t_{f})=\sum_{\{\vec{n}\}}z_{1}^{n_{1}}z_{2}^{n_{2}}\cdots z_{N}^{n_{N}} P(\vec{n},t_{f}),\]
which has a path integral representation
\[Z=\int\mathscr{D}n\int\mathscr{D}\nu\;e^{-S},\]
with the action \(S=S_{0}+S_{BC}\), where
\[S_{0}=\int_{0}^{t_{f}}dt\left[\vec{\nu}\cdot\partial_{t}\vec{n}-H(\vec{n},\vec {\nu})\right] \tag{9}\]
and
\[S_{BC}=[\vec{n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0)+\sum_{j}n_{j}(t_{f})[-z_{j}e^{-\nu_{j}(t_{f})}+1-\nu_{j}(t_{f})].\]
This gives boundary conditions \(\vec{\nu}(t_{f})=\log\vec{z}\), \(\vec{n}(0)=\vec{n}^{0}\).
The particle number statistics are extracted from the generating function by contour integrals. To extract \(P(\vec{m},t_{f})\) we want
\[P(\vec{m},t_{f}) =\prod_{j}\oint\frac{dz_{j}}{2\pi i}\frac{1}{z_{j}^{m_{j}+1}}Z( \vec{z},t_{f})\] \[=\int\mathscr{D}n\int\mathscr{D}\nu\;e^{-S_{0}}e^{-\sum_{j}[\vec {n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0)}\prod_{j}\oint\frac{dz_{j}}{2\pi i}\frac{ 1}{z_{j}^{m_{j}+1}}e^{-n_{j}(t_{f})[-z_{j}e^{-\nu_{j}(t_{f})}+1-\nu_{j}(t_{f})]}\] \[=\int\mathscr{D}n\int\mathscr{D}\nu\;e^{-S_{0}}e^{-\sum_{j}[\vec {n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0)}\prod_{j}\oint\frac{dz_{j}}{2\pi i}\frac{ 1}{z_{j}^{m_{j}+1}}\sum_{k_{j}\geq 0}\frac{1}{k_{j}!}\big{(}n_{j}(t_{f})z_{j}e^{- \nu_{j}(t_{f})}\big{)}^{k_{j}}e^{-n_{j}(t_{f})[1-\nu_{j}(t_{f})]}\] \[=\int\mathscr{D}n\int\mathscr{D}\nu\;e^{-S_{0}}e^{-\sum_{j}[\vec {n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0)}\prod_{j}\frac{1}{m_{j}!}\big{(}n_{j}(t_{ f})e^{-\nu_{j}(t_{f})}\big{)}^{m_{j}}e^{-n_{j}(t_{f})[1-\nu_{j}(t_{f})]}\] \[=\int\mathscr{D}n\int\mathscr{D}\nu\;e^{-S_{0}}e^{-S_{BC}^{\prime}} \tag{10}\]
with
\[S_{BC}^{\prime}=[\vec{n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0)+\sum_{j}\left[\log(m_{j}!)-m_{j}\log n_{j}(t_{f})+m_{j}\nu_{j}(t_{f})+n_{j}(t_{f})[1-\nu_{j}(t_{f})]\right] \tag{11}\]
Integrating out \(\vec{\nu}(t_{f})\) we impose \(\vec{n}(t_{f})=\vec{m}\) as expected and the other \(t_{f}\) terms become \(\log m_{j}!-m_{j}\log m_{j}+m_{j}=\mathscr{O}(\log m_{j})\) in the action. Then
\[\rho(\vec{m},t_{f})\equiv\log P(\vec{m},t_{f})=\log\int_{\vec{n}(t_{f})=\vec{ m}}\mathscr{D}[n,\nu]\;e^{-S_{0}}e^{-[\vec{n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0) }+(irrel) \tag{12}\]
The presence of a strictly fixed \(\vec{n}(0)=\vec{n}^{0}\) is somewhat inconvenient. Let us regularize it by adding \(\eta\vec{\nu}(0)^{2}\) to the action, where eventually \(\eta\to 0\). Then we have
\[\rho(\vec{m},0) =\log\int d\vec{\nu}(0)e^{-[\vec{m}-\vec{n}^{0}]\cdot\vec{\nu}(0)} e^{-\eta\vec{\nu}(0)^{2}}\] \[=-[\vec{m}-\vec{n}^{0}]\cdot\vec{\nu}(0)-\eta\vec{\nu}(0)^{2} \tag{13}\]
in the saddle-point approximation, giving
\[\nabla_{\vec{m}}\rho(\vec{m},0)=-\vec{\nu}(0). \tag{14}\]
Now consider a small time \(t_{f}=\epsilon\ll 1\). In the saddle-point approximation
\[\rho(\vec{m},\epsilon) =-\epsilon\Big{[}\vec{\nu}(\epsilon)\cdot\underbrace{\partial_{t}\vec{n}}_{=\frac{\partial H}{\partial\vec{\nu}}(\vec{m},\vec{\nu}(\epsilon))}-H(\vec{m},\vec{\nu}(\epsilon))\Big{]}-[\vec{n}(0)-\vec{n}^{0}]\cdot\vec{\nu}(0)-\eta\vec{\nu}(0)^{2}\] \[=\epsilon H(\vec{m},\vec{\nu}(0))-\vec{\nu}(0)\cdot\Big{[}\underbrace{\vec{n}(0)+\epsilon\frac{\partial H}{\partial\vec{\nu}}(\vec{m},\vec{\nu}(0))}_{=\vec{n}(\epsilon)+\mathscr{O}(\epsilon^{2})}-\vec{n}^{0}\Big{]}-\eta\vec{\nu}(0)^{2}\] \[=\epsilon H(\vec{m},\vec{\nu}(0))+\rho(\vec{m},0), \tag{15}\]
which leads to
\[\frac{\partial\rho(\vec{m},t)}{\partial t}=H(\vec{m},-\nabla_{\vec{m}}\rho( \vec{m},t)) \tag{16}\]
This Hamilton-Jacobi equation is Eq. 78 in [2], Eq. 2.12 in [3], and Eq. 22 in [4]. The steady state version is Eq. 9 in [5].
The only approximation used in deriving Eq.(16) is the saddle-point one; this includes the leading trajectories in large systems, including relaxations to equilibria and escape trajectories between different attractors.
## 3 Cumulant generating function
Consider the cumulant generating function
\[K(\vec{x},t)\equiv\log\langle e^{\vec{x}\cdot\vec{n}(t)}\rangle. \tag{17}\]
By the same derivation method as considered for the probability \(P(\vec{n},t)\), we can obtain a closed Hamilton-Jacobi equation for \(K\),
\[\frac{\partial K(\vec{x},t)}{\partial t} =H\bigg{(}\frac{\partial K(\vec{x},t)}{\partial\vec{x}},\vec{x} \bigg{)}, \tag{18}\] \[K(\vec{x},0) =K_{0}(\vec{x}), \tag{19}\]
where the Hamiltonian is given by
\[H = H_{0}+\epsilon H_{1}, \tag{20}\] \[H_{0}(\vec{n},\vec{\nu}) = \sum_{\alpha}k_{\alpha}\Bigg{(}\Big{(}e^{\vec{S}_{\alpha}\cdot\vec {\nu}}-1\Big{)}\prod_{j}\left(\frac{n_{j}}{n_{j}^{eq}}\right)^{p_{\alpha,j}}+ \Big{(}e^{-\vec{S}_{\alpha}\cdot\vec{\nu}}-1\Big{)}\prod_{j}\left(\frac{n_{j}}{ n_{j}^{eq}}\right)^{q_{\alpha,j}}\Bigg{)},\] (21) \[H_{1}(\vec{n},\vec{\nu}) = \sum_{j}r_{j}\big{(}\big{(}e^{-\nu_{j}}-1\big{)}n_{j}+(e^{\nu_{j}}- 1)n_{j}^{\mathscr{C}}\big{)}. \tag{22}\]
Also, we assume that the initial state \(K_{0}(\vec{x})\) is a stationary state of \(H_{0}\)
\[0=H_{0}\Bigg{(}\frac{\partial K_{0}}{\partial\vec{x}},\vec{x}\Bigg{)}. \tag{23}\]
### Moments of initial state
Let us consider the cumulant expansion of the initial state
\[K_{0}(\vec{x}) = \sum_{j}\kappa_{0,j}^{(1)}x_{j}+\frac{1}{2}\sum_{jk}\kappa_{0,jk} ^{(2)}x_{j}x_{k}+\cdots \tag{24}\] \[= \sum_{n=1}^{\infty}\frac{1}{n!}\sum_{j_{1}j_{2}\cdots j_{n}} \kappa_{0,j_{1}j_{2}\cdots j_{n}}^{(n)}x_{j_{1}}x_{j_{2}}\cdots x_{j_{n}}.\]
From the rate equation, the mean \(\kappa_{0,j}^{(1)}\) is written as
\[\kappa_{0,j}^{(1)}=n_{j}^{eq}e^{\sum_{y}\zeta_{jy}\mu_{y}^{(1)}}. \tag{25}\]
Suppose that the cumulants up to the \((n-1)\)-th order are already determined. Then from Eq. (23) the equation for \(\kappa^{(n)}\) reads
\[0 =\sum_{\alpha}k_{\alpha}\Bigg{\{}\Big{(}e^{\vec{S}_{\alpha}\cdot \vec{x}}-1\Big{)}\prod_{j}\left(\frac{1}{n_{j}^{eq}}\frac{\partial K_{0}(\vec{x })}{\partial x_{j}}\right)^{p_{\alpha,j}}+\Big{(}e^{-\vec{S}_{\alpha}\cdot\vec {x}}-1\Big{)}\prod_{j}\left(\frac{1}{n_{j}^{eq}}\frac{\partial K_{0}(\vec{x})} {\partial x_{j}}\right)^{q_{\alpha,j}}\Bigg{\}}\] \[=\sum_{j_{1}j_{2}\cdots j_{n}}\frac{1}{n!}A_{j_{1}\cdots j_{n}} \Big{(}\kappa_{0}^{(1)},\kappa_{0}^{(2)},\cdots,\kappa_{0}^{(n-1)}\Big{)}x_{j_ {1}}x_{j_{2}}\cdots x_{j_{n}}\] \[\qquad+\sum_{\alpha}k_{\alpha}\vec{S}_{\alpha}\cdot\vec{x}\prod_ {j}\left\{\frac{1}{n_{j}^{eq}}\Bigg{(}\kappa_{0,j}^{(1)}+\frac{1}{(n-1)!} \sum_{j_{2}\cdots j_{n}}\kappa_{0,jj_{2}\cdots j_{n}}^{(n)}x_{j_{2}}\cdots x_{ j_{n}}\Bigg{)}\Bigg{\}}^{p_{\alpha,j}}\right\}^{q_{\alpha,j}}+\mathscr{O}\big{(}x^{n+1} \big{)}\] \[=\sum_{j_{1}j_{2}\cdots j_{n}}\frac{1}{n!}A_{j_{1}\cdots j_{n}} \Big{(}\kappa_{0}^{(1)},\kappa_{0}^{(2)},\cdots,\kappa_{0}^{(n-1)}\Big{)}x_{j _{1}}x_{j_{2}}\cdots x_{j_{n}}\] \[\qquad+\sum_{\alpha}k_{\alpha}\vec{S}_{\alpha}\cdot\vec{x}\Bigg{(} \prod_{j}\left(\frac{\kappa_{0,j}^{(1)}}{n_{j}^{eq}}\right)^{p_{\alpha,j}} \Bigg{)}\sum_{kj_{2}\cdots j_{n}}\frac{p_{\alpha,k}\kappa_{0,kj_{2}\cdots j_{n }}^{(n)}}{(n-1)!\kappa_{0,k}^{(1)}}x_{j_{2}}\cdots x_{j_{n}}\] \[\qquad-\sum_{\alpha}k_{\alpha}\vec{S}_{\alpha}\cdot\vec{x}\Bigg{(} \prod_{j}\left(\frac{\kappa_{0,j}^{(1)}}{n_{j}^{eq}}\right)^{q_{\alpha,j}} \Bigg{)}\sum_{kj_{2}\cdots j_{n}}\frac{q_{\alpha,k}\kappa_{0,kj_{2}\cdots j_{n }}^{(n)}}{(n-1)!\kappa_{0,k}^{(1)}}x_{j_{2}}\cdots x_{j_{n}}+\mathscr{O}\big{(} x^{n+1}\big{)}\] \[=\sum_{j_{1}j_{2}\cdots j_{n}}\frac{1}{n!}A_{j_{1}\cdots j_{n}} \Big{(}\kappa_{0}^{(1)},\kappa_{0}^{(2)},\cdots,\kappa_{0}^{(n-1)}\Big{)}x_{ j_{1}}x_{j_{2}}\cdots x_{j_{n}}\] \[\qquad+\sum_{\alpha}\sum_{\ell}S_{\alpha,\ell}x_{\ell}F_{\alpha}^ {+}(\kappa_{0}^{(1)})\sum_{kj_{2}\cdots j_{n}}\frac{p_{\alpha,k}\kappa_{0,kj_ {2}\cdots j_{n}}^{(n)}}{(n-1)!\kappa_{0,k}^{(1)}}x_{j_{2}}\cdots x_{j_{n}}\] \[\qquad-\sum_{\alpha}\sum_{\ell}S_{\alpha,\ell}x_{\ell}F_{\alpha}^ {-}(\kappa_{0}^{(1)})\sum_{kj_{2}\cdots j_{n}}\frac{q_{\alpha,k}\kappa_{0,kj_ {2}\cdots j_{n}}^{(n)}}{(n-1)!\kappa_{0,k}^{(1)}}x_{j_{2}}\cdots x_{j_{n}}+ \mathscr{O}\big{(}x^{n+1}\big{)}\] \[=\sum_{j_{1}j_{2}\cdots j_{n}}\frac{1}{n!}A_{j_{1}\cdots j_{n}} \Big{(}\kappa_{0}^{(1)},\kappa_{0}^{(2)},\cdots,\kappa_{0}^{(n-1)}\Big{)}x_{ j_{1}}x_{j_{2}}\cdots x_{j_{n}}\] \[\qquad-\sum_{j_{1}j_{2}\cdots j_{n}}\frac{1}{(n-1)!}\sum_{\alpha }F_{\alpha}^{+}(\kappa_{0}^{(1)})\sum_{k}\frac{\kappa_{0,kj_{2}\cdots j_{n}}^{ (n)}S_{\alpha,k}S_{\alpha,j_{1}}}{\kappa_{0,k}^{(1)}}x_{j_{1}}x_{j_{2}}\cdots x _{j_{n}}+\mathscr{O}\big{(}x^{n+1}\big{)}, \tag{26}\]
where
\[F_{\alpha}^{+}\Big{(}\kappa_{0}^{(1)}\Big{)} \coloneqq k_{\alpha}\prod_{j}\Bigg{(}\frac{\kappa_{0,j}^{(1)}}{n_{j }^{eq}}\Bigg{)}^{p_{\alpha,j}}, \tag{27}\] \[F_{\alpha}^{-}\Big{(}\kappa_{0}^{(1)}\Big{)} \coloneqq k_{\alpha}\prod_{j}\Bigg{(}\frac{\kappa_{0,j}^{(1)}}{n_{j }^{eq}}\Bigg{)}^{q_{\alpha,j}}=F_{\alpha}^{+}\Big{(}\kappa_{0}^{(1)}\Big{)} \tag{28}\]
and the tensor \(A\) is a function of \(\kappa_{0}^{(1)},\kappa_{0}^{(2)},\cdots,\kappa_{0}^{(n-1)}\). As a result, \(n\)-th cumulant is written as
\[\kappa_{0,j_{1}j_{2}\cdots j_{n}}^{(n)}=\sum_{y_{1}\cdots y_{n}} \Bigg{(}\prod_{k=1}^{n}\kappa_{0,j_{k}}^{(1)}\zeta_{j_{k}y_{k}}\Bigg{)}\mu_{y_ {1}\cdots y_{n}}^{(n)}+\tilde{\kappa}_{0,j_{1}j_{2}\cdots j_{n}}^{(n)}, \tag{29}\]
where \(\tilde{\kappa}^{(n)}\) satisfies Eq. (26) and the first term vanishes if it is contracted with the stoichiometric tensor. The degrees of freedom of the tensor \(\mu^{(n)}\) result from the element conservation.
### Perturbation theory
We first consider a naive perturbation expansion
\[K(\vec{x},t)=K_{0}(\vec{x})+\epsilon K_{1}(\vec{x},t)+\cdots. \tag{30}\]
Note that the \(\mathscr{O}(\epsilon^{0})\) equation is already solved by the initial condition \(K(\vec{x},t=0)=K_{0}(\vec{x})\). Then the \(\mathscr{O}(\epsilon^{1})\) equation is given by
\[\frac{\partial K_{1}(\vec{x},t)}{\partial t}=\left.\nabla_{n}H_{0}(\vec{n}, \vec{x})\right|_{\vec{n}=\,\partial K_{0}(\vec{x})/\partial\vec{x}}\cdot\frac{ \partial K_{1}(\vec{x},t)}{\partial\vec{x}}+H_{1}\bigg{(}\frac{\partial K_{0} (\vec{x})}{\partial\vec{x}},\vec{x}\bigg{)}. \tag{31}\]
This equation has secular divergence for \(\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\,\prime}\) as follows:
\[\frac{\partial K_{1}(\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\,\prime },t)}{\partial t}=H_{1}\Bigg{(}\frac{\partial K_{0}(\vec{x}=\hat{\zeta}\cdot \vec{\mu}^{\,\prime})}{\partial\vec{x}},\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\, \prime}\Bigg{)}\] \[\Rightarrow K_{1}(\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\,\prime},t)=tH_{1} \Bigg{(}\frac{\partial K_{0}(\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\,\prime})}{ \partial\vec{x}},\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\,\prime}\Bigg{)}, \tag{32}\]
where we assume for simplicity that \(H_{1}\) has no explicit time dependence. To remove this divergence we introduce another time variable \(\tau=\epsilon t\) and write
\[K_{0}(\vec{x})\to K_{0}(\vec{x},\tau)=\sum_{j}n_{j}^{eq}e^{ \sum_{y}\zeta_{jy}\mu_{y}^{(1)}(\tau)}x_{j}\] \[+\sum_{n=2}^{\infty}\frac{1}{n!}\sum_{j_{1}\cdots j_{n}}x_{j_{1}} \cdots x_{j_{n}}\Bigg{\{}\!\sum_{y_{1}\cdots y_{n}}\Bigg{(}\!\prod_{k=1}^{n}n _{j_{k}}^{eq}e^{\sum_{y}\zeta_{j_{k}y}\mu_{y}^{(1)}(\tau)}\zeta_{j_{k}y_{k}} \Bigg{)}\!\mu_{y_{1}\cdots y_{n}}^{(n)}(\tau)+\tilde{\kappa}_{0,j_{1}j_{2} \cdots j_{n}}^{(n)}\Bigg{\}}. \tag{33}\]
Then we have
\[\frac{\partial K_{0}(\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\,\prime},\tau)}{ \partial\tau}=H_{1}\Bigg{(}\frac{\partial K_{0}(\vec{x}=\hat{\zeta}\cdot\vec{ \mu}^{\,\prime},\tau)}{\partial\vec{x}},\vec{x}=\hat{\zeta}\cdot\vec{\mu}^{\, \prime}\Bigg{)}. \tag{34}\]
This equation determines the \(\tau\)-dependence of all the tensors \(\mu^{(n)}(\tau)\). Note that we can set \(\mu^{(n)}(\tau=0)=0\) without loss of generality. Furthermore, assuming that the conserved quantities do not fluctuate in the initial state \(K_{0}(\vec{x},\tau=0)\), we can ignore \(\tilde{\kappa}\) because these initial cumulants
vanish if they are contracted with \(\hat{\zeta}\). As a result, we can write
\[\frac{\partial K_{0}(\vec{x}=\hat{\zeta}\cdot\vec{\mu}\,^{\prime}, \tau)}{\partial\tau} =H_{1}\Bigg{(}\frac{\partial K_{0}(\vec{x}=\hat{\zeta}\cdot\vec{ \mu}\,^{\prime},\tau)}{\partial\vec{x}},\vec{x}=\hat{\zeta}\cdot\vec{\mu}\,^{ \prime}\Bigg{)}, \tag{35}\] \[K_{0}(\vec{x}=\hat{\zeta}\cdot\vec{\mu}\,^{\prime},\tau) =\vec{c}(\tau)\cdot\vec{\mu}\,^{\prime}+\sum_{n=2}^{\infty}\frac{ 1}{n!}\Big{(}\hat{M}(\tau)\cdot\vec{\mu}\,^{\prime}\Big{)}^{\otimes n}\bullet^{ n}\hat{\mu}^{(n)}(\tau),\] (36) \[\frac{\partial}{\partial x_{j}}K_{0}(\vec{x}=\hat{\zeta}\cdot \vec{\mu}\,^{\prime},\tau) =\kappa_{j}^{(1)}(\tau)+\sum_{n=2}^{\infty}\frac{1}{(n-1)!}\kappa _{j}^{(1)}(\tau)\bigg{\{}\Big{(}\hat{M}(\tau)\cdot\vec{\mu}\,^{\prime}\Big{)} ^{\otimes n-1}\bullet^{n-1}\hat{\mu}^{(n)}(\tau)\cdot\hat{\zeta}^{T}\bigg{\}} _{j},\] (37) \[\kappa_{j}^{(1)}(\tau) =n_{j}^{eq}e^{\sum_{y}\zeta_{jy}\mu_{y}^{(1)}(\tau)},\] (38) \[c_{y}(\tau) =\sum_{j}\kappa_{j}^{(1)}(\tau)\zeta_{jy},\] (39) \[M_{yz}(\tau) =\sum_{j}\kappa_{j}^{(1)}(\tau)\zeta_{jy}\zeta_{jz}, \tag{40}\]
where for a vector \(\vec{v}\), the symbol \(\vec{v}^{\otimes n}\) means \(n\)-times tensor product
\[\big{(}\vec{v}^{\otimes n}\big{)}_{x_{1}\cdots x_{n}}=\prod_{k=1}^{n}v_{x_{k}} \tag{41}\]
and for \(n\)-rank tensors \(\hat{A}\) and \(\hat{B}\), the symbol \(\hat{A}\bullet^{n}\hat{B}\) means the full contraction
\[\hat{A}\bullet^{n}\hat{B}=\sum_{x_{1}\cdots x_{n}}A_{x_{1}\cdots x_{n}}B_{x_{ 1}\cdots x_{n}}. \tag{42}\]
The equation for \(\mu^{(1)}\) is again the highly nonlinear slow dynamics equation. The equations for the higher-order tensors \(\mu^{(n)}\) are instead linear. Moreover, the dependence on the CRN is through the same matrix \(M\) that appeared in the slow dynamics equation.
### Large deviation function
To connect these results with those for \(\rho=\log\mathbb{P}\), the cumulant generating function obtained above can be decomposed as
\[K_{0}(\vec{x},\tau) =\tilde{K}(\vec{x})+\delta K(\vec{x},\tau), \tag{43}\] \[\tilde{K}(\vec{x}) \coloneqq\sum_{j}n_{j}^{eq}x_{j}+\sum_{n=2}^{\infty}\frac{1}{n!} \sum_{j_{1}\cdots j_{n}}\tilde{\kappa}_{0,j_{1}\cdots j_{n}}^{(n)}x_{j_{1}} \cdots x_{j_{n}},\] (44) \[\delta K(\vec{x},\tau) \coloneqq\sum_{j}n_{j}^{eq}\Big{(}e^{\sum_{y}\zeta_{jy}\mu_{y}^{(1 )}(\tau)}-1\Big{)}x_{j}\] \[\qquad+\sum_{n=2}^{\infty}\frac{1}{n!}\sum_{j_{1}\cdots j_{n}} \sum_{y_{1}\cdots y_{n}}\left(\prod_{k=1}^{n}n_{j_{k}}^{eq}e^{\sum_{y}\zeta_{ j_{k}}\mu_{y}^{(1)}(\tau)}x_{j_{k}}\zeta_{j_{k}y_{k}}\right)\!\mu_{y_{1} \cdots y_{n}}^{(n)}(\tau). \tag{45}\]
Note that \(\delta K(\vec{x},\tau=0)=0\). Then the large deviation function of this cumulant generating function is given by
\[\rho_{0}(\vec{n},\tau) =\log\int\mathrm{d}\vec{x}\,e^{-i\vec{x}\cdot\vec{n}}e^{\tilde{K}( \vec{x})}e^{\delta K(\vec{x},\tau)}\] \[=\log\sum_{\vec{n}^{\prime}}\tilde{P}(\vec{n}^{\prime})\delta P( \vec{n}-\vec{n}^{\prime},\tau), \tag{46}\]
where
\[\tilde{P}(\vec{n}) =\int\mathrm{d}\vec{x}\,e^{-i\vec{x}\cdot\vec{n}}e^{\tilde{K}( \vec{x})}, \tag{47}\] \[\delta P(\vec{n},\tau) =\int\mathrm{d}\vec{x}\,e^{-i\vec{x}\cdot\vec{n}}e^{\delta K( \vec{x},\tau)}. \tag{48}\]
Note that \(\delta P(\vec{n},\tau=0)=\delta_{\vec{n},0}\).
Thus the approximation in the manuscript corresponds to
\[\delta P(\vec{n},\tau)\approx\delta_{\vec{n},\vec{\kappa}^{(1)}(\tau)}, \tag{49}\]
which leads to
\[\rho_{0}(\vec{n},\tau)\approx\log\tilde{P}[\vec{n}-\vec{\kappa}^{(1)}(\tau)]. \tag{50}\]
This is a Poisson distribution if the initial distribution is Poissonian.
## 4 ABC model
The ABC model is defined by
\[A \stackrel{{ k_{1}}}{{\rightleftharpoons}}B, \tag{51}\] \[A+B \stackrel{{ k_{2}}}{{\rightleftharpoons}}2B,\] (52) \[B \stackrel{{ k_{3}}}{{\rightleftharpoons}}C \tag{53}\]
with two reservoirs
\[A \stackrel{{ r_{A}}}{{\rightleftharpoons}}A^{ \mathscr{C}}, \tag{54}\] \[C \stackrel{{ r_{C}}}{{\rightleftharpoons}}C^{ \mathscr{C}}. \tag{55}\]
The stoichiometric matrix for the core is
\[\mathbb{S}^{0}=\begin{pmatrix}-1&-1&0\\ 1&1&-1\\ 0&0&1\end{pmatrix}, \tag{57}\]
whose co-kernel is spanned by
\[\vec{c}_{1}=\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}, \tag{58}\]
giving the only conserved quantity of the closed system. Then \(\zeta\) is the \(3\times 1\) matrix with elements \(\vec{c}_{1}\). The slow dynamics equation is
\[(n_{A}^{eq}+n_{B}^{eq}+n_{C}^{eq})e^{\eta}\partial_{\tau}\eta=r_{A}(n_{A}^{ \mathscr{C}}-n_{A}^{eq}e^{\eta})+r_{C}(n_{C}^{\mathscr{C}}-n_{C}^{eq}e^{\eta}) \tag{59}\]
Under constant forcing this is easily solved
\[e^{\eta(\tau)}=e^{\eta(0)}e^{-a\tau}+\frac{r_{A}n_{A}^{\mathscr{C}}+r_{C}n_{C}^{\mathscr{C}}}{r_{A}n_{A}^{eq}+r_{C}n_{C}^{eq}}(1-e^{-a\tau}), \tag{60}\]
where \(a=(r_{A}n_{A}^{eq}+r_{C}n_{C}^{eq})/(n_{A}^{eq}+n_{B}^{eq}+n_{C}^{eq})\).
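This solution is easy to verify symbolically; a short sympy sketch (ours):

```python
import sympy as sp

tau, eta0 = sp.symbols("tau eta0")
rA, rC, nA, nB, nC, zA, zC = sp.symbols("r_A r_C n_A n_B n_C z_A z_C", positive=True)

a = (rA * nA + rC * nC) / (nA + nB + nC)
u = sp.exp(eta0 - a * tau) + (rA * zA + rC * zC) / (rA * nA + rC * nC) * (1 - sp.exp(-a * tau))

lhs = (nA + nB + nC) * sp.diff(u, tau)          # (n_A^eq + n_B^eq + n_C^eq) e^eta d(eta)/d(tau)
rhs = rA * (zA - nA * u) + rC * (zC - nC * u)   # right-hand side of Eq. (59), with u = e^eta
print(sp.simplify(lhs - rhs))                   # prints 0
```

Here \(n_{A},n_{B},n_{C}\) stand for the equilibrium mole numbers and \(z_{A},z_{C}\) for the reservoir concentrations.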
## 5 Methane combustion
We now consider a version of methane combustion:
\[\mathrm{CH}_{4}+3\,\mathrm{CO}_{2} \stackrel{{ k_{1}}}{{\rightleftharpoons}}2\,\mathrm{H}_{2}\mathrm{O}+4\,\mathrm{CO}, \tag{61}\] \[\mathrm{O}_{2}+2\,\mathrm{CO} \stackrel{{ k_{2}}}{{\rightleftharpoons}}2\,\mathrm{CO}_{2},\] (62) \[\mathrm{H}_{2}+\mathrm{CO}_{2} \stackrel{{ k_{3}}}{{\rightleftharpoons}}\mathrm{H}_{2}\mathrm{O}+\mathrm{CO},\] (63) \[\mathrm{H}_{2}\mathrm{O} \stackrel{{ r_{3}}}{{\rightleftharpoons}}\mathrm{H}_{2}\mathrm{O}^{\mathscr{C}},\] (64) \[\mathrm{O}_{2} \stackrel{{ r_{5}}}{{\rightleftharpoons}}\mathrm{O}_{2}\,^{\mathscr{C}},\] (65) \[\mathrm{H}_{2} \stackrel{{ r_{6}}}{{\rightleftharpoons}}\mathrm{H}_{2}\,^{\mathscr{C}}. \tag{66}\]
We label these chemical species as follows:
\[1:\mathrm{CH}_{4}, \tag{67}\] \[2:\mathrm{CO}_{2},\] (68) \[3:\mathrm{H}_{2}\mathrm{O},\] (69) \[4:\mathrm{CO},\] (70) \[5:\mathrm{O}_{2},\] (71) \[6:\mathrm{H}_{2}. \tag{72}\]
The stoichiometric matrix is
\[\mathbb{S}^{0}=\begin{pmatrix}-1&0&0\\ -3&2&-1\\ 2&0&1\\ 4&-2&1\\ 0&-1&0\\ 0&0&-1\end{pmatrix} \tag{73}\]
with left kernel
\[\mathscr{K}=\mathrm{span}\,\{\vec{c}_{1},\vec{c}_{2},\vec{c}_{3}\}, \tag{74}\]
where
\[\vec{c}_{1}=\begin{pmatrix}1\\ 1\\ 0\\ 1\\ 0\\ 0\end{pmatrix},\ \vec{c}_{2}=\begin{pmatrix}4\\ 0\\ 2\\ 0\\ 0\\ 2\end{pmatrix},\ \vec{c}_{3}=\begin{pmatrix}0\\ 2\\ 1\\ 1\\ 2\\ 0\end{pmatrix}. \tag{75}\]
These vectors correspond to the numbers of C, H, and O, respectively.
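Both statements are easy to check numerically (our sketch):

```python
import numpy as np

S0 = np.array([[-1, 0, 0], [-3, 2, -1], [2, 0, 1],
               [4, -2, 1], [0, -1, 0], [0, 0, -1]])
zeta = np.array([[1, 4, 0], [1, 0, 2], [0, 2, 1],   # rows: species; columns: C, H, O
                 [1, 0, 1], [0, 0, 2], [0, 2, 0]])

print(np.all(S0.T @ zeta == 0))                     # True: each column is conserved
print(np.linalg.matrix_rank(zeta),                  # 3 independent conservation laws ...
      6 - np.linalg.matrix_rank(S0))                # ... which exhaust coker(S^0)
```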
## 6 Early Earth
We consider a simple model of the early Earth, with 13 species
\[\mathrm{H}_{2}\mathrm{O},\mathrm{CH}_{2}\mathrm{O},\mathrm{NH}_{3 },\mathrm{H},\mathrm{CHO},\mathrm{H}_{2}\mathrm{N},\mathrm{CH}_{3}\mathrm{O}^{( i)},\mathrm{CH}_{3}\mathrm{O}^{(ii)}, \tag{76}\] \[\mathrm{CH}_{4}\mathrm{O},\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_ {2},\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2},\mathrm{CH}_{4}\mathrm{NO}, \mathrm{CH}_{3}\mathrm{NO}, \tag{77}\]
(including 2 isomers of CH\({}_{3}\)O), and 40 reactions:
\[\begin{array}{c}\mathrm{CHO}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons 2\,\mathrm{CH}_{2}\mathrm{O}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}\\
2\,\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}\rightleftharpoons\mathrm{CHO}+\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\\
\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}+\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\\
\mathrm{H}_{2}\mathrm{N}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}+\mathrm{NH}_{3}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}+\mathrm{NH}_{3}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{CH}_{4}\mathrm{NO}\\
\mathrm{CHO}+\mathrm{H}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}\\
\mathrm{H}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{NH}_{3}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{H}\rightleftharpoons\mathrm{CH}_{3}\mathrm{O}^{(i)}\\
\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}+\mathrm{H}\rightleftharpoons\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\\
\mathrm{CHO}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{CH}_{3}\mathrm{NO}\\
\mathrm{CH}_{3}\mathrm{NO}+\mathrm{H}\rightleftharpoons\mathrm{CH}_{4}\mathrm{NO}\\
\mathrm{CHO}+\mathrm{CH}_{4}\mathrm{NO}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}+\mathrm{CH}_{3}\mathrm{NO}\\
\mathrm{CH}_{4}\mathrm{NO}+\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}\rightleftharpoons\mathrm{CH}_{3}\mathrm{NO}+\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\\
\mathrm{CH}_{4}\mathrm{NO}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{CH}_{3}\mathrm{NO}+\mathrm{NH}_{3}\\
\mathrm{CHO}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\rightleftharpoons 2\,\mathrm{CH}_{2}\mathrm{O}\\
\mathrm{H}_{2}\mathrm{N}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}+\mathrm{NH}_{3}\\
\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons\mathrm{CH}_{3}\mathrm{O}^{(ii)}\\
\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}+\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{H}\rightleftharpoons\mathrm{CH}_{3}\mathrm{O}^{(ii)}\\
\mathrm{CH}_{4}\mathrm{O}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{NH}_{3}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\\
\mathrm{CH}_{4}\mathrm{O}+\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{O}_{2}\rightleftharpoons\mathrm{C}_{2}\mathrm{H}_{6}\mathrm{O}_{2}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\\
\mathrm{CH}_{4}\mathrm{NO}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\rightleftharpoons\mathrm{CH}_{4}\mathrm{O}+\mathrm{CH}_{3}\mathrm{NO}\\
\mathrm{CH}_{2}\mathrm{O}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\rightleftharpoons\mathrm{CHO}+\mathrm{CH}_{4}\mathrm{O}\\
\mathrm{CH}_{4}\mathrm{O}+\mathrm{H}_{2}\mathrm{N}\rightleftharpoons\mathrm{NH}_{3}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\\
\mathrm{CH}_{3}\mathrm{O}^{(i)}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\rightleftharpoons\mathrm{CH}_{2}\mathrm{O}+\mathrm{CH}_{4}\mathrm{O}\\
\mathrm{CH}_{4}\mathrm{O}+\mathrm{CH}_{3}\mathrm{O}^{(ii)}\rightleftharpoons\mathrm{CH}_{4}\mathrm{O}+\mathrm{CH}_{3}\mathrm{O}^{(i)}\end{array}\]
When coupled to reservoirs of \(\mathrm{CH}_{2}\mathrm{O},\mathrm{NH}_{3},\mathrm{H},\mathrm{CHO},\mathrm{H}_ {2}\mathrm{N}\), the system grows exponentially on the slow time scale \(\tau\), as shown in Fig. 1 here. |
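As a numerical illustration of such reservoir coupling, one can integrate mass-action kinetics for a small subset of the reactions above. The sketch below assumes unit rate constants and chemostats three of the reservoir species; it is a toy setup for exploring the growth claim, not the kinetics studied in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A handful of the reversible reactions above as (reactants, products).
# All rate constants are assumed equal to 1; this is NOT the paper's kinetics.
reactions = [
    (["CH2O", "H"],        ["CH3O_i"]),         # CH2O + H <=> CH3O(i)
    (["CHO", "CH3O_i"],    ["CH2O", "CH2O"]),   # CHO + CH3O(i) <=> 2 CH2O
    (["CH2O", "CH3O_i"],   ["C2H5O2"]),         # CH2O + CH3O(i) <=> C2H5O2
    (["CH3O_i", "CH3O_i"], ["C2H6O2"]),         # 2 CH3O(i) <=> C2H6O2
]
reservoirs = {"CH2O": 1.0, "H": 1.0, "CHO": 1.0}  # chemostatted species
species = ["CH3O_i", "C2H5O2", "C2H6O2"]          # dynamical species

def conc(name, y):
    return reservoirs[name] if name in reservoirs else y[species.index(name)]

def rhs(t, y):
    dy = np.zeros_like(y)
    for reac, prod in reactions:
        # net mass-action flux: forward minus reverse (unit rate constants)
        flux = np.prod([conc(s, y) for s in reac]) - np.prod([conc(s, y) for s in prod])
        for s in reac:
            if s in species:
                dy[species.index(s)] -= flux
        for s in prod:
            if s in species:
                dy[species.index(s)] += flux
    return dy

sol = solve_ivp(rhs, (0.0, 10.0), [1e-6, 0.0, 0.0])
print(sol.y[:, -1])   # inspect late-time concentrations of the dynamical species
```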
2305.17434 | Demonstration of geometric diabatic control of quantum states | Geometric effects can play a pivotal role in streamlining quantum
manipulation. We demonstrate a geometric diabatic control, that is, perfect
tunneling between spin states in a diamond by a quadratic sweep of a driving
field. The field sweep speed for the perfect tunneling is determined by the
geometric amplitude factor and can be tuned arbitrarily. Our results are
obtained by testing a quadratic version of Berry's twisted Landau-Zener model.
This geometric tuning is robust over a wide parameter range. Our work provides
a basis for quantum control in various systems, including condensed matter
physics, quantum computation, and nuclear magnetic resonance. | Kento Sasaki, Yuki Nakamura, Tokuyuki Teraji, Takashi Oka, Kensuke Kobayashi | 2023-05-27T10:07:46Z | http://arxiv.org/abs/2305.17434v1 | # Demonstration of geometric diabatic control of quantum states
###### Abstract
Geometric effects can play a pivotal role in streamlining quantum manipulation. We demonstrate a geometric diabatic control, that is, perfect tunneling between spin states in a diamond by a quadratic sweep of a driving field. The field sweep speed for the perfect tunneling is determined by the geometric amplitude factor and can be tuned arbitrarily. Our results are obtained by testing a quadratic version of Berry's twisted Landau-Zener model. This geometric tuning is robust over a wide parameter range. Our work provides a basis for quantum control in various systems, including condensed matter physics, quantum computation, and nuclear magnetic resonance.
Tunneling is an exotic yet ubiquitous quantum phenomenon. To control quantum states, a common strategy known as adiabatic control avoids it by moving a large energy barrier slowly. Another ubiquitous feature of quantum physics is geometric effects [1]. A well-known example is the geometric phase [2] that a particle acquires during an adiabatic motion. However, geometric effects are not restricted by adiabaticity. Even during diabatic tunneling events, geometric effects take place and lead to grave consequences in the dynamics.
The simplest system that demonstrates the marriage of tunneling and geometric effects is the twisted Landau-Zener (TLZ) model introduced by Berry, which describes a particle in two quantum states driven by an external field [3]. In the original untwisted Landau-Zener (LZ) model [4; 5; 6; 7], when two energy levels change in time, quantum tunneling across an energy gap \(\Delta\) occurs depending on the speed of the change [Fig. 1(a)] [8; 9]. The tunneling probability \(P\) depends on the sweep speed \(F\) [Fig. 1(c)]; \(P=0\) in the adiabatic limit (\(F\to 0\)), while \(P\) is unity in the diabatic limit (\(|F|\rightarrow\infty\)). Such speed-dependent tunneling has been demonstrated in various systems [10; 11; 12; 13; 14; 15; 16]. In the TLZ model, the driving field has a "twist" and the adiabatic to diabatic transition is geometrically modulated [17; 18]. Recently, the importance of the geometric effects was recognized not only in equilibrium [19] but also in nonequilibrium [20; 21; 22; 23]. The TLZ model, which possesses a new nonequilibrium tuning knob on top of the LZ model, should be widely applied to materials engineering [24; 25] and quantum controls [26; 27; 28; 29]. Despite a few experiments [30; 31; 32; 33], the opportunity to utilize such geometric tuning for quantum control has long been overlooked, and its robustness remains unexplored.
Here, using an electron spin in a diamond, we realize and test an ideal TLZ model with a quadratic twist [24] that manifests perfect tunneling and nonreciprocity over a wide range of gap and twist parameters. We measure the tunneling probabilities with high precision and obtain an average of 95.5 % under the condition where perfect tunneling occurs. The condition of perfect tunneling can be smoothly tuned by adjusting the curvature of the quadratic sweep. These geometrical effects are robust beyond the framework of the existing theory [24]. This geometric diabatic control is ubiquitous and can be applied to various quantum systems.
As a geometric diabatic control, we aim to realize perfect tunneling (\(P=1\)) and change the state at the same time. The Hamiltonian for the TLZ model in the natural units is defined as [24],
\[\hat{H}=\mathbf{b}\cdot\hat{\mathbf{\sigma}}=m\hat{\sigma}_{x}+\nu q\hat{\sigma}_{y}+ \frac{1}{2}\kappa_{\parallel}\nu^{2}q^{2}\hat{\sigma}_{z}, \tag{1}\]
where \(\hat{\sigma}_{j}\) (\(j=x\), \(y\), and \(z\)) is the Pauli operator, and \(\mathbf{b}=(b_{x},b_{y},b_{z})\equiv(m,\nu q,\frac{1}{2}\kappa_{\parallel}\nu^{2}q^{2})\) is a driving field. We change the parameter \(q\) in time as \(q=-F(t-T/2)\) between time \(t=0\) and \(t=T\) with a dimensionless sweep speed \(F\). This is a quadratic version of the original TLZ model [3]; \(\Delta=2m\) is the gap, and \(2\nu\) (\(>0\)) is the energy slope. Figure 1(b) depicts the initial and final fields as a red solid arrow (\(t=0\)) and a red dotted arrow (\(t=T\)), respectively. The \(b_{z}\) component, which depends quadratically on time, induces a "twist" of the field. This twist appears in the trajectory of the field [the solid red line in Fig. 1(b)] and its strength is determined by the geodesic curvature \(\kappa_{\parallel}\). Situations in which the spin and driving field are always kept parallel or antiparallel are adiabatic; situations that deviate from this are diabatic. The diabatic geometric effect is captured by the geometric amplitude factor [3] (also known as the quantum geometric potential [17] or shift vector [25]) \(R_{12}(q)=-A_{11}(q)+A_{22}(q)+\partial_{q}\arg A_{12}(q)\), where the Berry connection is defined by \(A_{nl}(q)=\langle n(q)|\,i\partial_{q}\,|l(q)\rangle\) using the instantaneous eigenstate \(|n(q)\rangle\) satisfying \(\hat{H}(q)\,|n(q)\rangle=E_{n}(q)\,|n(q)\rangle\). The tunneling probability \(P\) from \(|1\rangle\) to \(|2\rangle\) is given by [24],
\[P\approx\exp\left[-\frac{\pi}{4\nu|F|}\left(\Delta+\frac{FR_{12}(0)}{2}\right) ^{2}\right], \tag{2}\]
where \(R_{12}(0)=\nu\kappa_{\parallel}\) holds in the present model. Equation (2), referred to as "TLZ formula" in this work, is derived using a twisting coordinate transformation [24] and it recovers the LZ formula when \(\kappa_{\parallel}=0\) [Fig. 1(c)]. We stress that the TLZ formula is approximate in contrast to the LZ formula which is asymptotically exact. Figure 1(d) shows the behavior of the transition described by the TLZ formula when \(\kappa_{\parallel}>0\). The probability \(P\) is nonreciprocal under sign reversal of the speed \(F\), which corresponds to the field sweep direction [25]. In Eq. (2), the gap \(\Delta\) in the LZ model is effectively shifted to \(\Delta+\frac{FR_{12}(0)}{2}\) by the geometric amplitude factor [17; 24]. In particular, when the speed is,
\[F_{\text{PT}}=-2\Delta/R_{12}(0) \tag{3}\]
the effective gap closes and the tunneling probability saturates \(P\approx 1\). We call this behavior "perfect tunneling (PT)" [24], and the speed at which \(P\) is maximized is referred to as the "PT condition". In contrast to the LZ case, the quantum state changes during the diabatic transition from the initial state \(|1(q=FT/2)\rangle\) to the final state \(|2(q=-FT/2)\rangle\), and thus allows us to realize geometric diabatic control of the quantum states. Our main purpose is to extensively test the behaviors predicted by the TLZ formula [Eq. (2)].
We realize the TLZ transition with an electron spin of a single nitrogen-vacancy (NV) center in a diamond [12; 13; 34]. We use the NV center's \(m_{S}=0\) and \(-1\) states as a
Figure 1: Comparison of the Landau-Zener (LZ) transition and the twisted Landau-Zener (TLZ) transition. (a) Tunneling at level anti-crossing. (b) Sweeping of the driving field. The fields at \(t=0\) and \(t=T\) for the LZ (TLZ) model are indicated by solid and dashed blue (red) arrows, respectively. Predicted (c) LZ transition probability and (d) TLZ transition probability are plotted as a function of the speed \(F\). Dynamics of the field (black arrow) and spin (blue arrow) in the LZ model are plotted in the (e) adiabatic and (f) diabatic limits [see (b) and (c)]. Dynamics of the field (black arrow) and spin (red arrow) in the TLZ model at (g) \(F=F_{\text{PT}}\) and (h) \(F=-F_{\text{PT}}\) [see (b) and (d)]. The solid black line indicates the field amplitude in the \(xy\) plane in (e) and (f) and in the \(yz\) plane in (g) and (h). The origin of each arrow corresponds to the field amplitude at each instant.
Figure 2: Demonstration of the TLZ transition at \(m=0.5\) MHz. (a) Measurement sequence. Laser and microwave pulses are used for initialization and readout of the NV center. (b) Dependence of tunneling probability \(P\) on speed \(F\). The squares and circles indicate experimental results, black solid lines indicate the TLZ formula [Eq.(2)], and vertical black dotted lines indicate \(F=F_{\text{PT}}\). The blue dashed lines indicate the LZ formula [TLZ with \(R_{12}(0)=0\)]. (c) Tunneling probability at \(F=F_{\text{PT}}\) in the range of \(\kappa_{\parallel}=0\)–\(1\)\(\mu\)s. The error bars indicate 65 % confidence intervals estimated from the shot noise of the PL measurement.
two-level system and manipulate it with microwave pulses. In a suitable rotating frame (see Supplemental Material [35]), the Hamiltonian is expressed as (\(\hat{S}_{i}\) denotes the \(S=\frac{1}{2}\) spin operators)
\[\hat{H}_{\text{r}}=f_{\text{R}}\left[\cos(\phi_{\text{mw}})\hat{S}_{x}-\sin(\phi_{\text{mw}})\hat{S}_{y}\right]+\frac{df_{\text{det}}}{dt}\hat{S}_{z}, \tag{4}\]
where \(f_{\text{R}}\) is the Rabi frequency corresponding to the microwave field amplitude, \(\phi_{\text{mw}}\) is the microwave phase, and \(f_{\text{det}}\) is the detuning between the resonance frequency and the microwave frequency. We generate a microwave pulse satisfying \(f_{\text{R}}=\sqrt{b_{x}^{2}+b_{y}^{2}}\), \(\phi_{\text{mw}}=-\arctan(b_{y}/b_{x})\), and \(f_{\text{det}}=\int_{0}^{t}b_{z}(t^{\prime})dt^{\prime}\), so that Eq. (4) reproduces the driving field \(\mathbf{b}\) in the TLZ Hamiltonian [Eq. (1)]. This conversion to the \(S=\frac{1}{2}\) system in MKS units corresponds to making the following changes to each parameter: \(m\rightarrow\pi m\), \(\nu\rightarrow\pi\nu\), and \(\kappa_{\parallel}\rightarrow\kappa_{\parallel}/\pi\) (see [35]). We adjust the sweep duration \(T\) considering the coherence time and available microwave parameter ranges. Figure 2(a) shows the measurement sequence. We use green laser pulses and photoluminescence (PL) measurements for spin initialization and readout. We prepare the initial and final states using rectangular microwave pulses after and before the laser pulse to match the instantaneous field direction with the projection direction. The obtained PL intensity is precisely converted to a tunneling probability using reference PL intensities of the \(m_{S}=0\) and \(-1\) states [36].
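The field-to-pulse conversion just described can be sketched as follows. The time grid, parameter values, and the trapezoidal discretization of the detuning integral are illustrative assumptions, not the actual waveform-generation code.

```python
import numpy as np

def microwave_params(t, F, T, m, nu, kappa):
    """Map the driving field b(t) of Eq. (1) onto the pulse parameters of
    Eq. (4): f_R = sqrt(bx^2 + by^2), phi = -arctan(by/bx), and f_det as the
    running integral of bz (trapezoidal rule on the sampled grid)."""
    q = -F * (t - T / 2.0)
    bx = m * np.ones_like(t)
    by = nu * q
    bz = 0.5 * kappa * nu**2 * q**2
    f_R = np.hypot(bx, by)
    phi = -np.arctan2(by, bx)   # bx = m > 0, so arctan2 agrees with arctan
    f_det = np.concatenate([[0.0], np.cumsum(0.5 * (bz[1:] + bz[:-1]) * np.diff(t))])
    return f_R, phi, f_det

t = np.linspace(0.0, 1.0, 1001)   # sweep duration T = 1 (arbitrary units)
f_R, phi, f_det = microwave_params(t, F=-10.0, T=1.0, m=0.5, nu=1.0, kappa=0.2)
```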
We show our experimental results obtained when the gap parameter is fixed as \(m=0.5\) MHz. Without loss of generality, we investigate the probability \(P\) [Eq. (2)] by selecting the energy slope \(\nu\) to \((10\ \mathrm{MHz})^{2}\) and adjusting only the dimensionless speed \(F\). First, we set \(\kappa_{\parallel}=0\ \mu\)s to address the conventional LZ model. The blue circles in Fig. 2(b ii) show the experimental result. The lower the speed (\(F\to 0\)), the lower the transition probability \(P\); the behavior is symmetric between positive and negative speeds. It agrees well with the LZ formula (black solid line) and proves that our system reproduces the LZ model with high accuracy (for more details see [35]).
We then address the TLZ transition when \(\kappa_{\parallel}=+0.2\ \mu\)s shown in Fig. 2(b i). The experimental result (red circles) is asymmetric in \(F\rightarrow-F\) and becomes higher for \(F<0\) than for \(F>0\). The \(P\) reaches maxima in the vicinity of the predicted PT condition (\(F=F_{\text{PT}}\)) indicated by the vertical dashed line. Specifically, as shown in Fig. 2(c), we find \(P=95.5\pm 1.3\ \%\), on average, in a range of \(\kappa_{\parallel}=0.0\)-\(1.0\ \mu\)s. Figure 2(b iii) shows the results when \(\kappa_{\parallel}=-0.2\ \mu\)s. Compared to the \(\kappa_{\parallel}=+0.2\ \mu\)s case [Fig. 2(b i)], it shows behavior that is completely inverted with respect to the speed \(F\). These behaviors are qualitatively different from the LZ transition (blue dashed line) and well reproduced by the TLZ formula without any adjustable parameters (black solid line). These are our central results, proving that the tunneling probability is successfully modulated by the geodesic curvature \(\kappa_{\parallel}\) of the driving field, resulting in perfect tunneling and nonreciprocity. The fact that perfect tunneling, previously possible only in the extreme high-speed limit of the LZ model, is achieved even at finite speeds marks an essential departure in the long history of LZ physics.
Here we give an intuitive picture of the perfect tunneling phenomenon. Figure 1(g) shows the driving field (black arrow) and spin (red arrow) dynamics at \(F=F_{\text{PT}}\). The quadratic sweep produces adiabatic dynamics in the initial stage (\(t\sim 0\)) and diabatic dynamics near the gap minima (\(t\sim T/2\)). Near the gap minima, the \(x\) component of the driving field \(\mathbf{b}\), i.e., the gap itself (\(b_{x}=m\)), causes spin precession and rotates the spin around the \(x\) axis. When the PT condition is fulfilled, this rotation of the spin is synchronized with the counterclockwise twist of the field (also around the \(x\) axis) and the transition to the excited state is achieved smoothly. Thus a spin flip is realized [Fig. 1(g)]. When the sweep direction is reversed (\(F=-F_{\text{PT}}\)), as shown in Fig. 1(h), the clockwise field twist cannot synchronize with the spin precession. This geometric motion near the gap minima increases the effective gap \(\Delta+\frac{FR_{12}(0)}{2}\) and prevents tunneling. More generally, the observed nonreciprocity is analogous to the well-known selective absorption of circularly polarized light, but in the non-perturbative regime.
As described above, the spin flips during the perfect tunneling. In terms of quantum control, a spin flip can also be achieved differently using the Rabi oscillation and the adiabatic control (or its shortcut [37]). The driving field and spin are orthogonal, parallel, and antiparallel in the Rabi oscillation, the adiabatic control, and the TLZ model, respectively. This difference in the restriction of the driving field to the spin direction makes a difference in control speed, robustness, and implementability. Our geometric diabatic control is an effective means of increasing the versatility of quantum control (see [35]).
Next, we study the validity of the TLZ formula [Eq. (2)] when the twist becomes stronger; the higher-order terms ignored in the derivation of the TLZ formula increase and the precession is no longer perfectly synchronized with the quadratic twist. We investigate the tunneling probability obtained at \(m=0.5\) MHz for a curvature range from \(\kappa_{\parallel}=0\ \mu\)s to \(\kappa_{\parallel}=3\ \mu\)s. Figure 3(a iii) shows the experimental result, showing clear nonreciprocal behavior with respect to the speed \(F\). As \(\kappa_{\parallel}\) increases, the PT condition approaches zero. A similar trend is observed in the TLZ formula shown in Fig. 3(a i), indicating that this characteristic is consistent with \(F_{\text{PT}}=-\frac{2\Delta}{R_{12}(0)}\). This result proves that the speed of the quantum control is tunable by the geodesic curvature \(\kappa_{\parallel}\) of the driving field.
For a more quantitative comparison, we show a cross section at \(\kappa_{\parallel}=1.4\ \mu\)s in Fig. 3(b i) [white line in Fig. 3(a iii)]. The experimental result (red circles) exhibits \(P\sim 1\) near \(F_{\text{PT}}=-0.045\) in good agreement with the TLZ formula (black solid line). On the other hand, at large negative speeds \(F<F_{\text{PT}}\), \(P\) decreases almost exponentially in the TLZ formula [24], whereas the change is gradual in the experimental result. This deviation becomes prominent as the gap parameter \(m\) and/or the curvature \(\kappa_{\parallel}\) become larger. The right
panels of Figs. 3(a) and (b) show the corresponding data sets obtained with a larger gap parameter (\(m=2.0\) MHz). The PT condition in the experiment (red circles) shifts to the left from what the TLZ formula (black solid line) predicts [black arrow in Fig. 3(b ii)]. The maximum \(P\) is then slightly suppressed from unity.
We obtain exact solutions by numerical simulations (see [35]) to discuss this deviation. The simulation results are in Figs. 3(a v) and (a vi) and the green dashed lines in Fig. 3(b). They reproduce the experimental results satisfactorily over the entire speed range. The black solid and green dashed lines in Fig. 3(a) show the perfect tunneling conditions obtained by the TLZ formula and the simulation, respectively. The results show that as the gap parameter \(m\) and curvature \(\kappa_{\parallel}\) become larger, the exact PT condition shifts toward the (negative) high speed side. Our precise measurements reveal that the higher-order terms are essential for a quantitative understanding of the TLZ transition.
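Such an exact solution can be sketched by integrating the time-dependent Schrödinger equation for Eq. (1) directly. The parameter values below are illustrative (natural units), and the implementation details are ours rather than those of the supplemental simulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def exact_tlz(F, T, m, nu, kappa):
    """Integrate i d(psi)/dt = H(t) psi for Eq. (1) and return the exact
    tunneling probability |<2(T)|psi(T)>|^2 (no TLZ approximation)."""
    def H(t):
        q = -F * (t - T / 2.0)
        return m * sx + nu * q * sy + 0.5 * kappa * nu**2 * q**2 * sz

    def eigstate(t, which):   # which = 0: lower state |1>, 1: upper state |2>
        return np.linalg.eigh(H(t))[1][:, which]

    def rhs(t, y):
        psi = y[:2] + 1j * y[2:]
        dpsi = -1j * (H(t) @ psi)
        return np.concatenate([dpsi.real, dpsi.imag])

    psi0 = eigstate(0.0, 0)   # start in the lower instantaneous eigenstate
    sol = solve_ivp(rhs, (0.0, T), np.concatenate([psi0.real, psi0.imag]),
                    rtol=1e-9, atol=1e-11)
    psiT = sol.y[:2, -1] + 1j * sol.y[2:, -1]
    return np.abs(np.vdot(eigstate(T, 1), psiT)) ** 2

# F chosen at the PT condition for these parameters, F_PT = -2*Delta/(nu*kappa);
# compare the result with the TLZ-formula prediction.
print(exact_tlz(F=-2.0, T=10.0, m=0.5, nu=1.0, kappa=1.0))
```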
As shown above, we find that nonreciprocity and high tunneling probability at finite speed always persist even when the TLZ formula is invalid. Thus, we conclude that these geometric effects are robust. Introducing a field twist can be a ubiquitous method of adjusting tunneling probabilities at arbitrary speeds, making the present TLZ model an alternative framework for quantum control at various energy scales. When applied to quantum materials, such control induces nontrivial properties such as the nonreciprocity of dc current and photocurrent [24; 25].
In the case of an infinitesimal gap (\(m=0.0\) MHz), the TLZ formula predicts a counterintuitive behavior, i.e., tunneling is suppressed as we increase the speed. Since this is relevant to the study of laser-field-driven dynamics in Dirac and Weyl semimetals [24], we study this situation in detail. The energy change is shown in the inset of Fig. 4(a), which mimics the situation where electrons in the valence band accelerated by the electric field are excited through the Dirac (Weyl) point into the conduction band. Here the LZ model and the TLZ model correspond to the case where the driving fields are dc and ac electric fields, respectively. We examine the LZ model and observe that it yields \(P\sim 1\), as shown in Fig. 4(a). This is a straightforward phenomenon caused by the complete reversal of the field in the \(y\) axis. We then examine the TLZ transition at \(\kappa_{\parallel}=2.5\)\(\mu\)s as in Fig. 4(b). The high tunneling probability near the adiabatic limit \(F\sim 0\) is consistent with \(F_{\text{PT}}=-\frac{4\pi m}{\kappa_{\parallel}\nu}=0\) (for \(m=0\)). This behavior, where the probability decreases with increasing sweep speed, is opposite to the LZ transition at a finite gap [Fig. 2(b)]. This counterintuitive result is caused by the monocyclic nature of the quadratic twist, where the initial and final fields point in the same direction. It is qualitatively reproduced by the TLZ formula (black solid line) and is perfectly reproduced in the simulation (green dashed line).
We experimentally confirmed the nonadiabatic geometric effects of nonreciprocity and perfect tunneling in the quadratic TLZ model over a wide range of parameters. Specifically, we
Figure 3: Gap parameter and curvature dependence of the TLZ transition probability. (a) The left (right) panels denote the results at \(m=0.5\) MHz (\(m=2.0\) MHz). The black solid (green dashed) line indicates the PT condition in the TLZ formula (simulation). (b) Tunneling probability at \(\kappa_{\parallel}=1.4\)\(\mu\)s [white line in (a iii) and (a iv)]. The black arrow in (ii) indicates the PT condition.
Figure 4: Sweep speed dependence of the transition probability of the gapless (\(m=0.0\) MHz) system. (a) The LZ transition (\(\kappa_{\parallel}=0\)). The inset is a schematic of the energy change. (b) The TLZ transition (\(\kappa_{\parallel}=2.5\)\(\mu\)s).
showed that we could utilize the geometric effects to control the quantum state dynamically. Geometric diabatic control can be applied to control systems of various energy scales, from nuclear spins to quantum materials. An important challenge to improving this method is to find a way to enhance the tunneling probability and bring it even closer to 100 %. We think this is possible by engineering the shape of the field twist to cancel the higher-order terms ignored in the derivation of the TLZ formula.
###### Acknowledgements.
We thank K. M. Itoh (Keio University) for letting us use the confocal microscope system. This work is partially supported by "Advanced Research Infrastructure for Materials and Nanotechnology in Japan (ARIM)" of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Proposal Number JPMXP1222UT1131. This work is supported by JSPS Grants-in-Aid for Scientific Research (Nos. JP22K03524, JP19H00656, JP19H05826, JP23H01103, JP20H02187, and JP20H05661), JST CREST (JPMJCR19T3 and JPMJCR1773), MEXT Q-LEAP (JPMXS018068379), JST Moonshot R&D (JPMJMS2062), MIC R&D for construction of a global quantum cryptography network (JPMI00316), Next Generation Artificial Intelligence Research Center at the University of Tokyo.
|
2302.04389 | Verification of Distributed Artificial Intelligence Systems in
Bioinformatics | Software is a great enabler for a number of projects that otherwise would be
impossible to perform. Such projects include Space Exploration, Weather
Modeling, Genome Projects, and many others. It is critical that software aiding
these projects does what it is expected to do. In the terminology of software
engineering, software that corresponds to requirements, that is does what it is
expected to do is called correct. Checking the correctness of software has been
the focus of a great deal of research in the area of software engineering.
Practitioners in the field in which software is applied quite often do not
assign much value to checking this correctness. Yet, as software systems become
larger, potentially combined with distributed subsystems written by different
authors, such verification becomes even more important. Concurrent, distributed
systems are prone to dangerous errors due to different speeds of execution of
their components such as deadlocks, race conditions, or violation of
project-specific properties. This project describes an application of a static
analysis method called model checking to verification of a distributed system
for the Bioinformatics process. In it, we evaluate the efficiency of the model
checking approach to the verification of combined processes with an increasing
number of concurrently executed steps. We show that our experimental results
correspond to analytically derived expectations. We also highlight the
importance of static analysis to combined processes in the Bioinformatics
field. | Aedin Pereira, Julia Ding, Zaina Ali, Rodion Podorozhny | 2023-02-09T01:01:35Z | http://arxiv.org/abs/2302.04389v1 | Verification of Distributed Artificial Intelligence Systems in Bioinformatics
## 1 Abstract
Software is a great enabler for a number of projects that otherwise would be impossible to perform. Such projects include Space Exploration, Weather Modeling, Genome Projects, and many others. It is critical that software aiding these projects does what it is expected to do. In the terminology of software engineering, software that corresponds to requirements, that is, does what it is expected to do, is called correct. Checking the correctness of software has been the focus of a great deal of research in the area of software engineering. Practitioners in the field in which software is applied quite often do not assign much value to checking this correctness. Yet, as software systems become larger, potentially combined with distributed subsystems written by different authors, such verification becomes even more important. Concurrent, distributed systems are prone to dangerous errors, such as deadlocks, race conditions, or violation of project-specific properties, due to different speeds of execution of their components. This project describes an application of a static analysis method called model checking to verification of a distributed system for the Bioinformatics process. In it, we evaluate the efficiency of the model checking approach to the verification of combined processes with an increasing number of concurrently executed steps. We show that our experimental results correspond to analytically derived expectations. We also highlight the importance of static analysis to combined processes in the Bioinformatics field.
## 2 Introduction
A multi-agent system is a type of Distributed Artificial Intelligence (DAI) system containing several agents that draw information from their individual environments and communicate it with each other to evaluate the best way to collaboratively complete a task. Examples of multiple agents in such systems include communicative agents implemented through negotiation in finite-state machines. These communicative agents exchange data, while non-communicative agents observe behavior without transmitting data to other agents. A DAI system consists of properties that represent transitions from states and their propositions to succeeding states and their respective propositions.
In many scientific fields, especially in Bioinformatics, precision and accuracy are imperative. Even the smallest mistakes can cause disastrous consequences with large-scale impacts. Thus, a need for verifying the correctness of systems and processes is created. While the verification of the integrity of an entire system as a whole is too large for practical purposes, one can largely verify a system's effectiveness and correctness by checking certain key properties against how the system operates. Conducting verification on such a set of properties can potentially isolate and eliminate virtually all of the flaws in the system.
The objectives of this project are to map the genome annotation process to a Kripke structure, then introduce errors into the Kripke structure and utilize a Computational Tree Logic (CTL) model checker to flag these errors. Moreover, we can perform an analysis of the run time of the CTL model checker and draw conclusions about the computational complexity of CTL model checking, which is an important factor in gauging whether it is a viable method to validate the use of DAI in the genome annotation process.
## 3 Background Information
### Overview of CTL model checking
One approach to such verification is model checking, in which the system is modeled through a Kripke structure, an example of which is shown below in Figure 1.
Figure 1: Kripke structure with example states and corresponding propositions. Each state is labeled with propositions \(p\) or \(q\).
Figure 2 depicts the CTL Model Checker interface, in which the textual representation of the model, written in CTL, is first loaded into the application. The application then takes the start state as input. From here, a specific property is inputted and the CTL Model Checker will return whether or not it holds true, thus verifying the property. The model checker performs a backward propagation state space search to determine which states can be marked with a given property.
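As an illustration of this backward propagation, the following sketch marks the states of a toy Kripke structure (echoing the p/q labels of Figure 1) that satisfy AF q; a full checker treats all CTL operators recursively over sub-formulas.

```python
def sat_AF(states, trans, sat_p):
    """Backward propagation (least fixpoint): a state satisfies AF p iff p
    holds there, or every successor is already marked with AF p."""
    marked = set(sat_p)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in marked and trans[s] and all(t in marked for t in trans[s]):
                marked.add(s)
                changed = True
    return marked

# Toy Kripke structure in the spirit of Figure 1 (p/q labeled states).
states = ["s0", "s1", "s2", "s3"]
trans = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"], "s3": ["s3"]}
labels = {"s0": {"p"}, "s1": set(), "s2": {"q"}, "s3": {"q"}}

sat_q = {s for s in states if "q" in labels[s]}
print("AF q holds in:", sorted(sat_AF(states, trans, sat_q)))
# every path from s0 eventually reaches a q-state, so s0 is marked as well
```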
### Interleavings
One way to model the concurrent execution of a set of steps by several agents is by the use of interleavings. An interleaving represents a possible permutation of the execution of concurrent steps. Thus, an initial approach to model concurrent execution is to enumerate all possible interleavings of execution of several steps by agents in a Kripke structure. Each interleaving must use unique state names to avoid the creation of loops. Interleavings do correspond to permutations of the same kind of steps, so the same step names appear in different interleavings. To resolve the problem with loops, we give unique names to different occurrences of the same step by attaching a number after the state name. For example, states VI1 and VI2 both represent the Visualization step in different interleavings. The number of interleavings is determined by the number of unique ways to order the \(n\) concurrently executed steps. All interleavings combined contain \(n!\times n\) states and \(n!\times n\) transitions, thus contributing to the factorial explosion of the number of states and transitions, which has a higher growth rate than exponential explosion (refer to Figure 3). The formula to determine the number of interleavings is below
\[n!=n\times(n-1)\times(n-2)\cdots\times 1\]
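The enumeration itself is straightforward to sketch. The snippet below generates the 4! = 24 interleavings of four concurrent annotation steps and attaches unique indices to repeated step names, as described above; only the VI abbreviation is taken from the text, the other step names are illustrative.

```python
from itertools import permutations

steps = ["VI", "CO", "FU", "ST"]   # four concurrent annotation steps

# Each permutation is one interleaving; every occurrence of a step gets a
# unique name (VI1, VI2, ...) so that shared names do not create loops.
interleavings, counter = [], {s: 0 for s in steps}
for perm in permutations(steps):
    renamed = []
    for s in perm:
        counter[s] += 1
        renamed.append(f"{s}{counter[s]}")
    interleavings.append(renamed)

n_states = sum(len(i) for i in interleavings)
print(len(interleavings), n_states)   # 4! = 24 interleavings, 24 * 4 = 96 states
```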
Figure 2: CTL Model Checker interface.
### Big O notation (Upper Asymptotic Bound)
Big O notation is a function that serves as an upper bound to the running time function of the algorithm. The formula measures how the run time scales with the size of different inputs. To calculate the formula, we first go through our code line by line and determine whether each line's execution time is constant regardless of the input; if so, we assign these functions \(O(1)\). Then we look for functions in the code that increase linearly with the input and assign them \(O(n)\). Finally, we look for functions that grow quadratically as the input increases and assign them \(O(n^{2})\). The next step is to drop all the \(O(1)\) terms, because their run time is not affected when we scale up the input. Lastly, we drop the non-dominant terms. For example, if we have \(O(n+n^{2})\), we drop the non-dominant term and change the formula to \(O(n^{2})\). In the case of model checking algorithms, we can use Big O notation to measure the run time as we increase the size of a CTL formula (amount of states, transitions, and sub-formulas).
## 4 Motivational Example
The use of Distributed Artificial Intelligence (DAI) in Bioinformatics has grown more prevalent, and it is necessary to be able to verify that the agents are communicating correctly, especially in a field with no room for error. A common architecture for the implementation of a distributed AI system is a multi-agent system architecture. For instance, the process of genome annotation can benefit
Figure 3: Commonly used formulas in Big O Notation
from the advantages of a multi-agent architecture. In figure 4, which gives the original description of the process from [1], the steps for different kinds of annotation are performed concurrently, so they can be executed by different agents. The concurrent execution of annotation steps for visualization, structural, functional, and community annotation can be modeled in a Kripke structure by enumerating their possible interleavings. The number of interleavings will be \(n!\) where \(n\) is the number of concurrent agents, assuming each agent executes one step. Even though the diagram in figure 4 shows only one step for each kind of annotation, it is likely that each of those steps has an internal process whose details are not shown in the figure. The Kripke structure that corresponds to the four annotation steps running concurrently is shown in figure 5. That Kripke structure can be used for analysis of the concurrent execution of the annotation steps.
Figure 4: Workflow Diagram of Genome Annotation Process
## 5 Experimentation
### Genome Annotation Process
Using the diagram of the workflow of genome annotation (shown in figure 4), it is possible to create a Kripke structure using the following states and transitions. The diagram begins with a genome sequence, which can either go straight to the database or to one of the prediction methods (Homology-based gene prediction and Ab initio Gene Prediction). Once the user chooses a prediction method, the genome sequence goes to the annotation phase, where it goes through the following steps in any order: Visualizations, Community, Functional, and Structural. The diagram represents the 24 different permutations of the four steps. After annotations, the genome sequence can go to quality control, which checks whether the annotation was correct. If the annotation is correct, then the genome sequence goes to the highly curated database. If the annotation is faulty, then it goes back to annotation until it is correct. The genome sequence can also skip quality control and go straight to the database.
### Errors in Genome Annotation Process
The process for genome annotation involves some steps concurrently executed by different agents. The processes for different agents are most likely written by different people, but the combined process must correspond to some common
Figure 5: Kripke Structure of Workflow Diagram
properties, so it is crucial that we can verify that the combined process does correspond to those common properties. In particular, the common property we are trying to verify in many of these cases is that the prediction states are done before annotation states. We must make sure that this holds regardless of the interleaving, that is, regardless of the order of execution of concurrent steps of a process.
#### 5.2.1 Visualization Before Prediction Error
Figure 6 is the diagram of the faulty process where one of the annotation steps for Visualization, VI, is done before the prediction step AB. In Figure 6 we show the interleavings of three concurrently executed steps because the Visualization step is performed sequentially before step AB. The total number of interleavings is 6, the number of steps is 18, and the number of transitions is 18. Notice that step VI is absent from the interleavings because it is not run concurrently when it is put before step AB. The interleavings are only done with three states, which drastically reduces the number of interleavings to only 6 unique permutations. These interleavings exist to represent a group of steps running concurrently in each unique permutation. We formalized figure 6 into a textual representation of the Kripke structure and then gave it as input to a CTL model checker. Next, we ran the CTL formula AG(gs \(\rightarrow\) AF(ab or hb) and ((ab or hb) \(\rightarrow\) AF(vi1))). The property requires that if we reach step VI then we should have gone through step AB or HB. This property does not hold true in the initial step of the diagram, GS. This is the expected result because the Kripke structure in figure 6 has an error. This error could occur in a combined genome annotation process in which some steps are executed by different agents. It is very likely that the processes that correspond to those steps are written by different people, so it is possible that an error of this kind is introduced inadvertently. We used a CTL model checker to catch this error. Thus, we show that a process combined from different authors should be verified against common properties.
### Prediction After Annotation Error
Next, we show a faulty process in which one of the prediction steps was wrongly executed after the annotation steps. Figure 7 represents the case when one of the prediction states AB is done after all the annotation steps. In this figure, we show the concurrent execution of four steps, which contains 24 interleavings, 96 steps, and 96 transitions in all the interleavings. Next, we submit the textual representation of this Kripke structure to a model checker and then verify it against the property below. In that property, we had to repeat the same pattern for all the representations of the Visualization step in each interleaving.
\(AG(gs\)\(\rightarrow\) AF(ab or hb) and ((ab or hb) \(\rightarrow\) AF(vi1 or vi2 or vi3 or vi4 or vi5 or vi6 or vi7 or vi8 or vi9 or vi10 or vi11 or vi12 or vi13 or vi14 or vi15 or vi16 or vi17 or vi18 or vi19 or vi20 or vi21 or vi22 or vi23 or vi24)))\)
Figure 6: Kripke Structure of Visualization Before Prediction
This CTL formula corresponds to the property that any visualization step should be preceded by a prediction step. Step AB should have been executed before every instance of the Visualization step in the interleavings. The CTL formula repeats the visualization proposition because it encodes the condition that step AB or HB must be performed before each instance of the Visualization step in each interleaving. This repetition significantly increases the length of the CTL formula, which is likely to increase the verification time. The expected result is that the property should not hold because the diagram shows that step AB is executed after all annotation steps. The CTL model checker shows that the property does not hold in step GS. The experimental results match our expectations. Thus, CTL model checking enables us to verify the model of concurrent execution of a group of steps in a process, even though modeling concurrency exponentially increases both the model and property sizes.
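Rather than writing the 24 repeated propositions by hand, such a property can be generated programmatically, for example:

```python
# Build the long property with one vi term per interleaving.
n_interleavings = 24
vis = " or ".join(f"vi{i}" for i in range(1, n_interleavings + 1))
prop = f"AG(gs -> AF(ab or hb) and ((ab or hb) -> AF({vis})))"
print(prop)
```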
## 6 Analysis
### Results
This section presents the analysis of the experimental results. In this experiment, we applied the model checker to increasingly more structurally complex processes in which we varied the number of concurrently executed steps. The greater the number of concurrent steps, the greater the number of interleavings, and the running time follows a factorial (faster than exponential) dependence on input size. The experimental results conform to the analytically expected dependence function.
Figure 7: Kripke Structure of interleavings with error
### Dependence of Running Time on Size of CTL Formula
The results in Figure 9 show a linear relationship between the length of the sub-formula and the verification time. This matches our prediction because the upper asymptotic bound is linear.
It can be derived that the CTL model checker can run in polynomial time with any length of CTL formula, meaning model checking can be performed in a reasonable time for any subformula length.
### Dependence of Running Time on Size of Kripke Structure.
After running the CTL model checker to verify the genome annotation process, it was found that a variety of factors could be compared. Beginning with property
Figure 8: Influence of CTL Length on CTL Model Checker Time in Ticks
Figure 9: Graph of Time Dependence on CTL Formula Length
types, the CTL approach analyzes and focuses on a sequence of events. For instance, the modification of the functional decomposition is quite apparent. With regard to computational complexity, the algorithms for CTL model checking are linear in the size of the Kripke structure, but the size of the input grows at the rate of \(n!\), where \(n\) is the number of agents each executing one step, i.e., faster than exponentially. Thus, this verification technique can only be applied to relatively small Kripke structures. The experimental results show a correlation between the average verification time (ms) and the size of the Kripke structure. In Figure 10 we see the data of the three Kripke structures of different sizes. The shape of the dependence graph is that of an exponential/factorial function. The graph shows the dependence of the running time of the model checker on the size of the input Kripke structure. The sizes of the Kripke structures that model concurrency via interleavings are significantly larger than those for the sequential execution of steps.
### Upper Asymptotic Bound of Verification Time
The upper asymptotic bound (_Big O_) of CTL model checking algorithms is \(O(|f|\times(|s|+|r|))\) where \(|f|\) represents the number of sub-formulas in the property, \(|s|\) represents the number of steps in a Kripke structure and \(|r|\) represents the number of transitions in a Kripke structure. The linear time is attributed to the fact that CTL model checking algorithms are versions of state space search. For the Kripke structure in figure 6 and its property, the bound is \(O(2\times(31+42))\), and for figure 7 it is \(O(2\times(102+172))\). Our experimental results do conform to these bounds.
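These work estimates follow directly from the bound; the snippet below simply evaluates them for the two structures:

```python
def ctl_work(num_subformulas, num_states, num_transitions):
    """Work estimate implied by the bound O(|f| * (|s| + |r|))."""
    return num_subformulas * (num_states + num_transitions)

print(ctl_work(2, 31, 42))    # Figure 6:  2 * (31 + 42)  = 146
print(ctl_work(2, 102, 172))  # Figure 7:  2 * (102 + 172) = 548
```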
## 7 Related Work
We found no papers that describe the application of Model Checking to combined Bioinformatics processes implemented as concurrent multi-agent systems. Thus, for related work, we searched for papers about the application of a multi-agent approach to Bioinformatics processes. Most often, these papers were written by experts in the area of Bioinformatics. In such papers, the authors would describe the modeled Bioinformatics process, the multi-agent architecture of their system, and experimental results related to the field of Bioinformatics. Not a single one of those papers attempted to verify their multi-agent systems against the correctness properties of the modeled process. Nevertheless, we reviewed these papers as being close to the topic of Bioinformatics.
The paper by Girum Fitihamlak Eijgu and Jaehee Jung ([1]) gives a detailed explanation of the workflow of the genome annotation process and describes how the different agents should communicate with each other. The authors did not mention how they verified or tested their system. We used this paper to model the workflow diagram of the genome annotation process into a Kripke structure, then we turned the Kripke structure into a textual representation which we were able to use to validate properties against. After a literary search, we were not able
to find any papers that validate the multi-agent genome annotation process using CTL model checking.
However, we were able to find a paper by Bert Bogaerts et al. ([2]) which focuses on using test data to validate the process of genome annotation. The paper provides a workflow diagram of the Whole Genome Sequencing process. Then, they used test data from the N. meningitidis reference strains to test whether the process produces the expected annotations. They were able to validate that a specific process produces these expected results. The paper does not mention CTL model checking and lacks the advantages outlined in this paper.
In the paper by Tolga Ovatman et al. ([3]), the authors apply CTL model checking to Programmable logic controllers (PLCs). PLCs are a special type of computer that is capable of processing a large number of input and output operations within certain time constraints. The process is broken into 3 cycles: input data are read into memory, data in the memory are processed, and the output data are written. PLCs are necessary for real-time automation and have been applied to railway interlocking systems, nuclear power plants, and manufacturing conveyors. Errors in PLCs can be very dangerous and difficult to debug. Therefore it is necessary to employ verification techniques. The paper details the use of Model checking to verify PLCs and introduces the use of Petri nets while modeling processes.
In the paper by Francesco Maria Donini et al. ([4]), the authors apply CTL model checking to web applications. When creating a web application, it is necessary to have multiple interactions between the database and the user. The authors were able to utilize Kripke structures to illustrate the processes necessary to create simple website features. For example, one step may be gathering login information and another step may be checking the login information against a database. The authors introduced errors into the process and then used CTL properties to catch the errors and ensure the steps are communicating correctly. They concluded that CTL model checking is an effective verification method for web applications.
## 8 Conclusion
It is important to verify combined processes against common properties because most likely they will contain errors. These errors are most likely due to the fact that different parts of a combined process are written by different authors responsible for their domain of expertise. Multi-agent systems architecture is useful for the execution of bioinformatics processes. For instance, processes can
be executed faster due to internal concurrency. Testing for faults in processes with internal concurrency delivers a low level of assurance because testing is always sampling and because testing does not enforce the enumeration of all possible interleavings. One of the great advantages of static analysis methods, such as model checking, is that by enumeration of all possible interleavings, they greatly increase their assurance of analysis results. One of the disadvantages that result from this enumeration is the exponential explosion of the size of the model. Our experimental results confirm both the high level of assurance of the results of the analysis of a concurrent process by model checking and the exponential time of the analysis. Systems for the execution of Bioinformatics processes that have internal concurrency can greatly benefit from verification by model checking or other static analysis methods. In the reviewed related literature about Bioinformatics processes we have not come across a single instance of verification by a static analysis method, which is most likely due to the fact that experts in the field of Bioinformatics are not aware of such methods.
Multidisciplinary research in the field of verification of Bioinformatics processes can be beneficial to both the field of Bioinformatics and the field of mathematically rigorous program analysis. The main contribution of our work is the application of model checking to combined Bioinformatics processes. The Kripke structure for the combined processes is created automatically. The experimental results show that there is an exponential explosion in the size of the Kripke structure due to the number of concurrent processes. There are a number of ways to mitigate this exponential explosion. For instance, the use of a more compact (e.g., binary) representation, on-the-fly model checking, distribution using concurrent versions of model checking algorithms, and reducing Kripke structure size by further abstraction. In the future, we would like to investigate these techniques.
The impact of our approach that addresses these existing limitations is that we use a theoretical workflow diagram to identify errors that may be created when the individually created processes are combined by one author. Rather than having to assemble the process to verify it, our approach allows for verification to be done before assembly.
|
2301.03808 | Passenger Path Choice Estimation Using Smart Card Data: A Latent Class
Approach with Panel Effects Across Days | Understanding passengers' path choice behavior in urban rail systems is a
prerequisite for effective operations and planning. This paper attempts
bridging the gap by proposing a probabilistic approach to infer passengers'
path choice behavior in urban rail systems using a large-scale smart card data.
The model uses latent classes and panel effects to capture passengers' implicit
behavior heterogeneity and longitudinal correlations, key research gaps in big
data driven behavior studies. We formulate the probability of each individual's
arrival time at a destination based on their path choice behavior, and estimate
corresponding path choice model parameters as a maximum likelihood estimation
problem. The original likelihood function is intractable due to the exponential
computation complexity. We derive a tractable likelihood function and propose a
numerical integral approach to efficiently estimate the model. Also, we propose
a method to calculate the t-statistic of the estimated choice parameters based
on the numerically estimated Hessian matrix and Cramer-Rao bound (the lower
bound on the coefficient variance). Case studies using synthetic data validate
the model performance and its robustness against parameter initialization and
input errors, and highlight the importance of incorporating crowding impact in
path choice estimation. Applications using actual data from the Mass Transit
Railway, Hong Kong reveal two latent groups of passengers: time-sensitive (TS)
and comfort-aware (CA). TS passengers are those who are more likely to choose
paths with short travel times. Most of them are regular commuters with high
travel frequency and less schedule flexibility. CA passengers care more about
the travel comfort experience and choose paths with less walking and waiting
times. The proposed approach is data-driven and general to accommodate other
discrete choice structures. | Baichuan Mo, ZhenLiang Ma, Haris N. Koutsopoulos, Jinhua Zhao | 2023-01-10T06:21:25Z | http://arxiv.org/abs/2301.03808v1 | Passenger Path Choice Estimation Using Smart Card Data: A Latent Class Approach with Panel Effects Across Days
###### Abstract
Understanding passengers' path choice behavior in urban rail systems is a prerequisite for effective operations and planning. The area witnesses active developments in two broad but separate fields, including behaviour modeling using 'small' survey data in transport and mobility patterns using 'big' data in computer science. This paper attempts to bridge the gap by proposing a probabilistic approach to infer passengers' path choice behavior in urban rail systems using large-scale smart card data. The model uses latent classes and panel effects to capture passengers' implicit behavior heterogeneity and longitudinal correlations, key research gaps in big data driven behavior studies. We formulate the probability of each individual's arrival time at a destination based on their path choice behavior, and estimate corresponding path choice model parameters as a maximum likelihood estimation problem. The original likelihood function is intractable due to the exponential computation complexity. We derive a tractable likelihood function and propose a numerical integral approach to efficiently estimate the model. Also, we propose a method to calculate the t-statistic of the estimated choice parameters based on the numerically estimated Hessian matrix and Cramer-Rao bound (the lower bound on the coefficient variance). Case studies using synthetic data validate the model performance and its robustness against parameter initialization and input errors, and highlight the importance of incorporating crowding impact in path choice estimation. Applications using actual data from the Mass Transit Railway, Hong Kong reveal two latent groups of passengers: time-sensitive (TS) and comfort-aware (CA). TS passengers are those who are more likely to choose paths with short travel times. Most of them are regular commuters with high travel frequency and less schedule flexibility. CA passengers care more about the travel comfort experience and choose paths with less walking and waiting times. The proposed approach is data-driven and general to accommodate other discrete choice structures. It provides the same outputs as traditional choice modeling and facilitates a deep understanding of passengers' choice behaviors in both a cost-effective and timely way, based on which more informed planning and management strategies could be designed, evaluated, and monitored.
keywords: Path choices; Urban railway systems; Smart card data; Latent passenger groups; Panel effects
## 1 Introduction
Increases in ridership are outpacing capacity in many large urban rail transit systems, including Hong Kong's Mass Transit Railway (MTR), the London Underground, and the New York subway system (Zhu
et al., 2017, 2018). Crowding at stations and on trains is a concern due to its impact on safety, service quality, and operating efficiency. Various studies have measured passengers' willingness to pay for less crowded conditions (Li and Hensher, 2011) and suggested incorporating the crowding disutility in investment appraisals (Haywood and Koning, 2015). Given the interest in dealing with crowding-related problems, understanding passengers' route choice behavior under crowding situations is important for both operations management and planning practices. However, estimating path choice fractions or individual choice behavior is not trivial. As passenger's path choices in an urban rail system are not directly observed, most of the previous studies rely on revealed and stated preference survey data (Raveau et al., 2011, 2014; Jin et al., 2017; Zhang et al., 2017). Surveys are a powerful tool to facilitate behavior analysis. However, they are constrained by high costs, reporting accuracy, and survey coverage.
Automated Fare Collection (AFC) and Automatic Vehicle Location (AVL) data provide opportunities for analysis in areas such as travel behavior, demand modeling, transit operations planning, etc. (Pelletier et al., 2011; Bagchi and White, 2005; Koutsopoulos et al., 2019). In addition to aggregate trends of when and where passengers travel, AFC data provides detailed information on the travel patterns of individuals and/or specific groups (Goulet-Langlois et al., 2016; Briand et al., 2017). Table 1 summarizes the existing route choice studies using AFC and/or AVL data in metro systems. Several studies have used AFC data to estimate passengers' path choice probabilities (Sun and Xu, 2012; Zhao et al., 2016; Sun and Schonfeld, 2016; Zhou et al., 2015; Zhu et al., 2021; Mo et al., 2022). They provide useful insights on the aggregate choice behavior (i.e., path fractions) under existing conditions. For example, Zhu et al. (2021) develops a data-driven approach for the inference of passenger itineraries in urban heavy rail systems, where the path fractions can be estimated using AFC and AVL data. However, these results cannot be used for operations planning applications, such as timetable design, network expansion, operating strategies, and policy interventions, without modeling individual path choice behavior. This is because a new timetable or network expansion may change the service attributes, causing changes in an individual's choice behavior. Inference of path fractions cannot capture the impact of these changes.
This study focuses on the estimation of path choice models as a function of attributes of alternative paths using AFC and AVL data. Relevant to this context, Sun et al. (2015) developed an integrated Bayesian approach to infer network attributes and passenger route choice behavior using AFC data from the Singapore Mass Rapid Transit system. Zhang et al. (2018) developed a data fusion model to estimate individual path choices by combining revealed preference (RP) survey data and AFC data and modeled the risk attitudes of passengers. However, both studies imposed a strong assumption on link travel times (independent normal distribution), ignoring the fact that, under congested conditions, passengers may be left behind at major stations due to capacity constraints. During peak periods, a (usually) shorter travel time route may have passengers who are left behind. But the models above cannot distinguish whether the longer travel time is due to choosing a longer route or being left behind multiple times on a shorter route (Zhu et al., 2017, 2021; Mo et al., 2020).
To incorporate the left behind phenomenon, Zhu et al. (2017) proposed a passenger-to-train assignment model (PTAM) by decomposing the journey time into access, waiting, in-vehicle, egress walking times, and considering the dynamics of being left behind at origin stations explicitly. The model was applied to estimate the left behind at key stations for non-transfer trips with capacity constraints and validated using both synthetic and actual data. Horcher et al. (2017) extended the PTAM to the case with transfers and presented a discrete choice model (DCM) to estimate the user cost of crowding in urban rail systems. However, the model identified the "actual" path used by passengers based on predetermined probability thresholds, which
may eventually impact the estimation quality of the choice model.
These data-driven behavioral studies provide a good attempt to bridge the gap between 'small' (survey) and 'big' (AFC) data studies in different domains that answer the same question regarding passengers' path choices under crowding (Chen et al., 2016). However, existing AFC/AVL data-driven path choice estimation studies are essentially designed towards calibrating parameters of standard discrete choice models. Compared with survey-data-based DCM studies, they lack a systematic consideration of the common and unique characteristics of the choice behavior problem itself, and of how to model them in the context of big data, including:
* **Choice Heterogeneity**. Passengers may have different perceptions of service performance (e.g., travel time, waiting time) and adopt different choice strategies for their trips. Ignoring population heterogeneity may lead to estimation bias. Choice heterogeneity is usually captured by latent class or mixture models in the DCM literature (Hess et al., 2009; Mo et al., 2021). However, to the best of the authors' knowledge, none of the previous AFC data-based studies have considered passenger heterogeneity in modeling individual path choice behavior.
* **Panel Effect (choice correlations across time)**. AFC data record passengers' travels across days, and a unique challenge is deciding which days of data should be used to estimate passengers' routine behavior. Considering individual travels on multiple days is important for robust choice behavior estimation, as passengers may occasionally deviate from their habitual behavior on some days. However, this requires modeling the temporal correlations of an individual's route choices (i.e., the panel effect) across trips and days, which has not been considered, or even discussed, in previous studies.
* **Choice Model Coefficient Significance (t-statistics)**. In DCM studies, the significance levels of model coefficients are important for deriving behavioral and policy insights. However, no AFC/AVL-driven study reports the t-statistics of calibrated model coefficients, which limits comprehensive behavioral interpretation.
To fill these research gaps, this paper develops a latent class approach with panel effects to estimate individual path choice behavior from AFC (tap-in and tap-out) and AVL data. The proposed framework explicitly captures capacity constraints (crowding), choice heterogeneity, and panel effects. We formulate the probability of each individual's arrival time at the destination station based on their path choice behavior, and estimate the corresponding path choice parameters by solving a maximum likelihood estimation (MLE) problem.
| Reference | AFC | AVL | Aggregate behavior | Individual behavior | Crowding | Heterogeneity | Panel effect | Optimization | Probabilistic |
|---|---|---|---|---|---|---|---|---|---|
| Sun and Xu (2012) | ✓ | ✓ | ✓ | | ✓ | | | | ✓ |
| Zhao et al. (2016) | ✓ | ✓ | ✓ | | ✓ | | | | ✓ |
| Sun and Schonfeld (2016) | ✓ | ✓ | ✓ | | ✓ | | | | ✓ |
| Zhou et al. (2015) | ✓ | ✓ | ✓ | | ✓ | | | | ✓ |
| Zhu et al. (2021) | ✓ | ✓ | ✓ | | ✓ | | | | ✓ |
| Mo et al. (2022) | ✓ | ✓ | ✓ | | ✓ | | ✓ | | ✓ |
| Sun et al. (2015) | ✓ | | | ✓ | | | | | ✓ |
| Zhang et al. (2018) | ✓ | | | ✓ | | | | | ✓ |
| Hörcher et al. (2017) | ✓ | ✓ | | ✓ | ✓ | | | | ✓ |
| **This study** | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | | ✓ |

Table 1: Summary of literature on urban rail path choice inference and modeling using AFC data
The original likelihood function is intractable due to the exponentially large number of summations and the integration over a normally distributed variable. We derive a new conditional probability-based formulation to eliminate the large number of summations and use a numerical integration approach for the normal random variable, which leads to a tractable likelihood function and enables efficient model estimation. Given the difficulty of deriving the analytical Hessian matrix, the t-values of the estimated parameters are calculated based on the numerically estimated Hessian matrix and the Cramer-Rao bound. Case studies using synthetic data validate the model performance and highlight the importance of incorporating the crowding impact in path choice estimation. Applications using actual data from the Mass Transit Railway (MTR), Hong Kong, reveal two latent groups of passengers in the system. The main contributions of this study are as follows:
* Introducing and modeling the passenger path choice problem using smart card data in closed public transport systems, considering system crowding, choice heterogeneity, and panel effects across time.
* Formulating an MLE-based latent-class path choice estimation problem with panel effects and deriving a tractable likelihood function for efficient estimation of the model coefficients.
* Proposing a numerical method to calculate the t-statistics of estimated choice coefficients based on the numerically estimated Hessian matrix and the Cramer-Rao bound (a lower bound on the coefficient variance).
* Validating the model performance using both synthetic and real-world data from Hong Kong, and identifying latent groups of passengers with heterogeneous preferences over travel time and comfort.
The rest of the paper is organized as follows: Section 2 formulates the route choice problem and develops the MLE estimation method. Section 3 validates the proposed approach using synthetic and real-world data. The final section summarizes the main findings and discusses future directions.
## 2 Methodology
### Problem description
Consider a closed AFC system where both tap-in and tap-out records of passengers over time are available, and train arrivals and departures at stations are available from the AVL system. Define a passenger \(i\) with a series of observed AFC records \(v_{i}=\{(o_{i,1},d_{i,1},t^{\text{in}}_{i,1},t^{\text{out}}_{i,1}),...,(o_{i, N_{i}},d_{i,N_{i}},t^{\text{in}}_{i,N_{i}},t^{\text{out}}_{i,N_{i}})\}\), where \(o_{i,n},d_{i,n},t^{\text{in}}_{i,n},t^{\text{out}}_{i,n}\) represent the passenger's origin, destination, tap-in time, and tap-out time of the \(n\)-th trip, respectively. The set of all passengers is defined as \(\mathcal{P}\) (i.e., \(i\in\mathcal{P}\)).
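For concreteness, one convenient in-memory representation of the record sequence \(v_{i}\) is a list of trip tuples. The sketch below is illustrative only: the field names, station labels, and time encoding (seconds since midnight) are our own assumptions, not part of any AFC specification.

```python
from typing import List, NamedTuple

class Trip(NamedTuple):
    origin: str        # o_{i,n}
    destination: str   # d_{i,n}
    t_in: int          # tap-in time t^in_{i,n} (seconds since midnight)
    t_out: int         # tap-out time t^out_{i,n} (seconds since midnight)

# v_i for a hypothetical passenger with N_i = 2 observed trips
v_i: List[Trip] = [
    Trip("A", "E", t_in=63000, t_out=64500),  # 17:30 -> 17:55
    Trip("B", "F", t_in=64800, t_out=66600),  # 18:00 -> 18:30
]
```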
To capture passengers' choice heterogeneity, we assume that there are \(K\) latent groups in the population and passengers in the same group share the same choice preferences. Let \(g_{i}\) be a random variable indicating the group that passenger \(i\) belongs to. The probability that passenger \(i\) belongs to a latent group \(G_{k}\) is formulated as a multinomial logit model:
\[\text{Pr}(g_{i}=G_{k};\theta)=\frac{\exp(\theta_{k}\cdot x_{i})}{\sum_{k^{ \prime}=1}^{K}\exp(\theta_{k^{\prime}}\cdot x_{i})} \tag{1}\]
where \(x_{i}\) is the vector of the characteristics of passenger \(i\), including variables (extracted from smart card data) such as travel frequency, card type, and travel regularity. \(\mathcal{G}=\{G_{k}\mid k=1,2,...,K\}\) is the set of latent groups to be estimated (\(K\) needs to be pre-specified). \(\theta=(\theta_{k})_{k=1,...,K}\) is the parameter vector to be estimated, associated with the individual's characteristics.
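As a minimal sketch, the membership probabilities of Eq. 1 are a softmax over the class-specific scores \(\theta_{k}\cdot x_{i}\); the array shapes and example values below are illustrative assumptions.

```python
import numpy as np

def group_probabilities(theta: np.ndarray, x_i: np.ndarray) -> np.ndarray:
    """Pr(g_i = G_k) of Eq. 1. theta is (K, D); x_i is (D,)."""
    scores = theta @ x_i          # theta_k . x_i for each latent group k
    scores -= scores.max()        # subtract the max for numerical stability
    exps = np.exp(scores)
    return exps / exps.sum()

# Example: K = 2 groups, D = 3 individual characteristics
theta = np.array([[1.5, 0.6, -0.2], [0.0, 0.0, 0.0]])  # second group as base
print(group_probabilities(theta, np.array([0.5, -1.0, 2.0])))
```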
According to the random utility maximization (RUM) assumption (Ben-Akiva and Lerman, 2018), the utility of passenger \(i\) choosing path \(m\) at the \(n\)-th trip, given that passenger \(i\) is in group \(G_{k}\), can be formulated as:
\[U_{i,n,m}^{k}=\beta^{k}\cdot z_{n,m}+\alpha_{i}^{k}+\varepsilon_{i,n,m}^{k} \tag{2}\]
where \(\beta^{k}\) are the unknown parameters to be estimated, associated with path attributes. \(z_{n,m}:=[y_{n,m},\ \log PS_{m}]\) and \(y_{n,m}\) is the vector of path attributes, including variables such as in-vehicle time, out-of-vehicle time, left behind waiting time, etc. \(PS_{m}\) is the "path size" factor, which is used to capture the correlation in error terms caused by path overlapping (Hoogendoorn-Lanser and Bovy, 2007). The formulation with the path size factor is known as the "path-size logit model" (Prato, 2009). \(PS_{m}\) is defined as
\[PS_{m}=\frac{1}{L_{m}}\sum_{a\in A_{m}}\frac{l_{a}}{\sum_{m^{\prime}\in\mathcal{R}_{i,n}}\delta_{a,m^{\prime}}}\quad\forall\ m\in\mathcal{R}_{i,n},n=1,...,N_{i},i\in\mathcal{P} \tag{3}\]
where \(A_{m}\) is the set of all links of path \(m\). \(\delta_{a,m^{\prime}}=1\) if link \(a\) is in path \(m^{\prime}\), and \(\delta_{a,m^{\prime}}=0\) otherwise. \(L_{m}\) is the length of path \(m\) and \(l_{a}\) is the length of link \(a\). \(\mathcal{R}_{i,n}\) is the set of all available paths for passenger \(i\)'s \(n\)-th trip.
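The path size factor of Eq. 3 can be computed directly from the link composition of the choice set. The sketch below assumes links are identified by hashable ids and that the path of interest is itself contained in the choice set; the toy network values are hypothetical.

```python
def path_size(path_links, link_lengths, choice_set_links):
    """Path size factor PS_m of Eq. 3 for one path.

    path_links       : list of link ids on the path of interest (A_m).
    link_lengths     : dict mapping link id -> length l_a.
    choice_set_links : list of link-id lists, one per path in R_{i,n}.
    """
    L_m = sum(link_lengths[a] for a in path_links)
    ps = 0.0
    for a in path_links:
        # number of choice-set paths using link a (the sum of delta_{a,m'})
        overlap = sum(a in links for links in choice_set_links)
        ps += link_lengths[a] / overlap
    return ps / L_m

links = {"AB": 2.0, "BE": 3.0, "BC": 1.5, "CE": 2.5}
paths = [["AB", "BE"], ["AB", "BC", "CE"]]
print(path_size(paths[0], links, paths))  # link AB is shared, so PS < 1
```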
To capture the individual's behavior correlation over time (i.e., panel effect), the utility function (Eq. 2) also includes an individual specific unobserved factor \(\alpha_{i}^{k}\) (a random variable). The panel effect is assumed to be persistent over time (i.e., no subscript \(n\)) (Ben-Akiva and Lerman, 2018). \(\alpha_{i}^{k}\) is assumed to be independent and identically distributed (i.i.d.) for all passengers in group \(k\) and follows a normal distribution \(\mathcal{N}(0,(\sigma^{k})^{2})\) (the zero-mean is due to the fact that the mean value can be estimated as a part of the alternative specific constant), where \(\sigma^{k}\) is the standard deviation to be estimated. Given \(\alpha_{i}^{k}\), the unobserved error term \(\varepsilon_{i,n,m}^{k}\) is assumed to be i.i.d. Gumbel distributed across all \(i\), \(n\), and \(m\).
Let \(\pi_{i,n,m}^{k}[\alpha_{i}^{k}]\) be the probability of passenger \(i\) choosing path \(m\) at the \(n\)-th trip given that passenger \(i\) is in group \(G_{k}\). According to RUM theory,
\[\pi_{i,n,m}^{k}[\alpha_{i}^{k}]=\frac{\exp(\beta^{k}\cdot z_{n,m}+\alpha_{i}^ {k})}{\sum_{m^{\prime}\in\mathcal{R}_{i,n}}\exp(\beta^{k}\cdot z_{n,m^{\prime} }+\alpha_{i}^{k})} \tag{4}\]
Since there are a total of \(N_{i}\) trip records for passenger \(i\), the joint probability of the choice sequence can be formulated as (Arellano and Honore, 2001):
\[\text{Pr}(r_{i,1}=m_{1},...,r_{i,N_{i}}=m_{N_{i}})=\sum_{k=1}^{K }\text{Pr}(g_{i}=G_{k})\cdot\int_{\alpha_{i}^{k}}\left[\prod_{n=1}^{N_{i}}\pi_ {i,n,m_{n}}^{k}[\alpha_{i}^{k}]\right]\cdot f(\alpha_{i}^{k})\,\text{d}\alpha_ {i}^{k}\] \[\forall m_{1}\in\mathcal{R}_{i,1},...,m_{N_{i}}\in\mathcal{R}_{i,N_{i}} \tag{5}\]
where \(r_{i,n}\) is a random variable indicating the path used by passenger \(i\) in the \(n\)-th trip, and \(f(\alpha_{i}^{k})\) is the probability density function of \(\alpha_{i}^{k}\).
The goal of this study is to develop an approach to simultaneously estimate \(\beta=(\beta^{k})_{k=1,...,K}\), \(\theta\), and \(\sigma=(\sigma^{k})_{k=1,...,K}\), which specify passengers' path choice behavior, choice heterogeneity, and panel effects. We formulate an MLE problem to estimate these parameters in the following sections.
The structure of the methodology is presented in Figure 1.
The notation used across this paper is shown in Table 2.
| **Notation** | **Description** |
|---|---|
| \(v_{i}\) | A series of AFC data records for passenger \(i\) |
| \((o_{i,n},d_{i,n},t_{i,n}^{\text{in}},t_{i,n}^{\text{out}})\) | Passenger's origin, destination, tap-in time, and tap-out time of the \(n\)-th trip, respectively |
| \(\mathcal{P}\) | The set of all passengers |
| \(x_{i}\) | Vector of the characteristics of passenger \(i\) |
| \(N_{i}\) | Total number of trips for passenger \(i\) |
| \(G_{k}\) | The \(k\)-th latent group |
| \(\mathcal{G}\) | The set of all latent groups |
| \(K\) | The number of latent groups |
| \(z_{n,m}\) | Vector of path attributes for path \(m\) and trip \(n\) |
| \(\alpha_{i}^{k}\) | A random variable capturing the panel effect for individual \(i\) in latent group \(k\) |
| \(U_{i,n,m}^{k}\) | The utility of passenger \(i\) choosing path \(m\) at the \(n\)-th trip, given that passenger \(i\) is in group \(G_{k}\) |
| \(PS_{m}\) | Path size factor for path \(m\) |
| \(A_{m}\) | The set of all links of path \(m\) |
| \(\mathcal{R}_{i,n}\) | The set of all available paths for passenger \(i\)'s \(n\)-th trip |
| \(\tau\) | The time duration that each time index represents |
| \(\mathcal{R}^{u,v}\) | The set of feasible paths for OD pair \((u,v)\) |
| \(\pi_{i,n,m}^{k}[\alpha_{i}^{k}]\) | The probability of passenger \(i\) choosing path \(m\) at the \(n\)-th trip given that passenger \(i\) is in group \(G_{k}\) |
| \(r_{i,n}\) | A random variable indicating the path used by passenger \(i\) in the \(n\)-th trip |
| \(\mathcal{LL}(\theta,\beta,\sigma)\) | Log-likelihood function of all observations |
| \(f(\alpha_{i}^{k})\) | The probability density function of \(\alpha_{i}^{k}\) |
| \(\Lambda_{i,n,m}^{j}\) | The set of all trains associated with the \(j\)-th segment of path \(m\) for passenger \(i\)'s trip \(n\) |
| \(J_{i,n,m}\) | Number of path segments for path \(m\) of passenger \(i\)'s trip \(n\) |
| \(\Omega_{i,n,m}\) | The set of possible itineraries for path \(m\) in the \(n\)-th trip of passenger \(i\) |

Table 2: Notation summary
Figure 1: Methodology framework
### Model formulation
Given the set of passenger \(i\)'s AFC data records \(v_{i}\), the probability of observing \(v_{i}\) can be expressed as
\[\text{Pr}(v_{i}) =\sum_{r_{i,1},...,r_{i,N_{i}}}\text{Pr}(v_{i}\mid r_{i,1},...,r_{i,N_{i}})\text{Pr}(r_{i,1},...,r_{i,N_{i}})\] \[=\sum_{r_{i,1},...,r_{i,N_{i}}}\left\{\left[\prod_{n=1}^{N_{i}} \text{Pr}(o_{i,n},d_{i,n},t_{i,n}^{\text{in}},t_{i,n}^{\text{out}}\mid r_{i,n}) \right]\times\text{Pr}(r_{i,1},...,r_{i,N_{i}})\right\} \tag{6}\]
where the second equality follows from the conditional independence of the trip observations given the chosen paths.
As the origin and destination are known given path \(m\), \(\text{Pr}(o_{i,n},d_{i,n},t_{i,n}^{\text{in}},t_{i,n}^{\text{out}}\mid r_{i,n} =m)\) is equivalent to the probability that passenger \(i\) enters the origin at time \(t_{i,n}^{\text{in}}\) and exits the destination at time \(t_{i,n}^{\text{out}}\) (denoted as \(\text{Pr}(t_{i,n}^{\text{in}},t_{i,n}^{\text{out}}\mid r_{i,n}=m)\)). Notice that
\[\text{Pr}(t_{i,n}^{\text{in}},t_{i,n}^{\text{out}}\mid r_{i,n}=m)=\text{Pr}(t _{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\cdot\text{Pr}(t_{i,n}^ {\text{in}}\mid r_{i,n})\propto\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{ \text{in}},r_{i,n}=m) \tag{7}\]
The "proportional to" is due to the fact that we do not model the tap-in time choices. Therefore, the likelihood function becomes
\[\mathcal{L}(\theta,\beta,\sigma)=\prod_{i\in\mathcal{P}}\text{Pr}(v_{i})= \prod_{i\in\mathcal{P}}\left[\sum_{r_{i,1},...,r_{i,N_{i}}}\prod_{n=1}^{N_{i} }\left[\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n})\right] \cdot\text{Pr}(r_{i,1},...,r_{i,N_{i}})\right] \tag{8}\]
The only unknown part in the likelihood function (Equation 8) is \(\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\), which is the probability that passenger \(i\) taps out at his/her destination at time \(t_{i,n}^{\text{out}}\) given that he/she uses path \(m\in\mathcal{R}_{i,n}\) and taps in at time \(t_{i,n}^{\text{in}}\). It can be derived by integrating over different itinerary scenarios, where each scenario is associated with a specific walking, boarding, and left behind possibility (Zhu et al., 2021).
To illustrate the derivation of \(\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\), we consider an example journey involving one transfer. Figure 2 shows, in a time-space diagram, all possible movements of a passenger tapping in at the origin station on line 1 and tapping out at the destination station on line 2. The movement along a specific line is referred to as a "path segment". A path segment is characterized by the line, the boarding station, and the transfer/alighting station. Each path segment is associated with a set of trains with run IDs indicating the dispatching time sequence. For example, the first path segment in Figure 2 has Trains 1, 2, 3, and 4 numbered in chronological order. Let the set of all trains associated with the \(j\)-th segment of path \(m\) for passenger \(i\)'s trip \(n\) be \(\Lambda_{i,n,m}^{j}\). For example, for the first path segment in Figure 2, we have \(\Lambda_{i,n,m}^{1}=\{\textit{Line 1 Train 1},\ \textit{Line 1 Train 2},\ \textit{Line 1 Train 3},\ \textit{Line 1 Train 4}\}\). With slight abuse of notation, for a Train \(I\in\Lambda_{i,n,m}^{j}\), Train \(I+k\) represents the train in the same line with ID\(+k\) (\(k\in\mathbb{Z}\)). For example, if Train \(I\) is _Line 1 Train 1_, then Train \(I+1\) is _Line 1 Train 2_.
After the passenger taps in, he/she walks directly to the platform at the origin station. The walking time from the entry gate to the origin station platform is referred to as the "access walking time". Depending on the available capacity (i.e., potentially left behind), this passenger may board Trains 2 or 3 on Line 1 for the first path segment. Note that Train 1 is not feasible because the passenger arrives on the platform after the departure of Train 1. After alighting at the transfer station, the passenger walks to the boarding platform for the next path segment on Line 2. The walking time from the alighting platform to the next boarding platform is referred to as the "transfer time". Similarly, depending on the available capacity, the passenger may board Trains 2 or 3 on Line 2 (Train 4 is not feasible because the passenger cannot exit at his/her current tap-out time if boarding Train 4). After alighting at the platform of the destination station, the passenger walks directly to the exit gate and taps out. The walking time from the alighting platform to the exit gate is referred to as the "egress walking time".
Generally, let us consider passenger \(i\in\mathcal{P}\) who uses path \(m\in\mathcal{R}_{i,n}\) in his/her \(n\)-th trip. Let \(J_{i,n,m}\) be the number of path segments for path \(m\) of this trip. An itinerary \(\mathcal{H}=\{I_{1},I_{2},...,I_{J_{i,n,m}}\}\) is defined by "a sequence of train IDs" (each train ID is associated with a path segment) representing a possible movement of the passenger in the system, where \(I_{j}\in\Lambda_{i,n,m}^{j}\) indicates Train \(I_{j}\) for the \(j\)-th path segment. For example, in Figure 2, a feasible itinerary is \(\mathcal{H}=\{\text{{Line 1 Train 2}, \ Line 2 Train 3}\}\), which indicates the itinerary that the passenger first boards Train 2 on Line 1 and then boards Train 3 on Line 2.
It is worth noting that for a specific passenger \(i\), there are a limited number of feasible itineraries given his/her tap-in and tap-out time and the feasibility of transfer times. For example, in Figure 2, any itineraries with trains in Line 1 departing before _Line 1 Train 2_ are not feasible because passengers cannot board those trains given their tap-in times. Let \(\Omega_{i,n,m}\) be the set of possible itineraries for path \(m\) in the \(n\)-th trip of passenger \(i\). We have
\[\Omega_{i,n,m}=\{\{I_{1},...,I_{J_{i,n,m}}\},\ \forall\ I_{j}\in\Lambda_{i,n,m}^{j}\ \mid\ T_{d}(I_{1})\geq t_{i,n}^{\text{in}},\ T_{a}(I_{J_{i,n,m}})\leq t_{i,n}^{\text{out}},\ T_{d}(I_{j+1})\geq T_{a}(I_{j}),\\ \forall\ j=1,...,J_{i,n,m}-1\} \tag{9}\]
where \(T_{d}(\cdot)\) (resp. \(T_{a}(\cdot)\)) is a function that returns the train's departure (resp. arrival) time at the boarding (resp. alighting) station of the corresponding segment. This information is available from the AVL data. Eq. 9 means that a feasible itinerary must satisfy: 1) Train \(I_{1}\) departs after \(t_{i,n}^{\text{in}}\) so that the passenger is able to board it (i.e., \(T_{d}(I_{1})\geq t_{i,n}^{\text{in}}\), assuming the minimum access walking time is zero); 2) the last train (i.e., Train \(I_{J_{i,n,m}}\)) arrives earlier than \(t_{i,n}^{\text{out}}\) so that the passenger is able to tap out at \(t_{i,n}^{\text{out}}\) (i.e., \(T_{a}(I_{J_{i,n,m}})\leq t_{i,n}^{\text{out}}\), assuming the minimum egress walking time is zero); and 3) Train \(I_{j+1}\) departs later than the arrival of Train \(I_{j}\) so that the passenger can successfully transfer (i.e., \(T_{d}(I_{j+1})\geq T_{a}(I_{j})\), assuming the minimum transfer time is zero).
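Because the candidate train lists per segment are short, \(\Omega_{i,n,m}\) can be built by brute-force enumeration. The sketch below assumes each segment is given as a list of (departure, arrival) time pairs in dispatching order and applies the three feasibility conditions of Eq. 9 with zero minimum walking times; the example times are illustrative, not taken from Figure 2.

```python
from itertools import product

def feasible_itineraries(segments, t_in, t_out):
    """Enumerate Omega (Eq. 9): feasible train-index sequences."""
    feasible = []
    for itin in product(*[range(len(s)) for s in segments]):
        trains = [segments[j][i] for j, i in enumerate(itin)]
        if trains[0][0] < t_in:        # condition 1: board after tap-in
            continue
        if trains[-1][1] > t_out:      # condition 2: arrive before tap-out
            continue
        # condition 3: each next train departs no earlier than the previous arrival
        if any(trains[j + 1][0] < trains[j][1] for j in range(len(trains) - 1)):
            continue
        feasible.append(itin)
    return feasible

# Two-segment example in the spirit of Figure 2
line1 = [(10, 18), (14, 22), (18, 26)]   # (depart origin, arrive transfer)
line2 = [(20, 30), (24, 34), (28, 38)]   # (depart transfer, arrive destination)
print(feasible_itineraries([line1, line2], t_in=9, t_out=35))
```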
Given the feasible itinerary set \(\Omega_{i,n,m}\), \(\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\) can be rewritten as:
\[\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n}=m)=\sum_{\mathcal{ H}\in\Omega_{i,n,m}}\text{Pr}(t_{i,n}^{\text{out}}\mid\mathcal{H},t_{i,n}^{ \text{in}},r_{i,n}=m)\cdot\text{Pr}(\mathcal{H}\mid t_{i,n}^{\text{in}},r_{i,n }=m). \tag{10}\]
We first consider the derivation of \(\text{Pr}(t_{i,n}^{\text{out}}\mid\mathcal{H},t_{i,n}^{\text{in}},r_{i,n}=m)\), the probability of tap out at time \(t_{i,n}^{\text{out}}\) given itinerary \(\mathcal{H}\), path \(m\) and tap-in time. Since the itinerary includes the information of the last train's arrival time \(T_{a}(I_{J_{i,n,m}})\), this probability is equivalent to the probability that the egress walking time is equal to \(t_{i,n}^{\text{out}}-T_{a}(I_{J_{i,n,m}})\). Let the egress walking time probability density function (PDF) for path \(m\) be \(f_{m}^{\text{Eg}}(\cdot)\). Then, \(\text{Pr}(t_{i,n}^{\text{out}}\mid\mathcal{H},t_{i,n}^{\text{in}},r_{i,n}=m)\) can be expressed as
\[\text{Pr}(t_{i,n}^{\text{out}}\mid\mathcal{H},t_{i,n}^{\text{in}},r_{i,n}=m)=f _{m}^{\text{Eg}}(t_{i,n}^{\text{out}}-T_{a}(I_{J_{i,n,m}})). \tag{11}\]
Note that Equation 11 uses the density to represent the probability, which does not affect the parameter estimations in the MLE.
Figure 2: Time-space diagram for a journey involving one transfer (adapted from Zhu et al. (2017b)). The red lines indicate feasible itineraries

Now let us consider the derivation of \(\text{Pr}(\mathcal{H}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\), the probability of choosing itinerary \(\mathcal{H}\) given path \(m\) and the tap-in time. Since the boarded train on segment \(j\) only depends on the boarded train on segment \(j-1\), and not on segment \(j-k\) for any \(k>1\), this probability can be factorized using the Markov property:
\[\text{Pr}(\mathcal{H}\mid t_{i,n}^{\text{in}},r_{i,n}=m) =\text{Pr}(I_{1},I_{2},...,I_{J_{i,n,m}}\mid t_{i,n}^{\text{in}},r _{i,n}=m)\] \[=\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\cdot\prod_{j= 2}^{J_{i,n,m}}\text{Pr}(I_{j}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m) \tag{12}\]
In the following, we elaborate on the derivation of \(\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\) and \(\text{Pr}(I_{j}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m)\), respectively.
Note that \(\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\) is the probability of boarding Train \(I_{1}\) on the first segment of path \(m\). There are two scenarios for this event: 1) **[No left behind]** the passenger arrives at the platform between the departure times of Trains \(I_{1}-1\) and \(I_{1}\) and boards Train \(I_{1}\) without being left behind; 2) **[With left behind]** the passenger arrives at the platform between the departures of Trains \(I_{1}-k-1\) and \(I_{1}-k\) and boards Train \(I_{1}\) after being left behind \(k\) times. Given the feasible itinerary set \(\Omega_{i,n,m}\), there is a maximum number of times the passenger can be left behind and still board Train \(I_{1}\). Denote this upper bound of \(k\) as \(B_{i,n,m}^{I_{1}}\), where \(B_{i,n,m}^{I_{1}}=\arg\max_{k}\{k\in\mathbb{N}\mid\exists\mathcal{H}^{\prime}\in\Omega_{i,n,m}\text{ s.t. }I_{1}^{\prime}=I_{1}-k,I_{1}^{\prime}\in\mathcal{H}^{\prime}\}\) and \(\mathbb{N}\) is the set of natural numbers (including zero). Train \(I_{1}-B_{i,n,m}^{I_{1}}\) is thus the earliest train that passenger \(i\) can board on the first segment.
Let \(t_{i,n,m}^{j}\) be the walking time from the alighting platform of segment \(j-1\) to the boarding platform of segment \(j\) in path \(m\) for passenger \(i\), and \(t_{i,n,m}^{0}\) is the access walking time. Then, \(t_{i,n}^{\text{in}}+t_{i,n,m}^{0}\) is the passenger arrival time at the platform of his/her origin station. Hence, the probability of arriving at the platform between the departure of Train \(I_{1}-k\) and \(I_{1}-k-1\) can be formulated as
\[\text{Pr}(T_{d}(I_{1}-k-1)\leq t_{i,n}^{\text{in}}+t_{i,n,m}^{0}\leq T_{d}(I_{1}-k)\mid t_{i,n}^{\text{in}},r_{i,n}=m)=\int_{T_{d}(I_{1}-k-1)-t_{i,n}^{\text{in}}}^{T_{d}(I_{1}-k)-t_{i,n}^{\text{in}}}f_{m}^{\text{Ac}}(t)dt:=\rho_{i,n,m}^{I_{1},k}\\ \forall k=0,1,..,B_{i,n,m}^{I_{1}} \tag{13}\]
where \(f_{m}^{\text{Ac}}(\cdot)\) is the access walking time PDF for path \(m\). Eq. 13 can be pre-calculated once \(f_{m}^{\text{Ac}}(\cdot)\) is given because it is a definite integral.
Let \(\eta_{i,n,m}^{j,k}\) be the probability of being left behind \(k\) times at the boarding station of the \(j\)-th segment of path \(m\) for passenger \(i\)'s trip \(n\). Let \(E_{i,n,m}^{I_{j},k}\) be the event that "passenger \(i\) in the \(n\)-th trip arrives at the boarding station of segment \(j\) of path \(m\) between the departure of Train \(I_{j}-k\) and \(I_{j}-k-1\) and is left behind \(k\) times to board Train \(I_{j}\)". We have:
\[\text{Pr}(E_{i,n,m}^{I_{1},k}|\ t_{i,n}^{\text{in}},r_{i,n}=m)=\rho_{i,n,m}^{I _{1},k}\cdot\eta_{i,n,m}^{1,k}\quad\forall k=0,1,..,B_{i,n,m}^{I_{1}}. \tag{14}\]
Then, \(\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\) can be rewritten as
\[\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)=\sum_{k=0}^{B_{i,n,m}^{I_{1}}}\text{Pr}(E_{i,n,m}^{I_{1},k}\mid t_{i,n}^{\text{in}},r_{i,n}=m)=\sum_{k=0}^{B_{i,n,m}^{I_{1}}}\rho_{i,n,m}^{I_{1},k}\cdot\eta_{i,n,m}^{1,k}. \tag{15}\]
This finishes the derivation of \(\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\). \(\eta_{i,n,m}^{1,k}\) can be estimated from AFC and AVL data using a Gaussian mixture model (Ma et al., 2019), which is described in Appendix A.
Now, we derive \(\text{Pr}(I_{j}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m)\) in Eq. 12, the probability of boarding train \(I_{j}\) given that the passenger has boarded train \(I_{j-1}\) on the \((j-1)\)-th segment of path \(m\). It is derived in a similar way as \(\text{Pr}(I_{1}\mid t_{i,n}^{\text{in}},r_{i,n}=m)\). Passenger \(i\) may arrive at the boarding station of segment \(j\) between the departure times of Trains \(I_{j}-k\) and \(I_{j}-k-1\) and be left behind \(k\) times to board Train \(I_{j}\) (note that \(k=0\) means no left behind). The probability of arriving at the platform between the departures of train \(I_{j}-k\) and \(I_{j}-k-1\) given he/she alights at \(T_{a}(I_{j-1})\) is formulated as
\[\text{Pr}(T_{d}(I_{j}-k-1)\leq T_{a}(I_{j-1})+t_{i,n,m}^{j}\leq T_ {d}(I_{j}-k)\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m)\] \[=\int_{T_{d}(I_{j}-k-1)-T_{a}(I_{j-1})}^{T_{d}(I_{j}-k)-T_{a}(I_{ j-1})}f_{m,j}^{\text{Tr}}(t)dt=\tilde{\rho}_{i,n,m}^{I_{j},k}\hskip 28.452756pt \forall k=0,1,..,B_{i,n,m}^{I_{j}}. \tag{16}\]
where \(B_{i,n,m}^{I_{j}}\) is the maximum possible left behind times when boarding train \(I_{j}\) given the feasible itinerary constraint, defined as \(\arg\max_{k}\{k\in\mathbb{N}\mid\exists\mathcal{H}^{\prime}\in\Omega_{i,n,m}\) s.t. \(I_{j}^{\prime}=I_{j}-k,I_{j}^{\prime}\in\mathcal{H}^{\prime}\}\). \(t_{i,n,m}^{j}\) is the transfer walking time from the alighting of train \(I_{j-1}\) to the next platform. \(f_{m,j}^{\text{Tr}}(\cdot)\) is the PDF of \(t_{i,n,m}^{j}\). \(\tilde{\rho}_{i,n,m}^{I_{j},k}\) is defined for the simplicity of expression. Given the definition of \(E_{i,n,m}^{I_{j},k}\), we have
\[\text{Pr}(E_{i,n,m}^{I_{j},k}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m)= \tilde{\rho}_{i,n,m}^{I_{j},k}\cdot\eta_{i,n,m}^{j,k}\quad\forall k=0,1,..,B_ {i,n,m}^{I_{j}}. \tag{17}\]
Then, \(\text{Pr}(I_{j}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m)\) can be rewritten as
\[\text{Pr}(I_{j}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m)=\sum_{k=0}^{B_{i, n,m}^{I_{j}}}\text{Pr}(E_{i,n,m}^{I_{j},k}\mid I_{j-1},t_{i,n}^{\text{in}},r_{i,n}=m )=\sum_{k=0}^{B_{i,n,m}^{I_{j}}}\tilde{\rho}_{i,n,m}^{I_{j},k}\cdot\eta_{i,n,m }^{j,k}. \tag{18}\]
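Eqs. 15 and 18 share the same structure: a sum over left behind counts \(k\) of an arrival-window probability (\(\rho\) or \(\tilde{\rho}\)) times a left behind probability \(\eta\). A minimal sketch, assuming a frozen scipy walking-time distribution for the \(\rho\) term and a pre-estimated \(\eta\) vector; the numbers in the usage example are hypothetical.

```python
from scipy import stats

def boarding_probability(dep_times, board_idx, arrival_base, walk_dist,
                         eta, max_left_behind):
    """Probability of boarding train `board_idx` on a segment (Eqs. 15/18).

    dep_times      : departure times of candidate trains, in dispatch order.
    arrival_base   : tap-in time (first segment) or previous train's arrival.
    walk_dist      : frozen scipy distribution of the access/transfer walk time.
    eta            : eta[k] = probability of being left behind k times.
    max_left_behind: the itinerary-feasibility bound B on k.
    """
    prob = 0.0
    for k in range(max_left_behind + 1):
        hi = dep_times[board_idx - k]
        lo = dep_times[board_idx - k - 1] if board_idx - k - 1 >= 0 else None
        # rho: platform arrival falls between departures of trains I-k-1 and I-k
        window = walk_dist.cdf(hi - arrival_base)
        window -= walk_dist.cdf(lo - arrival_base) if lo is not None else 0.0
        prob += window * eta[k]
    return prob

walk = stats.lognorm(s=0.4, scale=45.0)  # hypothetical ~45 s median walk time
print(boarding_probability([100, 220, 340, 460], board_idx=2, arrival_base=60,
                           walk_dist=walk, eta=[0.2, 0.5, 0.3],
                           max_left_behind=2))
```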
With all parts of \(\mathcal{L}(\theta,\beta,\sigma)\) in Equation 8 derived, there are still two remaining challenges for the MLE problem. First, the calculation of \(\text{Pr}(t_{i,n}^{\text{out}}\mid t_{i,n}^{\text{in}},r_{i,n})\) requires as inputs the left behind probabilities \(\eta_{i,n,m}^{j,k}\) and the three PDFs \(f_{m}^{\text{Ac}}(\cdot)\), \(f_{m}^{\text{Eg}}(\cdot)\), and \(f_{m,j}^{\text{Tr}}(\cdot)\). The PDFs can be obtained from field experiments, but obtaining the left behind probabilities is not trivial. In this study, we use the model proposed by Ma et al. (2019) to estimate \(\eta_{i,n,m}^{j,k}\) from AFC and AVL data; details can be found in Appendix A. The second challenge is that the calculation of \(\text{Pr}(v_{i})\) (Eq. 6) involves an exponentially large number of summations over different paths and requires integrating over a normally distributed random variable, which makes it numerically hard to solve. In the following section, we derive a new conditional probability-based formulation to eliminate the large number of summations, and use a numerical integration approach for the normal random variable, which leads to a tractable likelihood function and enables efficient model estimation.
### Tractable log-likelihood function
To eliminate the exponentially large number of summations over different paths in Eq. 6, we observe that given \(\alpha_{i}^{k}\) and \(g_{i}\), passenger \(i\)'s route choices become independent across trips. Mathematically,
\[\text{Pr}(v_{i}\mid\alpha_{i}^{k},g_{i}=G_{k})=\prod_{n=1}^{N_{i}}\text{Pr}(t_ {i,n}^{\text{out}},t_{i,n}^{\text{in}}\mid\alpha_{i}^{k},g_{i}=G_{k})=\prod_{n =1}^{N_{i}}\sum_{m_{n}\in\mathcal{R}_{i,n}}\text{Pr}(t_{i,n}^{\text{out}} \mid t_{i,n}^{\text{in}},r_{i,n}=m_{n})\cdot\pi_{i,n,m_{n}}^{k}[\alpha_{i}^{k}] \tag{19}\]
Note that Eq. 19 only has a total of \(N_{i}\times\left|\mathcal{R}_{i,n}\right|\) summation terms, while this number in Eq. 6 is \(\left|\mathcal{R}_{i,n}\right|^{N_{i}}\). Based on Eq. 19, \(\text{Pr}(v_{i})\) can be obtained by integrating over \(\alpha_{i}^{k}\) and summing over \(g_{i}\). Since \(\alpha_{i}^{k}\) is a normal random variable, an approximate numerical integration approach is used to obtain a tractable formula. Note that there is a large class of quadrature rules for numerical integration (Davis and Rabinowitz, 2007). In this paper, we use the simplest midpoint rule, as the choice of quadrature is not the focus of this study. Let \(\alpha^{\text{U}}\) and \(\alpha^{\text{L}}\) be the upper and lower bounds for \(\alpha_{i}^{k}\). We divide \([\alpha^{\text{L}},\alpha^{\text{U}}]\) into discrete intervals of equal length \(\Delta\). Let \(\mathcal{S}\) be the set of midpoints of these intervals; specifically, \(\mathcal{S}=\{\alpha^{\text{L}}+\frac{k\cdot\Delta}{2}\mid k=1,3,5,...,\frac{2(\alpha^{\text{U}}-\alpha^{\text{L}})}{\Delta}-1\}\). Hence, \(\text{Pr}(v_{i})\) can be rewritten as
\[\text{Pr}(v_{i})\approx\sum_{k=1}^{K}\text{Pr}(g_{i}=G_{k})\cdot\sum_{\alpha_ {i}^{k}\in\mathcal{S}}\text{Pr}(v_{i}\mid\alpha_{i}^{k},g_{i}=G_{k})\cdot f( \alpha_{i}^{k})\cdot\Delta \tag{20}\]
\(\Delta\) is the parameter determining the trade-off between approximation accuracy and computational efficiency, where a smaller \(\Delta\) indicates a more fine-grained integration, but higher computational cost.
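A minimal sketch of the midpoint-rule integration in Eq. 20, treating the conditional probability \(\text{Pr}(v_{i}\mid\alpha_{i}^{k},g_{i}=G_{k})\) as a black-box callable; the bounds and step are the defaults used later in the case study.

```python
import numpy as np
from scipy.stats import norm

def integrate_over_alpha(cond_prob, sigma, a_lo=-3.0, a_hi=3.0, delta=1.0):
    """Midpoint-rule approximation of the alpha integral in Eq. 20."""
    midpoints = np.arange(a_lo + delta / 2.0, a_hi, delta)  # the set S
    values = np.array([cond_prob(a) for a in midpoints])
    return float(np.sum(values * norm.pdf(midpoints, scale=sigma) * delta))

# Sanity check: integrating the constant 1 recovers the probability mass
# of N(0, sigma^2) on [a_lo, a_hi] (about 0.997 for sigma = 1).
print(integrate_over_alpha(lambda a: 1.0, sigma=1.0))
```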
Given the new formulation of \(\text{Pr}(v_{i})\), we can use \(\mathcal{L}(\theta,\beta,\sigma)=\prod_{i\in\mathcal{P}}\text{Pr}(v_{i})\) to evaluate the likelihood function with a tractable formulation.
### Model estimation
The new log-likelihood function can be expressed as
\[\mathcal{L}\mathcal{L}(\theta,\beta,\sigma)=\sum_{i\in\mathcal{P}}\log\text{ Pr}(v_{i})=\sum_{i\in\mathcal{P}}\log\left[\sum_{k=1}^{K}\sum_{\alpha_{i}^{k}\in \mathcal{S}}\text{Pr}(g_{i}=G_{k})\cdot\text{Pr}(v_{i}\mid\alpha_{i}^{k},g_{ i}=G_{k})\cdot f(\alpha_{i}^{k})\cdot\Delta\right] \tag{21}\]
As \(\mathcal{L}\mathcal{L}(\theta,\beta,\sigma)\) is a combination of elementary functions, it is continuous and differentiable. Therefore, the MLE can be solved with any first- or second-order optimization method. In this study, the BFGS algorithm is used (Nocedal and Wright, 2006). BFGS is a quasi-Newton method that uses only first derivatives and has demonstrated good performance on many optimization problems. However, as the function includes the multiplication of several nonlinear terms, its convexity is unknown: \(\mathcal{L}\mathcal{L}\) may not be concave, and the BFGS algorithm may converge to different local optima under different initializations. Hence, we conduct a sensitivity analysis in Section 3.1 with respect to different initial values and show that the model estimation results are stable. Moreover, the numerical results show that \(\mathcal{L}\mathcal{L}\) is concave within a reasonable range of path attribute values.
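The MLE workflow then amounts to handing the negative of Eq. 21 to a quasi-Newton optimizer. The self-contained toy below substitutes a simple binary logit likelihood on synthetic data for Eq. 21, purely to illustrate the BFGS-based estimation loop; it is not the paper's full model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in data for the illustration
X = rng.normal(size=(500, 2))
true_beta = np.array([-0.8, 0.5])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

def neg_log_likelihood(beta):
    u = X @ beta
    # Binary-logit log-likelihood: sum_i [y_i * u_i - log(1 + exp(u_i))]
    return -np.sum(y * u - np.logaddexp(0.0, u))

result = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print(result.x)  # should be close to true_beta
```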
After obtaining the optimal parameters \(\theta^{*}\), \(\beta^{*}\), and \(\sigma^{*}\), we calculate the t-values of the estimated parameters based on a numerically estimated Hessian matrix and the Cramer-Rao bound. Note that as \(\mathcal{L}\mathcal{L}\) is second-order differentiable, the analytical Hessian matrix can also be derived. The numerical Hessian matrix is used for simplification due to the complex function form. In this study, we adopt the formulation with fourth-order approximation under uniform grid spacing to calculate the second derivative (Fornberg, 1988). The exact formulas are attached in B (other approximation formulas can also be used). With the second derivative formulas, we can calculate the numerical Hessian matrix of \(\mathcal{L}\mathcal{L}(\theta,\beta,\sigma)\) at point \((\theta^{*},\beta^{*},\sigma^{*})\). Denote the Hessian matrix as \(\hat{H}\). Note that, from the second-order optimality conditions, \(\hat{H}\) is negative semi-definite, which is the algebraic equivalent of the local concavity of the log-likelihood function (Bierlaire, 2020).
Let \(\Theta=(\theta,\beta,\sigma)\) be a vector of all parameters. Using the Cramer-Rao bound (Cramer, 2016; Rao, 1992), the variance of an estimated parameter \(\hat{\Theta}_{k}\) is
\[\text{Var}[\hat{\Theta}_{k}]=-\hat{H}_{k,k}^{-1} \tag{22}\]
where \(\hat{H}^{-1}\) is the inverse of the Hessian matrix and \(\hat{H}_{k,k}^{-1}\) is its \(k\)-th diagonal element. Then, the corresponding t-value is calculated as:
\[\text{t-value}[\hat{\Theta}_{k}]=\frac{\hat{\Theta}_{k}}{\sqrt{\text{Var}[\hat {\Theta}_{k}]}} \tag{23}\]
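A minimal sketch of this procedure: a central-difference Hessian (a simple second-order scheme here; the paper's fourth-order stencils are given in Appendix B) followed by the Cramer-Rao variances of Eq. 22 and the t-values of Eq. 23. Note that `log_lik` must be the log-likelihood itself (not its negative) so that the Hessian is negative semi-definite.

```python
import numpy as np

def numerical_hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function f at point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def t_values(log_lik, theta_hat):
    H = numerical_hessian(log_lik, theta_hat)
    var = -np.linalg.inv(H).diagonal()   # Cramer-Rao bound, Eq. 22
    return theta_hat / np.sqrt(var)      # Eq. 23

# Applied to the toy logit above: t_values(lambda b: -neg_log_likelihood(b), result.x)
```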
## 3 Case study
### Model validation and sensitivity test
It is difficult to collect passengers' actual path choices in reality. To validate the proposed approach, we use synthetic data generated by simulating passengers' route choices, train operations, and their interactions (Mo et al., 2022; Zhu et al., 2021; Mo et al., 2020).
Figure 3 shows the configuration of the synthetic urban rail network, where there are 7 stations (A\(\sim\)G) and 3 lines (red, green, and blue). The number on each link represents the in-vehicle travel time. This network is extracted from the MTR metro system in Hong Kong. It is also representative of typical metro network structures in terms of lines and transfers. The platform of station C of the red line in the up direction is assumed to be crowded with extensive left behind. All the other platforms are assumed to have no left behind.
To generate the synthetic data, we assume that passengers' path choice behavior is based on four path attributes: 1) in-vehicle time (i.e., the train run time of a path); 2) out-of-vehicle time (i.e., the sum of access, egress, transfer, and waiting time without left behind); 3) the number of transfers (i.e., the number of times transferring on the path); and 4) denied waiting time (i.e., the waiting time due to being left behind at the crowded platforms).

Figure 3: Synthetic urban rail network

We also assume passengers' latent groups can be characterized by two sociodemographic variables \(x^{(1)}\) and \(x^{(2)}\), where \(x^{(1)}\) is drawn from \(\mathcal{U}[-4,4]\) and \(x^{(2)}\) is drawn from \(\mathcal{U}[-2,2]\). Suppose there are two latent groups of synthetic passengers: "time-sensitive" (TS) and "comfort-aware" (CA). TS passengers tend to minimize their total travel time when making path choices, meaning that in-vehicle time, out-of-vehicle time, and denied waiting time have similar weights in their path choice utility. CA passengers prefer paths with less walking or waiting time even if the in-vehicle time is longer; that is, the out-of-vehicle time and denied waiting time have a higher impact on their path choice utilities than the in-vehicle time.
Table 3 shows the parameters of the latent class path choice model used for generating the synthetic data. These parameters are chosen based on the survey results in Jin et al. (2017). We set the in-vehicle time and path size factor parameters to be the same for TS and CA passengers, in accordance with the survey modeling results. In addition, we set the remaining choice parameters to be \(0.7\times\) (\(1.5\times\)) the survey values for the TS (CA) group. The parameters of the latent class model are set to 1.5 and 0.6 for \(x^{(1)}\) and \(x^{(2)}\), respectively.
Table 4 summarizes the parameters of the network, train operations, and passengers. The synthetic data is generated for 9 OD pairs (origin stations A, B, D, and destination stations E, F, G) by simulating the tap-out time given tap-in time. All the OD pairs have 2 paths. For example, the possible paths for OD pair (A, E) are A-B-E and A-B-C-E. Without loss of generality, we assume that there are 2,700 passengers, each of whom performs 3 trips (i.e., \(N_{i}=3\)). Each trip is associated with a randomly selected OD pair. Algorithm 1 describes the detailed synthetic data generation procedure.
| Variables | TS | CA | Jin et al. (2017) |
|---|---|---|---|
| _Path choice parameters_ | | | |
| In-veh time | -0.2676 | -0.2676 | -0.2676 |
| Out-of-veh time | -0.2980 | -0.6386 | -0.4257 |
| Num of transfer | -1.3068 | -3.1737 | -1.8669 |
| Denied waiting time | -0.3222 | -0.7825 | -0.4603 |
| \(\log PS_{m}\) | 0.5815 | 0.5815 | 0.5815 |
| \(\sigma^{k}\) | 1 | 1 | N/A |
| _Latent class parameters_ | | | |
| \(x^{(1)}\) | 1.5 | | |
| \(x^{(2)}\) | 0.6 | | |

Table 3: Path choice parameters for synthetic data generation
In total, 8,100 trips from the 2,700 passengers were generated. The synthetic AFC data is then used for model estimation. As we have the "true" value of choice parameters (Table 3), we can validate the model's performance. The MLE is solved using the BFGS algorithm (Fletcher, 2013) in the Python Scipy package. \(\alpha^{\text{L}}=-3\), \(\alpha^{\text{U}}=3\), and \(\Delta=1\) are used for numerical integration.
Table 5 shows the estimation results of the path choice parameters. The percentage values in brackets quantify the relative errors with respect to the "true" parameters. Note that the actual walking speed distribution and left behind probabilities are used in the model estimation; sensitivity analyses with respect to these inputs are presented later in this section. For comparison purposes, we also estimate a baseline model without latent classes. The latent class model recovers the true parameters with a mean percentage error of around 10% and outperforms the baseline model in estimation accuracy. The out-of-vehicle time parameter for the TS group has the largest error (-33.2%), possibly because the out-of-vehicle time is highly correlated with the number of transfers, which makes the numerical estimation harder. Note that, as the absolute values of these parameters are relatively small, the absolute errors of the estimated parameters are acceptable.
| Entity | Settings |
|---|---|
| Network | Walk distance 30–50 meters (access, egress, transfer) |
| Train operations | Headway \(2+\delta_{H}\) minutes, with \(\delta_{H}\) drawn uniformly from \([-10,10]\) seconds; in-vehicle time \(+\delta_{V}\) (see Figure 3), with \(\delta_{V}\) drawn uniformly from \([-20,20]\) seconds |
| Passengers | Walk speed follows a lognormal distribution with mean 1.2 m/s and standard deviation 0.5 m/s; left behind probabilities at station C, red line, up direction: 20% no left behind, 50% left behind once, and 30% left behind twice |

Table 4: System settings for synthetic data generation
In terms of goodness-of-fit, the initial log-likelihood (denoted as \(\mathcal{LL}_{0}\)) for the null model (with all parameters zero) is \(-52,219.17\), the final latent-class model log-likelihood (denoted as \(\mathcal{LL}^{*}\)) is \(-51,526.13\), and the final baseline model log-likelihood (denoted as \(\mathcal{LL}^{\text{B}}\)) is \(-51,571.67\). We conduct a log-likelihood ratio test (Wilks, 1938) and obtain the statistic \(\chi^{2}=-2(\mathcal{LL}^{\text{B}}-\mathcal{LL}^{*})=91.08\), which gives a p-value of (effectively) 0 with 5 degrees of freedom (i.e., the number of parameters of the latent class model minus that of the baseline model). This indicates that the latent-class specification is significantly better than the baseline model. We also calculate the log-likelihood with the "true" parameters (referred to as \(\mathcal{LL}^{\text{True-para}}\)); its value is \(-51,535.16\), which is smaller than \(\mathcal{LL}^{*}\). That is, the estimated parameters have a better goodness-of-fit than the "true" parameters, suggesting that the estimation errors mostly come from random errors in the data generation process rather than from the model estimation.
All parameters have absolute t-values greater than \(1.96\), showing significant impacts of these parameters on passengers' path choices. This is reasonable because the synthetic data are generated with those parameters. We also observe that the in-vehicle time shows the highest significance compared to other cost parameters, which is consistent with survey results (Jin et al., 2017).
To further validate the model performance, we conduct sensitivity analyses to explore the impact of parameter initialization on the model's performance. We also evaluate whether inaccurate estimates of the walking speed distribution and the left behind distribution affect the model's performance.
Figure 4 shows the sensitivity analysis on the initialization of the parameters. A total of 20 experiments are conducted. In each experiment, the initial values of all parameters are drawn uniformly from \(\mathcal{U}[-5,5]\). We observe that the final estimated parameters all converged to the same values regardless of initialized parameter values, showing the estimation robustness against the parameter initialization.
| **Variables** | Latent class: Estimation (Error) | Latent class: t-value | Baseline: Estimation (Error) | Baseline: t-value |
|---|---|---|---|---|
| _Choice model_ | | | | |
| In-veh time | -0.2599 (-2.9%) | -10.62 | -0.2254 (-15.8%) | -10.03 |
| TS: Out-of-veh time | -0.1991 (-33.2%) | -2.88 | -0.3010 (+1.0%) | -6.88 |
| TS: Num of transfer | -1.3327 (+1.9%) | -4.91 | -1.8777 (+43.7%) | -16.77 |
| TS: Denied waiting time | -0.3789 (+17.6%) | -6.18 | -0.4901 (+52.1%) | -18.80 |
| CA: Out-of-veh time | -0.5419 (-15.14%) | -3.82 | -0.3010 (-52.9%) | -6.88 |
| CA: Num of transfer | -2.8071 (-11.5%) | -4.98 | -1.8777 (-40.8%) | -16.77 |
| CA: Denied waiting time | -0.7298 (-6.7%) | -6.35 | -0.4901 (-37.4%) | -18.80 |
| \(\log PS_{m}\) | 0.6758 (+16.2%) | 12.72 | 0.6054 (+4.1%) | 12.31 |
| \(\sigma^{k}\) | 0.9191 (-8.1%) | 11.05 | 0.9122 (-8.8%) | 14.40 |
| _Latent group model_ | | | | |
| \(x^{(1)}\) | 1.8637 (+24.3%) | 7.01 | N/A | N/A |
| \(x^{(2)}\) | 0.6213 (+3.6%) | 4.99 | N/A | N/A |

Number of passengers: 2,700. Number of observations: 8,100.
\(\mathcal{LL}_{0}\): -52,219.17; \(\mathcal{LL}^{*}\): -51,526.13; \(\mathcal{LL}^{\text{B}}\): -51,571.67; \(\mathcal{LL}^{\text{True-para}}\): -51,535.16.
\(\chi^{2}=-2(\mathcal{LL}^{\text{B}}-\mathcal{LL}^{*})=91.08\); likelihood ratio test p-value: 0.

Table 5: Estimation results for the synthetic data
Figure 5 illustrates the \(\mathcal{LL}\) value as a function of the variable values. The log-likelihood function is concave around the optimal values\({}^{1}\), which further indicates that the estimation results are robust.
Footnote 1: Due to space limitations, we only show the function curves with respect to four variables.
Figure 4: Estimated parameters with different initializations

Figure 6 shows the model estimation results with respect to different inputs of walking speeds. We evaluate the model's robustness with respect to errors in the walking speed distribution because the estimated walking speeds may not be accurate in the real world. Let \(\mu^{\text{WS}}\) and \(\sigma^{\text{WS}}\) be the actual walking speed mean and standard deviation used when generating the synthetic data (i.e., \(\mu^{\text{WS}}=1.2\,m/s\) and \(\sigma^{\text{WS}}=0.5\,m/s\)). When estimating the model, we set the speed distribution parameters to \((\Gamma_{1}\cdot\mu^{\text{WS}},\Gamma_{2}\cdot\sigma^{\text{WS}})\), where \(\Gamma_{1},\Gamma_{2}\in\{0.8,1,1.2\}\), representing different perturbations of the speed parameter inputs (\(\Gamma_{1}=\Gamma_{2}=1\) means no errors). Figure 6 shows that perturbations of the walking speed distribution do not have much impact on the model performance.
Figure 7 shows the estimation results with respect to different input left behind probabilities at station C, red line, up direction, which indicates the model's performance when there are estimation errors in the left behind probabilities. The left behind probabilities are chosen for sensitivity analysis because they are estimated from data and may suffer from errors. Three scenarios are compared: actual crowding (20% no left behind, 50% left behind once, and 30% left behind twice, i.e., no errors), less crowding (80% no left behind, 20% left behind once, and 0% left behind twice), and more crowding (10% no left behind, 20% left behind once, and 70% left behind twice). The parameters of in-vehicle time, \(x^{(1)}\), and \(x^{(2)}\) are not sensitive to the left behind inputs, but other parameters (such as the number of transfers and out-of-vehicle time) are strongly affected. The reason may be that errors in the left behind estimation distort the model's evaluation of other factors' impacts on travel time: an additional 10-minute trip time can be caused by more transfers, by high out-of-vehicle time, or by being left behind. If left behind is not estimated accurately, the estimated impact of the other factors on total travel time (and passenger choices) may be biased. Hence, the results highlight the importance of incorporating crowding and capacity constraints in the estimation of path choices.
Figure 6: Sensitivity analysis on walking speed inputs
### MTR empirical case study
The proposed method is also applied using actual AFC and AVL data from the Hong Kong MTR network. Figure 8 shows the MTR network and the selected OD pair areas (origins in the black dashed box and destinations in the red one). These OD pairs are selected because 1) there are multiple paths between each OD pair, which supports the application of path choice modeling, and 2) these stations have high enough OD passenger flows to allow the estimation of the left behind probability distribution (Ma et al., 2019).
We randomly select 3,425 passengers with trips between these OD pairs. We consider trips with departure times in the evening peak (5:30 PM - 7:30 PM). Finally, a total of 6,425 trips were collected from the AFC data in July 2018 (i.e., on average each passenger had 1.88 trips). The walking time is assumed to follow the log-normal distribution with mean and variance calibrated by MTR employees. \(\alpha^{\text{L}}=-9\), \(\alpha^{\text{U}}=9\), and \(\Delta=1\) are used for numerical integration.
Figure 7: Sensitivity analysis on left behind inputs
We assume passengers' latent groups can be characterized by the following attributes, readily extracted from AFC data: 1) travel frequency, defined as the average number of days with travel per week; 2) schedule flexibility, measured by the standard deviation of the first trip's departure time on weekdays; 3) spatial concentration of trips, defined as the minimum number of stations that covers 70% of trips in a month; and 4) whether the cardholder is a student (dummy variable). All these attributes are calculated from the AFC data of July 2018. The descriptive statistics of the data are shown in Table 6. Two latent groups are considered for the experiment, as two groups are more interpretable in terms of estimation results than a larger number. The path attributes are the same as in the synthetic data experiment, except that the "number of transfers" is dropped from the model due to its high correlation with the "out-of-vehicle time". As in the synthetic data experiment, we make the in-vehicle time parameter the same for the two groups so that we can compare the scales of the out-of-vehicle time and denied waiting time parameters.
| **Variables** | **Mean** | **Std. Dev.** | **Max** | **Min** |
|---|---|---|---|---|
| _Individual characteristics_ | | | | |
| Avg # travel days per week | 4.31 | 2.06 | 7.00 | 0.12 |
| Std. of 1st trip dept. time (hr) | 0.68 | 0.51 | 3.54 | 0.01 |
| Min # stations with 70% trips | 3.22 | 0.15 | 15 | 1 |
| If student (Yes = 1) | 0.10 | 0.30 | 1 | 0 |
| _Path attributes_ | | | | |
| In-veh time (min) | 34.0 | 15.5 | 85.8 | 3.50 |
| Out-of-veh time (min) | 9.20 | 1.11 | 24.6 | 0.50 |
| Denied waiting time (min) | 1.03 | 2.48 | 16.3 | 0.0 |

Number of passengers: 3,425. Number of trips: 6,425.

Table 6: Descriptive statistics of the MTR data
Figure 8: Hong Kong MTR network
The estimation results are shown in Table 7. The two latent groups are referred to as Groups 1 and 2, respectively; Group 2 is set as the base alternative in the group-assignment multinomial logit model (Eq. 1). The signs and scales of all parameters are reasonable, and all time-related parameters are significantly negative. The value of the in-vehicle time parameter is -0.2785, which is similar to the survey result (-0.2676) (Jin et al., 2017). For both Groups 1 and 2, the absolute values of the parameters for out-of-vehicle time and denied waiting time are greater than that of the in-vehicle time, reflecting that passengers are more sensitive to walking and waiting times than to time spent in the vehicle. These results are also consistent with the survey.
Comparing the results of Groups 1 and 2, we observe that the out-of-vehicle and denied waiting times have a larger impact on the path choice utility of passengers in Group 1, which implies that Group 1 is comfort-aware (CA) and Group 2 is time-sensitive (TS). The parameters determining the latent groups indicate that CA passengers travel less frequently (i.e., the effect of avg. # travel days per week is negative) and have more schedule flexibility (i.e., the effect of the std. of the 1st trip departure time is positive), and that students are more likely to belong to this group. This suggests that CA passengers are most likely irregular users who mainly use the MTR system for non-work activities (such as entertainment); hence, they care more about trip comfort and prefer paths with less walking and waiting time. In contrast, TS passengers have higher travel frequency and less schedule flexibility, and they may use the metro system mostly for regular commuting trips; they care more about saving total travel time so as to arrive at their destinations on time. The spatial concentration variable (i.e., min # stations with 70% trips), though having a positive effect on being in the CA group, is not statistically significant (t-value 0.4). \(\sigma^{k}\) is significant, which means that panel effects matter and are diverse across the population (i.e., some passengers have more stable path choice patterns than others).
| **Variables** | Group 1 (CA): Estimation | Group 1 (CA): t-value | Group 2 (TS): Estimation | Group 2 (TS): t-value |
|---|---|---|---|---|
| _Choice model_ | | | | |
| In-veh time | -0.2785 | -18.13 | Same as Group 1 | |
| Out-of-veh time | -1.1320 | -2.09 | -0.7457 | -9.04 |
| Denied waiting time | -3.0450 | -1.95 | -0.4517 | -9.90 |
| \(\log PS_{m}\) | 1.2611 | 4.38 | Same as Group 1 | |
| \(\sigma^{k}\) | 2.7294 | 11.99 | Same as Group 1 | |
| _Latent group model (Group 2 is set as the base group)_ | | | | |
| ASC\({}^{1}\) | -0.9644 | -6.83 | 0 (fixed) | |
| Avg # travel days per week | -0.0987 | -3.91 | 0 (fixed) | |
| Std. of 1st trip dept. time | 0.8667 | 4.19 | 0 (fixed) | |
| Min # stations with 70% trips | 0.1865 | 0.40 | 0 (fixed) | |
| If student (Yes = 1) | 1.0192 | 2.14 | 0 (fixed) | |

\({}^{1}\) ASC: alternative specific constant. Number of passengers: 3,425. Number of observations: 6,425.
\(\mathcal{LL}_{0}\): -32,157.84; final \(\mathcal{LL}^{*}\): -30,867.48. \(\chi^{2}=-2(\mathcal{LL}_{0}-\mathcal{LL}^{*})=2,580.72\); likelihood ratio test p-value: 0.

Table 7: Estimation results for the real-world data
## 4 Conclusion
Understanding passenger path choices is important for operations management in urban rail systems, especially under crowded conditions. This paper presents a probabilistic approach for path choice model estimation with train capacity constraints using AFC (tap-in and tap-out) and AVL data. Choice heterogeneity and longitudinal behavioral correlations are captured by a latent class model with panel effects. Passengers' movements are formulated using a passenger-to-train assignment model with explicit modeling of the access/egress, left behind (crowding), and transfer processes. A tractable likelihood function is derived to facilitate the model estimation. The t-values of the estimated parameters are calculated based on the numerically estimated Hessian matrix and the Cramer-Rao bound. The method is data-driven, flexible enough to accommodate different choice models, and easy to solve using non-linear optimizers.
The model performance is validated using synthetic data to estimate the individual choice parameters. The sensitivity analysis confirms its robustness to parameter initialization and to small errors in the inputs (walking speed and left behind distributions). It also highlights that neglecting capacity constraints (left behind) can lead to biased estimates of path choice parameters under crowded conditions. The model is also applied using real-world data from the MTR system in Hong Kong, revealing two different groups of passengers: time-sensitive (TS) and comfort-aware (CA). TS passengers are generally regular commuters with high travel frequency and little schedule flexibility; they are more likely to choose paths with shorter total travel times. CA passengers care more about travel comfort and prefer paths with less walking and waiting time.
Example policy implications can be derived from the case study. As the two groups of passengers value in-vehicle and out-of-vehicle times differently, transit agencies can use this insight to design customized route recommendation systems with better passenger acceptance. Moreover, route recommendations can help relieve congestion by directing TS and CA passengers to different routes during peak hours. Interesting future research directions include: 1) exploring the evolution of choice preferences and learning behavior over time under network interventions (such as network extensions and demand management policies); and 2) developing downstream models that utilize the latent passenger group information for better route recommendations or fare policies.
## 5 Authors' contribution
**Baichuan Mo**: Conceptualization, Methodology, Software, Formal analysis, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization. **Zhenliang Ma**: Conceptualization, Methodology, Supervision, Formal analysis, Data Curation, Writing - Original Draft, Writing - Review & Editing. **Haris N. Koutsopoulos**: Conceptualization, Supervision, Formal analysis. **Jinhua Zhao:** Conceptualization, Supervision, Project administration, Funding acquisition.
## 6 Acknowledgement
The authors would like to thank Chicago Transit Authority (CTA) for their support and data availability for this research.
## Appendix A Left behind probability calibration
The left-behind probability can be estimated with a Gaussian mixture model proposed by Ma et al. (2019). The main idea is that passengers left behind different numbers of times have different journey times. Let \(t_{i}^{\text{In}}\) be the random variable indicating the journey time of passenger \(i\) (i.e., tap-out time minus tap-in time). Figure A.9 shows an example of a journey time distribution between a specific OD pair. We observe three clusters, indicating passengers left behind 0, 1, and 2 times at the origin station. Hence, we can model the journey time distribution as a Gaussian mixture model:
\[\text{Pr}(t_{i}^{\text{In}};\mathbf{\mu},\mathbf{\sigma},\mathbf{w})=\sum_{c=0}^{C}w_{c}\cdot\Phi(t_{i}^{\text{In}};\mu_{c},\sigma_{c})\] (A.1)
where \(w_{c}\) is the (unknown) fraction of passengers in cluster \(c\), with \(w_{c}>0\) and \(\sum_{c=0}^{C}w_{c}=1\). \(C\) is the maximum number of times a passenger can be left behind at the origin station, so there are \(C+1\) clusters. \(\Phi(t_{i}^{\text{In}};\mu_{c},\sigma_{c})\) is the PDF of the Gaussian distribution \(\mathcal{N}(\mu_{c},\sigma_{c})\) evaluated at \(t_{i}^{\text{In}}\).
The Gaussian mixture model can be estimated by solving the following problem:
\[\begin{aligned}\max_{\mathbf{\mu},\mathbf{\sigma},\mathbf{w}}\quad&\sum_{i\in\mathcal{P}^{\prime}}\log\bigl(\text{Pr}(t_{i}^{\text{In}};\mathbf{\mu},\mathbf{\sigma},\mathbf{w})\bigr)&&\text{(A.2a)}\\ \text{s.t.}\quad&\text{auxiliary constraints}&&\text{(A.2b)}\\ &\sum_{c=0}^{C}w_{c}=1&&\text{(A.2c)}\\ &w_{c}\geq 0\quad\forall c=0,1,\ldots,C&&\text{(A.2d)}\end{aligned}\]
where \(\mathcal{P}^{\prime}\) is the set of passengers used for the left-behind probability estimation. The auxiliary constraints are used for model stability. These constraints encode prior knowledge of the journey time distribution, such as that the difference between \(\mu_{c}\) and \(\mu_{c+1}\) should be close to one headway, and that the mean journey time without being left behind (i.e., \(\mu_{0}\)) should be close to the sum of the access, egress, and in-vehicle times. More information on the model can be found in Ma et al. (2019).
Figure A.9: Example of journey time distribution
The model is station- and time-specific, which enables the calibration of left-behind probabilities for each station at different time intervals in the system (by adjusting \(\mathcal{P}^{\prime}\)). The estimated \(w_{c}\) (i.e., the fraction of passengers in cluster \(c\)) is the probability of being left behind \(c\) times at the corresponding station and time period.
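As an illustration of this calibration, the following is a minimal Python sketch of the mixture fit in (A.1)-(A.2). The helper name `fit_left_behind`, the tying of cluster means to the headway (a simple stand-in for the auxiliary constraints (A.2b)), and the optimizer choice are illustrative assumptions, not the exact procedure of Ma et al. (2019).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_left_behind(journey_times, headway, mu0_guess, C=2):
    """Fit the (C+1)-component mixture of Eq. (A.1) by maximizing (A.2a).
    Cluster means are tied as mu_c = mu_0 + c*headway, a simple stand-in
    for the auxiliary constraints (A.2b)."""
    t = np.asarray(journey_times, dtype=float)

    def neg_ll(theta):
        mu0 = theta[0]
        sigma = np.exp(theta[1:C + 2])                  # positive std devs
        w = np.exp(theta[C + 2:])
        w = w / w.sum()                                 # enforces (A.2c)-(A.2d)
        mus = mu0 + headway * np.arange(C + 1)
        mix = sum(w[c] * norm.pdf(t, mus[c], sigma[c]) for c in range(C + 1))
        return -np.sum(np.log(mix + 1e-300))

    theta0 = np.concatenate(([mu0_guess], np.zeros(2 * (C + 1))))
    res = minimize(neg_ll, theta0, method="Nelder-Mead")
    w = np.exp(res.x[C + 2:]); w /= w.sum()
    return w, res.x[0] + headway * np.arange(C + 1)     # P(left behind c), mu_c
```

The returned weights \(w_{c}\) play the role of the station- and time-specific left-behind probabilities described above.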
## Appendix B Numerical calculation of Hessian matrix
According to Fornberg (1988), given a general function \(f(x,y)\), the second derivative with fourth-order accuracy can be calculated as
\[\frac{\partial^{2}f(x,y)}{\partial x^{2}}|_{x_{0},y_{0}}=\frac{1} {h_{x}^{2}}\left[\frac{-1}{12}f(x_{-2},y_{0})+\frac{4}{3}f(x_{-1},y_{0})\right.\] \[\left.+\frac{-5}{2}f(x_{0},y_{0})+\frac{4}{3}f(x_{+1},y_{0})+ \frac{-1}{12}f(x_{+2},y_{0})\right]+\mathcal{O}(h_{x}^{4}) \tag{B.1}\]
and
\[\frac{\partial^{2}f(x,y)}{\partial x\partial y}|_{x_{0},y_{0}}= \frac{1}{h_{x}h_{y}}\left[\frac{-1}{48}f(x_{-2},y_{-2})+\frac{1}{3}f(x_{-1},y_ {-1})\right.\] \[+\frac{-1}{3}f(x_{-1},y_{+1})+\frac{-1}{3}f(x_{+1},y_{-1})+\frac {1}{3}f(x_{+1},y_{+1})\] \[\left.+\frac{1}{48}f(x_{+2},y_{-2})+\frac{1}{48}f(x_{-2},y_{+2}) +\frac{-1}{48}f(x_{+2},y_{+2})\right]\] \[+\mathcal{O}(h_{x}^{2}h_{y}^{2}) \tag{B.2}\]
where \(h_{x}\) and \(h_{y}\) are small perturbations of \(x\) and \(y\), respectively, and \(x_{k}\) (\(y_{k}\)) denotes \(x_{0}+kh_{x}\) (\(y_{0}+kh_{y}\)). The derivation is based on a Taylor series expansion with uniform grid spacing (Fornberg, 1988).
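For concreteness, a small Python sketch of these stencils is given below; it assumes a common step \(h=h_{x}=h_{y}\) and a scalar function `f` of a NumPy vector, which are illustrative simplifications.

```python
import numpy as np

def hessian_fd(f, x0, h=1e-3):
    """Fourth-order central-difference Hessian of a scalar f at x0, using the
    diagonal stencil (B.1) and the mixed stencil (B.2) with h_x = h_y = h."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    H = np.zeros((n, n))

    def fs(i, ki, j=None, kj=0):              # f evaluated on the stencil grid
        x = x0.copy()
        x[i] += ki * h
        if j is not None:
            x[j] += kj * h
        return f(x)

    for i in range(n):
        # (B.1): coefficients (-1/12, 4/3, -5/2, 4/3, -1/12) / h^2
        H[i, i] = (-fs(i, -2) + 16 * fs(i, -1) - 30 * fs(i, 0)
                   + 16 * fs(i, 1) - fs(i, 2)) / (12 * h**2)
        for j in range(i + 1, n):
            # (B.2): 16*(inner 4-point cross) - (outer 4-point cross), over 48 h^2
            H[i, j] = (16 * (fs(i, -1, j, -1) + fs(i, 1, j, 1)
                             - fs(i, -1, j, 1) - fs(i, 1, j, -1))
                       - (fs(i, -2, j, -2) + fs(i, 2, j, 2)
                          - fs(i, 2, j, -2) - fs(i, -2, j, 2))) / (48 * h**2)
            H[j, i] = H[i, j]
    return H
```

Inverting the resulting Hessian of the log-likelihood gives the covariance estimate underlying the reported t-values.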
|
2306.01060 | Motivating semiclassical gravity: a classical-quantum approximation for
bipartite quantum systems | We derive a "classical-quantum" approximation scheme for a broad class of
bipartite quantum systems from fully quantum dynamics. In this approximation,
one subsystem evolves via classical equations of motion with quantum
corrections, and the other subsystem evolves quantum mechanically with
equations of motion informed by the evolving classical degrees of freedom.
Using perturbation theory, we derive an estimate for the growth rate of
entanglement of the subsystems and deduce a "scrambling time" - the time
required for the subsystems to become significantly entangled from an initial
product state. We argue that a necessary condition for the validity of the
classical-quantum approximation is consistency of initial data with the
generalized Bohr correspondence principle. We illustrate the general formalism
by numerically studying the fully quantum, fully classical, and
classical-quantum dynamics of a system of two oscillators with nonlinear
coupling. This system exhibits parametric resonance, and we show that quantum
effects quench parametric resonance at late times. Lastly, we present a curious
late-time scaling relation between the average value of the von Neumann
entanglement of the interacting oscillator system and its total energy: $S\sim
2/3 \ln E$. | Viqar Husain, Irfan Javed, Sanjeev S. Seahra, Nomaan X | 2023-06-01T18:05:33Z | http://arxiv.org/abs/2306.01060v1 | # Motivating semiclassical gravity: a classical-quantum approximation for bipartite quantum systems
###### Abstract
We derive a "classical-quantum" approximation scheme for a broad class of bipartite quantum systems from fully quantum dynamics. In this approximation, one subsystem evolves via classical equations of motion with quantum corrections, and the other subsystem evolves quantum mechanically with equations of motion informed by the evolving classical degrees of freedom. Using perturbation theory, we derive an estimate for the growth rate of entanglement of the subsystems and deduce a "scrambling time"--the time required for the subsystems to become significantly entangled from an initial product state. We argue that a necessary condition for the validity of the classical-quantum approximation is consistency of initial data with the generalized Bohr correspondence principle. We illustrate the general formalism by numerically studying the fully quantum, fully classical, and classical-quantum dynamics of a system of two oscillators with nonlinear coupling. This system exhibits parametric resonance, and we show that quantum effects quench parametric resonance at late times. Lastly, we present a curious late-time scaling relation between the average value of the von Neumann entanglement of the interacting oscillator system and its total energy: \(S\sim 2/3\ln E\).
###### Contents
* I Introduction
* II General Framework
* II.1 Hamiltonian
* II.2 Fully quantum-quantum (QQ) dynamics
* II.3 Measures of entanglement
* III Approximations of Quantum Dynamics
* III.1 Product State Approximation
* III.2 Classical-Quantum (CQ) approximation
* III.3 Classical-Classical (CC) approximation
* III.4 Classical background approximation
* III.5 Conservation of energy and probability via alternative Hamiltonians
* IV Perturbative analysis
* IV.1 Interaction picture and perturbative solutions in the quantum-quantum case
* IV.2 Scrambling time
* IV.3 Discrepancy between the quantum-quantum and classical-quantum calculations
* V Example: nonlinearly coupled oscillators
* V.1 Hamiltonian and operators
* V.2 Equations of motion and initial data
* V.2.1 Dimensionless variables
* V.2.2 Quantum-quantum case
* V.2.3 Classical-quantum case
* V.2.4 Classical-classical case
* V.3 Sample Simulations
* V.4 Quantifying error in the CC and CQ approximations
* V.5 Parametric resonance
* V.6 Wigner quasiprobability distributions
* V.7 Long time evolution of entanglement entropy
* VI Conclusions and Discussion
* A Frobenius inner product
* B Classical-quantum approximation for one-dimensional potential problems
* B.1 Arbitrary potentials
* B.2 Quadratic potentials
* C Born-Oppenheimer approximation for one-dimensional potential problems
* D Properties of coherent states
## I Introduction
At present, there is no consensus on a quantum theory of gravity, not even an approach to one, with several ideas in circulation; see, e.g., [1; 2; 3; 4] for recent reviews. If quantum gravity is assumed to have the structure of usual quantum mechanics, then its Hilbert space would be a tensor product of the Hilbert spaces of gravity and matter. As such, it may be thought of as a "bipartite" system. The question that arises is whether there are viable approximations of the unknown theory of quantum gravity where matter is quantum and spacetime is classical. Two such approximations are known, quantum field theory (QFT) in curved spacetime and the semiclassical Einstein equation. It is, however, not clear how these
approximations might arise from a theory of quantum gravity or what their domains of validity are.
The study of QFT on fixed curved spacetimes has been a subject of interest for decades. The area is a natural outgrowth of QFT on flat spacetime with pioneering applications to black holes and cosmology [5; 6; 7; 8; 9]. In this paradigm, equations of motion are of the form
\[G_{\alpha\beta}(g) =0, \tag{1a}\] \[i\,\partial_{t}|\psi\rangle =\hat{H}_{\psi}(g)|\psi\rangle, \tag{1b}\]
where (1a) is the vacuum Einstein equation and (1b) is the Schrodinger picture functional evolution equation for the state vector \(|\psi\rangle\) of the quantum field on the curved background \(g\).1 The matter Hamiltonian operator \(\hat{H}_{\psi}(g)\) is parametrized by the classical metric tensor \(g\); therefore, the first equation is solved for \(g\) first, followed by the second for \(|\psi\rangle\). (The usual textbook Heisenberg picture procedure involves solving the free field equation on a given background and quantizing the mode expansion to construct the Hamiltonian operator.)
Footnote 1: In the more utilized Heisenberg picture, (1b) would be replaced by the Heisenberg equation of motion \(d\hat{O}/dt=i[\hat{H}_{\psi}(g),\hat{O}]+\partial_{t}\hat{O}\) for an arbitrary operator \(\hat{O}\).
In this approach, it is apparent that the quantum field has no dynamical effect on spacetime, i.e., no "backreaction" [10]. The original Hawking radiation derivation [11] relied on these equations, and the calculation of the growth of primordial perturbations during inflation (see, e.g., [12]) uses a modified version of (1), which adds a classical homogeneous scalar stress-energy tensor to the right-hand side of (1a).
From the perspective of quantum gravity, both matter and spacetime geometry are expected to be quantum fields, and Eqs. (1) are widely regarded as an emergent approximation to the fully quantum equations of motion of a bipartite system with Hilbert space \(\mathcal{H}_{\rm gravity}\otimes\mathcal{H}_{\rm matter}\).
It is instructive to recall how similar approximations are used in nongravitational systems. For example, in quantum chemistry, energy eigenstates of molecules are determined by considering a subsystem of atomic nuclei coupled to a subsystem of electrons. Since nuclei are much more massive than electrons, a common approximation is to consider the electrons as quantum particles moving in the electric field of atomic nuclei, which are modeled as very slowly moving classical objects. "Heavy" nuclei and the "light" electrons may be viewed, respectively, as analogous to spacetime geometry and matter fields in (1). Another example is the dynamics of ultracold neutrons moving in Earth's gravitational field [13], where quantum mechanics describes neutrons moving in the effectively nondynamical Newtonian gravitational field of the Earth.
Both the molecular and ultracold neutron calculations rely on the Born-Oppenheimer approximation [14], where the fully quantum dynamical equations are expanded in terms of large parameters (the nuclear and Earth masses). Similarly, several authors have used the Born-Oppenheimer approximation to derive equations (1) from the Wheeler-DeWitt equation in canonical quantum gravity [15; 16; 17; 18] with the relevant large expansion parameter being the Planck mass \(M_{\rm Pl}\).
Despite this theoretical grounding, there are serious deficiencies in the structure of (1). Among the more important ones is the lack of conservation of energy. Since the metric appears as an external source in (1b), it can drive particle creation in the quantum field \(|\psi\rangle\) without a commensurate reduction in the energy stored in the spacetime geometry. The same is true for the Born-Oppenheimer approximation as applied to molecules or ultracold neutrons falling near Earth's surface, but in such cases, it is physically sensible to ignore the (negligible) energy transfer between light and heavy objects. However, for certain gravitational calculations, it is important to follow the energy exchange between subsystems. The prime example is the Hawking effect, where the energy carried away by the quantum fields to future null infinity reduces the mass of a black hole in the famous "black hole evaporation" process. This becomes especially important in the late stages of Hawking evaporation, when the rate of mass loss of a black hole is conjectured to increase rapidly, a theoretical scenario in which Eqs. (1) clearly fail; there is no analogous prediction in quantum chemistry or for neutrons in Earth's gravitational field.
These considerations suggest a modification of Eqs. (1) to allow the quantum state of matter to have a dynamical effect on spacetime geometry:
\[G_{\alpha\beta}(g) =M_{\rm Pl}^{-2}\langle\psi|\hat{T}_{\alpha\beta}(g)|\psi\rangle, \tag{2a}\] \[i\,\partial_{t}|\psi\rangle =\hat{H}_{\psi}(g)|\psi\rangle. \tag{2b}\]
The new feature is the expectation value of the stress-energy tensor in (2a); this makes the coupled set of equations nonlinear in the matter state. The first of these equations was proposed by Moller and Rosenfeld [19; 20]; the second was added in a discussion of whether gravity should be quantized based on the validity of the pair of equations [21].
While equations (2) have obvious intuitive appeal, they also present many conceptual and practical difficulties. One issue becomes apparent when one tries to solve the equations perturbatively by expanding the metric and state vector in powers of \(M_{\rm Pl}^{-2}\). While such an expansion reproduces (1) at zeroth order, the next-to-leading order corrections to the Einstein tensor \(G_{\alpha\beta}^{(1)}\) obey
\[G_{\alpha\beta}^{(1)}=M_{\rm Pl}^{-2}\langle\psi^{(0)}|\hat{T}_{\alpha\beta}(g ^{(0)})|\psi^{(0)}\rangle. \tag{3}\]
Here, \(g^{(0)}\) and \(|\psi^{(0)}\rangle\) are solutions to the zeroth order equations. For the simple situation where the zeroth order solution represents a free quantum field on Minkowski spacetime, it is easy to see that the right-hand side of equation (3) is formally infinite due to zero point fluctuations. Attempting to remedy this divergence by imposing
a Planck scale cutoff generates a huge effective cosmological constant that is inconsistent with observations.
In order to rescue the theory, one can add large and/or infinite counterterms to the Einstein equations or attempt to regulate \(\langle T_{\alpha\beta}\rangle\) via point-splitting or other techniques. However, these methods often do not generalize to situations where the zeroth order spacetime solution is not flat. For example, when the point-splitting regularization procedure is applied to \(\langle\psi|\hat{T}_{\alpha\beta}(g)|\psi\rangle\) in a curved spacetime, the result generally involves higher derivatives of the metric and therefore leads to higher order theories of gravity that are prone to various pathologies [9].
Additional computational difficulties are encountered in the perturbative approach to (2) if higher order terms are included in the expansion due to corrections to the state vector, mode function inner products, and the metric. There have been other (nonperturbative) critiques of (2): for example, if the state \(|\psi\rangle\) is a superposition of quantum states, representing, for instance, masses localized in different spatial locations, then according to (2a), the gravitational field corresponds to a weighted sum of the locations of the particle in the superposition, a feature that predicts a rapid change in spacetime geometry if the quantum superposition is measured [21]. Also, the nonlinearity of (2) in the quantum state \(|\psi\rangle\) means that the principle of quantum superposition is lost.
Given these objections, we ask whether Eqs. (2) are derivable as controlled approximations from first principles, similar to the Born-Oppenheimer derivation of (1) from the Wheeler-DeWitt equation. There is some reason for optimism, as equations (1) are the \(M_{\rm Pl}\to\infty\) limit of (2), as noted above. This raises the possibility that (2) could emerge from the Born-Oppenheimer approximation if higher order terms in \(M_{\rm Pl}^{-2}\) were retained. If this were to work for Wheeler-DeWitt quantum gravity, it should also work for simpler quantum systems, which leads to the following question: does the higher order Born-Oppenheimer approximation applied to any bipartite quantum system with a "heavy" and a "light" component lead to something similar to equations (2)?
Singh and Padmanabhan [18] investigated this question for a simple bipartite quantum mechanical system, finding that the answer was a qualified "no." Specifically, they showed that the only way to recover the analogue of (2) using the Born-Oppenheimer approximation was to replace certain quantities in the classical equations of motion of the heavy degree of freedom with their expectation values in an _ad hoc_ manner; an extension of this analysis to the Wheeler-DeWitt equation gives a similar result for gravity. Kiefer and Singh [16] also studied the higher order Born-Oppenheimer approximation for the Wheeler-DeWitt equation. As in [18], corrections to equations (1) are not of the form of (2). Kiefer [22] applied the same higher order Born-Oppenheimer formalism to scalar quantum electrodynamics to study the pair production of particles in a semiclassical electric field, including backreaction effects.
In this paper, we derive the analogue of equation (2) for a broad class of bipartite quantum mechanical systems using assumptions that are related to, but logically distinct from, Born-Oppenheimer. More specifically, our "classical-quantum" approximation relies on the following hypotheses.
1. The bipartite quantum system starts in a product state.
2. One of the subsystems is in a semiclassical state.
3. The coupling between the subsystems is weak.
Under these conditions, we find that approximate equations of motion qualitatively similar to (2) hold for a _finite period of time_. That is, for some interval after \(t=0\), the equations of motion of subsystem 1 are classical and sourced by expectation values of the quantum state of subsystem 2, and the classical configuration variables of subsystem 1 appear in the quantum equations of motion of subsystem 2 as time-dependent functions.
One of the key features of this approximation is that it is possible to derive an effective Hamiltonian for each subsystem from the equations of motion of the reduced density matrices of the fully quantum formalism (referred to here as "quantum-quantum"). In our derivation, expectation values appear naturally in the approximate equation of motion of subsystem 1 and are therefore nonlinear in the quantum state of subsystem 2.
The main assumption in our derivation of the classical-quantum approximation is the choice of initial state of the bipartite quantum system: this is a product state, \(|\varphi_{1}\rangle\otimes|\varphi_{2}\rangle\), where \(|\varphi_{1}\rangle\) is a semiclassical state of subsystem 1 in isolation and \(|\varphi_{2}\rangle\) is any state of subsystem 2 in isolation. Under evolution with a Hamiltonian of the form \(\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{\rm int}\), such a state becomes entangled and \(|\varphi_{1}\rangle\) begins to lose its semiclassicality. This suggests two independent time scales: the characteristic timescale for the growth of entanglement (which we call the "scrambling time") and the characteristic timescale for the decline of semiclassicality for subsystem 1 (referred to as the "Ehrenfest time" in some of the literature).
We derive bounds on the scrambling time using perturbation theory. This provides a time interval estimate for the validity of the classical-quantum approximation. We also show that the differences between trajectories of observables in the quantum-quantum and classical-quantum systems are correlated with the deviation of subsystem 1 expectation values from their classical counterparts. That is, subsystem 1 must adhere closely to the generalized Bohr correspondence principle (in which classical and quantum predictions agree) for the classical-quantum approximation to be valid.
To illustrate the behaviour of the classical-quantum approximation, we numerically study a system of two harmonic oscillators with nonlinear coupling. The results of the classical-quantum approximation are compared to the quantum-quantum and fully classical system trajectories. This analysis builds on recent work by some of us on a hybrid classical-quantum treatment
of a tripartite system of an oscillator coupled to spins [23]. We find that there is generally good agreement between the classical-quantum and quantum-quantum calculations up to the scrambling time. An interesting classical feature of the coupled oscillator system is that it exhibits parametric resonance for certain interaction potentials and parameter choices. This effect is associated with efficient energy exchange between the oscillators; we find that it can persist in both the classical-quantum and quantum-quantum calculations on timescales shorter than the scrambling time for certain choices of parameters. (We are unaware of any other numerical simulations of the fully quantum dynamics of an oscillatory system exhibiting parametric resonance in the literature.) We also note that particle creation driven by the parametric resonance phenomenon may play a role in the preheating phases of cosmological evolution after inflation [24]; other authors have recently considered the semiclassical Einstein equation in this context [25]. Finally, we investigate the long time evolution of the von Neumann entropy using numerical simulations in the quantum-quantum case. We find an approximate (and slightly mysterious) scaling relation between the long-term average of the entanglement entropy and the total energy of the system: \(S_{\textsc{vn}}\sim\frac{2}{3}\ln E\).
The layout of the paper is as follows: in §II, we introduce the bipartite system and relevant measures of entanglement; in §III, we describe several approximation schemes for the quantum-quantum equations of motion, derive a "classical-quantum" approximation, and show that it conserves probability and energy; in §IV, we use perturbation theory to investigate the regime of validity of the classical-quantum approximation; in §V, we apply the approximations of the previous sections to the specific case of two harmonic oscillators with a nonlinear coupling; in §VI, we present our conclusions and discuss some consequences for gravitational systems. The appendices contain several technical results and reviews of useful formulae.
## II General Framework
### Hamiltonian
Consider a system with two coupled degrees of freedom (each of which we refer to as a "subsystem"). The classical Hamiltonian is
\[\mathcal{H}(q_{1},p_{1},q_{2},p_{2})=\mathcal{H}_{1}(q_{1},p_{1}) +\mathcal{H}_{2}(q_{2},p_{2})\\ +\lambda\mathcal{V}_{1}(q_{1},p_{1})\mathcal{V}_{2}(q_{2},p_{2}). \tag{4}\]
Here, \((q_{1},p_{1})\) and \((q_{2},p_{2})\) are canonical phase space coordinates for each subsystem, \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) govern the interaction between the two subsystems, and \(\lambda\geq 0\) is a dimensionless coupling constant. Note that the classical Hamiltonian is assumed to carry no explicit dependence on time.
The quantum version of this system is bipartite with Hilbert space \(\mathscr{H}=\mathscr{H}_{1}\otimes\mathscr{H}_{2}\). Typically, we would expect the dimension of each of the subspaces to be infinite. For the purposes of numerical calculation, however, we will often consider finite truncations of each subspace defined by restricted basis sets such that2
Footnote 2: This kind of truncation is common in quantum chemistry, for example.
\[\dim(\mathscr{H}_{1})=d_{1},\quad\dim(\mathscr{H}_{2})=d_{2}. \tag{5}\]
The system Hamiltonian is of the form
\[\hat{H}=\hat{H}_{1}\otimes\hat{I}_{2}+\hat{I}_{1}\otimes\hat{H}_{2}+\lambda\, \hat{V}_{1}\otimes\hat{V}_{2}, \tag{6}\]
where \(\hat{I}_{1}\) and \(\hat{I}_{2}\) are the identity operators on each subspace, \(\hat{H}_{1}\) is the operator version of \(\mathcal{H}_{1}\), \(\hat{V}_{1}\) is the operator version of \(\mathcal{V}_{1}\), etc. All the operators are assumed to be Hermitian.
### Fully quantum-quantum (QQ) dynamics
We assume that the energy eigenvalue problem for each of the subsystem Hamiltonians is easily solvable:
\[\hat{H}_{1}|n\rangle=E_{n}|n\rangle,\quad\hat{H}_{2}|\mu\rangle=E_{\mu}|\mu\rangle. \tag{7}\]
Note that our notation is such that the indices \(n,n^{\prime}=0,1,2\ldots d_{1}-1\) will always be associated with subsystem \(1\) and the indices \(\mu,\mu^{\prime}=0,1,2\ldots d_{2}-1\) will always be associated with subsystem \(2\). Then, a complete basis for \(\mathscr{H}\) is given by the \(d_{1}\times d_{2}\) basis states
\[|n,\mu\rangle=|n\rangle\otimes|\mu\rangle,\quad\langle n^{\prime},\mu^{ \prime}|n,\mu\rangle=\delta_{nn^{\prime}}\delta_{\mu\mu^{\prime}}. \tag{8}\]
We write the full state vector of the system in the Schrodinger representation as
\[|\psi(t)\rangle=\sum_{n\mu}z_{n,\mu}(t)|n,\mu\rangle. \tag{9}\]
Inserting this into the Schrodinger equation
\[i\partial_{t}|\psi(t)\rangle=\hat{H}|\psi(t)\rangle \tag{10}\]
yields
\[i\partial_{t}z_{n^{\prime},\mu^{\prime}}(t)=\sum_{n\mu}\langle n^{\prime}, \mu^{\prime}|\hat{H}|n,\mu\rangle z_{n,\mu}(t). \tag{11}\]
We organize the expansion coefficients into a matrix:
\[Z(t)=\begin{pmatrix}z_{0,0}(t)&z_{0,1}(t)&\cdots&z_{0,d_{2}-1}(t)\\ z_{1,0}(t)&z_{1,1}(t)&\cdots&z_{1,d_{2}-1}(t)\\ \vdots&\vdots&\ddots&\vdots\\ z_{d_{1}-1,0}(t)&z_{d_{1}-1,1}(t)&\cdots&z_{d_{1}-1,d_{2}-1}(t)\\ \end{pmatrix}.\]
Inserting the expansion (9) into the Schrodinger equation yields:
\[i\partial_{t}Z=H_{1}Z+ZH_{2}+\lambda V_{1}ZV_{2}^{\mbox{\tiny T}}, \tag{12}\]
where the \(H_{1}\), \(H_{2}\), \(V_{1}\), and \(V_{2}\) square matrices have entries
\[(H_{1})_{n^{\prime}n} =\langle n^{\prime}|\hat{H}_{1}|n\rangle,\ \ \ \ \ (V_{1})_{n^{\prime}n}= \langle n^{\prime}|\hat{V}_{1}|n\rangle,\] \[(H_{2})_{\mu^{\prime}\mu} =\langle\mu^{\prime}|\hat{H}_{2}|\mu\rangle,\ \ \ \ \ (V_{2})_{\mu^{\prime}\mu}= \langle\mu^{\prime}|\hat{V}_{2}|\mu\rangle. \tag{13}\]
Note that since \(|n\rangle\) and \(|\mu\rangle\) are eigenvectors of \(\hat{H}_{1}\) and \(\hat{H}_{2}\), respectively, \(H_{1}\) and \(H_{2}\) are real diagonal matrices. Furthermore, \(V_{1}\) and \(V_{2}\) are Hermitian matrices due to the self-adjointness of the associated operators. Equation (12) represents \(d_{1}\times d_{2}\) complex linear differential equations whose solutions completely specify the quantum dynamics of the system. We also note that the normalization condition \(\langle\psi(t)|\psi(t)\rangle=1\) implies that \(Z\) satisfies
\[1=\mbox{Tr}\big{[}Z^{\dagger}(t)Z(t)\big{]}=\|Z(t)\|_{\mbox{F}}^{2}, \tag{14}\]
where \(\|\cdots\|_{\mbox{F}}\) is the Frobenius norm for complex rectangular matrices defined in Appendix A. It is easy to confirm that the conservation of \(\|Z(t)\|_{\mbox{F}}\) is guaranteed by the equation of motion (12).
Using the expansion (9), we can find the expectation values of operators that only act on either of the Hilbert subspaces of system 1 or system 2:
\[\langle\hat{A}_{1}\otimes\hat{I}_{2}\rangle =\sum_{nn^{\prime}\mu}z_{n^{\prime}\mu}^{*}\langle n^{\prime}| \hat{A}_{1}|n\rangle z_{n\mu}\] \[=\mbox{Tr}(\rho_{1}A_{1}), \tag{15a}\] \[\langle\hat{I}_{1}\otimes\hat{A}_{2}\rangle =\sum_{nn^{\prime}\mu}z_{n\mu^{\prime}}^{*}\langle\mu^{\prime}| \hat{A}_{2}|\mu\rangle z_{n\mu}\] \[=\mbox{Tr}(\rho_{2}A_{2}), \tag{15b}\]
where \(A_{1}\) and \(A_{2}\) are the matrix representations of each operator in the eigenbasis of \(\hat{H}_{1}\) and \(\hat{H}_{2}\), respectively; \(\hat{I}_{1}\) and \(\hat{I}_{2}\) are identity operators; and \(\rho_{1}\) and \(\rho_{2}\) are the reduced density matrices
\[\rho_{1}(t)=Z(t)Z^{\dagger}(t),\ \ \ \rho_{2}(t)=Z^{\mbox{\tiny T}}(t)Z^{*}(t). \tag{16}\]
With (12), it is possible to write down evolution equations for both reduced density matrices. We find
\[i\partial_{t}\rho_{1} =[H_{1},\rho_{1}]+\lambda[V_{1},ZV_{2}^{\mbox{\tiny T}}Z^{\dagger }], \tag{17a}\] \[i\partial_{t}\rho_{2} =[H_{2},\rho_{2}]+\lambda[V_{2},Z^{\mbox{\tiny T}}V_{1}^{\mbox{ \tiny T}}Z^{*}]. \tag{17b}\]
Note that equations (17) do not represent a closed system of differential equations for \(\rho_{1}\) and \(\rho_{2}\); i.e., to solve these for \(\rho_{1}\) and \(\rho_{2}\), one must also solve (12) for \(Z\).
Before moving on, we note that eigenstates of the full quantum Hamiltonian are defined by
\[\hat{H}|\psi_{j}\rangle=E_{j}|\psi_{j}\rangle. \tag{18}\]
We write as above
\[|\psi_{j}\rangle=\sum_{n\mu}z_{n\mu,j}|n,\mu\rangle,\ \ \ |n,\mu\rangle=|n\rangle\otimes|\mu\rangle, \tag{19}\]
where \(|n\rangle\) and \(|\mu\rangle\) are eigenstates of \(\hat{H}_{1}\) and \(\hat{H}_{2}\), respectively. Then, we find that
\[E_{j}Z_{j}=H_{1}Z_{j}+Z_{j}H_{2}+\lambda V_{1}Z_{j}V_{2}^{\mbox{\tiny T}}, \tag{20}\]
where \(Z_{j}\) is the matrix formed out of the \(z_{n\mu,j}\) coefficients. We can choose the \(Z_{j}\) to enforce \(\langle\psi_{j}|\psi_{j^{\prime}}\rangle=\delta_{jj^{\prime}}\), which implies the orthogonality relation
\[\langle Z_{j},Z_{j^{\prime}}\rangle_{\mbox{\tiny F}}=\delta_{jj^{\prime}}, \tag{21}\]
where the Frobenius inner product \(\langle\cdots,\cdots\rangle_{\mbox{\tiny F}}\) is defined in Appendix A. If we retain only a finite upper left-hand block of each matrix, (20) becomes a finite dimensional eigenvalue problem that can be solved numerically.
Any solution of the Schrodinger equation can be decomposed in terms of the energy eigenbasis as
\[|\psi(t)\rangle=\sum_{j}c_{j}e^{-iE_{j}t}|\psi_{j}\rangle,\ \ \ c_{j}=\langle\psi_{j}|\psi(0)\rangle \tag{22}\]
and expanded in the basis \(|n,\mu\rangle\) by writing
\[|\psi(t)\rangle = \sum_{jn\mu}c_{j}e^{-iE_{j}t}|n,\mu\rangle\langle n,\mu|\psi_{j}\rangle \tag{23}\] \[= \sum_{jn\mu}c_{j}e^{-iE_{j}t}z_{n\mu,j}|n,\mu\rangle\] \[= \sum_{n\mu}z_{n\mu}(t)|n,\mu\rangle,\]
where
\[z_{n\mu}(t)=\sum_{j}c_{j}e^{-iE_{j}t}z_{n\mu,j}. \tag{24}\]
Therefore, the solution of the dynamical system (12) in terms of the energy eigenstates is
\[Z(t)=\sum_{j}c_{j}e^{-iE_{j}t}Z_{j},\ \ \ c_{j}=\langle Z_{j},Z(0)\rangle_{\mbox{ \tiny F}}. \tag{25}\]
In practice, it is usually more computationally efficient to calculate \(Z(t)\) using this formula rather than via direct numerical solution of a finite truncation of the dynamical system (12).
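As a minimal numerical sketch of this procedure: with row-major flattening \(z_{n\mu}\to z[nd_{2}+\mu]\), equation (20) becomes an ordinary Hermitian eigenproblem for the matrix \(H_{1}\otimes I+I\otimes H_{2}+\lambda\,V_{1}\otimes V_{2}\), and (25) then propagates any initial \(Z(0)\). The harmonic spectra and position-type couplings below are illustrative placeholders (the concrete model of the paper is specified only in §V).

```python
import numpy as np

d1, d2, lam = 10, 10, 0.1

# Illustrative placeholder data: harmonic spectra, position-type couplings.
def x_op(d):                      # (a + a^dagger)/sqrt(2) in the number basis
    a = np.diag(np.sqrt(np.arange(1.0, d)), 1)
    return (a + a.T) / np.sqrt(2.0)

H1, H2 = np.diag(np.arange(d1) + 0.5), np.diag(np.arange(d2) + 0.5)
V1, V2 = x_op(d1), x_op(d2)

# Row-major flattening z[n*d2 + mu] = Z[n, mu] turns Eq. (20) into a standard
# Hermitian eigenproblem for H1 (x) I + I (x) H2 + lam * V1 (x) V2.
H = (np.kron(H1, np.eye(d2)) + np.kron(np.eye(d1), H2)
     + lam * np.kron(V1, V2))
E, W = np.linalg.eigh(H)          # E_j and columns vec(Z_j)

def Z_of_t(Z0, t):
    """Propagate an initial coefficient matrix via Eq. (25)."""
    c = W.conj().T @ Z0.reshape(-1)                # c_j = <Z_j, Z(0)>_F
    return (W @ (c * np.exp(-1j * E * t))).reshape(d1, d2)

# Example: subsystem 1 in |1>, subsystem 2 in |0> at t = 0.
Z0 = np.zeros((d1, d2), complex); Z0[1, 0] = 1.0
Zt = Z_of_t(Z0, 5.0)
```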
### Measures of entanglement
We are primarily concerned with the situation where the system is prepared in an unentangled product state at \(t=0\). As time progresses, we intuitively expect that the interaction will cause the entanglement of the system to grow. To quantify this process, we find it useful to deploy
various measures of entanglement entropy. The simplest measures of entanglement are the purities \(\gamma_{1}=\operatorname{Tr}\bigl{(}\rho_{1}^{2}\bigr{)}\) and \(\gamma_{2}=\operatorname{Tr}\bigl{(}\rho_{2}^{2}\bigr{)}\) of the quantum state of subsystems 1 and 2, respectively. With the definitions of the reduced density matrices (16), it is easy to show that each subsystem has the same purity \(\gamma=\gamma_{1}=\gamma_{2}\); i.e.,
\[\gamma=\|\rho_{1}\|_{\mathrm{F}}^{2}=\|\rho_{2}\|_{\mathrm{F}}^{2}=\|ZZ^{ \dagger}\|_{\mathrm{F}}^{2}. \tag{26}\]
Via the properties listed in Appendix A, it is easy to confirm \(\gamma\leq 1\). Furthermore, if either \(\rho_{1}\) or \(\rho_{2}\) represents a pure state (e.g., if \(\rho_{1}=\mathbf{ww}^{\dagger}\), where \(\mathbf{w}\) is a column vector3), then this inequality is saturated and \(\gamma=1\). Closely related to the concept of purity is the linear entanglement entropy
Footnote 3: In this paper, all matrices labelled by lowercase letters in boldface type (e.g., \(\mathbf{u}\)) are assumed to be column vectors normalized under the usual inner product (equivalent to the Frobenius norm); i.e., \(\mathbf{u}^{\dagger}\mathbf{u}=\left\|\mathbf{u}\right\|_{\mathrm{F}}^{2}=1\).
\[S_{\textsc{lin}}=1-\gamma=1-\|\rho_{1,2}\|_{\mathrm{F}}^{2}, \tag{27}\]
which satisfies \(S_{\textsc{lin}}=0\) whenever the subsystems are described by pure states. Another popular measure of entanglement is the von Neumann entropy
\[S_{\textsc{vn}}=-\operatorname{Tr}(\rho_{1}\ln\rho_{1})=-\operatorname{Tr}( \rho_{2}\ln\rho_{2}). \tag{28}\]
Like the linear entropy, \(S_{\textsc{vn}}=0\) whenever each subsystem is described by a pure state.
The nonzero eigenvalues of \(\rho_{1}=ZZ^{\dagger}\) are the same as the nonzero eigenvalues of \(\rho_{2}=Z^{\mathrm{ T}}Z^{*}\). Furthermore, each of these nonzero eigenvalues is the square of one of the singular values of \(Z\) or \(Z^{\mathrm{ T}}\), which we denote by \(\{\chi_{i}\}\). Both the linear entropy and the von Neumann entropy can be expressed in terms of the singular values of \(Z\):
\[S_{\textsc{lin}}=1-\sum_{i}\chi_{i}^{4},\quad S_{\textsc{vn}}=-\sum_{i}\chi_{i }^{2}\ln\chi_{i}^{2}. \tag{29}\]
Since \(\operatorname{Tr}(\rho_{1})=1\) and each of the \(\chi_{i}\) must be nonnegative real numbers, we have
\[\sum_{i}\chi_{i}^{2}=1\quad\Rightarrow\quad\chi_{i}\in[0,1], \tag{30}\]
for which it is easy to show that
\[S_{\textsc{lin}}\leq S_{\textsc{vn}}; \tag{31}\]
that is, the linear entropy provides a lower bound for the von Neumann entropy. Further inequalities can be obtained by noting that both \(S_{\textsc{lin}}\) and \(S_{\textsc{vn}}\) are largest for maximally mixed states, which have
\[\chi_{i}^{2}=\frac{1}{d},\quad d=\min(d_{1},d_{2}). \tag{32}\]
This leads to
\[S_{\textsc{lin}}\leq 1-\frac{1}{d^{2}},\quad S_{\textsc{vn}}\leq\ln d. \tag{33}\]
From this, we see that \(S_{\textsc{lin}}\in[0,1]\) for all \(d\). Conversely, \(S_{\textsc{vn}}\) is not bounded from above in the infinite dimensional (\(d\to\infty\)) case.
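In numerical work, both entropies follow directly from a singular value decomposition of the coefficient matrix, as in (29). A short sketch (assuming a NumPy array `Z`, e.g. the `Zt` of the earlier example):

```python
import numpy as np

def entanglement_entropies(Z):
    """S_lin and S_vn from Eq. (29): the chi_i are the singular values of Z."""
    chi = np.linalg.svd(Z, compute_uv=False)
    p = chi**2
    p = p[p > 1e-15]               # drop numerical zeros before taking the log
    return 1.0 - np.sum(p**2), -np.sum(p * np.log(p))
```

For a product state \(Z=\mathbf{u}_{1}\mathbf{u}_{2}^{\mathrm{T}}\), both entropies vanish, which provides a quick sanity check.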
## III Approximations of quantum dynamics
### Product State Approximation
Suppose that we prepare the system in a product state at \(t=0\) and the coupling parameter \(\lambda\) is small. Then, we expect that for some period of time the system remains in a nearly product state; i.e. the entanglement will be "small". However, as discussed in the previous subsection, there are multiple ways to quantify the entanglement of a quantum system, so it is not immediately obvious how to define "small" entanglement. One possibility is to say that a quantum state is nearly pure if its entanglement entropy is much less than that of a maximally entangled state. When this definition is applied to the linear and von Neumann entropies, we find two possible conditions for defining a nearly product state:
\[1-\gamma=S_{\textsc{lin}}\ll 1-\frac{1}{d^{2}},\quad S_{\textsc{vn}}\ll\ln d. \tag{34}\]
Both of these appear to be viable options for finite \(d\), but we see that the second inequality becomes uninformative when \(d\to\infty\). Also, the \(1/d^{2}\) term in the first inequality is not of much practical importance, so it appears that the most useful operational definition of a nearly product state is one for which \(S_{\textsc{lin}}\ll 1\); however, we will analyze both of the conditions (34) below.
Over the time interval that the system is in a nearly product state (i.e. \(S_{\textsc{lin}}\ll 1\)), we can write
\[Z=\mathbf{u}_{1}\mathbf{u}_{2}^{\mathrm{ T}}+\lambda\,\delta Z,\quad\delta Z= \mathcal{O}(\lambda^{0}), \tag{35}\]
where \(\mathbf{u}_{1}=\mathbf{u}_{1}(t)\) and \(\mathbf{u}_{2}=\mathbf{u}_{2}(t)\) are column vectors of dimension \(d_{1}\) and \(d_{2}\), respectively. Under this assumption, the reduced density matrices take the form
\[\rho_{1} =\mathbf{u}_{1}\mathbf{u}_{1}^{\dagger}+\lambda\,\delta\rho_{1}, \tag{36a}\] \[\rho_{2} =\mathbf{u}_{2}\mathbf{u}_{2}^{\dagger}+\lambda\,\delta\rho_{2}. \tag{36b}\]
The expansions (35) and (36) allow us to expand (17) as
\[i\partial_{t}\rho_{1} =[H_{1}^{\textsc{eff}},\rho_{1}]+\mathcal{O}(\lambda^{2}), \tag{37a}\] \[i\partial_{t}\rho_{2} =[H_{2}^{\textsc{eff}},\rho_{2}]+\mathcal{O}(\lambda^{2}), \tag{37b}\]
with
\[H_{1}^{\textsc{eff}} =H_{1}+\lambda\operatorname{Tr}(\rho_{2}V_{2})V_{1}, \tag{38a}\] \[H_{2}^{\textsc{eff}} =H_{2}+\lambda\operatorname{Tr}(\rho_{1}V_{1})V_{2}. \tag{38b}\]
If we drop the \(\mathcal{O}(\lambda^{2})\) terms, (37) forms a closed set of differential equations for the density matrices, identical to the von Neumann equations of two quantum systems with effective Hamiltonian matrices and an unusual coupling. Since each of the effective Hamiltonian matrices is Hermitian, we are guaranteed that if each subsystem is in a pure state at \(t=0\), then it will remain in a pure state at later times. Hence, we may write
\[\hat{\rho}_{1}(t)=|\varphi_{1}(t)\rangle\langle\varphi_{1}(t)|,\quad\hat{\rho}_ {2}(t)=|\varphi_{2}(t)\rangle\langle\varphi_{2}(t)|, \tag{39}\]
with \(|\varphi_{1}(t)\rangle\in\mathscr{H}_{1}\) and \(|\varphi_{2}(t)\rangle\in\mathscr{H}_{2}\). This identification means that
\[\operatorname{Tr}(\rho_{1}V_{1})=\langle\varphi_{1}|\hat{V}_{1}|\varphi_{1} \rangle,\quad\operatorname{Tr}(\rho_{2}V_{2})=\langle\varphi_{2}|\hat{V}_{2} |\varphi_{2}\rangle. \tag{40}\]
Hence, effective Hamiltonian operators are given by
\[\hat{H}_{1}^{\text{\tiny eff}} =\hat{H}_{1}+\lambda\langle\varphi_{2}|\hat{V}_{2}|\varphi_{2} \rangle\hat{V}_{1}, \tag{41a}\] \[\hat{H}_{2}^{\text{\tiny eff}} =\hat{H}_{2}+\lambda\langle\varphi_{1}|\hat{V}_{1}|\varphi_{1} \rangle\hat{V}_{2}. \tag{41b}\]
Then, to leading order in \(\lambda\) equations (37) are consistent with the Schrodinger equations
\[i\partial_{t}|\varphi_{1}(t)\rangle=\hat{H}_{1}^{\text{\tiny eff}}|\varphi_{1 }(t)\rangle,\quad i\partial_{t}|\varphi_{2}(t)\rangle=\hat{H}_{2}^{\text{\tiny eff }}|\varphi_{2}(t)\rangle. \tag{42}\]
We decompose each state as follows:
\[|\varphi_{1}(t)\rangle=\sum_{n}w_{n}(t)|n\rangle,\quad|\varphi_{2}(t)\rangle= \sum_{m}z_{m}(t)|m\rangle. \tag{43}\]
Note that (36), (39) and (43) imply that
\[\mathbf{w}\mathbf{w}^{\dagger} =\mathbf{u}_{1}\mathbf{u}_{1}^{\dagger}+\mathcal{O}(\lambda), \tag{44a}\] \[\mathbf{z}\mathbf{z}^{\dagger} =\mathbf{u}_{2}\mathbf{u}_{2}^{\dagger}+\mathcal{O}(\lambda). \tag{44b}\]
Using (43), we see that the Schrodinger equations (42) are equivalent to
\[i\partial_{t}\mathbf{w} =H_{1}\mathbf{w}+\lambda(\mathbf{z}^{\dagger}V_{2}\mathbf{z})V_{ 1}\mathbf{w}, \tag{45a}\] \[i\partial_{t}\mathbf{z} =H_{2}\mathbf{z}+\lambda(\mathbf{w}^{\dagger}V_{1}\mathbf{w})V_{ 2}\mathbf{z}. \tag{45b}\]
Equations (45) represent \(d_{1}+d_{2}\) complex nonlinear differential equations whose solutions completely specify the quantum dynamics of the system within the context of the "product state" approximation.
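A minimal integration sketch for (45) is given below; the truncated oscillator spectra, couplings, and initial states are illustrative assumptions, and SciPy's `solve_ivp` is used with complex state vectors.

```python
import numpy as np
from scipy.integrate import solve_ivp

d1 = d2 = 10; lam = 0.1                            # illustrative values

def x_op(d):                                       # placeholder coupling
    a = np.diag(np.sqrt(np.arange(1.0, d)), 1)
    return (a + a.T) / np.sqrt(2.0)

H1, H2 = np.diag(np.arange(d1) + 0.5), np.diag(np.arange(d2) + 0.5)
V1, V2 = x_op(d1), x_op(d2)

def rhs(t, y):
    w, z = y[:d1], y[d1:]
    v1 = np.real(w.conj() @ V1 @ w)                # <V1>, real since V1 is Hermitian
    v2 = np.real(z.conj() @ V2 @ z)                # <V2>
    dw = -1j * (H1 @ w + lam * v2 * (V1 @ w))      # Eq. (45a)
    dz = -1j * (H2 @ z + lam * v1 * (V2 @ z))      # Eq. (45b)
    return np.concatenate([dw, dz])

w0 = np.zeros(d1, complex); w0[1] = 1.0            # subsystem 1 in |1>
z0 = np.zeros(d2, complex); z0[0] = 1.0            # subsystem 2 in |0>
sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([w0, z0]),
                rtol=1e-9, atol=1e-11)
```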
We note that an interesting feature of the equations of motion (45) in the product state approximation is that they are nonlinear. This may seem surprising since the QQ equations of motion (12) are linear. However, the reason for this nonlinearity is simple: the effective Hamiltonians for subsystems 1 and 2 are deduced from the equations of motion for the reduced density matrices, which themselves are nonlinear functions of the full quantum state. In hindsight, it appears that such a nonlinearity is inevitable, since it is unclear how to separate out the dynamics of subsystem 1 from subsystem 2 without introducing a nonlinear object like the reduced density matrix.
We can also contrast the \(\rho_{1}\) and \(\rho_{2}\) evolution equations (37) in the product state approximation to the well-known Lindblad master equation governing the evolution of the density matrix of a quantum system coupled to a much larger system called the "environment" [26]. If we call subsystem 1 the environment, the Lindblad formalism makes several assumptions, the most important of which are:
1. The coupling of subsystem 2 to the environment is weak.
2. The effects of subsystem 2 on the environment are negligible; i.e., the dynamics of the environment is unaffected by the presence of subsystem 2. (This is essentially the Born approximation.)
3. The system is prepared initially in a product state, which implies that it remains in a product state under the Born approximation.
4. The time derivative of the subsystem 2 density matrix at given time depends only on the current value of the density matrix. (This is called the Born-Markov approximation.)
We see that the product state approximation presented above makes explicit use of assumptions 1 and 3, but does not neglect the effects of subsystem 2 on subsystem 1 (assumption 2) and does not artificially enforce any sort of Markovian approximation (assumption 4). The price to be paid is that in the product state approximation, we need to simultaneously solve for the dynamics of both subsystems. In contrast, the assumption that the environment is too big to be affected by subsystem 2 means that one has fewer degrees of freedom to solve for in the Lindblad formalism, since loss of coherence to the environment is not tracked. The same assumption also results in nonunitary linear equations of motion in the Lindblad case, as opposed to the unitary nonlinear equations we have in the product state approximation.
### Classical-Quantum (CQ) approximation
The classical-quantum approximation builds on the product state approximation of the previous section by making a significant assumption, namely that one subsystem behaves "classically"; i.e. it is acceptable to replace quantum expressions associated with one subsystem with their classical analogues. Stated another way, this approximation is based on the belief that it is safe to apply the generalized Bohr correspondence principle to one subsystem. Heuristically, one would expect this to be a valid approximation when the quantum number or energy of that subsystem is large, or when the associated wavefunction is "sharply peaked".
How does this work in practical terms? In this subsection, we discuss the assumptions required to arrive at the classical-quantum approximation in a general setting, while in appendix B we show how these assumptions
can be realized when the classical subsystem represents a particle moving in a one-dimensional potential.
In the general case, we start with the effective Hamiltonian operators in the product state approximation (41) and assume that subsystem 1 behaves "classically". The key approximation is that the expectation value of the potential \(\hat{V}_{1}\) as a function of time is well approximated by the potential function \(\mathcal{V}_{1}\) evaluated on a classical trajectory \((q_{1}(t),p_{1}(t))\). More specifically, we write
\[\mathcal{V}_{1}(q_{1}(t),p_{1}(t))=\langle\varphi_{1}(t)|\hat{V}_{1}|\varphi_ {1}(t)\rangle[1-\varepsilon(t)], \tag{46}\]
under the assumption that \(\varepsilon(t)\) is in some suitable sense "small". Here, \(q_{1}(t)\) and \(p_{1}(t)\) are the solutions of the equations of motion generated by an effective classical Hamiltonian
\[\mathcal{H}^{\mbox{\tiny eff}}_{\mbox{\tiny CQ}}=\mathcal{H}_{1}+\lambda \mathcal{V}_{1}\langle\varphi_{2}|\hat{V}_{2}|\varphi_{2}\rangle. \tag{47}\]
This classical Hamiltonian is obtained by replacing the subsystem 1 quantum operators in the expression (41a) for \(\hat{H}_{1}^{\text{eff}}\) by their classical counterparts. To complete the picture, we demand that the state vector for subsystem 2 evolves via the Hamiltonian operator
\[\hat{H}^{\mbox{\tiny eff}}_{\mbox{\tiny CQ}}=\hat{H}_{2}+\lambda\mathcal{V}_{ 1}\hat{V}_{2}. \tag{48}\]
Here, \(\hat{H}^{\mbox{\tiny eff}}_{\mbox{\tiny CQ}}\) has been obtained by substituting (46) into the expression (41b) for \(\hat{H}_{2}^{\text{eff}}\) in the product state approximation and neglecting \(\varepsilon(t)\).
We pause here to emphasize that the smallness of \(|\varepsilon(t)|\) does _not necessarily_ follow from a \(\lambda\ll 1\) approximation; that is, we expect \(\lim_{\lambda\to 0}|\varepsilon(t)|\neq 0\). The demand that \(|\varepsilon(t)|\) is small is really a requirement that the quantum corrections to the classical dynamics of subsystem 1 are small irrespective of the size of \(\lambda\), which in turn depends crucially on the nature of the initial quantum state \(|\varphi_{1}\rangle\) of subsystem 1. Quantifying the conditions under which \(|\varepsilon(t)|\) is small is a subtle problem that we return to in §IV.3 below.
Explicitly, the complete equations of motion for the system in this classical-quantum approximation are
\[\partial_{t}q_{1}=\{q_{1},\mathcal{H}^{\mbox{\tiny eff}}_{\mbox {\tiny CQ}}\},\quad\partial_{t}p_{1}=\{p_{1},\mathcal{H}^{\mbox{\tiny eff}}_{ \mbox{\tiny CQ}}\}, \tag{49a}\] \[i\partial_{t}|\varphi_{2}(t)\rangle=\hat{H}^{\mbox{\tiny eff}}_ {\mbox{\tiny CQ}}|\varphi_{2}(t)\rangle. \tag{49b}\]
If we again make use of the expansion (43) of the state vector into eigenstates of \(\hat{H}_{2}\), we obtain
\[\partial_{t}q_{1}=+\partial_{p_{1}}\mathcal{H}_{1}+\lambda\left( \mathbf{z}^{\dagger}V_{2}\mathbf{z}\right)\partial_{p_{1}}\mathcal{V}_{1}, \tag{50a}\] \[\partial_{t}p_{1}=-\partial_{q_{1}}\mathcal{H}_{1}-\lambda\left( \mathbf{z}^{\dagger}V_{2}\mathbf{z}\right)\partial_{q_{1}}\mathcal{V}_{1},\] (50b) \[i\partial_{t}\mathbf{z}=H_{2}\mathbf{z}+\lambda\mathcal{V}_{1}V _{2}\mathbf{z}. \tag{50c}\]
Equations (50) comprise 2 real and \(d_{2}\) complex nonlinear ordinary differential equations that completely specify the system dynamics.
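The following sketch integrates (50) for an assumed concrete choice \(\mathcal{H}_{1}=(p_{1}^{2}+q_{1}^{2})/2\), \(\mathcal{V}_{1}=q_{1}^{2}\), and \(\hat{V}_{2}=\hat{x}^{2}\) in a truncated number basis; these choices are for illustration only (the paper's coupled-oscillator example is specified in §V).

```python
import numpy as np
from scipy.integrate import solve_ivp

d2, lam = 16, 0.05                                  # illustrative values
a = np.diag(np.sqrt(np.arange(1.0, d2)), 1)
x = (a + a.T) / np.sqrt(2.0)
H2, V2 = np.diag(np.arange(d2) + 0.5), x @ x        # assumed V2 = x^2

def rhs(t, y):
    q1, p1, z = y[0].real, y[1].real, y[2:]
    v2 = np.real(z.conj() @ V2 @ z)                 # <V2> in |phi_2>
    dq1 = p1                                        # (50a); V1 = q1^2 is p-free
    dp1 = -q1 - 2.0 * lam * v2 * q1                 # (50b); dV1/dq1 = 2 q1
    dz = -1j * (H2 @ z + lam * q1**2 * (V2 @ z))    # (50c)
    return np.concatenate([[dq1, dp1], dz])

z0 = np.zeros(d2, complex); z0[0] = 1.0
y0 = np.concatenate([[2.0, 0.0], z0])               # large-amplitude classical q1
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-9, atol=1e-11)
rho_cq = np.outer(sol.y[2:, -1], sol.y[2:, -1].conj())   # Eq. (51) at final time
```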
Once equations (50) have been solved, the density matrix for subsystem 2 and the expectation value of any operator \(\hat{A}_{2}:\mathscr{H}_{2}\rightarrow\mathscr{H}_{2}\) are given by
\[\rho_{\mbox{\tiny CQ}}(t)=\mathbf{z}(t)\mathbf{z}^{\dagger}(t),\quad\langle A _{2}(t)\rangle=\operatorname{Tr}\left[\rho_{\mbox{\tiny CQ}}(t)A_{2}\right]. \tag{51}\]
This density matrix will evolve according to the von Neumann equation \(i\partial_{t}\rho_{\mbox{\tiny CQ}}=[H^{\mbox{\tiny eff}}_{\mbox{\tiny CQ}},\rho_{\mbox{\tiny CQ}}]\), or
\[i\partial_{t}\rho_{\mbox{\tiny CQ}}=[H_{2}+\lambda\mathcal{V}_{1}V_{2},\rho_{ \mbox{\tiny CQ}}]. \tag{52}\]
### Classical-Classical (CC) approximation
The CC approximation is the logical extension of the CQ approximation. We assume it is reasonable to treat all the system's degrees of freedom classically. The Hamiltonian is just the full classical Hamiltonian (4), and the equations of motion are Hamilton's equations:
\[\partial_{t}q_{1}=\{q_{1},\mathcal{H}\},\quad\partial_{t}p_{1}=\{ p_{1},\mathcal{H}\},\] \[\partial_{t}q_{2}=\{q_{2},\mathcal{H}\},\quad\partial_{t}p_{2}=\{ p_{2},\mathcal{H}\}. \tag{53}\]
These represent four real nonlinear differential equations that describe the system dynamics.
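For the same illustrative Hamiltonian used in the classical-quantum sketch above (an assumed coupling \(\lambda\,q_{1}^{2}q_{2}^{2}\)), equations (53) reduce to four coupled ODEs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fully classical version of the same assumed Hamiltonian:
# H = (p1^2 + q1^2)/2 + (p2^2 + q2^2)/2 + lam * q1^2 * q2^2.
lam = 0.05

def cc_rhs(t, y):
    q1, p1, q2, p2 = y
    return [p1, -q1 - 2.0 * lam * q1 * q2**2,       # Hamilton's equations (53)
            p2, -q2 - 2.0 * lam * q2 * q1**2]

sol = solve_ivp(cc_rhs, (0.0, 50.0), [2.0, 0.0, 0.5, 0.0], rtol=1e-10)
```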
### Classical background approximation
One can obtain a different approximation from the CQ formalism by assuming that the backreaction of the quantum system on the classical system is negligibly small. Practically, this involves neglecting the terms proportional to \(\lambda\) in the classical part of the CQ equations of motion (50):
\[\partial_{t}q_{1}=+\partial_{p_{1}}\mathcal{H}_{1}, \tag{54a}\] \[\partial_{t}p_{1}=-\partial_{q_{1}}\mathcal{H}_{1},\] (54b) \[i\partial_{t}\mathbf{z}=H_{2}\mathbf{z}+\lambda\mathcal{V}_{1}V _{2}\mathbf{z}. \tag{54c}\]
This is expected to be reasonable if the energy stored in subsystem 1 is significantly larger than the interaction energy. This is quite similar to the Born-Oppenheimer approximation discussed in §I. In fact, equations (54) are really the analogues of the semiclassical Einstein equations without backreaction (1) for our bipartite system (4). However, the path to arriving at these equations via the Born-Oppenheimer approximation is distinct from what we have presented above in several subtle ways. In appendix C, we show how to arrive at (54) using the Born-Oppenheimer approximation in the situation where each of subsystems 1 and 2 corresponds to a particle moving in a one-dimensional potential.
### Conservation of energy and probability via alternative Hamiltonians
It is interesting to note that the dynamical systems governing the system's evolution in the fully quantum-quantum formalism (12), the product state approximation (45), and the classical-quantum approximation (50) can each be derived directly from alternative Hamiltonians. For the quantum-quantum case, we can define an
alternative Hamiltonian by the expectation value of the Hamiltonian operator:
\[\mathfrak{H}_{\text{\tiny QQ}}=\langle\psi(t)|\hat{H}|\psi(t)\rangle. \tag{55}\]
Using the state expansion (9), we find
\[\mathfrak{H}_{\text{\tiny QQ}}=\text{Tr}(Z^{\dagger}H_{1}Z)+\text{ Tr}(ZH_{2}^{\text{\tiny T}}Z^{\dagger})+\\ \lambda\text{Tr}(V_{1}ZV_{2}^{\text{\tiny T}}Z^{\dagger}). \tag{56}\]
If we assume the Poisson brackets
\[\{z_{nm},z_{n^{\prime}m^{\prime}}^{*}\}=-i\delta_{nn^{\prime}}\delta_{mm^{ \prime}}, \tag{57}\]
then we see that the ordinary expression for time evolution
\[\partial_{t}z_{nm}=\{z_{nm},\mathfrak{H}_{\text{\tiny QQ}}\}, \tag{58}\]
is equivalent to the equations (12); hence, the dynamical system (12) is Hamiltonian.
For the product state approximation, an alternative Hamiltonian is defined by
\[\mathfrak{H}_{\text{\tiny PS}}=\mathbf{w}^{\dagger}H_{1}\mathbf{w}+\mathbf{z} ^{\dagger}H_{2}\mathbf{z}+\lambda(\mathbf{w}^{\dagger}V_{1}\mathbf{w})( \mathbf{z}^{\dagger}V_{2}\mathbf{z}), \tag{59}\]
which can be obtained from (56) under the assumption that \(Z\approx\mathbf{w}\mathbf{z}^{\text{\tiny T}}\). The Poisson brackets
\[\{w_{n},w_{n^{\prime}}^{*}\}=-i\delta_{nn^{\prime}},\quad\{z_{m},z_{m^{\prime }}^{*}\}=-i\delta_{mm^{\prime}}, \tag{60}\]
combined with
\[\partial_{t}w_{n}=\{w_{n},\mathfrak{H}_{\text{\tiny PS}}\},\quad\partial_{t}z_ {m}=\{z_{m},\mathfrak{H}_{\text{\tiny PS}}\}, \tag{61}\]
then reproduce (45). Hence, (45) is a Hamiltonian dynamical system.
For the classical-quantum approximation, the alternative Hamiltonian is
\[\mathfrak{H}_{\text{\tiny CQ}}=\mathcal{H}_{1}+\mathbf{z}^{\dagger}H_{2} \mathbf{z}+\lambda\mathcal{V}_{1}(\mathbf{z}^{\dagger}V_{2}\mathbf{z}), \tag{62}\]
which can be obtained from (59) under the assumption that \(\mathbf{w}^{\dagger}H_{1}\mathbf{w}\approx\mathcal{H}_{1}\) and \(\mathbf{w}^{\dagger}V_{1}\mathbf{w}\approx\mathcal{V}_{1}\). The Poisson brackets
\[\{z_{m},z_{m^{\prime}}^{*}\}=-i\delta_{mm^{\prime}}, \tag{63}\]
combined with
\[\partial_{t}q_{1}=\{q_{1},\mathfrak{H}_{\text{\tiny CQ}}\},\quad \partial_{t}p_{1}=\{p_{1},\mathfrak{H}_{\text{\tiny CQ}}\},\] \[\partial_{t}z_{m}=\{z_{m},\mathfrak{H}_{\text{\tiny CQ}}\}, \tag{64}\]
are equivalent to (50), which implies the dynamical system is Hamiltonian.
All three of the above alternative Hamiltonians are time-translation invariant, implying that each Hamiltonian is conserved on-shell. This in turn implies conservation of energy in solutions of the quantum-quantum, product-state approximation, and classical-quantum equations of motion. Similarly, each of the Hamiltonians is invariant under phase shifts of the quantum variables:
\[Z\mapsto e^{i\phi}Z \Rightarrow \mathfrak{H}_{\text{\tiny QQ}}\mapsto\mathfrak{H}_{\text{\tiny QQ}}, \tag{65a}\] \[(\mathbf{w},\mathbf{z})\mapsto(e^{i\phi_{1}}\mathbf{w},e^{i\phi _{2}}\mathbf{z}) \Rightarrow \mathfrak{H}_{\text{\tiny PS}}\mapsto\mathfrak{H}_{\text{\tiny PS }},\] (65b) \[\mathbf{z}\mapsto e^{i\phi}\mathbf{z} \Rightarrow \mathfrak{H}_{\text{\tiny CQ}}\mapsto\mathfrak{H}_{\text{\tiny CQ }}. \tag{65c}\]
Via Noether's theorem, these symmetries imply that the Frobenius norms of \(Z\), \(\mathbf{w}\), and \(\mathbf{z}\) are conserved on-shell. This in turn implies the conservation of probability in each calculation; i.e.,
quantum-quantum: \[\partial_{t}\langle\psi|\psi\rangle=0,\] (66a) product-state: \[\partial_{t}\langle\varphi_{1}|\varphi_{1}\rangle=\partial_{t} \langle\varphi_{2}|\varphi_{2}\rangle=0,\] (66b) classical-quantum: \[\partial_{t}\langle\varphi_{2}|\varphi_{2}\rangle=0.\] (66c)
Before moving on, we note that while energy is explicitly conserved in the QQ, product-state, CQ, and CC approximations, it will not be conserved in the classical background approximation, as the influence of the classical system on the quantum system takes the form of an external source. That is, the quantum Hamiltonian contains explicit time dependence.
## IV Perturbative analysis
As noted above, the equations of motion for the reduced density matrices in the quantum-quantum case (17) are not closed, but when the coupling is small and the system is in a nearly product state, it is possible to write a self-contained dynamical system for \(\rho_{1}\) and \(\rho_{2}\). In this section, we investigate this regime more formally in the context of perturbation theory, with the purpose of quantifying the regimes of validity of the product state and classical-quantum approximations.
### Interaction picture and perturbative solutions in the quantum-quantum case
In order to analyze the dynamics of the system under the assumption that the coupling is small (\(\lambda\ll 1\)), it will be useful to move from the Schrodinger picture to the interaction picture. Our notation is that a tilde denotes the interaction picture version of a given quantity. We define the interaction picture coefficient matrix \(\tilde{Z}\) by

\[\tilde{Z}=e^{iH_{1}t}Ze^{iH_{2}t}. \tag{67}\]

We also define the interaction picture density matrices via

\[\tilde{\rho}_{1}=\tilde{Z}\tilde{Z}^{\dagger}=e^{iH_{1}t}\rho_{1}e^{-iH_{1}t}, \tag{68a}\] \[\tilde{\rho}_{2}=\tilde{Z}^{\mathrm{T}}\tilde{Z}^{*}=e^{iH_{2}t}\rho_{2}e^{-iH_{2}t}. \tag{68b}\]

The interaction picture matrix representations of generic operators \(\hat{A}_{1}\) or \(\hat{A}_{2}\) on \(\mathscr{H}_{1}\) or \(\mathscr{H}_{2}\) are, respectively,

\[\tilde{A}_{1}=e^{iH_{1}t}A_{1}e^{-iH_{1}t},\quad\tilde{A}_{2}=e^{iH_{2}t}A_{2}e^{-iH_{2}t}, \tag{69}\]

which implies

\[\langle\hat{A}_{1}\otimes\hat{I}_{2}\rangle=\mathrm{Tr}(\rho_{1}A_{1})=\mathrm{Tr}\bigl(\tilde{\rho}_{1}\tilde{A}_{1}\bigr), \tag{70a}\] \[\langle\hat{I}_{1}\otimes\hat{A}_{2}\rangle=\mathrm{Tr}(\rho_{2}A_{2})=\mathrm{Tr}\bigl(\tilde{\rho}_{2}\tilde{A}_{2}\bigr). \tag{70b}\]

That is, formulae for expectation values are the same in either picture. Finally, we note that \(\tilde{H}_{1}=H_{1}\) and \(\tilde{H}_{2}=H_{2}\).
Substituting the above definitions into (12), we find that
\[i\partial_{t}\tilde{Z}=\lambda\tilde{V}_{1}\tilde{Z}\tilde{V}_{2}^{\mathrm{T}}. \tag{71}\]
Similarly, the evolution equations for the interaction picture reduced density matrices are
\[i\partial_{t}\tilde{\rho}_{1}=\lambda[\tilde{V}_{1},\tilde{Z}\tilde{V}_{2}^{\mathrm{T}}\tilde{Z}^{\dagger}], \tag{72a}\] \[i\partial_{t}\tilde{\rho}_{2}=\lambda[\tilde{V}_{2},\tilde{Z}^{\mathrm{T}}\tilde{V}_{1}^{\mathrm{T}}\tilde{Z}^{*}]. \tag{72b}\]
We now substitute a perturbative expansion of the coefficient matrix,
\[\tilde{Z}=\tilde{Z}^{(0)}+\lambda\tilde{Z}^{(1)}+\cdots, \tag{73}\]
into (71). Setting coefficients of different powers of \(\lambda\) equal to zero yields
\[i\partial_{t}\tilde{Z}^{(0)}=0,\quad i\partial_{t}\tilde{Z}^{(k+1)}=\tilde{V}_{1}\tilde{Z}^{(k)}\tilde{V}_{2}^{\mathrm{T}}, \tag{74}\]
for \(k=0,1,2\ldots\) We assume a product state solution for the zeroth order equation
\[\tilde{Z}^{(0)}(t)=\mathbf{u}_{1}\mathbf{u}_{2}^{\mathrm{T}}, \tag{75}\]
where \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) are constant column vectors of dimension \(d_{1}\) and \(d_{2}\), respectively. We also select initial conditions
\[\tilde{Z}^{(k)}(0)=0,\quad k=1,2,3\ldots \tag{76}\]
This ensures that the subsystems are unentangled at \(t=0\). Then, the solution to the first order equation is
\[\tilde{Z}^{(1)}(t)=-i\int\limits_{\mathcal{T}}\tilde{V}_{1}^{\prime}\mathbf{u}_{1}\mathbf{u}_{2}^{\mathrm{T}}\tilde{V}_{2}^{\prime\mathrm{T}}, \tag{77}\]
where here and below we make use of the notation
\[\int\limits_{\mathcal{T}}=\int\limits_{0}^{t}dt^{\prime},\quad \iint\limits_{\mathcal{T}}=\iint\limits_{0}^{t}dt^{\prime}dt^{\prime\prime},\] \[\tilde{V}_{1,2}(t^{\prime})=\tilde{V}_{1,2}^{\prime},\quad\tilde{V}_{1,2}(t^{\prime\prime})=\tilde{V}_{1,2}^{\prime\prime}. \tag{78}\]
Since the sum (or integral) of outer products of vectors is not in general itself an outer product, we expect that neither \(\tilde{Z}^{(1)}(t)\) nor \(\tilde{Z}(t)\) will be expressible as an outer product for \(t>0\). That is, the system will generally be entangled for all \(t>0\).
The expansion (73) implies that the density matrices are of the form
\[\tilde{\rho}_{1,2}=\tilde{\rho}_{1,2}^{(0)}+\lambda\tilde{\rho}_{1,2}^{(1)}+\cdots,\quad\tilde{\rho}_{1,2}^{(0)}=\mathbf{u}_{1,2}\mathbf{u}_{1,2}^{\dagger}, \tag{79}\]
with
\[\tilde{\rho}_{1}^{(k)}(0)=0=\tilde{\rho}_{2}^{(k)}(0),\quad k=1,2,3\ldots \tag{80}\]
Then, (72) gives
\[i\partial_{t}\tilde{\rho}_{1,2}^{(1)}=\mathrm{Tr}(\tilde{\rho}_{2,1}^{(0)}\tilde{V}_{2,1})[\tilde{V}_{1,2},\tilde{\rho}_{1,2}^{(0)}]. \tag{81}\]
These equations allow us to obtain \(\tilde{\rho}_{1}^{(1)}\) and \(\tilde{\rho}_{2}^{(1)}\) via quadratures:
\[\tilde{\rho}_{1,2}^{(1)}(t)=-i\int\limits_{\mathcal{T}}\langle\tilde{V}_{2,1}^{\prime}\rangle_{(0)}[\tilde{V}_{1,2}^{\prime},\mathbf{u}_{1,2}\mathbf{u}_{1,2}^{\dagger}]. \tag{82}\]
Here and below, the \((0)\) subscript indicates evaluation at \(\lambda=0\); i.e. when the interaction between the subsystems is neglected.
### Scrambling time
In this subsection, we estimate how long after \(t=0\) the entanglement between the subsystems remains small and the product state approximation is expected to be valid. We call this timescale the "scrambling time" of the system. Our first step will be to derive bounds on the linear and von Neumann entropies. We begin by noting that the Frobenius norm of a matrix is the same in the Schrödinger and interaction pictures, since the two pictures are related by unitary conjugation. This allows us to express the linear entropy as
\[S_{\textsc{lin}}=1-\|\tilde{\rho}_{1,2}\|_{\mathrm{F}}^{2}. \tag{83}\]
We write
\[\tilde{\rho}_{1,2}=\tilde{\rho}_{1,2}^{(0)}+\lambda\,\delta\tilde{\rho}_{1,2}, \tag{84}\]
where \(\tilde{\rho}_{1,2}^{(0)}\) is the leading term in the perturbative \(\lambda\)-expansion of \(\tilde{\rho}_{1,2}\). It then follows from the Cauchy–Schwarz and triangle inequalities, together with \(\|\tilde{\rho}_{1,2}^{(0)}\|_{\mathrm{F}}=1\), that
\[S_{\textsc{lin}}\leq 2\lambda\|\delta\tilde{\rho}_{1,2}\|_{\mathrm{F}}(1+\tfrac{1}{2}\lambda\|\delta\tilde{\rho}_{1,2}\|_{\mathrm{F}}). \tag{85}\]
This is exact, but it is useful to re-express it in a less precise form using the perturbative expansions (79):
\[S_{\textsc{lin}}\leq 2\lambda\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}}+\mathcal{O}(\lambda^{2}). \tag{86}\]
Note that this actually represents two distinct upper bounds on \(S_{\textsc{lin}}\); i.e., one derived from \(\|\tilde{\rho}_{1}^{(1)}\|_{\mathrm{F}}\) and another from \(\|\tilde{\rho}_{2}^{(1)}\|_{\mathrm{F}}\). The two bounds can be combined as
\[S_{\textsc{lin}}\leq 2\lambda\min(\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}})+\mathcal{O}(\lambda^{2}). \tag{87}\]
We can also derive a bound on the von Neumann entropy by noting that the singular values \(\{\chi_{i}\}\) of \(\tilde{Z}\) are the same as the singular values of \(Z\). We then write
\[\tilde{Z}=\tilde{Z}^{(0)}+\lambda\,\delta\tilde{Z}, \tag{88}\]
where \(\tilde{Z}^{(0)}=\mathbf{u}_{1}\mathbf{u}_{2}^{\mathrm{T}}\) is the zeroth order term in the perturbative expansion of \(\tilde{Z}\). The singular values of \(\mathbf{u}_{1}\mathbf{u}_{2}^{\mathrm{T}}\) are trivially
\[\chi_{i}^{(0)}=\begin{cases}1,&i=1,\\ 0,&i=2,3,4\dots\end{cases} \tag{89}\]
Let us now write
\[\chi_{i}=\chi_{i}^{(0)}+\lambda\,\delta\chi_{i}. \tag{90}\]
Substituting this into the formula (29) for the von Neumann entropy, we obtain
\[S_{\textsc{vn}}=-2\lambda\,\delta\chi_{1}+\mathcal{O}(\lambda^{2}\ln\lambda). \tag{91}\]
Now, due to a theorem by Weyl, we have
\[|\delta\chi_{1}|\leq\|\delta\tilde{Z}\|_{2}\leq\|\delta\tilde{Z}\|_{\mathrm{F}}. \tag{92}\]
Here, \(\|\delta\tilde{Z}\|_{2}\) is the spectral norm of \(\delta\tilde{Z}\). This gives us the following bound on \(S_{\textsc{vn}}\):
\[S_{\textsc{vn}}\leq 2\lambda\|\delta\tilde{Z}\|_{\mathrm{F}}+\mathcal{O}(\lambda^{2}\ln\lambda)=2\lambda\|\tilde{Z}^{(1)}\|_{\mathrm{F}}+\mathcal{O}(\lambda^{2}\ln\lambda), \tag{93}\]
where we have made use of (73) in moving from the first expression to the second. In comparing the bounds (86) and (93), it is useful to note that
\[\|\delta\tilde{\rho}_{1,2}\|_{\mathrm{F}}\leq 2\|\delta\tilde{Z}\|_{\mathrm{F}}, \tag{94}\]
which is straightforward to prove. This is consistent with our expectation that \(S_{\textsc{lin}}<S_{\textsc{vn}}\).
Comparing the inequalities (87) and (93) with (34) and retaining the leading order terms, we find two conditions to be satisfied for nearly product states:
\[\lambda\min(\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}})\ll 1,\quad\lambda\|\tilde{Z}^{(1)}\|_{\mathrm{F}}\ll\ln d. \tag{95}\]
To analyze these further, we note that an explicit formula for \(\|\tilde{Z}^{(1)}\|_{\mathrm{F}}\) can be obtained from (77):
\[\|\tilde{Z}^{(1)}\|_{\mathrm{F}}=\left\{\iint\limits_{\mathcal{T}^{2}}\langle\tilde{V}_{1}^{\prime}\tilde{V}_{1}^{\prime\prime}\rangle\langle\tilde{V}_{2}^{\prime}\tilde{V}_{2}^{\prime\prime}\rangle\right\}_{(0)}^{1/2}. \tag{96}\]
Similarly, (82) yields an expression for \(\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}}\):
\[\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}}=\left\{2\iint\limits_{\mathcal{T}^{2}}\langle\tilde{V}_{2,1}^{\prime}\rangle\langle\tilde{V}_{2,1}^{\prime\prime}\rangle\,\mathrm{Cov}(\tilde{V}_{1,2}^{\prime},\tilde{V}_{1,2}^{\prime\prime})\right\}_{(0)}^{1/2}. \tag{97}\]
Here, \(\mathrm{Cov}(\tilde{V}_{1,2}^{\prime},\tilde{V}_{1,2}^{\prime\prime})\) is the quantum covariance of \(\tilde{V}_{1,2}^{\prime}\) and \(\tilde{V}_{1,2}^{\prime\prime}\):
\[\mathrm{Cov}(\tilde{V}_{1,2}^{\prime},\tilde{V}_{1,2}^{\prime\prime})=\tfrac{1}{2}\langle\{\tilde{V}_{1,2}^{\prime},\tilde{V}_{1,2}^{\prime\prime}\}\rangle-\langle\tilde{V}_{1,2}^{\prime}\rangle\langle\tilde{V}_{1,2}^{\prime\prime}\rangle, \tag{98}\]
where \(\{\tilde{V}_{1,2}^{\prime},\tilde{V}_{1,2}^{\prime\prime}\}=\tilde{V}_{1,2}^{\prime}\tilde{V}_{1,2}^{\prime\prime}+\tilde{V}_{1,2}^{\prime\prime}\tilde{V}_{1,2}^{\prime}\) is the anti-commutator. This can be written in an alternate form:
\[\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}}=\sqrt{2}\left\langle\left[\int\limits_{\mathcal{T}}\langle\tilde{V}_{2,1}^{\prime}\rangle\left(\tilde{V}_{1,2}^{\prime}-\langle\tilde{V}_{1,2}^{\prime}\rangle I_{1,2}\right)\right]^{2}\right\rangle_{(0)}^{1/2}. \tag{99}\]
The integrals (96) and (97) can be bounded by noting that
\[|\langle\tilde{V}_{1,2}^{\prime}\rangle|\leq\sqrt{\langle\tilde{V}_{1,2}^{\prime 2}\rangle}, \tag{100a}\]
\[|\langle\tilde{V}_{1,2}^{\prime}\tilde{V}_{1,2}^{\prime\prime}\rangle|\leq\sqrt{\langle\tilde{V}_{1,2}^{\prime 2}\rangle\langle\tilde{V}_{1,2}^{\prime\prime 2}\rangle}, \tag{100b}\]
\[|\mathrm{Cov}(\tilde{V}_{1,2}^{\prime},\tilde{V}_{1,2}^{\prime\prime})|\leq\sqrt{\mathrm{Var}(\tilde{V}_{1,2}^{\prime})\mathrm{Var}(\tilde{V}_{1,2}^{\prime\prime})}, \tag{100c}\]
with the variance defined as usual:
\[\mathrm{Var}(\tilde{V}_{1,2}^{\prime})=\langle\tilde{V}_{1,2}^{\prime 2}\rangle-\langle\tilde{V}_{1,2}^{\prime}\rangle^{2}. \tag{101}\]
We obtain
\[\|\tilde{Z}^{(1)}\|_{\mathrm{F}}\leq\left\{\int\limits_{\mathcal{T}}\sqrt{\langle\tilde{V}_{1}^{\prime 2}\rangle\langle\tilde{V}_{2}^{\prime 2}\rangle}\right\}_{(0)}, \tag{102a}\]
\[\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}}\leq\left\{\int\limits_{\mathcal{T}}\sqrt{2\langle\tilde{V}_{2,1}^{\prime 2}\rangle\left(\langle\tilde{V}_{1,2}^{\prime 2}\rangle-\langle\tilde{V}_{1,2}^{\prime}\rangle^{2}\right)}\right\}_{(0)}. \tag{102b}\]
Then, we see that a sufficient condition for the inequality \(S_{\textsc{vn}}\sim\lambda\|\tilde{Z}^{(1)}\|_{\mathrm{F}}\ll\ln d\) to be satisfied is
\[t\ll t_{\textsc{vn}},\quad t_{\textsc{vn}}=\frac{\ln d}{\mathcal{E}_{\mathrm{int}}(t_{\textsc{vn}})}, \tag{103}\]
where \(\mathcal{E}_{\mathrm{int}}(t)\) is a measure of the average value of an upper bound on the interaction energy between the subsystems over the time interval \([0,t]\):
\[\mathcal{E}_{\mathrm{int}}(t)=\frac{\lambda}{t}\left\{\int\limits_{\mathcal{T}}\sqrt{\langle\tilde{V}_{1}^{\prime 2}\rangle\langle\tilde{V}_{2}^{\prime 2}\rangle}\right\}_{(0)}=\frac{1}{t}\left\{\int\limits_{\mathcal{T}}\sqrt{\langle\tilde{H}_{\mathrm{int}}^{\prime 2}\rangle}\right\}_{(0)}, \tag{104}\]
with \(\hat{H}_{\mathrm{int}}=\lambda\,\hat{V}_{1}\otimes\hat{V}_{2}\). Furthermore, a sufficient condition for \(S_{\textsc{lin}}\sim\lambda\min(\|\tilde{\rho}_{1,2}^{(1)}\|_{\mathrm{F}})\ll 1\) to hold is
\[t\ll t_{\textsc{lin}},\quad t_{\textsc{lin}}=\frac{1}{\mathcal{E}_{\mathrm{int}}(t_{\textsc{lin}})\min[\mathcal{N}_{1,2}(t_{\textsc{lin}})]}, \tag{105}\]
where the functions \(\mathcal{N}_{1,2}(t)\) are given by
\[\mathcal{N}_{1,2}(t)=\left\{\frac{\int_{\mathcal{T}}[\langle\tilde{V}_{2,1}^{\prime 2}\rangle\mathrm{Var}(\tilde{V}_{1,2}^{\prime})]^{1/2}}{\int_{\mathcal{T}}[\langle\tilde{V}_{1}^{\prime 2}\rangle\langle\tilde{V}_{2}^{\prime 2}\rangle]^{1/2}}\right\}_{(0)}. \tag{106}\]
It is easy to confirm that
\[\mathcal{N}_{1,2}(t)\in[0,1], \tag{107}\]
and that \(\mathcal{N}_{1,2}(t)=0\) implies that \(\mathrm{Var}(\tilde{V}_{1,2}^{\prime})=0\) for all \(t^{\prime}\in[0,t]\). That is, \(\mathcal{N}_{1,2}\) essentially measures the degree to which the quantum probability distributions for \(V_{1,2}\) are peaked at zero coupling; i.e., the degree of coherence of the state of subsystem 1 or 2 with respect to the \(V_{1}\) or \(V_{2}\) observable when \(\lambda=0\), respectively.
To summarize, the main results of this subsection are (103) and (105), which represent two alternate criteria for the validity of the product state approximation. They both bound the time interval after \(t=0\) for which the approximation is expected to hold; i.e., the scrambling time of the system. The inequality derived from the von Neumann entropy (103) is ill-defined in the infinite dimensional case. The inequality derived from the linear entropy suggests that the temporal regime of validity of the product state approximation will be longer when the probability distributions for \(V_{1}\) or \(V_{2}\) are sharply peaked; i.e., when subsystem 1 or 2 is described by a coherent state at \(t=0\).
### Discrepancy between the quantum-quantum and classical-quantum calculations
We now turn our attention to the deviations between the quantum-quantum and classical-quantum calculations in the context of small-\(\lambda\) perturbation theory. The classical-quantum equations of motion that we will attempt to solve perturbatively are (50) and (52). As in the previous subsection, we will find it useful to work in the interaction picture for the quantum variables:
\[\tilde{\mathbf{z}}=e^{iH_{2}t}\mathbf{z},\quad\tilde{\rho}_{\textsc{cq}}=e^{iH_{2}t}\rho_{\textsc{cq}}e^{-iH_{2}t}. \tag{108}\]
Furthermore, let us define
\[\mathbf{Q}=\begin{pmatrix}q_{1}\\ p_{1}\end{pmatrix},\quad M=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}. \tag{109}\]
In terms of these, the equations of motion are
\[\partial_{t}\mathbf{Q}=M[\mathrm{grad}_{\mathcal{H}_{1}}(\mathbf{Q})+\lambda\,(\tilde{\mathbf{z}}^{\dagger}\tilde{V}_{2}\tilde{\mathbf{z}})\,\mathrm{grad}_{\mathcal{V}_{1}}(\mathbf{Q})], \tag{110a}\]
\[i\partial_{t}\tilde{\mathbf{z}}=\lambda\mathcal{V}_{1}\tilde{V}_{2}\tilde{\mathbf{z}}, \tag{110b}\]
\[i\partial_{t}\tilde{\rho}_{\textsc{cq}}=\lambda\mathcal{V}_{1}[\tilde{V}_{2},\tilde{\rho}_{\textsc{cq}}], \tag{110c}\]
where \(\mathrm{grad}_{\mathcal{H}_{1}}(\mathbf{Q})\) and \(\mathrm{grad}_{\mathcal{V}_{1}}(\mathbf{Q})\) are the gradients of \(\mathcal{H}_{1}\) and \(\mathcal{V}_{1}\) evaluated at \(\mathbf{Q}\), respectively. We assume the following perturbative expansion:
\[\mathbf{Q}=\mathbf{Q}^{(0)}+\lambda\mathbf{Q}^{(1)}+\cdots \tag{111a}\]
\[\tilde{\mathbf{z}}=\tilde{\mathbf{z}}^{(0)}+\lambda\tilde{\mathbf{z}}^{(1)}+\cdots \tag{111b}\]
\[\tilde{\rho}_{\textsc{cq}}=\tilde{\rho}_{\textsc{cq}}^{(0)}+\lambda\tilde{\rho}_{\textsc{cq}}^{(1)}+\cdots \tag{111c}\]
The zeroth order equations of motion are
\[\partial_{t}\mathbf{Q}^{(0)}=M\,\mathrm{grad}_{\mathcal{H}_{1}}(\mathbf{Q}^{(0)}),\quad\partial_{t}\tilde{\mathbf{z}}^{(0)}=0,\quad\partial_{t}\tilde{\rho}_{\textsc{cq}}^{(0)}=0. \tag{112}\]
The first equation in (112) states that \(\mathbf{Q}^{(0)}\) is just the classical phase space trajectory of subsystem 1 when subsystem 2 is completely neglected. This equation may or may not be easy to solve depending on the exact form of \(\mathcal{H}_{1}\). Conversely, the last two equations are always easy to solve: Assuming that the density matrices for subsystem 2 in the QQ and CQ calculations coincide to zeroth order in \(\lambda\),
\[\tilde{\rho}_{\textsc{cq}}^{(0)}=\tilde{\rho}_{2}^{(0)}, \tag{113}\]
we obtain
\[\tilde{\mathbf{z}}^{(0)}=\mathbf{u}_{2},\quad\tilde{\rho}_{\textsc{cq}}^{(0)}=\mathbf{u}_{2}\mathbf{u}_{2}^{\dagger}, \tag{114}\]
with \(\mathbf{u}_{2}\) being the same constant column vector appearing in §IV.1.
The first order evolution equations for the classical degrees of freedom are of the form
\[\partial_{t}\mathbf{Q}^{(1)}=M[\mathrm{Hess}_{\mathcal{H}_{1}}(\mathbf{Q}^{(0)})\mathbf{Q}^{(1)}+\langle V_{2}\rangle_{(0)}\,\mathrm{grad}_{\mathcal{V}_{1}}(\mathbf{Q}^{(0)})],\]
\[i\partial_{t}\tilde{\mathbf{z}}^{(1)}=\mathcal{V}_{1}^{(0)}\tilde{V}_{2}\mathbf{u}_{2},\quad i\partial_{t}\tilde{\rho}_{\textsc{cq}}^{(1)}=\mathcal{V}_{1}^{(0)}[\tilde{V}_{2},\mathbf{u}_{2}\mathbf{u}_{2}^{\dagger}], \tag{115}\]
where
\[\mathcal{V}_{1}^{(0)}=\mathcal{V}_{1}(\mathbf{Q}^{(0)}), \tag{116}\]
and \(\mathrm{Hess}_{\mathcal{H}_{1}}(\mathbf{Q}^{(0)})\) is the Hessian matrix of \(\mathcal{H}_{1}\) evaluated at \(\mathbf{Q}^{(0)}\). We can easily solve for the first order density matrix \(\varrho_{\textsc{Cq}}^{(1)}\):
\[\tilde{\rho}_{\textsc{cq}}^{(1)}(t)=-i\int\limits_{\mathcal{T}}\mathcal{V}_{1}^{\prime(0)}[\tilde{V}_{2}^{\prime},\mathbf{u}_{2}\mathbf{u}_{2}^{\dagger}]. \tag{117}\]
This is very similar in form to the first order density matrices in the QQ calculation (82). In fact, the difference between the density matrices for subsystem 2 in the QQ and CQ calculations is simply
\[\delta\rho(t)=\tilde{\rho}_{2}(t)-\tilde{\rho}_{\textsc{cq}}(t)=-i\lambda\int\limits_{\mathcal{T}}\{\langle\tilde{V}_{1}^{\prime}\rangle-\mathcal{V}_{1}^{\prime}\}_{(0)}\,[\tilde{V}_{2}^{\prime},\mathbf{u}_{2}\mathbf{u}_{2}^{\dagger}]+\mathcal{O}(\lambda^{2}). \tag{118}\]
Interestingly, we see that the leading order contribution to \(\delta\varrho(t)\) is small if \(\langle V_{1}^{\prime}\rangle\approx\mathcal{V}_{1}^{\prime}\) when \(\lambda=0\). That is, the difference between the QQ and CQ density matrices for subsystem 2 will be minimized if the discrepancy between the expectation value of \(\hat{V}_{1}\) and its classical value is small at zero coupling.
Now, we define the relative error between observables in the QQ and CQ schemes. As above, suppose \(\hat{A}_{1}:\mathscr{H}_{1}\rightarrow\mathscr{H}_{1}\) and \(\hat{A}_{2}:\mathscr{H}_{2}\rightarrow\mathscr{H}_{2}\) are generic operators corresponding to the classical phase space functions \(\mathcal{A}_{1}=\mathcal{A}_{1}(q_{1},p_{1})\) and \(\mathcal{A}_{2}=\mathcal{A}_{2}(q_{2},p_{2})\), respectively. Then, the relative error in the observed value of \(\hat{A}_{1}\) is defined as
\[\Delta A_{1}=\frac{\langle A_{1}\rangle_{\textsc{Qq}}-\mathcal{A}_{1,\textsc{ Cq}}}{\langle A_{1}\rangle_{\textsc{Qq}}}, \tag{119}\]
where \(\langle A_{1}\rangle_{\text{\tiny QQ}}\) is evaluated in the QQ scheme and \(\mathcal{A}_{1,\text{\tiny CQ}}\) is evaluated in the CQ scheme. The relative error in \(\hat{A}_{2}\) is:
\[\Delta A_{2}=\frac{\langle A_{2}\rangle_{\text{\tiny QQ}}-\langle A_{2}\rangle_{ \text{\tiny CQ}}}{\langle A_{2}\rangle_{\text{\tiny QQ}}}. \tag{120}\]
Using the above perturbative expansions, we find
\[\Delta A_{1}=\left\{\frac{\langle A_{1}\rangle-\mathcal{A}_{1}}{\langle A_{1} \rangle}\right\}_{(0)}+\mathcal{O}(\lambda), \tag{121}\]
and
\[\Delta A_{2}=-i\lambda\left\{\frac{\int_{\mathcal{T}}(\langle\tilde{V}_{1}^{\prime}\rangle-\mathcal{V}_{1}^{\prime})\langle[\tilde{A}_{2}^{\prime},\tilde{V}_{2}^{\prime}]\rangle}{\langle A_{2}\rangle}\right\}_{(0)}+\mathcal{O}(\lambda^{2}). \tag{122}\]
An important point is that \(\Delta A_{1}\) does not vanish in the \(\lambda\to 0\) limit. That is, even at zero coupling there will be a discrepancy between the QQ and CQ predictions for the \(\hat{A}_{1}\) observable. This is directly due to the fact that subsystem 1 is treated quantum mechanically in the QQ case and classically in the CQ case.
In equation (46), we defined a quantity \(\varepsilon\) that quantifies the magnitude of the approximation that led from the nearly product state to classical-quantum formalisms. It is fairly easy to confirm that \(\varepsilon=\Delta V_{1}+\mathcal{O}(\lambda)\), or, equivalently
\[\varepsilon=\varepsilon^{(0)}+\mathcal{O}(\lambda),\quad\varepsilon^{(0)}= \left\{\frac{\langle V_{1}\rangle-\mathcal{V}_{1}}{\langle V_{1}\rangle} \right\}_{(0)}. \tag{123}\]
If \(\lambda\ll 1\), it would appear that a necessary condition for the validity of the CQ approximation is that \(\varepsilon^{(0)}\) is small.
We conclude this section by noting that both equations (122) and (123) imply that the CQ approximation gets better the smaller the difference between \(\langle V_{1}\rangle_{(0)}\) and \(\mathcal{V}_{1}^{(0)}\) becomes. This implies that for the CQ approximation to be useful, the initial quantum state of subsystem 1 has to be selected such that the expectation value of \(\hat{V}_{1}\) is close to its classical value. We can state this another way by recalling that the generalized Bohr correspondence principle states that, in some limit, there should exist quantum states whose expectation values match classical trajectories. Hence, it appears that a necessary condition for the validity of the CQ approximation is the imposition of initial data consistent with the Bohr correspondence principle. Actually finding such data will depend very much on the nature of the unperturbed Hamiltonian \(\hat{H}_{1}\). In §V, we concentrate on the case where each subsystem is a simple harmonic oscillator and it is relatively straightforward to find states consistent with \(\langle V_{1}\rangle_{(0)}\approx\mathcal{V}_{1}^{(0)}\). In appendix B, we demonstrate how \(|\varepsilon|\ll 1\) can be realized for more general systems where subsystem 1 corresponds to a particle moving in a one-dimensional potential with a sharply peaked wavefunction.
## V Example: Nonlinearly coupled oscillators
To illustrate the various computational approaches described above in action, let us focus on a simple model in this section: two simple harmonic oscillators with a nonlinear coupling. We will compare numeric solutions to the equations of motion in the quantum-quantum, classical-quantum, and classical-classical calculation methods. Our motivation is to investigate when the CQ and CC schemes can yield reasonable approximations to the QQ dynamics. We therefore concentrate on quantum initial data that is expected to yield the most "classical" evolution in the absence of coupling; i.e., we will use coherent state initial data for the oscillators whenever applicable.
### Hamiltonian and operators
We consider a system of two simple harmonic oscillators with nonlinear coupling. The oscillators are described by canonical position and momentum variables \((q_{1},p_{1})\) and \((q_{2},p_{2})\), respectively. We can define alternative complex coordinates, as usual, by
\[a=\frac{q_{1}+ip_{1}}{\sqrt{2}},\quad b=\frac{q_{2}+ip_{2}}{\sqrt{2}}. \tag{124}\]
The relevant nonzero Poisson brackets are
\[\{q_{1},p_{1}\}=\{q_{2},p_{2}\}=1,\quad\{a,a^{*}\}=\{b,b^{*}\}=-i. \tag{125}\]
All the above variables are assumed to be dimensionless.
The classical Hamiltonian is
\[\mathcal{H} =\tfrac{1}{2}\Omega_{1}(q_{1}^{2}+p_{1}^{2})+\tfrac{1}{2}\Omega_ {2}(q_{2}^{2}+p_{2}^{2})+\lambda\bar{\Omega}q_{1}^{\nu}q_{2}^{\nu}\] \[=\Omega_{1}aa^{*}+\Omega_{2}bb^{*}+2^{-\nu}\lambda\bar{\Omega}(a+ a^{*})^{\nu}(b+b^{*})^{\nu}. \tag{126}\]
Here, \(\Omega_{1}>0\) and \(\Omega_{2}>0\) are the natural frequencies of each oscillator and their arithmetic mean is
\[\bar{\Omega}=\tfrac{1}{2}(\Omega_{1}+\Omega_{2}). \tag{127}\]
As before, \(\lambda\) is a dimensionless coupling parameter, and \(\nu=1,2,3,\ldots\) is a positive integer. Note that if \(\nu\) is odd, the Hamiltonian is not bounded from below.
For this example, we identify the various components of the classical Hamiltonian defined in section II.1 as follows:
\[\mathcal{H}_{1} =\Omega_{1}aa^{*}, \mathcal{V}_{1} =2^{-\nu/2}\bar{\Omega}^{1/2}(a+a^{*})^{\nu},\] \[\mathcal{H}_{2} =\Omega_{2}bb^{*}, \mathcal{V}_{2} =2^{-\nu/2}\bar{\Omega}^{1/2}(b+b^{*})^{\nu}. \tag{128}\]
with the corresponding quantum operators
\[\hat{H}_{1} =\Omega_{1}(\hat{a}^{\dagger}\hat{a}+\tfrac{1}{2}), \hat{V}_{1} =2^{-\nu/2}\bar{\Omega}^{1/2}(\hat{a}+\hat{a}^{\dagger})^{\nu},\] \[\hat{H}_{2} =\Omega_{2}(\hat{b}^{\dagger}\hat{b}+\tfrac{1}{2}), \hat{V}_{2} =2^{-\nu/2}\bar{\Omega}^{1/2}(\hat{b}+\hat{b}^{\dagger})^{\nu}. \tag{129}\]
with
\[[\hat{a},\hat{a}^{\dagger}]=[\hat{b},\hat{b}^{\dagger}]=1. \tag{130}\]
The eigenvalue problems for \(\hat{H}_{1}\) and \(\hat{H}_{2}\) are, of course, easy to solve:
\[\hat{H}_{1}|n\rangle=\Omega_{1}(n+\tfrac{1}{2})|n\rangle, \tag{131a}\]
\[\hat{H}_{2}|m\rangle=\Omega_{2}(m+\tfrac{1}{2})|m\rangle, \tag{131b}\]
with \(n,m=0,1,2\ldots\) In these bases, matrix representations of the operators (129) are
\[H_{1} =\Omega_{1}(N+\tfrac{1}{2}I), V_{1} =\bar{\Omega}^{1/2}Q^{\nu},\] \[H_{2} =\Omega_{2}(N+\tfrac{1}{2}I), V_{2} =\bar{\Omega}^{1/2}Q^{\nu}, \tag{132}\]
where \(Q\) and \(N\) are matrix representations of the position and number operators
\[Q=\frac{1}{\sqrt{2}}\begin{pmatrix}0&\sqrt{1}&&\\ \sqrt{1}&0&\sqrt{2}&\\ &\sqrt{2}&0&\\ &&&\ddots\end{pmatrix},\,\,\,N=\begin{pmatrix}0&&\\ &1&&\\ &&2&\\ &&\ddots\end{pmatrix}, \tag{133}\]
and \(I\) is the identity matrix. We will also require the matrix representation of the momentum operator:
\[P=\frac{1}{\sqrt{2}i}\begin{pmatrix}0&+\sqrt{1}&&\\ -\sqrt{1}&0&+\sqrt{2}&\\ &-\sqrt{2}&0&\\ &&&\ddots\end{pmatrix}. \tag{134}\]
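As a concrete illustration, the following minimal Python sketch (our own; the helper name and the commutator check are not from the paper) builds the truncated matrices (133)-(134):

```python
import numpy as np

def oscillator_matrices(n_max):
    """(n_max+1)x(n_max+1) truncations of Q, P, and N from (133)-(134)."""
    off = np.sqrt(np.arange(1, n_max + 1))       # sqrt(1), ..., sqrt(n_max)
    Q = (np.diag(off, 1) + np.diag(off, -1)) / np.sqrt(2.0)
    P = (np.diag(off, 1) - np.diag(off, -1)) / (np.sqrt(2.0) * 1j)
    N = np.diag(np.arange(n_max + 1, dtype=float))
    return Q, P, N

Q, P, N = oscillator_matrices(35)
comm = Q @ P - P @ Q
# [Q, P] = i I holds exactly except in the final row/column, the usual
# artifact of truncating the infinite dimensional matrices.
print(np.allclose(comm[:-1, :-1], 1j * np.eye(35)))
```

The check confirms that the canonical commutator is reproduced exactly away from the truncation edge, which is why a sufficiently large cutoff makes the finite-dimensional approximation reliable.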
### Equations of motion and initial data
#### iv.2.1 Dimensionless variables
In order to simplify the formulae below, we will normalize all quantities by the mean frequency of the free oscillators \(\bar{\Omega}=\tfrac{1}{2}(\Omega_{1}+\Omega_{2})\). A useful dimensionless time variable is then
\[\tau=\bar{\Omega}t=\tfrac{1}{2}(\Omega_{1}+\Omega_{2})t. \tag{135}\]
We define an oscillator asymmetry parameter \(\sigma\in(-1,1)\) by
\[\sigma=\frac{\Omega_{1}-\Omega_{2}}{\Omega_{1}+\Omega_{2}},\quad\Omega_{1}=(1 +\sigma)\bar{\Omega},\quad\Omega_{2}=(1-\sigma)\bar{\Omega}. \tag{136}\]
In the plots below, all energies are measured in units of \(\bar{\Omega}\).
#### iv.2.2 Quantum-quantum case
In dimensionless form, the dynamical system governing the system's evolution (12) in the quantum-quantum case is explicitly:
\[i\dot{Z}=(1+\sigma)(N+\tfrac{1}{2}I)Z\\ +(1-\sigma)Z(N+\tfrac{1}{2}I)+\lambda Q^{\nu}ZQ^{\nu}, \tag{137}\]
where \(\dot{Z}=dZ/d\tau\). For all simulations, we will assume that at \(\tau=0\) the system is prepared in a product of coherent states for each oscillator. Then,
\[z_{nm}(0)=\exp\left(-\frac{|\alpha|^{2}+|\beta|^{2}}{2}\right)\frac{\alpha^{n }\beta^{m}}{\sqrt{n!m!}}, \tag{138}\]
where \(\alpha\) and \(\beta\) are dimensionless complex parameters. For convenience, we parametrize these as
\[\alpha=\sqrt{\frac{\zeta_{1}}{1+\sigma}}e^{i\phi_{1}},\quad\beta=\sqrt{\frac{ \zeta_{2}}{1-\sigma}}e^{i\phi_{2}}, \tag{139}\]
where \(\zeta_{1}\) and \(\zeta_{2}\) are nonnegative parameters, and \(\phi_{1}\) and \(\phi_{2}\) are angles. Unless stated otherwise, we will choose
\[\phi_{1}=\phi_{2}=\pi/2\quad\Rightarrow\quad\langle q_{1}(0)\rangle=\langle q _{2}(0)\rangle=0. \tag{140}\]
This choice is expected to minimize the initial value of the interaction energy.
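For definiteness, a short sketch (ours; the log-domain evaluation is simply an implementation choice to avoid overflow, not something prescribed by the paper) of the coherent-state initial data (138)-(140):

```python
import numpy as np
from scipy.special import gammaln

def coherent_product_state(alpha, beta, n_max):
    """Coefficient matrix z_nm(0) of (138) for a product of coherent states."""
    n = np.arange(n_max + 1)
    # alpha^n / sqrt(n!) evaluated in the log domain to avoid overflow
    ca = np.exp(n * np.log(complex(alpha)) - 0.5 * gammaln(n + 1.0))
    cb = np.exp(n * np.log(complex(beta)) - 0.5 * gammaln(n + 1.0))
    return np.exp(-(abs(alpha)**2 + abs(beta)**2) / 2) * np.outer(ca, cb)

sigma, zeta1, zeta2 = 0.0, 12.0, 2.0                           # illustrative
alpha = np.sqrt(zeta1 / (1 + sigma)) * np.exp(1j * np.pi / 2)  # (139)-(140)
beta = np.sqrt(zeta2 / (1 - sigma)) * np.exp(1j * np.pi / 2)
Z0 = coherent_product_state(alpha, beta, n_max=35)
print(np.trace(Z0 @ Z0.conj().T).real)         # normalization, should be ~1
```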
Once we have a solution for \(Z(\tau)\) we can calculate the values of various observables and other quantities of interest by using the formulae in Table 1. Note that in this table, the oscillator energies are defined as
\[E_{1}=\langle\hat{H}_{1}\rangle,\quad E_{2}=\langle\hat{H}_{2}\rangle,\quad E _{\text{int}}=\lambda\langle\hat{V}_{1}\otimes\hat{V}_{2}\rangle. \tag{141}\]
From the table, we can see that \(\zeta_{1}\) and \(\zeta_{2}\) are equal to the initial energies of oscillators 1 and 2 above the ground state in units of \(\bar{\Omega}\). One can confirm by explicit calculation that
\[\partial_{\tau}(E_{1}+E_{2}+E_{\text{int}})=0, \tag{142}\]
as expected.
#### iv.2.3 Classical-quantum case
In dimensionless form, the equations of motion for the classical-quantum case read
\[i\dot{a} =(1+\sigma)a+\tfrac{1}{\sqrt{2}}\lambda\nu(\sqrt{2}\operatorname{ Re}a)^{\nu-1}\mathbf{z}^{\dagger}Q^{\nu}\mathbf{z}, \tag{143a}\] \[i\dot{\mathbf{z}} =(1-\sigma)(N+\tfrac{1}{2}I)\mathbf{z}+\lambda(\sqrt{2} \operatorname{Re}a)^{\nu}Q^{\nu}\mathbf{z}. \tag{143b}\]
We parametrize the initial value of \(a(\tau)\) as follows:
\[a(0)=a_{0}e^{i\phi_{1}},\quad a_{0}=\sqrt{\frac{\zeta_{1}}{1+\sigma}}, \tag{144}\]
where \(\zeta_{1}\) is a nonnegative real parameter. As for the quantum-quantum case above, we take oscillator 2 to be in a coherent state:
\[z_{m}(0)=\exp\left(-\frac{|\beta|^{2}}{2}\right)\frac{\beta^{m}}{\sqrt{m!}},\quad\beta=\sqrt{\frac{\zeta_{2}}{1-\sigma}}e^{i\phi_{2}}, \tag{145}\]
where \(\zeta_{2}\) is a nonnegative real parameter. Formulae for observables in this case are shown in Table 1. These are similar to those for the quantum-quantum calculation described above, but not identical. One notable difference is in the formulae for the energies:
\[E_{1}=\mathcal{H}_{1},\quad E_{2}=\langle\hat{H}_{2}\rangle,\quad E_{\rm int}= \lambda\mathcal{V}_{1}\langle\hat{V}_{2}\rangle. \tag{146}\]
As above, we have conservation of the total energy (142). From Table 1, we see that \(\zeta_{1}\) is the dimensionless initial energy of oscillator 1, and \(\zeta_{2}\) is the dimensionless initial energy of oscillator 2 above the ground state.
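Since (143) is a small system of coupled ODEs, it integrates readily with standard tools. A minimal Python sketch (our construction; all parameter values are illustrative):

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.integrate import solve_ivp
from scipy.special import gammaln

n_max, nu, sigma, lam = 35, 2, 0.0, 0.01       # illustrative parameters
zeta1, zeta2, phi1, phi2 = 12.0, 2.0, np.pi / 2, np.pi / 2
off = np.sqrt(np.arange(1, n_max + 1))
Q = (np.diag(off, 1) + np.diag(off, -1)) / np.sqrt(2.0)
Qnu = matrix_power(Q, nu)
Nh = np.diag(np.arange(n_max + 1) + 0.5)       # N + I/2

def rhs(tau, y):
    a, z = y[0], y[1:]
    q1 = np.sqrt(2.0) * a.real
    vz = Qnu @ z
    da = -1j * ((1 + sigma) * a
                + lam * nu / np.sqrt(2.0) * q1**(nu - 1) * np.vdot(z, vz).real)
    dz = -1j * ((1 - sigma) * (Nh @ z) + lam * q1**nu * vz)
    return np.concatenate(([da], dz))

a0 = np.sqrt(zeta1 / (1 + sigma)) * np.exp(1j * phi1)          # (144)
beta = np.sqrt(zeta2 / (1 - sigma)) * np.exp(1j * phi2)        # (145)
m = np.arange(n_max + 1)
z0 = np.exp(-abs(beta)**2 / 2) * np.exp(m * np.log(beta) - 0.5 * gammaln(m + 1.0))
sol = solve_ivp(rhs, (0.0, 6 * np.pi), np.concatenate(([a0], z0)),
                rtol=1e-9, atol=1e-11)
q2 = [np.vdot(sol.y[1:, k], Q @ sol.y[1:, k]).real             # Tr(zz†Q)
      for k in range(sol.y.shape[1])]
```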
#### iv.2.4 Classical-classical case
For the classical-classical case, the equations of motion in dimensionless form read
\[i\dot{a} =(1+\sigma)a+2^{\nu-1}\nu\lambda(\operatorname{Re}a)^{\nu-1}( \operatorname{Re}b)^{\nu}, \tag{147a}\] \[i\dot{b} =(1-\sigma)b+2^{\nu-1}\nu\lambda(\operatorname{Re}a)^{\nu}( \operatorname{Re}b)^{\nu-1}. \tag{147b}\]
We parameterize initial data as
\[a(0)=\sqrt{\frac{\zeta_{1}}{1+\sigma}}e^{i\phi_{1}},\quad b(0)=\sqrt{\frac{ \zeta_{2}}{1-\sigma}}e^{i\phi_{2}}. \tag{148}\]
In this case, the energies are simply
\[E_{1}=\mathcal{H}_{1},\quad E_{2}=\mathcal{H}_{2},\quad E_{\rm int}=\lambda \mathcal{V}_{1}\mathcal{V}_{2}, \tag{149}\]
and we obviously have total energy conservation (142). The initial values of the normalized energies are
\[E_{1}(0)=\bar{\Omega}\zeta_{1},\quad E_{2}(0)=\bar{\Omega}\zeta_{2}. \tag{150}\]
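For completeness, the CC system (147) reduces to two complex ODEs; a sketch in the same style (ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, lam, nu = 0.0, 0.01, 2                  # illustrative parameters
zeta1, zeta2, phi1, phi2 = 12.0, 2.0, np.pi / 2, np.pi / 2

def rhs(tau, y):
    a, b = y
    c = 2.0**(nu - 1) * nu * lam
    da = -1j * ((1 + sigma) * a + c * a.real**(nu - 1) * b.real**nu)
    db = -1j * ((1 - sigma) * b + c * a.real**nu * b.real**(nu - 1))
    return [da, db]

y0 = [np.sqrt(zeta1 / (1 + sigma)) * np.exp(1j * phi1),        # (148)
      np.sqrt(zeta2 / (1 - sigma)) * np.exp(1j * phi2)]
sol = solve_ivp(rhs, (0.0, 6 * np.pi), y0, rtol=1e-10, atol=1e-12)
E1 = (1 + sigma) * np.abs(sol.y[0])**2         # (149): E1 = Omega_1 a a*
E2 = (1 - sigma) * np.abs(sol.y[1])**2
```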
### Sample Simulations
In order to obtain approximate numeric solutions of the relevant equations of motion for the quantum-quantum (137) and classical-quantum (143) schemes, it is necessary to truncate the infinite dimensional matrices and vectors. We retain the first \((n_{\rm max}+1)\) entries of vectors and the upper-left \((n_{\rm max}+1)\times(n_{\rm max}+1)\) submatrix of all matrices. We would expect such an approximation to be valid if the statistical distribution of occupation numbers of each oscillator remains below \(n_{\rm max}\) during simulations. For example, in the quantum-quantum case this condition would be
\[\max\left[\langle n\rangle+\sqrt{\operatorname{Var}(n)},\langle m\rangle+\sqrt{\operatorname{Var}(m)}\right]<n_{\rm max}. \tag{151}\]
Heuristically, simulation results appear to be insensitive to the value of \(n_{\rm max}\) if the right-hand side of the above inequality is \(\gtrsim 1.5\) times larger than the left-hand side.
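Putting these pieces together, a compact Python sketch (ours) of the truncated QQ evolution (137), starting from the coherent product state (138) and monitoring the criterion (151):

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.integrate import solve_ivp
from scipy.special import gammaln

n_max, nu, sigma, lam = 35, 2, 0.0, 0.01
d = n_max + 1
off = np.sqrt(np.arange(1, d))
Q = (np.diag(off, 1) + np.diag(off, -1)) / np.sqrt(2.0)
Qnu = matrix_power(Q, nu)
Nh = np.diag(np.arange(d) + 0.5)
N = np.diag(np.arange(d, dtype=float))

# coherent product state (138) with zeta1 = 12, zeta2 = 2, phi1 = phi2 = pi/2
alpha, beta = np.sqrt(12.0) * 1j, np.sqrt(2.0) * 1j
n = np.arange(d)
ca = np.exp(n * np.log(alpha) - 0.5 * gammaln(n + 1.0))
cb = np.exp(n * np.log(beta) - 0.5 * gammaln(n + 1.0))
Z0 = np.exp(-(abs(alpha)**2 + abs(beta)**2) / 2) * np.outer(ca, cb)

def rhs(tau, y):
    Z = y.reshape(d, d)
    dZ = -1j * ((1 + sigma) * Nh @ Z + (1 - sigma) * Z @ Nh
                + lam * Qnu @ Z @ Qnu)
    return dZ.ravel()

sol = solve_ivp(rhs, (0.0, 6 * np.pi), Z0.ravel(), rtol=1e-9, atol=1e-11)
Zf = sol.y[:, -1].reshape(d, d)
rho1 = Zf @ Zf.conj().T                 # reduced density matrix of oscillator 1
n_avg = np.trace(rho1 @ N).real         # <n>, cf. Table 1
var_n = np.trace(rho1 @ N @ N).real - n_avg**2
print(n_avg + np.sqrt(var_n), "should be well below", n_max)   # (151)
```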
In figures 1 and 2 we show the results of some typical numeric simulations for the \(\nu=2\) case. In figure 1, we show simulation outputs for the trajectory of oscillator 2 and the partition of energy in the system as functions of time. In figure 2, we show the phase portraits of each oscillator for a number of different parameter combinations. The main qualitative conclusion to be drawn is that for small coupling, the results of the CC, CQ and QQ schemes match up reasonably well, but for higher coupling discrepancies become apparent.
| quantity | quantum-quantum | classical-quantum | classical-classical | initial value |
| --- | --- | --- | --- | --- |
| O1 position \(q_{1}\) | \(\operatorname{Tr}(ZZ^{\dagger}Q)\) | \(\sqrt{2}\operatorname{Re}a\) | \(\sqrt{2}\operatorname{Re}a\) | \(\sqrt{2\zeta_{1}/(1+\sigma)}\cos\phi_{1}\) |
| O2 position \(q_{2}\) | \(\operatorname{Tr}(Z^{\mathrm{T}}Z^{*}Q)\) | \(\operatorname{Tr}(\mathbf{z}\mathbf{z}^{\dagger}Q)\) | \(\sqrt{2}\operatorname{Re}b\) | \(\sqrt{2\zeta_{2}/(1-\sigma)}\cos\phi_{2}\) |
| O1 momentum \(p_{1}\) | \(\operatorname{Tr}(ZZ^{\dagger}P)\) | \(\sqrt{2}\operatorname{Im}a\) | \(\sqrt{2}\operatorname{Im}a\) | \(\sqrt{2\zeta_{1}/(1+\sigma)}\sin\phi_{1}\) |
| O2 momentum \(p_{2}\) | \(\operatorname{Tr}(Z^{\mathrm{T}}Z^{*}P)\) | \(\operatorname{Tr}(\mathbf{z}\mathbf{z}^{\dagger}P)\) | \(\sqrt{2}\operatorname{Im}b\) | \(\sqrt{2\zeta_{2}/(1-\sigma)}\sin\phi_{2}\) |
| O1 occupation number \(\langle n\rangle\) | \(\operatorname{Tr}(ZZ^{\dagger}N)\) | n/a | n/a | \(\zeta_{1}/(1+\sigma)\) |
| O2 occupation number \(\langle m\rangle\) | \(\operatorname{Tr}(Z^{\mathrm{T}}Z^{*}N)\) | \(\operatorname{Tr}(\mathbf{z}\mathbf{z}^{\dagger}N)\) | n/a | \(\zeta_{2}/(1-\sigma)\) |
| \(\operatorname{Var}(n)=\langle n^{2}\rangle-\langle n\rangle^{2}\) | \(\operatorname{Tr}(ZZ^{\dagger}N^{2})-\operatorname{Tr}^{2}(ZZ^{\dagger}N)\) | n/a | n/a | \(\zeta_{1}/(1+\sigma)\) |
| \(\operatorname{Var}(m)=\langle m^{2}\rangle-\langle m\rangle^{2}\) | \(\operatorname{Tr}(Z^{\mathrm{T}}Z^{*}N^{2})-\operatorname{Tr}^{2}(Z^{\mathrm{T}}Z^{*}N)\) | \(\operatorname{Tr}(\mathbf{z}\mathbf{z}^{\dagger}N^{2})-\operatorname{Tr}^{2}(\mathbf{z}\mathbf{z}^{\dagger}N)\) | n/a | \(\zeta_{2}/(1-\sigma)\) |
| O1 energy \(E_{1}/\bar{\Omega}\) | \((1+\sigma)[\operatorname{Tr}(ZZ^{\dagger}N)+\tfrac{1}{2}]\) | \((1+\sigma)aa^{*}\) | \((1+\sigma)aa^{*}\) | \(\zeta_{1}\) + zero-point energy |
| O2 energy \(E_{2}/\bar{\Omega}\) | \((1-\sigma)[\operatorname{Tr}(Z^{\mathrm{T}}Z^{*}N)+\tfrac{1}{2}]\) | \((1-\sigma)[\operatorname{Tr}(\mathbf{z}\mathbf{z}^{\dagger}N)+\tfrac{1}{2}]\) | \((1-\sigma)bb^{*}\) | \(\zeta_{2}\) + zero-point energy |
| interaction energy \(E_{\rm int}/\bar{\Omega}\) | \(\lambda\operatorname{Tr}(Z^{\dagger}Q^{\nu}ZQ^{\nu})\) | \(\lambda(\sqrt{2}\operatorname{Re}a)^{\nu}\operatorname{Tr}(\mathbf{z}\mathbf{z}^{\dagger}Q^{\nu})\) | \(2^{\nu}\lambda(\operatorname{Re}a)^{\nu}(\operatorname{Re}b)^{\nu}\) | — |
| O1 Wigner distribution | \(\operatorname{Tr}[ZZ^{\dagger}\mathcal{W}(q_{1},p_{1})]\) | n/a | n/a | — |
| O2 Wigner distribution | \(\operatorname{Tr}[Z^{\mathrm{T}}Z^{*}\mathcal{W}(q_{2},p_{2})]\) | \(\operatorname{Tr}[\mathbf{z}\mathbf{z}^{\dagger}\mathcal{W}(q_{2},p_{2})]\) | n/a | — |

Table 1: Formulae for various observables and other quantities in the quantum-quantum, classical-quantum, and classical-classical schemes for nonlinearly coupled oscillators. Here, “O1” stands for “oscillator 1” and “O2” stands for “oscillator 2”. Note that the “zero-point” energy for each oscillator is the energy of the ground state when the oscillator’s evolution is treated quantum mechanically, and zero when the oscillator dynamics are classical. The Wigner matrix \(\mathcal{W}(q,p)\) is defined in §V.7. Expressions for the initial values of the interaction energy and Wigner distributions are complicated and hence omitted from this table.
Figure 1: Typical numerical simulation results for the classical-classical (CC), classical-quantum (CQ) and quantum-quantum (QQ) schemes for \(\nu=2\); i.e., \(\mathcal{H}_{\rm int}=\lambda\bar{\Omega}q_{1}^{2}q_{2}^{2}\). Results on the left are for small coupling (\(\lambda=0.001\)), in the middle are for moderate coupling (\(\lambda=0.01\)) and on the right are for relatively high coupling (\(\lambda=0.1\)); all other parameters are the same for each panel. In each simulation, the initial energy of oscillator 1 is greater than that of oscillator 2 (\(\zeta_{1}>\zeta_{2}\)), but each oscillator has identical frequencies (\(\sigma=0\)). We see very good qualitative agreement between each of the calculation schemes for \(\lambda=0.001\), but the agreement is poor for \(\lambda=0.1\). Also, it is apparent that the error in the CC and CQ schemes increases with time.
Figure 2: Classical-classical (CC), classical-quantum (CQ) and quantum-quantum (QQ) phase portraits for oscillators 1 and 2 for a number of different parameter choices. We take \(\nu=2\) for all simulations. Each row shows results using the same parameters but a different simulation method, while each column shows results for the same calculation scheme, but with different parameters. There is better agreement between all schemes for smaller values of \(\lambda\). Interestingly, it is possible for the CC and CQ schemes to give visually similar results at higher \(\lambda\) that are very different from the QQ calculation, as in the last row.
As mentioned above, the simulation results shown in figures 1 and 2 are only trustworthy if the probability distribution for the occupation numbers of each oscillator lies well below the cutoff \(n_{\text{max}}\). In figure 3, we show the average occupation number and its statistical "standard error" for each oscillator for a number of different simulations. The standard error around the expectation value of the occupation number of oscillator 1 is the interval
\[\left[\langle n\rangle-\sqrt{\text{Var}(n)},\langle n\rangle+\sqrt{\text{Var}( n)}\right], \tag{152}\]
with a similar expression for oscillator 2. We can see in this figure that the uncertainty bands lie well below our selected cutoffs, so we can be confident in the simulation results.
### Quantifying error in the CC and CQ approximations
The results in figures 1 and 2 strongly suggest that the CC and CQ approximation will give results close to the QQ calculation if \(\lambda\) is small and if \(t\) is not too large. In this section, we attempt to quantify these observations by introducing definitions for the discrepancy between the different calculation methods as well as the relative error in the CC and CQ schemes.
Before defining the discrepancy and relative error, it is useful to revisit the scrambling time \(t_{\textsc{lin}}\) introduced in §IV.2. Recall that this was the time at which we would estimate that the linear entropy significantly deviates from zero for systems with infinite dimensional Hilbert spaces (such as the nonlinearly coupled oscillators of this section). Hence, we expect the product state approximation to break down at \(t=t_{\textsc{lin}}\), implying that the CC and CQ approximations may cease to be valid. Now, the scrambling time \(t_{\textsc{lin}}\) is a function of the average interaction energy \(\mathcal{E}_{\text{int}}\) and the degree of coherence of each subsystem \(\mathcal{N}_{1,2}\). Given that at time \(t=0\), each oscillator is prepared in a coherent state, we can use the results presented in Appendix D to explicitly write down formulae for \(\mathcal{E}_{\text{int}}\) and \(\mathcal{N}_{1,2}\) in terms of integrals of elementary functions. For example, if \(\nu=2\), we find that
\[\mathcal{E}_{\text{int}}(t)=\frac{\lambda\bar{\Omega}}{t}\int_{0} ^{t}\left[(q_{1}^{\prime 4}+3q_{1}^{\prime 2}+\tfrac{3}{4})(q_{2}^{\prime 4}+3q_{2}^ {\prime 2}+\tfrac{3}{4})\right]^{1/2}dt^{\prime},\] \[\mathcal{N}_{1,2}(t)=\frac{\int_{0}^{t}\left[(q_{1,2}^{\prime 2}+ \tfrac{1}{4})(q_{2,1}^{\prime 4}+3q_{2,1}^{\prime 2}+\tfrac{3}{4})\right]^{1/2}dt^{ \prime}}{\int_{0}^{t}\left[(q_{1}^{\prime 4}+3q_{1}^{\prime 2}+\tfrac{3}{4})(q_{2}^{ \prime 4}+3q_{2}^{\prime 2}+\tfrac{3}{4})\right]^{1/2}dt^{\prime}}, \tag{153}\]
where \(q_{1}^{\prime}=q_{1}(t^{\prime})\) and \(q_{2}^{\prime}=q_{2}(t^{\prime})\) are the classical solutions for \(q_{1}\) and \(q_{2}\) evaluated at time \(t^{\prime}\) and at zero coupling:
\[q_{1}^{\prime}=q_{1}(t^{\prime})=\sqrt{2}|\alpha|\cos(\Omega_{1} t^{\prime}-\phi_{1}), \tag{154a}\] \[q_{2}^{\prime}=q_{2}(t^{\prime})=\sqrt{2}|\beta|\cos(\Omega_{2} t^{\prime}-\phi_{2}). \tag{154b}\]
Here, \(\alpha\) and \(\beta\) are related to other parameters by equation (139). It does not appear to be possible to evaluate the integrals in (153) analytically, but we can easily calculate the scrambling time numerically by solving
\[1=t_{\textsc{LIN}}\mathcal{E}_{\text{int}}(t_{\textsc{LIN}})\min[\mathcal{N}_ {1,2}(t_{\textsc{LIN}})] \tag{155}\]
for \(t_{\textsc{lin}}\). We can derive a very crude estimate of \(t_{\textsc{lin}}\) for the general \(\nu\) case: assuming that \(|\alpha|\gg 1\) and \(|\beta|\gg 1\), we obtain
\[\mathcal{E}_{\text{int}}\sim\lambda\bar{\Omega}|\alpha|^{\nu}|\beta|^{\nu},\quad\mathcal{N}_{1}\sim|\alpha|^{-1},\quad\mathcal{N}_{2}\sim|\beta|^{-1}, \tag{156}\]
which leads to the following rough estimate for the scrambling time
\[\bar{\Omega}\,t_{\textsc{LIN}}=\tau_{\textsc{LIN}}\sim\frac{\max(|\alpha|,| \beta|)}{\lambda|\alpha|^{\nu}|\beta|^{\nu}}. \tag{157}\]
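A sketch (ours) of this numerical solution of (155) for \(\nu=2\), built from (153) and (154); the root bracket is an assumption that works for the illustrative parameters shown:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

lam = 0.01
O1, O2 = 1.0, 1.0                        # sigma = 0, Omega-bar = 1 units
alpha, beta = np.sqrt(12.0), np.sqrt(2.0)
phi1 = phi2 = np.pi / 2

q1 = lambda t: np.sqrt(2.0) * alpha * np.cos(O1 * t - phi1)   # (154a)
q2 = lambda t: np.sqrt(2.0) * beta * np.cos(O2 * t - phi2)    # (154b)
f = lambda q: q**4 + 3 * q**2 + 0.75     # combination appearing in (153)

def E_int(t):
    val, _ = quad(lambda s: np.sqrt(f(q1(s)) * f(q2(s))), 0.0, t, limit=400)
    return lam * val / t

def Ncoh(t, qa, qb):
    num, _ = quad(lambda s: np.sqrt((qa(s)**2 + 0.25) * f(qb(s))), 0.0, t,
                  limit=400)
    den, _ = quad(lambda s: np.sqrt(f(qa(s)) * f(qb(s))), 0.0, t, limit=400)
    return num / den

g = lambda t: t * E_int(t) * min(Ncoh(t, q1, q2), Ncoh(t, q2, q1)) - 1.0
t_lin = brentq(g, 1e-2, 1e3)             # assumes a sign change in the bracket
print(t_lin)
```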
In figure 4, we plot numeric solutions in the CC, CQ and QQ schemes over time intervals \(\tau\in[0,2\tau_{\textsc{LIN}}]\), where we calculate \(\tau_{\textsc{LIN}}\) numerically from (155). We also plot the "discrepancy" between each pair of calculation methods, which is defined as the Euclidean distance in phase space between the expectation value of the system's trajectory in each scheme. More specifically, let us define phase space position vectors for each scheme:
\[\mathbf{X}_{\textsc{cc}}=[q_{1}^{\textsc{cc}},p_{1}^{\textsc{cc}},q_{2}^{\textsc{cc}},p_{2}^{\textsc{cc}}]^{\mathrm{T}}, \tag{158a}\]
\[\mathbf{X}_{\textsc{cq}}=[q_{1}^{\textsc{cq}},p_{1}^{\textsc{cq}},\langle q_{2}^{\textsc{cq}}\rangle,\langle p_{2}^{\textsc{cq}}\rangle]^{\mathrm{T}}, \tag{158b}\]
\[\mathbf{X}_{\textsc{qq}}=[\langle q_{1}^{\textsc{qq}}\rangle,\langle p_{1}^{\textsc{qq}}\rangle,\langle q_{2}^{\textsc{qq}}\rangle,\langle p_{2}^{\textsc{qq}}\rangle]^{\mathrm{T}}. \tag{158c}\]
Then, the discrepancy between each pair of schemes at time \(\tau\) is
CC vs CQ discrepancy \[= \|\mathbf{X}_{\textsc{CC}}-\mathbf{X}_{\textsc{CQ}}\|_{\text{F}},\] (159a) CC vs QQ discrepancy \[= \|\mathbf{X}_{\textsc{CC}}-\mathbf{X}_{\textsc{QQ}}\|_{\text{F}},\] (159b) CQ vs QQ discrepancy \[= \|\mathbf{X}_{\textsc{CQ}}-\mathbf{X}_{\textsc{QQ}}\|_{\text{F}};\] (159c)
where the Frobenius norm reduces to the usual vector norm in \(\mathbb{R}^{4}\). Each of these discrepancy metrics appears to grow with a characteristic timescale set by the scrambling time, which is intuitively expected. We see that the discrepancy between the CQ and QQ schemes is generally lowest, implying that the CQ approximation is a good match to the QQ results over the timescales simulated. We also show the behaviour of the linear and von Neumann entropies as a function of time in figure 4. Again, it appears that the scrambling time is the appropriate timescale for the growth of entanglement entropy, with \(S_{\textsc{lin}}\) of order 0.1 at \(\tau=\tau_{\textsc{lin}}\) (see footnote 4).
Footnote 4: Interestingly, these simulations suggest that \(S_{\textsc{vn}}\sim 2S_{\textsc{lin}}\) over the interval \(\tau\in[0,2\tau_{\textsc{lin}}]\), which might have been guessed from equations (87), (93), and (94).
For the simulations shown in figure 4, we can plot the discrepancy between the CQ and QQ approximations
versus the linear entropy; this is shown in figure 5. This plot matches the general expectation that as the entanglement entropy of the system increases, the agreement between the CQ and QQ approximation degrades.
In addition to the condition that \(\tau\ll\tau_{\textsc{lin}}\), in order for the classical-quantum approximation to be valid when \(\lambda\ll 1\) we also require that \(\varepsilon^{(0)}\) is "small". Making use of the definition (123) and the formulae of Appendix D, we can again write down explicit formulae for \(\varepsilon^{(0)}\). For example, for \(\nu=2\) we get
\[\varepsilon^{(0)}(t)=\frac{1}{4|\alpha|^{2}\cos^{2}(\Omega_{1}t-\phi_{1})+1}. \tag{160}\]
We see that in the \(|\alpha|\to\infty\) limit, this function goes to zero except for the discrete times satisfying \(\Omega_{1}t-\phi_{1}=\pm\pi/2,\pm 3\pi/2\ldots\) From this, it seems intuitively reasonable that the CQ approximation will become better and better (in some relative sense) as \(\zeta_{1}\propto|\alpha|^{2}\) becomes larger and larger. In order to test this intuition, we can define the relative error in the CC and CQ schemes over a given time interval \(\tau\in[0,T]\) as the \(L_{2}\) norms of their discrepancy with the QQ results normalized by the \(L_{2}\) norm of the QQ phase space trajectory. Explicitly, if
\[\langle\!\langle\mathbf{X}\rangle\!\rangle_{T}=\int_{0}^{T}d\tau\,\|\mathbf{X }\|_{\mathbb{F}}, \tag{161}\]
then
\[e_{T}^{\textsc{cc}}=\frac{\langle\!\langle\mathbf{X}_{\textsc{cc}}-\mathbf{X}_{\textsc{qq}}\rangle\!\rangle_{T}}{\langle\!\langle\mathbf{X}_{\textsc{qq}}\rangle\!\rangle_{T}}, \tag{162a}\]
\[e_{T}^{\textsc{cq}}=\frac{\langle\!\langle\mathbf{X}_{\textsc{cq}}-\mathbf{X}_{\textsc{qq}}\rangle\!\rangle_{T}}{\langle\!\langle\mathbf{X}_{\textsc{qq}}\rangle\!\rangle_{T}}, \tag{162b}\]
respectively. In figure 6, we show the relative size of these errors in the \((\zeta_{1},\zeta_{2})\) plane for a particular choice of parameters. In this plot, it can be clearly seen that the CQ approximation performs much better than the CC approximation when \(\zeta_{1}>\zeta_{2}\), which supports our intuition that the CQ approximation should be appropriate when oscillator 1 has high energy.
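The metrics (161)-(162) are straightforward to evaluate on sampled trajectories; a sketch (ours, with dummy data standing in for actual CC, CQ and QQ output):

```python
import numpy as np

def l2_time_norm(X, tau):
    """<<X>>_T of (161): time integral of the Euclidean norm of X(tau)."""
    return np.trapz(np.linalg.norm(X, axis=0), tau)

def relative_errors(X_cc, X_cq, X_qq, tau):
    base = l2_time_norm(X_qq, tau)
    return (l2_time_norm(X_cc - X_qq, tau) / base,    # e_T^CC of (162a)
            l2_time_norm(X_cq - X_qq, tau) / base)    # e_T^CQ of (162b)

# Dummy 4 x K trajectories; in practice these come from the CC, CQ, and QQ
# integrations described above.
tau = np.linspace(0.0, 6 * np.pi, 2000)
X_qq = np.vstack([np.cos(tau), -np.sin(tau),
                  0.5 * np.cos(tau), -0.5 * np.sin(tau)])
rng = np.random.default_rng(0)
X_cc = X_qq + 1e-3 * rng.standard_normal(X_qq.shape)
X_cq = X_qq + 5e-4 * rng.standard_normal(X_qq.shape)
print(relative_errors(X_cc, X_cq, X_qq, tau))
```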
We have calculated the errors in the CC and CQ schemes over a rectangular region of the \((\lambda,\sigma,\zeta_{1},\zeta_{2})\) parameter space for the \(\nu=2\) coupling. Some of the results are shown in figure 7. From this figure and results not shown, we make the following qualitative observations:
* At sufficiently small coupling, the errors in both the CC and CQ scheme are proportional to \(\lambda\).
* For most parameters, the CQ scheme results in smaller errors than the CC scheme. Exceptions occur when the frequency asymmetry \(\sigma\) approaches 1 or when the initial energy in oscillator 2 is large compared to the initial energy in oscillator 1.
* The error in the CQ scheme tends to increase with increasing \(\sigma\), while the error in the CC scheme is fairly insensitive to the frequency asymmetry.
* The accuracy of the CQ scheme improves when the initial energy of oscillator 2 is less than the initial energy in oscillator 1.
* The accuracy of the CC scheme improves when the initial energies in each oscillator become large.
Figure 3: Expectation value of occupation numbers of oscillators 1 and 2 in the quantum-quantum calculation for various choices of parameters. The shaded regions indicate the quantum uncertainty in the expected values as quantified by the standard error (i.e. the square root of the variances of \(n\) and \(m\), respectively). In each case the top of the uncertainty bands lies well below our selected value of \(n_{\text{max}}\), indicating that our finite truncation of each oscillator basis should be a valid approximation.
### Parametric resonance
Parametric resonance is an important classical phenomenon that this system can exhibit when \(\nu=2\). This effect allows for the efficient exchange of energy between the oscillators at small coupling. To see how this comes about, consider the classical equations of motion (147) under the assumption that the amplitude of one of the oscillators (say, oscillator 2) is very small:
\[\lambda|b|\ll 1. \tag{163}\]
Under this assumption, the equation of motion for the other oscillator is easily solvable:
\[i\dot{a}\approx(1+\sigma)a\quad\Rightarrow\quad a=|\alpha|e^{i[\phi_{1}-(1+\sigma)\tau]}. \tag{164}\]
If we substitute this into the equation of motion for the other oscillator, we can derive a second order equation for \(q_{2}=\sqrt{2}\operatorname{Re}b\) under the assumption that
\[\lambda|\alpha|^{2}\ll 1-\sigma. \tag{165}\]
We find that
\[\frac{d^{2}q_{2}}{dT^{2}}+(1+h\cos\omega T)q_{2}=0, \tag{166}\]
Figure 4: Numeric solutions in the CC, CQ and QQ schemes over time intervals \(\tau\in[0,2\tau_{\text{LIN}}]\) with \(\nu=2\). We expect the product state approximation to break down around \(\tau=\tau_{\text{LIN}}\). All plots within a given column have the same \(\lambda\), and \(\lambda\) increases from left to right. We see in the top row that visual discrepancies between the various schemes are only visible for \(\tau\gtrsim\tau_{\text{LIN}}\). In the middle row, we show the discrepancy between each set of simulation results as defined by the Euclidean distance in phase space as a function of time. The discrepancy between the CQ and QQ schemes is always lower than the CC vs CQ and CC vs QQ discrepancies, indicating that the CQ scheme is the better approximation over these timescales. Finally, in the third row, we show the behaviour of the von Neumann and linear entropies in the QQ simulations. As expected, we have \(S_{\text{VN}}>S_{\text{LIN}}\), but otherwise the two entropy curves are remarkably similar.
with
\[T=(1-\sigma)\tau,\quad\omega=2\left(\frac{1+\sigma}{1-\sigma}\right),\quad h= \frac{2|\alpha|^{2}\lambda}{1-\sigma}. \tag{167}\]
Equation (166) is the Mathieu equation, and the properties of its solutions are well-known. In particular, when \(h\ll 1\), we have exponentially growing solutions whenever \(\omega\sim 2/k\) with \(k=1,2,3\ldots\). The \(k=1\) case is the strongest resonance with the fastest rate of exponential growth. In that case, exponentially growing solutions exist when
\[|\omega^{2}-4|<2h\quad\Rightarrow\quad 4|\sigma|<|\alpha|^{2}\lambda, \tag{168}\]
assuming that \(|\sigma|\ll 1\). This assumption also implies that the growing solutions for \(q_{2}\) are of the form
\[q_{2}=\text{const}\times e^{|\alpha|^{2}\lambda\tau}\cos(\tau-\phi), \tag{169}\]
where \(\phi\) is an arbitrary phase. We can read off the timescale for the resonant growth of \(q_{2}\):
\[\tau_{\text{\tiny RES}}=\frac{1}{|\alpha|^{2}\lambda}. \tag{170}\]
Comparing this to our estimate for the scrambling timescale (157) when \(\nu=2\) and recalling \(|b|\) is supposed to be small such that \(\max(|\alpha|,|\beta|)=|\alpha|\), we see that
\[\frac{\tau_{\text{\tiny LIN}}}{\tau_{\text{\tiny RES}}}\sim\frac{|\alpha|}{| \beta|^{2}}\sim\frac{\zeta_{1}^{1/2}}{\zeta_{2}}. \tag{171}\]
In order for parametric resonance to occur, we need the scrambling time to be at least of the same magnitude as the resonance time. Furthermore, we would expect "more" parametric resonance to occur for larger values of the ratio \(\tau_{\textsc{lin}}/\tau_{\textsc{res}}\). To illustrate what this means, we show several long time numeric simulations for small coupling and \(\sigma=0\) in figure 8. In each simulation, the classical solution consists of a sinusoidal wave with a periodically modulated amplitude. We see that when the initial energy \(\zeta_{1}\) of oscillator 1 is increased, the QQ simulation matches the CC results for a greater number of cycles of amplitude modulation.
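As a quick numerical sanity check (ours), one can integrate the Mathieu equation (166) directly near the \(k=1\) resonance and fit the exponential envelope; the fit is rough because \(\log|q_{2}|\) dips at the zeros of the oscillation:

```python
import numpy as np
from scipy.integrate import solve_ivp

h, omega = 0.05, 2.0                     # small h, k = 1 resonance
rhs = lambda T, y: [y[1], -(1.0 + h * np.cos(omega * T)) * y[0]]
sol = solve_ivp(rhs, (0.0, 400.0), [1e-3, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-13)
T = np.linspace(10.0, 400.0, 2000)
q2 = sol.sol(T)[0]
# Fitted slope of log|q2| approximates the exponential growth rate of the
# envelope; compare with the rate implied by (169)-(170).
rate = np.polyfit(T, np.log(np.abs(q2) + 1e-12), 1)[0]
print(rate)
```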
Figure 5: Discrepancy between CQ and QQ simulation results versus linear entanglement entropy for the simulations shown in figure 4. The general trend that higher entropy is associated with higher discrepancies is readily apparent, but the correlation is much tighter for smaller coupling.
Figure 6: Bubble plots comparing the relative errors in the CC and CQ approximations for \(\nu=2\) simulations over the time interval \(\tau\in[0,6\pi]\) as a function of the oscillator initial energies \((\zeta_{1},\zeta_{2})\). All results assume \(\nu=2\) and use \(n_{\text{max}}=35\). The area of each bubble is proportional to the quantity it represents. In the top panel we see that the relative error in the CC approximation is bigger than that of the CQ approximation for all parameters simulated. In the bottom panel, we see that the accuracy advantage in the CQ approximation is largest when \(\zeta_{1}\) is big and \(\zeta_{2}\) is small; i.e., when the initial energy of oscillator 1 is high and that of oscillator 2 is low.
### Wigner quasiprobability distributions
The Wigner quasiprobability distribution is a useful quantity for visualizing the quantum uncertainty in the phase space trajectory of systems like a simple harmonic oscillator. In this section, we consider the behaviour of the Wigner distribution of oscillator 2 in both the classical-quantum and quantum-quantum calculations.
For a single harmonic oscillator in one-dimension (or any particle moving in a one-dimensional potential), the Wigner distribution is defined as
\[W(q,p;\tau)=\frac{1}{\pi}\int_{-\infty}^{\infty}dy\langle q-y|\hat{\rho}(\tau) |q+y\rangle e^{-2ipy}, \tag{172}\]
where \(\hat{\rho}(\tau)\) is the density matrix operator and \(|q\pm y\rangle\) are position eigenstates. Making use of the completeness of the energy eigenfunction basis,
\[1=\sum_{n}|n\rangle\langle n|, \tag{173}\]
we get
\[W(q,p;\tau)=\mathrm{Tr}\left[\rho(\tau)\mathcal{W}(q,p)\right], \tag{174}\]
where
\[\mathcal{W}_{nn^{\prime}}(q,p)=\frac{1}{\pi}\int_{-\infty}^{ \infty}dy\,\psi_{n^{\prime}}(q-y)\psi_{n}^{*}(q+y)e^{-2ipy},\] \[[\rho(\tau)]_{n^{\prime}n}=\langle n^{\prime}|\hat{\rho}(\tau)|n \rangle,\quad\psi_{n}(q)=\langle q|n\rangle. \tag{175}\]
The functions \(\psi_{n}(q)\) are merely the ordinary energy eigenfunctions of the simple harmonic oscillator:
\[\psi_{n}(q)=\frac{1}{\pi^{1/4}\sqrt{2^{n}n!}}e^{-q^{2}/2}H_{n}(q), \tag{176}\]
where \(H_{n}(q)\) are the Hermite polynomials. Fortunately, it is straightforward to analytically calculate the entries of the \(\mathcal{W}\) matrix using symbolic algebra. Once we have \(\mathcal{W}\), the Wigner distribution for the CQ and QQ schemes is readily obtained from (174) after substituting in the correct density matrix (as shown in Table 1). As elsewhere in this paper, we need to work in a finite mode truncation to get numerical values for the Wigner function. Due to limitations in available computational resources, the results in this subsection are obtained using \(n,n^{\prime}\lesssim 15\).
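A direct-quadrature sketch (ours; the cutoff \(y_{\max}\) is an assumption adequate for the small \(n,n^{\prime}\) used here) of the matrix elements (175)-(176):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, gammaln

def psi(n, q):
    """Oscillator eigenfunction (176), with the norm computed in log form."""
    logA = -0.25 * np.log(np.pi) - 0.5 * (n * np.log(2.0) + gammaln(n + 1.0))
    return np.exp(logA - q**2 / 2) * eval_hermite(n, q)

def wigner_element(n, n_prime, q, p, y_max=10.0):
    """Matrix element W_{n n'}(q, p) of (175) by direct quadrature."""
    re, _ = quad(lambda y: psi(n_prime, q - y) * psi(n, q + y)
                 * np.cos(2 * p * y), -y_max, y_max, limit=200)
    im, _ = quad(lambda y: -psi(n_prime, q - y) * psi(n, q + y)
                 * np.sin(2 * p * y), -y_max, y_max, limit=200)
    return (re + 1j * im) / np.pi

print(wigner_element(0, 0, 0.0, 0.0))   # ground state: 1/pi at the origin
```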
We show the time evolution of the oscillator 2 CQ and QQ Wigner distributions for a typical simulation in figure 9. We see that in the CQ scheme the phase space profile of oscillator 2 is almost constant over the time interval \([0,8\tau_{\textsc{lin}}]\). That is, the initial compact distribution associated with the initial coherent state is mostly preserved under time evolution with minor deformation. The situation is much different for the QQ scheme: one can see appreciable blurring of the initial profile at \(\tau=2\tau_{\textsc{lin}}\), and by \(\tau=8\tau_{\textsc{lin}}\) there has been a significant degree of decoherence.
It is interesting to observe that for many of the simulations we have presented up to this point, the CC and CQ results tend to agree with one another much more than they agree with the QQ calculation for times greater than \(\tau_{\text{\tiny{LIN}}}\). This is very striking in figure 8, for example. A plausible reason for this is suggested by the Wigner distributions in figure 9: since the phase space profile of the initial coherent state is preserved for longer in the CQ calculation and coherent states are known to closely mimic classical dynamics, it makes intuitive sense that the CQ and CC match over longer timescales.
In addition to figure 9, we have prepared four video clips showing the evolution of the Wigner distributions, entanglement entropy, and density matrices for both oscillators for several different choices of parameters; these are viewable online [27]. We have also superimposed the classical phase space position of each oscillator on the Wigner distributions (as calculated from the CC formalism) for comparison. For these simulations, we have assumed that subsystem 2 has very low initial energy and that each oscillator has the same frequency.
Figure 7: Errors in the CC and CQ schemes over the time interval \(\tau\in[0,6\pi]\) as a function of parameters. All results assume \(\nu=2\) and use \(n_{\text{max}}=35\).
Video 1: This video shows the moderate coupling case (\(\lambda=0.01\)) over the time interval \([0,6\,\tau_{\mbox{\tiny LIN}}]\). One can see that the phase space profiles of each oscillator start to become deformed just after one scrambling time. Interestingly, we can see that the reduced density matrix of oscillator 1 starts to become more diagonally dominant at late times, as might be expected in a system that is undergoing decoherence due to interactions with an environment. However, the reduced density matrix of subsystem 2 does not appear to be approaching a diagonal matrix at late times. We also remark that the late time Wigner profiles in the QQ calculation exhibit a much richer structure than the CQ case.
Video 2: This video has the same parameters as video 1, but it is over a shorter time interval \([0,2\,\tau_{\mbox{\tiny LIN}}]\). In this video, we see that the CQ and QQ Wigner distributions of oscillator 2 show good qualitative agreement for early times \(\tau\lesssim\tau_{\mbox{\tiny LIN}}\), but start to deviate from each other after. Throughout these simulations, the evolution of each Wigner distribution tracks the classical orbits reasonably well.
Video 3: This video assumes large coupling (\(\lambda=1\)) and a relatively long timescale \([0,18\,\tau_{\textsc{lin}}]\). In this scenario, the Wigner functions are significantly deformed before either oscillator undergoes a single complete period. Also, there is little qualitative agreement between the CQ and QQ results for oscillator 2 for all \(\tau>0\). The CQ-CC discrepancy is visibly larger than the QQ-CC discrepancy for \(\tau\gtrsim\tau_{\textsc{lin}}\).
Video 4: This video assumes small coupling (\(\lambda=0.001\)) and evolution up to the scrambling time. One can see that in this case, each oscillator undergoes many coherent oscillations before the Wigner functions begin to lose their initial shape. For all times, the centers of the Wigner distributions coincide with the classical orbits even as they spread out in phase space.
Figure 8: Simulations illustrating parametric resonance. In each case, we have selected parameters such that the classical solution exhibits parametric resonance; i.e. the efficient periodic transfer of energy between the oscillators. The parameters for each column are the same except for the initial energy \(\zeta_{1}\) of oscillator 1 and the occupation number cutoff \(n_{\max}\). We see that as \(\zeta_{1}\) is increased, the QQ results match the classical prediction for a greater number of resonant cycles. This is consistent with our expectation that the ratio of the scrambling to resonant timescales satisfies \(\tau_{\textsc{lin}}/\tau_{\textsc{res}}\sim\sqrt{\zeta_{1}}/\zeta_{2}\).
### Long time evolution of entanglement entropy
In figure 10, we show the von Neumann entropy as a function of time from a number of quantum-quantum simulations with different choices of coupling power \(\nu\). These plots show the system's behaviour over timescales much longer than considered in previous sections. After an initial transient period, we see that the von Neumann entropy for all simulations seems to oscillate erratically about an average value. In figure 11, we plot the long-term average value of the von Neumann entropy versus the total energy of the oscillators \(E_{\text{tot}}\). We observe a rough scaling relationship:
\[S_{\text{\tiny{VN}}}\sim\frac{2}{3}\ln\left(\frac{E_{\text{tot}}}{\Omega} \right). \tag{177}\]
This does seem to hold for several different choices of \(\nu\). However, at higher energies it appears to become less and less accurate. It would be very interesting to probe this relation for different examples of bipartite systems as well as for significantly higher energies. We defer such an investigation to future work.
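For reference, both entropies follow directly from the singular values of the coefficient matrix; a minimal sketch (ours):

```python
import numpy as np

def entanglement_entropies(Z):
    """Von Neumann and linear entropies from the Schmidt decomposition of Z."""
    chi2 = np.linalg.svd(Z, compute_uv=False)**2
    p = chi2 / chi2.sum()                 # Schmidt probabilities
    p = p[p > 1e-15]                      # drop numerical zeros before the log
    return -np.sum(p * np.log(p)), 1.0 - np.sum(p**2)

# Example: an unentangled outer product gives zero for both entropies.
u1, u2 = np.eye(4)[:, 0], np.eye(5)[:, 0]
print(entanglement_entropies(np.outer(u1, u2)))   # (0.0, 0.0)
```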
## VI Conclusions and discussion
For a broad class of bipartite systems, we presented derivations of the product state and classical-quantum approximations along with their ranges of validity. We did this by providing a weak coupling analysis of the dynamics of the fully quantum system starting from an initial product state. This demonstrated that the product state approximation remains valid for times less than the "scrambling time," which we defined and calculated explicitly. We also showed that the discrepancy between observables in the CQ and QQ cases was directly related to the degree of classicality exhibited by subsystem 1.
We demonstrated this general framework by numerically studying the system of two coupled oscillators interacting via monomial potentials. This example included an application to energy transfer between the two oscillators exhibiting parametric resonance at the fully quantum level, a result in agreement with classical parametric resonance with a suitable choice of initial data and parameters. In the CQ case, this calculation bears some resemblance to the phenomenon of particle creation in dynamical spacetimes where subsystem 1 plays the role of a classical geometry driving creation of quanta in subsystem 2.
Figure 9: Comparison of typical classical-quantum and quantum-quantum Wigner quasi-probability distributions for oscillator 2. Simulation parameters are \(n_{\text{max}}=25\), \(\nu=2\), \(\sigma=0.2\), \(\lambda=0.01\), \(\zeta_{1}=12\), and \(\zeta_{2}=2\).
Overall, this work provides a deeper understanding of the emergence of a hybrid intermediate regime where a quantum and classical system interact with full nonperturbative "backreaction," together with its range of validity, namely weak coupling and sufficiently short evolution time before increasing entanglement in the QQ system leads to deviation from the CQ dynamics.
Our approach to the CQ dynamics relies crucially on the choice of product state initial data consistent with semiclassical dynamics of subsystem 1 as well as weak coupling between the two systems. In contrast, the Born-Oppenheimer approximation assumes a large "mass" parameter in the Hamiltonian that separates the bipartite system into "heavy" and "light" subsystems, a premise that is _prima facie_ distinct, and thus qualitatively different, from the hypotheses underlying the calculations presented above.

Figure 10: Entanglement entropy for long time QQ simulations of the system with coherent state initial data for a variety of couplings. In each case, we assume identical oscillators (\(\sigma=0\)) and equal initial energies for each oscillator (\(\zeta_{1}=\zeta_{2}\)) such that the total energy of the system is \(E_{\rm tot}=2\zeta_{1}+1=2\zeta_{2}+1\). The top row has coupling power \(\nu=2\) (quadratic coupling), the middle row has \(\nu=4\) (quartic coupling), while the bottom row has \(\nu=6\).

Figure 11: Time averaged entanglement entropy versus energy of the simulations shown in figure 10.
Lastly, the CQ system as formulated here may be viewed as providing a "continuous" monitoring of the quantum system by a classical system [28], at least within the time frame that the first system's description by a semiclassical state holds and the mutual entanglement is negligible. Beyond this time, as entanglement increases and the state of the first system deviates from semiclassicality, the notion of classical monitoring of a quantum system ceases to be true. Because the degree to which the classical system's trajectory is influenced by the quantum system's behaviour and the scrambling time are positively and negatively correlated with the interaction strength, respectively, it is not clear to what extent the classical subsystem is "measuring" the quantum subsystem. That is, we expect that by the time the classical dynamics have been appreciably altered by the quantum dynamics, the entanglement in the system would have grown to the stage where the CQ approximation is no longer valid.
Our analysis is potentially useful for "semiclassical approximations" postulated for gravity-matter systems where gravity is classical and matter is quantum. Such CQ models have been studied in the context of homogeneous isotropic cosmology with a scalar field [29; 30]. However, without a quantum theory of gravity, there is no full QQ system beyond simple symmetry reduced models, such as homogeneous cosmology. Nevertheless, our results indicate that there are regimes and states where the CQ system provides a good approximation to QQ dynamics. Thus, application to gravity-matter systems might provide a way to access at least a small sector of the dynamics of quantum gravity beyond the simplest models.
To derive a CQ approximation from quantum gravity, a possible starting point is the Wheeler-DeWitt equation (WDE). In its full generality, a derivation of a CQ approximation from the WDE would be technically difficult, but it may be more accessible in homogeneous cosmological settings where the quantum constraint corresponding to spatial diffeomorphisms is absent and the Hamiltonian constraint is simpler in algebraic form. The main difference from the class of Hamiltonians considered in this paper is the nature of the coupling between matter and gravity--there is matter momentum-metric coupling in addition to matter configuration-metric coupling; a related issue is whether it is even possible to have a solution of the Hamiltonian constraint that is a product state. One approach to address the latter problem is to fix a time gauge (c.f. Husain and Pawlowski [31]) and work with the corresponding physical Hamiltonian. This has been done in the fully quantum setting for homogeneous cosmology coupled to dust and a scalar field in the dust time gauge [32]; in this setting it was demonstrated that a variety of initial states, product and entangled, lead to entropy saturation at late times. This is a setting where a derivation of the CQ approximation from the QQ system appears to be possible under the conditions we have discussed.
Our work does not address the question of "emergence" of classicality from quantum theory. That is, we do not expect nearly classical dynamics to be achieved for arbitrary choices of initial data. The notion of a quantum system naturally behaving "classically" at late times possibly requires decoherence, whereby one subsystem may be viewed as an "environment" provided it satisfies the condition that it is approximately nondynamical while the other subsystem's density matrix becomes diagonal dynamically [33]. An environment by its nature should not be a system with one degree of freedom. Thus, the type of bipartite system we discuss is not large enough to address emergence. Whether a QFT coupled to a small quantum system with a few degrees of freedom can induce decoherence of the latter through the type of approximation we deploy is a potential topic for further study.
Another area of further study is the application of the CQ approximation as described here to the gravity-scalar field system in spherical symmetry. This model is well studied classically [34] but remains to be fully studied quantum mechanically [35]. In the QC setting with classical gravity and quantum scalar field, this model could yield interesting insights into semiclassical black hole evolution.
Finally, our derivation of the classical-quantum approximation from the full quantum theory, together with its regime of validity, provides in principle the possibility of an experimental probe and comparison with postulated stochastic models of classical-quantum interaction (see, e.g., [36] for a recent study). Our work may also be compared to the effective approach where the equations of motion of a truncated set of expectation values of products of phase space variables are used to approximate quantum evolution [37; 38].
## Appendix A Frobenius inner product
For two \(n\times m\) complex matrices \(A\) and \(B\), we define the Frobenius inner product as
\[\langle A,B\rangle_{\rm F}=\mathrm{Tr}\big{(}AB^{\dagger}\big{)}. \tag{104}\]
The Frobenius norm of an \(n\times m\) matrix is
\[\|A\|_{\rm F}=\sqrt{\langle A,A\rangle_{\rm F}}=\sqrt{\mathrm{Tr}(AA^{ \dagger})}. \tag{105}\]
Useful properties include
\[0 \leq\|A\|_{\mathrm{F}}, \tag{11a}\] \[\|A\|_{\mathrm{F}} =\|A^{\dagger}\|_{\mathrm{F}},\] (11b) \[\|AC\|_{\mathrm{F}} \leq\|A\|_{\mathrm{F}}\|C\|_{\mathrm{F}},\] (11c) \[|\langle A,B\rangle_{\mathrm{F}}| \leq\|A\|_{\mathrm{F}}\|B\|_{\mathrm{F}},\] (11d) \[\|UMU^{\dagger}\|_{\mathrm{F}} =\|M\|_{\mathrm{F}},\] (11e) \[\|A+B\|_{\mathrm{F}} \leq\|A\|_{\mathrm{F}}+\|B\|_{\mathrm{F}}, \tag{11f}\]
where \(C\) is an \(m\times k\) complex matrix, \(M\) is an \(n\times n\) complex matrix, and \(U\) is an \(n\times n\) unitary matrix.
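As a quick numerical sanity check of the properties above (our own illustration, not part of the original derivations), one can verify them on random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def fro_inner(A, B):
    # <A, B>_F = Tr(A B^dagger)
    return np.trace(A @ B.conj().T)

def fro_norm(A):
    return np.sqrt(fro_inner(A, A).real)

n, m, k = 4, 3, 5
A = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
B = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
C = rng.normal(size=(m, k)) + 1j * rng.normal(size=(m, k))
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

assert fro_norm(A @ C) <= fro_norm(A) * fro_norm(C) + 1e-12       # submultiplicativity
assert abs(fro_inner(A, B)) <= fro_norm(A) * fro_norm(B) + 1e-12  # Cauchy-Schwarz
assert np.isclose(fro_norm(U @ M @ U.conj().T), fro_norm(M))      # unitary invariance
assert fro_norm(A + B) <= fro_norm(A) + fro_norm(B) + 1e-12       # triangle inequality
```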
## Appendix B Classical-quantum approximation for one-dimensional potential problems
### Arbitrary potentials
In this appendix, we demonstrate explicitly how the classical-quantum approximation can emerge from the product state approximation in the scenario when the Hamiltonian of subsystem 1 takes the form of a particle moving in a one-dimensional potential. In particular, we assume that
\[\hat{H}_{1}=\frac{\hat{p}_{1}^{2}}{2m}+\mathcal{U}(\hat{q}_{1}),\quad\hat{V}_ {1}=\mathcal{V}_{1}(\hat{q}_{1}), \tag{12}\]
where, as usual, \([\hat{q}_{1},\hat{p}_{1}]=i\) and the main \(\mathcal{U}\) and interaction \(\mathcal{V}_{1}\) potential functions are real. In the product state approximation described in §III.1, the quantum state \(|\varphi_{1}\rangle\) of subsystem 1 satisfies a Schrödinger equation, \(i\,\partial_{t}|\varphi_{1}\rangle=\hat{H}_{1}^{\text{\tiny eff}}|\varphi_{1}\rangle\), which can be expressed as a wave equation for the wavefunction \(\varphi_{1}(t,q_{1})\):
\[i\frac{\partial\varphi_{1}}{\partial t}=-\frac{1}{2m}\frac{\partial^{2}\varphi _{1}}{\partial q_{1}^{2}}+[\mathcal{U}(q_{1})+\lambda\mathcal{V}_{1}(q_{1}) \langle\hat{V}_{2}\rangle]\varphi_{1}. \tag{13}\]
In this expression, \(\langle\hat{V}_{2}\rangle=\langle\hat{V}_{2}(t)\rangle\) is the expectation value of \(\hat{V}_{2}\) as obtained from the solution of the effective Schrödinger equation for subsystem 2; i.e., \(i\,\partial_{t}|\varphi_{2}\rangle=\hat{H}_{2}^{\text{\tiny eff}}|\varphi_{2}\rangle\). In this appendix, we will make no special assumptions about the form of \(\hat{H}_{2}\); that is, we do not assume that subsystem 2 is also a particle moving in a one-dimensional potential.
We now assume that \(\varphi_{1}\) is a sharply peaked function centered about \(q_{1}=Q_{1}(t)\). More specifically, we adopt the same _ansatz_ as in the seminal work of Heller [39] on time-dependent semiclassical dynamics:
\[\varphi_{1}=\left(\frac{2}{\pi}\right)^{1/4}e^{i\gamma(t)}e^{iP_{1}(t)[q_{1}- Q_{1}(t)]}e^{i\Sigma(t)[q_{1}-Q_{1}(t)]^{2}}. \tag{14}\]
Here, \(Q_{1}(t)\) and \(P_{1}(t)\) are real while \(\gamma(t)\) and \(\Sigma(t)\) are complex. In order to satisfy the sharply peaked requirement, we assume that \(\operatorname{Im}\Sigma\) is a large quantity (in a sense we will define more precisely below). In order to ensure that \(\langle\varphi_{1}|\varphi_{1}\rangle=\int_{-\infty}^{\infty}dq_{1}\varphi_{ 1}^{*}\varphi_{1}=1\), we impose the condition
\[0<\operatorname{Im}\Sigma=\exp(-4\operatorname{Im}\gamma). \tag{15}\]
For this wavefunction, the expectation values of \(q_{1}\) and \(p_{1}\) are given exactly as
\[\langle q_{1}\rangle=Q_{1}(t),\quad\langle p_{1}\rangle=P_{1}(t). \tag{16}\]
Furthermore, the expectation value of any smooth function \(f\) of \(q_{1}\) can be expressed in terms of the derivatives of \(f\) evaluated at \(Q_{1}(t)\):
\[\langle f(q_{1})\rangle=f(Q_{1})\left\{1+\frac{f^{[2]}(Q_{1})}{8f (Q_{1})\,\operatorname{Im}\Sigma}+\right.\\ \left.\frac{f^{[4]}(Q_{1})}{128f(Q_{1})(\operatorname{Im}\Sigma) ^{2}}+\mathcal{O}\left[\frac{f^{[6]}(Q_{1})}{f(Q_{1})(\operatorname{Im}\Sigma) ^{3}}\right]\right\}, \tag{17}\]
where \(f^{[n]}(Q_{1}(t))\) is the \(n^{\mathrm{th}}\) derivative of \(f\) evaluated at \(Q_{1}(t)\).
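The expansion (17) is straightforward to verify numerically. The sketch below is our own illustration: it compares the exact Gaussian average of \(f(q_{1})=\cos q_{1}\), for which \(\langle f\rangle=\cos(Q_{1})\,e^{-1/(8\operatorname{Im}\Sigma)}\), against the series through second order; the values of \(Q_{1}\) and \(\operatorname{Im}\Sigma\) are arbitrary.

```python
import numpy as np

# Exact Gaussian average of f(q) = cos(q) versus the expansion (17).
# Q1 and Im Sigma are arbitrary test values (not from the paper).
Q1, im_sigma = 0.4, 25.0
q = np.linspace(Q1 - 2.0, Q1 + 2.0, 200001)
rho = np.sqrt(2 * im_sigma / np.pi) * np.exp(-2 * im_sigma * (q - Q1) ** 2)
exact = np.sum(np.cos(q) * rho) * (q[1] - q[0])   # simple quadrature

f, f2, f4 = np.cos(Q1), -np.cos(Q1), np.cos(Q1)   # f and its derivatives at Q1
series = f * (1 + f2 / (8 * f * im_sigma) + f4 / (128 * f * im_sigma ** 2))
print(exact, series)   # agree to O[(Im Sigma)^{-3}]
```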
We can apply the Ehrenfest theorem to subsystem 1, which yields equations of motion for the expectation values of \(q_{1}\) and \(p_{1}\):
\[\frac{d\langle q_{1}\rangle}{dt} =\frac{\langle p_{1}\rangle}{m}, \tag{18a}\] \[\frac{d\langle p_{1}\rangle}{dt} =-\left\langle\frac{d\mathcal{U}}{dq_{1}}\right\rangle-\lambda \langle\hat{V}_{2}\rangle\left\langle\frac{d\mathcal{V}_{1}}{dq_{1}}\right\rangle. \tag{18b}\]
Making use of equations (16) and (17), we can rewrite these as
\[\frac{dQ_{1}}{dt}= \frac{P_{1}}{m}, \tag{19a}\] \[\frac{dP_{1}}{dt}= -\mathcal{U}^{[1]}(Q_{1})\left\{1+\mathcal{O}\left[\frac{\mathcal{ U}^{[3]}(Q_{1})}{\mathcal{U}^{[1]}(Q_{1})\,\operatorname{Im}\Sigma}\right]\right\}\] \[-\lambda\mathcal{V}_{1}^{[1]}(Q_{1})\langle\hat{V}_{2}\rangle \left\{1+\mathcal{O}\left[\frac{\mathcal{V}^{[3]}(Q_{1})}{\mathcal{V}^{[1]}(Q_{ 1})\,\operatorname{Im}\Sigma}\right]\right\}. \tag{19b}\]
These equations are exact in the sense that they hold if \(\varphi_{1}\) is an exact solution of (13). We now demand that the width of the state, \(\propto(\operatorname{Im}\Sigma)^{-1/2}\), is small in the sense that the third and higher derivatives of the potential and interaction functions satisfy
\[\left|\frac{\mathcal{U}^{[2n-1]}(Q_{1})}{\mathcal{U}^{[1]}(Q_{1})}\right|\ll( \operatorname{Im}\Sigma)^{n},\quad\left|\frac{\mathcal{V}_{1}^{[2n-1]}(Q_{1})}{ \mathcal{V}_{1}^{[1]}(Q_{1})}\right|\ll(\operatorname{Im}\Sigma)^{n} \tag{19}\]
for \(n=1,2,3\ldots\). Under these circumstances, we can clearly see that the central values of our wave packet will evolve according to a dynamical system generated by the effective classical Hamiltonian
\[\mathcal{H}_{\mbox{\tiny{CQ}}}^{\mbox{\tiny{\it eff}}}=\frac{P_{1}^{2}}{2m}+\mathcal{U}(Q_{1})+\lambda\mathcal{V}_{1}(Q_{1})\langle\hat{V}_{2}\rangle. \tag{10}\]
This is exactly the subsystem 1 classical-quantum Hamiltonian (47) introduced in §III.2 with our assumed form of \(\hat{H}_{1}\) and \(\hat{V}_{1}\).
Now, while we can always select initial data for \(\varphi_{1}\) such that \(\operatorname{Im}\Sigma\) is large for some period, we generally expect the wavepacket to spread such that the error terms in (19b) become nonnegligible at late times. In order to determine the rate of spreading, we need to impose the requirement that (14) is an approximate solution of the wave equation when the width of the wave packet is much smaller than the scale of variation of \(\mathcal{U}(q_{1})\) and \(\mathcal{V}_{1}(q_{1})\). More specifically, we assume that it is legitimate to describe \(\mathcal{U}\) and \(\mathcal{V}_{1}\) with a quadratic approximation in some neighbourhood of size \(\sim(\operatorname{Im}\Sigma)^{-1/2}\) about the peak value of \(\varphi_{1}\). We begin by substituting the wavefunction ansatz (14) directly into the wave equation (13), together with series expansions of \(\mathcal{U}\) and \(\mathcal{V}_{1}\) about \(q_{1}=Q_{1}\); i.e.,
\[\mathcal{U}(q_{1})=\mathcal{U}(Q_{1})+\mathcal{U}^{[1]}(Q_{1})(q _{1}-Q_{1})+\\ \tfrac{1}{2!}\mathcal{U}^{[2]}(Q_{1})(q_{1}-Q_{1})^{2}+\cdots \tag{107}\]
with a similar expression for \(\mathcal{V}_{1}\). Then, equating coefficients of \((q_{1}-Q_{1})^{0}\), \((q_{1}-Q_{1})^{1}\), and \((q_{1}-Q_{1})^{2}\) on the left- and right-hand sides of the resulting expression, we obtain _approximate_ ordinary differential equations for \(\gamma\), \(\Sigma\), and \(P_{1}\). Along with equation (106a), this gives a complete dynamical system for \(\{Q_{1},P_{1},\gamma,\Sigma\}\):
\[\frac{dQ_{1}}{dt} =\frac{P_{1}}{m}, \tag{108a}\] \[\frac{dP_{1}}{dt} =-\mathcal{U}^{[1]}(Q_{1})-\lambda\langle V_{2}\rangle\mathcal{V} _{1}^{[1]}(Q_{1}),\] (108b) \[\frac{d\gamma}{dt} =\frac{P_{1}^{2}}{2m}+\frac{i\Sigma}{m}-\mathcal{U}(Q_{1})-\lambda \mathcal{V}_{1}(Q_{1})\langle V_{2}\rangle,\] (108c) \[\frac{d\Sigma}{dt} =-\frac{2\Sigma^{2}}{m}-\frac{\mathcal{U}^{[2]}(Q_{1})+\lambda \mathcal{V}_{1}^{[2]}(Q_{1})\langle V_{2}\rangle}{2}. \tag{108d}\]
If \(\{Q_{1},P_{1},\gamma,\Sigma\}\) satisfy these equations and the wavepacket width is small, then we expect that (14) is a good approximate solution to (13). To confirm this, we can calculate the norm of the difference between the left- and right-hand sides of (13) when (108) is enforced:
\[\int\limits_{-\infty}^{\infty}\left|\left\{i\partial_{t}+\frac{ \partial_{q_{1}}^{2}}{2m}-\mathcal{U}(q_{1})-\lambda\mathcal{V}_{1}(q_{1}) \langle V_{2}\rangle\right\}\varphi_{1}\right|^{2}dq_{1}=\\ \frac{5}{768}\frac{[\mathcal{U}^{[3]}(Q_{1})+\lambda\langle V_{2 }\rangle\mathcal{V}_{1}^{[3]}(Q_{1})]^{2}}{(\operatorname{Im}\Sigma)^{3}}+ \cdots. \tag{109}\]
Here, the "\(\cdots\)" on the right-hand side indicates terms involving higher derivatives of \(\mathcal{U}\) and \(\mathcal{V}_{1}\) as well as higher inverse powers of \(\operatorname{Im}\Sigma\). Hence, if \(\operatorname{Im}\Sigma\) is sufficiently large, \(\varphi_{1}\) will be a good approximate solution to the wave equation (13). In this case, the evolution of the width of the wavepacket will be given by (108d). For particular choices of \(\mathcal{U}\) and \(\mathcal{V}_{1}\), one would need to solve the dynamical system (108) for \(\Sigma(t)\) to determine if and when the conditions (19) break down. If the conditions do indeed fail over a characteristic timescale when the initial wavepacket width is small, that timescale is known as the "Ehrenfest time."
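To make the preceding discussion concrete, the system (108) is easy to integrate numerically. The sketch below is our own illustration; the quartic potential \(\mathcal{U}(q)=q^{4}/4\), \(m=1\), and \(\lambda=0\) are arbitrary choices. It follows the real and imaginary parts of \(\Sigma\) and shows the wavepacket width \((\operatorname{Im}\Sigma)^{-1/2}\) growing, which is the spreading that eventually violates the conditions (19).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate (108a,b,d) for m = 1, lambda = 0 and the illustrative
# quartic potential U(q) = q^4 / 4.  The phase equation (108c) is
# decoupled and omitted.  State: [Q1, P1, Re Sigma, Im Sigma].
def rhs(t, y):
    Q, P, Sr, Si = y
    U1, U2 = Q ** 3, 3 * Q ** 2          # U'(Q) and U''(Q)
    return [P,
            -U1,
            -2.0 * (Sr ** 2 - Si ** 2) - 0.5 * U2,  # Re(-2 Sigma^2 - U''/2)
            -4.0 * Sr * Si]                         # Im(-2 Sigma^2)

sol = solve_ivp(rhs, (0.0, 20.0), [2.0, 0.0, 0.0, 5.0],
                max_step=0.01, rtol=1e-8)
width = 1.0 / np.sqrt(sol.y[3])
print("initial width %.3f, final width %.3f" % (width[0], width[-1]))
```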
We now turn our attention to subsystem 2. In the product state approximation, its effective Hamiltonian is
\[\hat{H}_{2}^{\text{\tiny eff}}=\hat{H}_{2}+\lambda\langle V_{1}(q_{1}) \rangle\hat{V}_{2}. \tag{110}\]
Making use of (17), we can rewrite this as
\[\hat{H}_{2}^{\text{\tiny eff}}=\hat{H}_{2}+\frac{\lambda\mathcal{V}_{1}(Q_{1} )\hat{V}_{2}}{1-\varepsilon}, \tag{111}\]
where, as in §III.2, \(\varepsilon\) is given by
\[\varepsilon =\frac{\langle V_{1}(q_{1})\rangle-\mathcal{V}_{1}(Q_{1})}{ \langle V_{1}(q_{1})\rangle}\] \[=\frac{\mathcal{V}_{1}^{[2]}(Q_{1})}{8\mathcal{V}_{1}(Q_{1}) \operatorname{Im}\Sigma}+\mathcal{O}\left[\frac{\mathcal{V}_{1}^{[4]}(Q_{1})}{ \mathcal{V}_{1}(Q_{1})\left(\operatorname{Im}\Sigma\right)^{2}}\right]. \tag{112}\]
Hence, if
\[\left|\frac{\mathcal{V}_{1}^{[2k]}(Q_{1})}{\mathcal{V}_{1}(Q_{1})}\right|\ll\left(\operatorname{Im}\Sigma\right)^{k},\quad k=1,2,3\ldots, \tag{113}\]
then the \(\varepsilon\) term in (111) can be neglected and subsystem 2 will be governed by the effective Hamiltonian
\[\hat{H}_{\text{\tiny CQ}}^{\text{\tiny eff}}=\hat{H}_{2}+\lambda\mathcal{V}_{1}(Q_{1})\hat{V}_{2}. \tag{114}\]
We have hence recovered the subsystem 2 classical-quantum Hamiltonian operator (48) of §III.2 specialized to our choice of \(\hat{V}_{1}\).
To summarize, for the particular choices (12), the sharply peaked wavefunction _ansatz_ (14) leads us to the conclusion that subsystems 1 and 2 are well described by the coupled classical and quantum Hamiltonians
\[\mathcal{H}_{\text{\tiny CQ}}^{\text{\tiny eff}} =P_{1}^{2}/(2m)+\mathcal{U}(Q_{1})+\lambda\mathcal{V}_{1}(Q_{1}) \langle\hat{V}_{2}\rangle, \tag{115a}\] \[\hat{H}_{\text{\tiny CQ}}^{\text{\tiny eff}} =\hat{H}_{2}+\lambda\mathcal{V}_{1}(Q_{1})\hat{V}_{2} \tag{115b}\]
provided that the conditions (19) and (113) hold. Furthermore, the evolution of the width of subsystem 1's wavefunction will be governed by (108) as long as the width \((\operatorname{Im}\Sigma)^{-1/2}\) is sufficiently small.
### Quadratic potentials
In this subsection, we specialize to the case when both the main \(\mathcal{U}\) and interaction \(\mathcal{V}_{1}\) potentials are quadratic monomial functions:
\[\mathcal{U}(q_{1})=\tfrac{1}{2}\Omega_{1}q_{1}^{2},\quad\mathcal{V}_{1}(q_{1})=\frac{\sqrt{\Omega}}{2}q_{1}^{2},\quad m=\frac{1}{\Omega_{1}}. \tag{116}\]
This is similar to the \(\nu=2\) coupled oscillator case considered in §V except that we do not necessarily assume that subsystem 2 is also an oscillator.
This case is interesting because several of the exact formulae from the previous subsection simplify considerably.5 For example, the Ehrenfest equations of motion (101) become exactly
Footnote 5: We should stress that in this appendix, the phrase “exact” is meant to be applied to formulae that hold in the product state approximation without any additional assumptions.
\[\partial_{t}Q_{1} =\Omega_{1}P_{1}, \tag{102a}\] \[\partial_{t}P_{1} =-\left(\Omega_{1}+\lambda\sqrt{\Omega}\langle\hat{V}_{2}\rangle \right)Q_{1}. \tag{102b}\]
That is, the exact expectation values of \(\hat{q}_{1}\) and \(\hat{p}_{1}\) evolve according to the CQ Hamiltonian
\[\mathcal{H}_{\text{\tiny{CQ}}}^{\text{\tiny{eff}}}=\tfrac{1}{2}\Omega_{1}(Q_{1}^{2}+P_{1}^{2})+\tfrac{1}{2}\lambda\sqrt{\Omega}Q_{1}^{2}\langle\hat{V}_{2}\rangle. \tag{103}\]
Also, the right-hand side of (109) is identically equal to zero, implying that (14) is an exact solution to the wave equation provided that equations (102) are satisfied in addition to
\[\partial_{t}\gamma =\tfrac{1}{2}\Omega_{1}P_{1}^{2}+i\Omega_{1}\Sigma-\tfrac{1}{2} \left(\Omega_{1}+\lambda\sqrt{\Omega}\langle V_{2}\rangle\right)Q_{1}^{2},\] \[\partial_{t}\Sigma =-2\Omega_{1}\Sigma^{2}-\tfrac{1}{2}\left(\Omega_{1}+\lambda\sqrt {\Omega}\langle V_{2}\rangle\right). \tag{104}\]
Another exact relation is
\[\langle V_{1}(q_{1})\rangle=\frac{\sqrt{\Omega}}{2}\left(Q_{1}^{2}+\frac{1}{4 \operatorname{Im}\,\Sigma}\right), \tag{105}\]
which gives that the effective Hamiltonian of subsystem 2 is
\[\hat{H}_{2}^{\text{\tiny{eff}}}=\hat{H}_{2}+\frac{\lambda\sqrt{\Omega}}{2} \left(Q_{1}^{2}+\frac{1}{4\operatorname{Im}\,\Sigma}\right)\hat{V}_{2}. \tag{106}\]
In this scenario, the classical-quantum approximation will be realized if one assumes \(Q_{1}^{2}\gg|4\operatorname{Im}\Sigma|^{-1}\) so that
\[\hat{H}_{2}^{\text{\tiny{eff}}}\approx\hat{H}_{2}+\frac{1}{2}\lambda\sqrt{ \Omega}Q_{1}^{2}\hat{V}_{2}, \tag{107}\]
which then implies that equations (104) need not be explicitly solved for \(\Sigma\) and \(\gamma\) in order to solve for \(Q_{1}\), \(P_{1}\), and \(|\varphi_{2}\rangle\).
Of course, to actually determine if \(Q_{1}^{2}\gg|4\operatorname{Im}\Sigma|^{-1}\) is a reasonable assumption, one does need to solve (104) for \(\Sigma\). When the interaction is small, \(\lambda\ll 1\), we can easily obtain
\[Q_{1}(t) =\sqrt{2}|\alpha|\cos(\Omega_{1}t+\phi_{1})+\mathcal{O}(\lambda), \tag{108a}\] \[\Sigma(t) =\frac{2\Sigma_{0}-\tan\Omega_{1}t}{2(1+2\Sigma_{0}\tan\Omega_{1 }t)}+\mathcal{O}(\lambda), \tag{108b}\]
where \(\Sigma_{0}=\Sigma(0)\) with \(\operatorname{Im}\Sigma_{0}>0\) and we have parametrized the solution for \(Q_{1}(t)\) in a manner consistent with §V.4 and appendix D. Notice that if \(\Sigma_{0}=i/2\), we obtain that \(\Sigma(t)=i/2+\mathcal{O}(\lambda)\) for all time; i.e., the wavepacket has a nearly constant width. Other than the \(\mathcal{O}(\lambda)\) correction, these are the well-known coherent state solutions of the simple harmonic oscillator. When \(\Sigma_{0}\neq i/2\), we recover the oscillator squeezed state solutions subject to \(\mathcal{O}(\lambda)\) corrections.
From (108), we see that the condition \(Q_{1}^{2}\gg|4\operatorname{Im}\Sigma|^{-1}\) can be satisfied for almost all \(t\) if \(|\alpha|^{2}\gg|\Sigma_{0}|^{-1}\). This is essentially the same as demanding that subsystem 1 be initially prepared in a high energy state. It should be noted, however, that because there are times for which \(Q_{1}(t)=0\), it is impossible to satisfy \(Q_{1}^{2}\gg|4\operatorname{Im}\Sigma|^{-1}\) for all \(t\). Nevertheless, the time intervals over which the condition fails to hold will be very short if \(|\alpha|^{2}\gg|\Sigma_{0}|^{-1}\). Outside of these intervals, the effective Hamiltonian of subsystem 2 will reduce down to the classical-quantum form
\[\hat{H}_{2}^{\text{\tiny{eff}}}\approx\hat{H}_{\text{\tiny{CQ}}}^{\text{\tiny{eff}}}=\hat{H}_{2}+\tfrac{1}{2}\lambda\sqrt{\Omega}Q_{1}^{2}\hat{V}_{2}. \tag{109}\]
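As a consistency check (ours, not part of the original text), one can integrate the \(\lambda\to 0\) limit of the width equation (104), \(\partial_{t}\Sigma=-2\Omega_{1}\Sigma^{2}-\Omega_{1}/2\), and compare against the closed form (108b); the values of \(\Omega_{1}\) and \(\Sigma_{0}\) below are arbitrary, with \(\operatorname{Im}\Sigma_{0}>0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega1 = 1.0
Sigma0 = 0.3 + 0.8j          # a squeezed initial width, Im Sigma0 > 0

def rhs(t, y):
    a, b = y                 # Sigma = a + i b
    return [-2 * Omega1 * (a ** 2 - b ** 2) - 0.5 * Omega1,
            -4 * Omega1 * a * b]

t = np.linspace(0.0, 1.4, 50)     # stop before tan(Omega1 t) diverges
sol = solve_ivp(rhs, (0.0, 1.4), [Sigma0.real, Sigma0.imag],
                t_eval=t, rtol=1e-10, atol=1e-12)
tan = np.tan(Omega1 * t)
closed = (2 * Sigma0 - tan) / (2 * (1 + 2 * Sigma0 * tan))
print(np.max(np.abs(sol.y[0] + 1j * sol.y[1] - closed)))   # ~1e-8
```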
## Appendix C Born-Oppenheimer approximation for one-dimensional potential problems
In this appendix, we consider the Born-Oppenheimer approximation to our bipartite system when subsystems 1 and 2 each represent particles moving in one-dimensional potentials. Our treatment is somewhat similar to a calculation presented in Singh and Padmanabhan [18], but there are some key differences. While [18] analyzes both the time-independent (via the WKB approximation) and time-dependent Schrödinger equations (via sharply peaked states), we will only work with the latter. There are also some technical assumptions in [18] that we prefer to avoid; these are discussed in further detail below.
For this appendix, we assume that
\[\hat{H}_{1} =\hat{p}_{1}^{2}/2m_{1}+m_{1}\mathcal{U}_{1}(\hat{q}_{1}), \hat{V}_{1} =\mathcal{V}_{1}(\hat{q}_{1}),\] \[\hat{H}_{2} =\hat{p}_{2}^{2}/2m_{2}+m_{2}\mathcal{U}_{2}(\hat{q}_{2}), \hat{V}_{2} =\mathcal{V}_{2}(\hat{q}_{2}), \tag{110}\]
where \([\hat{q}_{i},\hat{p}_{j}]=i\,\delta_{ij}\) and the \(\mathcal{U}_{1,2}\) and \(\mathcal{V}_{1,2}\) potential functions are real. Notice that the \(\mathcal{U}_{1,2}\) potentials each have a prefactor of the particle masses \(m_{1,2}\) for later convenience. As elsewhere in this paper, subsystem 1 is expected to behave classically, which, in the context of the Born-Oppenheimer approximation, means that the mass of particle 1 should be large. That is, we will work in the \(m_{1}\to\infty\) limit.
The complete time-dependent Schrödinger equation for the wavefunction \(\psi=\psi(t,q_{1},q_{2})\) is
\[i\frac{\partial\psi}{\partial t}=-\frac{1}{2m_{1}}\frac{\partial^{2}\psi}{\partial q_{1}^{2}}+m_{1}\mathcal{U}_{1}(q_{1})\psi-\frac{1}{2m_{2}}\frac{\partial^{2}\psi}{\partial q_{2}^{2}}\\ +m_{2}\mathcal{U}_{2}(q_{2})\psi+\lambda\mathcal{V}_{1}(q_{1})\mathcal{V}_{2}(q_{2})\psi. \tag{111}\]
We will make an explicit product state _ansatz_ for \(\psi\):
\[\psi(t,q_{1},q_{2})=\psi_{1}(t,q_{1})\psi_{2}(t,q_{2}). \tag{112}\]
Since subsystem 1 is meant to exhibit classical properties, we assume that its wavefunction is a Gaussian sharply peaked about \(q_{1}=Q_{1}(t)\):
\[\psi_{1}=\left(2/\pi\right)^{1/4}e^{im_{1}\Gamma(t)}e^{im_{1}v_{1}( t)[q_{1}-Q_{1}(t)]}\\ \times e^{im_{1}^{2}\varsigma(t)[q_{1}-Q_{1}(t)]^{2}}. \tag{100}\]
This is the same _ansatz_ (101) we made use of in the previous appendix to derive the classical-quantum approximation for a similar potential problem with a few changes of notation:
\[\gamma=m_{1}\Gamma,\quad P_{1}=m_{1}v_{1},\quad\Sigma=m_{1}^{2}\varsigma. \tag{101}\]
The \(m_{1}\) scalings appearing on the right-hand sides of these definitions are required to make the \(m_{1}\to\infty\) Born-Oppenheimer limit work out correctly.
The normalization of \(\psi\) requires that
\[1=\int_{-\infty}^{\infty}|\psi_{1}|^{2}dq_{1}=\int_{-\infty}^{\infty}|\psi_{2 }|^{2}dq_{2}, \tag{102}\]
which yields

\[0<m_{1}^{2}\operatorname{Im}\varsigma=\exp(-4m_{1}\operatorname{Im}\Gamma). \tag{103}\]

For the wavefunction (100), the expectation values of position and momentum are exactly \(\langle q_{1}\rangle=Q_{1}(t)\) and \(\langle p_{1}\rangle=m_{1}v_{1}(t)\).
Using these and our assumed Hamiltonian, the Ehrenfest theorem readily shows that
\[v_{1}=\frac{dQ_{1}}{dt}. \tag{104}\]
As in the previous appendix, we assume that \(\operatorname{Im}\Sigma\) is large such that the width of the wavefunction in the \(q_{1}\) direction is much smaller than the characteristic variation scale of the potentials \(\mathcal{U}_{1}\) and \(\mathcal{V}_{1}\). We therefore seek a solution of (111) valid in some neighbourhood of \(q_{1}=Q_{1}(t)\). More specifically, we substitute the product ansatz (112), with \(\psi_{1}\) given by (100), into (111) and expand the result in powers of \(\delta q_{1}=q_{1}-Q_{1}(t)\). We obtain
\[0=c_{0}+c_{1}\delta q_{1}+c_{2}\delta q_{1}^{2}+\mathcal{O}(\delta q_{1}^{3}). \tag{105}\]
Here,
\[c_{0}=m_{1}\left[\frac{d\Gamma}{dt}-i\varsigma-\frac{v_{1}^{2}} {2}+\mathcal{U}_{1}(Q_{1})\right]-i\frac{1}{\psi_{2}}\frac{\partial\psi_{2}}{ \partial t}\\ -\frac{1}{2m_{2}}\frac{1}{\psi_{2}}\frac{\partial^{2}\psi_{2}}{ dq_{2}^{2}}+m_{2}\mathcal{U}_{2}(q_{2})+\lambda\mathcal{V}_{1}(Q_{1})\mathcal{V}_{2}(q _{2}),\\ c_{1}=m_{1}\frac{dv_{1}}{dt}+\frac{\partial}{\partial q_{1}} \left[m_{1}\mathcal{U}_{1}(q_{1})+\lambda\mathcal{V}_{1}(q_{1})\mathcal{V}_{2 }(q_{2})\right]_{q_{1}=Q_{1}},\\ c_{2}=m_{1}^{2}\frac{d\varsigma}{dt}+2m_{1}^{3}\varsigma^{2}\\ +\frac{\partial^{2}}{\partial q_{1}^{2}}\left[m_{1}\mathcal{U}_{ 1}(q_{1})+\lambda\mathcal{V}_{1}(q_{1})\mathcal{V}_{2}(q_{2})\right]_{q_{1}=Q _{1}}. \tag{106}\]
As in the previous appendix, we demand that each of the coefficients of \(\delta q_{1}^{0}\), \(\delta q_{1}^{1}\), and \(\delta q_{1}^{2}\) vanish independently in (105).6 Furthermore, let us make the approximation that in the \(m_{1}\to\infty\) limit
Footnote 6: At this stage, our approximation deviates from that of Singh and Padmanabhan [18]. In that paper, the authors assume that \(\delta q_{1}=m_{1}^{-1/2}y\) and then demand that the coefficient of each power of \(m_{1}\) vanish. This results in approximate equations of motion involving an arbitrary function \(y=y(t,q_{1})\); i.e., an underdetermined system of equations.
\[m_{1}\mathcal{U}_{1}(q_{1})+\lambda\mathcal{V}_{1}(q_{1})\mathcal{V}_{2}(q_{2 })\approx m_{1}\mathcal{U}_{1}(q_{1}). \tag{107}\]
Finally, we demand that the coefficients of \(m_{1}^{0}\) and \(m_{1}^{1}\) in the above expression for \(c_{0}\) individually vanish. This gives us a complete set of equations for all of the unknown functions:
\[\frac{dQ_{1}}{dt} =v_{1}, \tag{108a}\] \[\frac{dv_{1}}{dt} =-\mathcal{U}_{1}^{[1]}(Q_{1}),\] (108b) \[\frac{d\Gamma}{dt} =i\varsigma+\frac{v_{1}^{2}}{2}-\mathcal{U}_{1}(Q_{1}),\] (108c) \[\frac{d\varsigma}{dt} =-2m_{1}\varsigma^{2}-\frac{1}{m_{1}}\mathcal{U}_{1}^{[2]}(Q_{1}),\] (108d) \[i\frac{\partial\psi_{2}}{\partial t} =-\frac{1}{2m_{2}}\frac{\partial^{2}\psi_{2}}{dq_{2}^{2}}+\] \[[m_{2}\mathcal{U}_{2}(q_{2})+\lambda\mathcal{V}_{1}(Q_{1}) \mathcal{V}_{2}(q_{2})]\psi_{2}. \tag{108e}\]
We see that (108a) and (108b) are the classical equations of motion for subsystem 1 neglecting subsystem 2 (or, in other words, in the \(\lambda\to 0\) limit). Apart from a change in notation, equations (108c) and (108d) are the same as the evolution equations for the phase and width of a Gaussian wavepacket obtained in the \(\lambda\to 0\) limit of the classical-quantum approximation of the previous appendix. Equation (108e) is exactly the time-dependent Schrödinger equation for subsystem 2 in the classical-quantum approximation for the system studied in this appendix. Therefore, we have essentially recovered the classical-background approximation (54) for the system described by the choices (110).
Before moving on, we make note of something interesting. Suppose that we do not make the approximation (107). Rather, let us replace the equations \(c_{1}=0\) and \(c_{2}=0\) with their expectation values in the state \(\psi_{2}\); i.e.,
\[\langle\psi|c_{1}|\psi\rangle=\langle\psi|c_{2}|\psi\rangle=0, \tag{109}\]
or, more explicitly,
\[\int_{-\infty}^{\infty}dq_{2}\,\psi_{2}^{*}c_{1}\psi_{2}=\int_{-\infty}^{\infty} dq_{2}\,\psi_{2}^{*}c_{2}\psi_{2}=0. \tag{110}\]
We then find that equations (108b) and (108d) get modified to
\[\frac{dv_{1}}{dt} =-\frac{m_{1}\mathcal{U}_{1}^{[1]}(Q_{1})+\lambda\mathcal{V}_{1}^{[1 ]}(Q_{1})\langle V_{2}\rangle}{m_{1}}, \tag{143a}\] \[\frac{d\varsigma}{dt} =-2m_{1}\varsigma^{2}-\frac{m_{1}\mathcal{U}_{1}^{[2]}(Q_{1})+ \lambda\mathcal{V}_{1}^{[2]}(Q_{1})\langle V_{2}\rangle}{m_{1}^{2}}. \tag{143b}\]
These are the analogues of the classical-quantum equations of motion in the previous appendix. Perhaps this is not overly surprising as the strategy of replacing equations like \(c_{1}=c_{2}=0\) with their expectation values is not dissimilar to the leap made from the classical Einstein equations to the semiclassical system (2). However, this discussion does provide an intriguing counterpoint to our derivation of the classical-quantum equations of motion using density matrices in SSIII.2.
## Appendix D Properties of coherent states
In this appendix, we review some basic properties of coherent states \(|\alpha\rangle\) of the simple harmonic oscillator. The Hamiltonian of the system is, as usual,
\[\hat{H}=\Omega(\hat{a}^{\dagger}\hat{a}+\tfrac{1}{2}),\quad[\hat{a},\hat{a}^ {\dagger}]=1. \tag{144}\]
In the Heisenberg representation, coherent states are eigenstates of the \(\hat{a}\) operator with eigenvalue \(\alpha\),
\[\hat{a}|\alpha\rangle=\alpha|\alpha\rangle, \tag{145}\]
where \(\alpha\) is a complex constant. Energy eigenstates of the Hamiltonian are, as usual,
\[\hat{H}|n\rangle=\Omega(n+\tfrac{1}{2})|n\rangle,\quad n=0,1,2\dots. \tag{146}\]
The expansion of an arbitrary coherent state in the energy eigenbasis reads
\[|\alpha\rangle=\sum_{n}\langle n|\alpha\rangle|n\rangle=\sum_{n}\frac{\alpha^ {n}e^{-|\alpha|^{2}/2}}{\sqrt{n!}}|n\rangle. \tag{147}\]
The position and momentum operators are
\[\hat{q}=\frac{\hat{a}+\hat{a}^{\dagger}}{\sqrt{2}},\quad\hat{p}=\frac{\hat{a}- \hat{a}^{\dagger}}{\sqrt{2}i}. \tag{148}\]
We have the expectation values
\[\langle\hat{q}\rangle =+\sqrt{2}|\alpha|\cos(\Omega t-\phi),\quad\langle\hat{q}^{2} \rangle=\langle\hat{q}\rangle^{2}+\tfrac{1}{2}, \tag{149a}\] \[\langle\hat{p}\rangle =-\sqrt{2}|\alpha|\sin(\Omega t-\phi),\quad\langle\hat{p}^{2} \rangle=\langle\hat{p}\rangle^{2}+\tfrac{1}{2}, \tag{149b}\]
where \(\phi=\arg(\alpha)\). Expectation values of higher powers of \(\hat{q}\) can be obtained by making use of the identity
\[[\hat{A}^{n},\hat{B}]=n\hat{A}^{n-1}[\hat{A},\hat{B}], \tag{150}\]
which holds if \([\hat{A},[\hat{A},\hat{B}]]=[\hat{B},[\hat{A},\hat{B}]]=0\). We can use this to derive the following recursion relation for the expectation values of \(\hat{q}^{n}\) for coherent states:
\[\langle\hat{q}^{n}\rangle=\langle\hat{q}\rangle\langle\hat{q}^{n-1}\rangle+ \tfrac{1}{2}(n-1)\langle\hat{q}^{n-2}\rangle. \tag{151}\]
This yields
\[\langle\hat{q}^{3}\rangle =\langle\hat{q}\rangle^{3}+\tfrac{3}{2}\langle\hat{q}\rangle, \tag{152a}\] \[\langle\hat{q}^{4}\rangle =\langle\hat{q}\rangle^{4}+3\langle\hat{q}\rangle^{2}+\tfrac{3}{4},\] (152b) \[\langle\hat{q}^{5}\rangle =\langle\hat{q}\rangle^{5}+5\langle\hat{q}\rangle^{3}+\tfrac{15}{ 4}\langle\hat{q}\rangle, \tag{152c}\]
and so on. A very similar derivation results in a recursion relation for the central moments of \(\hat{q}\) for coherent states:
\[\langle(\hat{q}-\langle\hat{q}\rangle)^{n}\rangle=\tfrac{1}{2}(n-1)\langle( \hat{q}-\langle\hat{q}\rangle)^{n-2}\rangle. \tag{153}\]
Making note of \(\langle(\hat{q}-\langle\hat{q}\rangle)^{0}\rangle=1\) and \(\langle(\hat{q}-\langle\hat{q}\rangle)^{1}\rangle=0\), this yields
\[\langle(\hat{q}-\langle\hat{q}\rangle)^{n}\rangle=\begin{cases}0,&n\text{ odd},\\ 2^{-n/2}(n-1)!!,&n\text{ even}.\end{cases} \tag{154}\]
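These moment formulae are easy to check in a truncated Fock basis. The short sketch below is our own illustration (the value of \(\alpha\) and the truncation are arbitrary): it builds the coherent state from the expansion (147) and verifies the recursion (151) for the first few \(n\).

```python
import numpy as np
from math import factorial

N = 60                                         # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
q = (a + a.conj().T) / np.sqrt(2)              # position operator

alpha = 0.7 + 0.4j
n = np.arange(N)
psi = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(
    np.array([factorial(int(k)) for k in n], dtype=float))

def qmoment(k):
    # <q^k> in the state psi (real up to truncation error)
    return (psi.conj() @ np.linalg.matrix_power(q, k) @ psi).real

for k in range(2, 6):
    lhs = qmoment(k)
    rhs = qmoment(1) * qmoment(k - 1) + 0.5 * (k - 1) * qmoment(k - 2)
    print(k, lhs, rhs)     # the two columns agree
```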
|
2308.09190 | Linear Parameter Varying Power Regulation of Variable Speed Pitch
Manipulated Wind Turbine in the Full Load Regime | In a wind energy conversion system (WECS), changing the pitch angle of the
wind turbine blades is a typical practice to regulate the electrical power
generation in the full-load regime. Due to the turbulent nature of the wind and
the large variations of the mean wind speed during the day, the rotary elements
of the WECS are subjected to significant mechanical stresses and fatigue,
resulting in conceivably mechanical failures and higher maintenance costs.
Consequently, it is imperative to design a control system capable of handling
continuous wind changes. In this work, Linear Parameter Varying (LPV) H_inf
controller is used to cope with wind variations and turbulent winds with a
turbulence intensity greater than 10%. The proposed controller is designed to
regulate the rotational rotor speed and generator torque, thus, regulating the
output power via pitch angle manipulations. In addition, a PI-Fuzzy control
system is designed to be compared with the proposed control system. The
closed-loop simulations of both controllers established the robustness and
stability of the suggested LPV controller under large wind velocity variations,
with minute power fluctuations compared to the PI-Fuzzy controller. The results
show that in the presence of turbulent wind speed variations, the proposed LPV
controller achieves improved transient and steady-state performance along with
reduced mechanical loads in the above-rated wind speed region. | T. Shaqarin, Mahmoud M. S. Al-Suod | 2023-08-17T21:26:23Z | http://arxiv.org/abs/2308.09190v1 | Linear Parameter Varying Power Regulation of Variable Speed Pitch Manipulated Wind Turbine in the Full Load Regime
###### Abstract
In a wind energy conversion system (WECS), changing the pitch angle of the wind turbine blades is a typical practice to regulate the electrical power generation in the full-load regime. Due to the turbulent nature of the wind and the large variations of the mean wind speed during the day, the rotary elements of the WECS are subjected to significant mechanical stresses and fatigue, resulting in conceivably mechanical failures and higher maintenance costs. Consequently, it is imperative to design a control system capable of handling continuous wind changes. In this work, a Linear Parameter Varying (LPV) H\({}_{\infty}\) controller is used to cope with wind variations and turbulent winds with a turbulence intensity greater than 10%. The proposed controller is designed to regulate the rotational rotor speed and generator torque, thus regulating the output power via pitch angle manipulations. In addition, a PI-Fuzzy control system is designed to be compared with the proposed control system. The closed-loop simulations of both controllers established the robustness and stability of the suggested LPV controller under large wind velocity variations, with minute power fluctuations compared to the PI-Fuzzy controller. The results show that in the presence of turbulent wind speed variations, the proposed LPV controller achieves improved transient and steady-state performance along with reduced mechanical loads in the above-rated wind speed region.
Key-Words: Wind energy; H\({}_{\infty}\) Control; Variable Pitch Wind Turbine; Fuzzy Control.
## 1 Introduction
During the past decades, the usage of wind turbine plants increased and became more competitive among other renewable energy forms. The current trend is toward the advancement of green energy production. On the contrary, more restrictions are enforced to reduce the energy produced from conventional generation sources, which produce greenhouse gases that lead to global warming. Recently, the wide adoption of wind farms in the United States has had a positive effect on greenhouse gas emissions and water consumption. For instance, in 2018, the United States' wind capacity of 96 GW annually lessened CO\({}_{2}\) emissions and water consumption by about 200 million metric tons and 95 billion gallons, respectively, [1].
Due to the turbulent nature of the wind and the large variations of the mean wind speed during the day, the rotary elements of the WECS are subjected to significant mechanical stresses and fatigue, resulting in conceivably mechanical failures and higher maintenance costs.
As a result, mechanical failures are unavoidable in dynamic and turbulent wind speed conditions unless proper power regulation control systems are used, such as pitching controllers. Despite decreasing mechanical stresses on the wind turbine's rotary elements, a sophisticated control system helps in stabilizing, maximizing, and restricting the generated power at above-rated wind speeds, [2].
In wind energy generation systems, the effect of turbulence variations and noise created by turbines may cause fluctuations and a reduction in power output. Wind turbines are designed and analyzed by modeling and simulating the turbines using various software and hardware techniques. The wind turbine model in WECS was developed by Manyonge _et al.,_[3], via examining the power coefficient parameter needed to understand the wind turbine dynamics over its operational regime, which contributes to controlling the performance of wind turbines.
The work presented by Taher _et al.,_[2], introduced a Linear Quadratic Gaussian (LQG) gain-scheduling controller to cope with the variable wind velocity in the WECS. They introduced gain-scheduling controllers (GSC), which aimed to manipulate the blades' pitch angle, regulate the electrical torque, and maintain the rotor speed
constant at their nominal values at the full load region. Petru and Thiringer, [4], presented in their work a dynamic model for a wind turbine in both fixed-speed and stall-regulated systems. The main objective of their work is to evaluate the dynamic power quality effect on the electrical grid.
Sabzevari _et al._, [5], proposed a maximum power point tracking (MPPT) approach emanating from a neural network that is trained offline using particle swarm optimization (PSO). The proposed MPPT estimated the wind speed to adapt the fuzzy-PI controller. The adaptive controller manipulated the boost converter duty cycle for the driven permanent magnet synchronous generator (PMSG). Macedo and Mota, [6], presented a comprehensive description of the wind turbine system equipped with an asynchronous induction generator. They implemented a controller using a Fuzzy control system by manipulating the pitch angle. The main objective of their work was to reduce the fluctuations in the generator output power. Salmi _et al._, [7], designed an optimal backstepping controller via particle swarm optimization (PSO) and an artificial bee colony (ABC) algorithm for doubly fed induction generator (DFIG) wind turbines. They aimed at MPPT and to decrease transient loads by controlling power transferred between the generator and the electrical grid in the presence of uncertainty. Aissaoui _et al.,_[8], presented in their work a comprehensive model of the WECS equipped with PMSG; they designed a Fuzzy-PI controller to maximize the extracted power with low power fluctuation. They managed to control and regulate the generator speed with low fluctuations.
The use of nonlinear control systems is considered one of the prevalent control systems of wind energy conversion systems. Thomsen, [9], described and analyzed different nonlinear control techniques for power and rotor speed regulation of the wind turbine. Additionally, other nonlinear control methods were implemented including gain scheduling technique, feedback linearization, [10], and sliding mode control, [11].
The work by Shao _et al.,_[12] deals with the restitution of the wind turbine pitch actuator system by addressing the PI- and PID-based pitch control methods. They sought to enhance the control system by mitigating the effect of pitch delay perturbations on the wind turbine output power.
Robust control theory tackled the control problem of WECS due to its ability to deal with external disturbances and model uncertainties. Bakka and Karimi, [13], implemented a mixed H\({}_{2}\)-H\({}_{\infty}\) control design for the WECS, based on state-feedback control. They successfully regulated the rotor rotational speed subject to disturbances in the gearbox and wind turbine tower. Muhando _et al.,_[14], described a design of multi-objective H\({}_{\infty}\) control of the WECS that incorporates a doubly-fed induction generator. They designed a controller to accomplish the dual purpose of energy capture optimization and alleviating the cyclic load against wind speed fluctuations.
Linear parameter varying (LPV) controllers are convenient for control problems that involve the regulation of wind turbine output power. Initially, the nonlinear model is transformed into an LPV model that consists of a group of linear models by assuming free-stream velocity, turbine shaft angular velocity, and pitch angle as varying parameters. Therefore, the control structure is reduced to an LPV controller which is a convex combination of linear controllers. Ying _et al.,_[15], have presented in their work a designed H\({}_{\infty}\) loop-shaping torque controller and LPV-based pitch controllers. They aimed to enhance the performance of the pitch actuator system through the region of transition around the nominal wind speed. Gebraad _et al.,_[16], presented an LPV controller for WECS in the partial load region. They designed a full model that aimed to control the rotor vibration in the partial load region, and they used a proportional controller to enhance the produced power from the wind turbine. Inthamoussou _et al.,_[17], proposed an LPV controller for regulating the power of WECS above the rated wind speed. The suggested controller was compared with gain-scheduling PI and H\({}_{\infty}\) controllers. The work done by Lescher _et al.,_[18], adopted multivariable gain-scheduling controllers for the wind turbine based on a linear parameter-varying control approach. They designed a two-bladed wind turbine by situating smart micro-sensors hosted on the blades. They aimed to alleviate the cyclic loads experienced by the wind turbine in the full load regime.
The work presented here mainly targets the development of a robust linear parameter-varying H\({}_{\infty}\) controller for a WECS via pitch manipulation. It is precisely aimed at regulating generator output power via the regulation of the generator shaft angular velocity and torque. The control problem at hand lays down restrictions on the control design specifications. Initially, the acceptable power fluctuations are limited to \(\pm\) 5% of the nominal value regardless of the incoming wind speed variations. Additionally, the pitch actuator limitations introduce extra restrictions on the pitch angle and its derivative. These restrictions impose limitations on closed-loop performance. Furthermore, the suggested LPV controller is compared with the fuzzy logic controller under the same operating conditions.
## 2 Modelling of the Wind Turbine
The WECS operating region is dependent on the wind velocity, and it has three main regions: the No-Generation Region (NGR), the Partial-Load Region (PLR), and the Full-Load Region (FLR) as shown in Fig. 1.
The schematic diagram of the variable speed WECS is depicted in Fig. 2(A). The figure shows four subsystems: the mechanical, the aerodynamic, the pitch actuator, and the generator subsystem. The wind-captured aerodynamic power (\(P_{r}\)) can be obtained by the following equation, [19]:
\[P_{r}=\frac{1}{2}\rho\pi R^{2}v^{3}Cp(\lambda,\beta) \tag{1}\]
where \(\rho\) is the air density, \(R\) is the blade radius, and \(v\) is the wind velocity. The aerodynamic torque (\(T_{r}\)) can be expressed by the following:
\[T_{r}=\frac{P_{r}}{\omega_{r}} \tag{2}\]
where \(\omega_{r}\) is the rotor rotational speed. The power coefficient (\(Cp\)) can be given by Taher _et al._[2]:
\[Cp(\beta,\lambda)=0.22\left(\frac{116}{\lambda_{i}}-0.6\beta-5\right)\exp{ \left(-\frac{12.5}{\lambda_{i}}\right)} \tag{3}\]
where:
\[\lambda_{i}=\frac{1}{\frac{1}{\lambda+0.12\beta}-\frac{0.035}{(15\beta)^{4}+1}} \tag{4}\]
where \(\lambda\) is the tip speed ratio and \(\beta\) is the pitch angle.
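For concreteness, equations (1), (3) and (4) translate directly into a small helper. The sketch below is our own transcription; it assumes the usual definition \(\lambda=\omega_{r}R/v\) (not stated explicitly above), and the values of \(R\) and \(\rho\) are illustrative defaults rather than the Vestas parameters used later.

```python
import numpy as np

def cp(lam, beta):
    # Power coefficient, Eqs. (3)-(4); beta in degrees.
    lam_i = 1.0 / (1.0 / (lam + 0.12 * beta) - 0.035 / ((15 * beta) ** 4 + 1))
    return 0.22 * (116.0 / lam_i - 0.6 * beta - 5.0) * np.exp(-12.5 / lam_i)

def aero_power(v, omega_r, beta, R=14.5, rho=1.225):
    # Captured aerodynamic power, Eq. (1), with lam = omega_r * R / v.
    lam = omega_r * R / v
    return 0.5 * rho * np.pi * R ** 2 * v ** 3 * cp(lam, beta)

# Example: rated rotor speed 4.3 rad/s in a 15 m/s wind at zero pitch.
print(aero_power(15.0, 4.3, 0.0))
```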
The WECS model is considered a two-mass model as shown in Fig. 2(B). In this model, the turbine consists of two main components separated by the transmission: the low-speed shaft (rotor side) and the high-speed shaft (generator side). The gearbox ratio \(N_{g}\) of the system is defined by:
\[N_{g}=\frac{T_{ts}}{T_{hs}}=\frac{\omega_{g}}{\omega_{r}} \tag{5}\]
where the low-speed shaft torque \(T_{ts}\) is defined by:

\[T_{ts}=K_{s}\delta+D_{s}\dot{\delta} \tag{6}\]

\[\dot{\delta}=\omega_{r}-\frac{\omega_{g}}{N_{g}} \tag{7}\]
where \(\omega_{g},T_{ts},T_{hs},\delta,K_{s},D_{s}\), and \(N_{g}\) are the generator speed, low-speed torque, high-speed torque, shaft twist, spring constant, damping coefficient, and gearbox ratio, respectively. The wind turbine mechanical dynamic equations [19] are obtained using Newton's second law:
\[\dot{\omega}_{r}J_{r}=T_{r}-T_{ts} \tag{8}\]
\[\dot{\omega}_{g}J_{g}=T_{hs}-T_{g} \tag{9}\]
Fig. 1: Full operating region of variable speed WECS
Fig. 2: Schematic diagram of the wind turbine (A), Two-mass WECS scheme (B).
where \(J_{r}\) and \(J_{g}\) are rotor inertia and the generator inertia. The generator model is given by:
\[\dot{T}_{g}=-\frac{1}{\tau_{T}}T_{g}+\frac{1}{\tau_{T}}T_{g,r} \tag{10}\]
where \(T_{g,r}\) is the desired generator torque. This is a simplified generator first-order model with a time constant (\(\tau_{T}\)). The generator power can be acquired by:
\[P_{e}=T_{g}\ \omega_{g} \tag{11}\]
The pitch actuator is designed to regulate the rotational rotor speed at its rated value. It works by limiting the aerodynamic power captured from the incoming flow in the full load region. The turbine blades will turn in when the power is too low and will turn out when the power is too high. Generally, the power coefficient in (3) is reduced by raising the blades' pitch angle (\(\beta\)). The blade pitching process of the WECS imposes a time delay to reach the desired set-point value. Thus, the pitch actuator model is first-order with a rate limiter constrained to extreme values of \(\pm\ 12\) deg/s. The pitch actuator model shown in Fig. 3 is described as follows:
\[\dot{\beta}=\frac{1}{\tau_{\beta}}(\beta_{r}-\beta) \tag{12}\]
where \(\beta_{r}\) and \(\tau_{\beta}\) are the commanded blade pitch angle and the time constant of the pitch actuator, respectively.
### Nonlinear WECS
The equations of the mechanical dynamics obtained in Eq. (8) and (9) can be reformulated as:
\[\dot{\omega}_{r}=-\frac{K_{s}}{J_{r}}\delta-\frac{D_{s}}{J_{r}}\omega_{r}+\frac{D_{s}}{J_{r}N_{g}}\omega_{g}+\frac{1}{J_{r}}T_{r} \tag{13}\]

\[\dot{\omega}_{g}=\frac{K_{s}}{J_{g}N_{g}}\delta+\frac{D_{s}}{J_{g}N_{g}}\omega_{r}-\frac{D_{s}}{J_{g}N_{g}^{2}}\omega_{g}-\frac{1}{J_{g}}T_{g} \tag{14}\]
The state-space model for the non-linear wind turbine system is:

\[\dot{x}=f(x,u,v) \tag{15}\]

\[\begin{bmatrix}\dot{\delta}\\ \dot{\omega}_{r}\\ \dot{\omega}_{g}\\ \dot{\beta}\\ \dot{T}_{g}\end{bmatrix}=\begin{bmatrix}\omega_{r}-\frac{1}{N_{g}}\omega_{g}\\ -\frac{K_{s}}{J_{r}}\delta-\frac{D_{s}}{J_{r}}\omega_{r}+\frac{D_{s}}{J_{r}N_{g}}\omega_{g}+\frac{1}{J_{r}}T_{r}\\ \frac{K_{s}}{J_{g}N_{g}}\delta+\frac{D_{s}}{J_{g}N_{g}}\omega_{r}-\frac{D_{s}}{J_{g}N_{g}^{2}}\omega_{g}-\frac{1}{J_{g}}T_{g}\\ \frac{1}{\tau_{\beta}}(\beta_{r}-\beta)\\ \frac{1}{\tau_{T}}(T_{g,r}-T_{g})\end{bmatrix} \tag{16}\]

### Linearized WECS

The aerodynamic torque \(T_{r}(\omega_{r},v,\beta)\) is the only nonlinear term in (16). A first-order Taylor expansion around an operating point gives:

\[\Delta T_{r}=K_{\omega}\Delta\omega_{r}+K_{v}\Delta v+K_{\beta}\Delta\beta \tag{17}\]

where \(K_{\omega}=\partial T_{r}/\partial\omega_{r}\), \(K_{v}=\partial T_{r}/\partial v\), and \(K_{\beta}=\partial T_{r}/\partial\beta\) are the linearization coefficients evaluated at the operating point.
The linearized rotor rotational speed can be obtained by substituting Eq. (17) in Eq. (13) to get the following formula:
\[\dot{\omega}_{r}=-\frac{K_{s}}{J_{r}}\delta+\frac{K_{\omega}-D_{s}}{J_{r}}\omega_{r}+\frac{D_{s}}{J_{r}N_{g}}\omega_{g}+\frac{K_{v}}{J_{r}}v+\frac{K_{\beta}}{J_{r}}\beta \tag{18}\]
The linear state-space model can be given by the following representation:
\[\Delta\dot{x}=A\Delta x+B\Delta u \tag{19}\]
\[\Delta y=C\Delta x \tag{20}\]
where the state vector
\[\Delta x=x-\bar{x}=\left\{\Delta\delta\ \Delta\omega_{r}\ \Delta\omega_{g}\ \Delta\beta\ \Delta T_{g}\right\}^{T},\] the control action \[\Delta u=u-\bar{u}=\left\{\Delta\beta_{r}\ \Delta T_{g,r}\right\}^{T},\] the measured output \[\Delta y=y-\bar{y}=\left\{\Delta\omega_{g}\ \Delta T_{g}\right\}^{T}\] and \(v\) is the exogenous input. The model transfer function is \[G(s)=C(sI-A)^{-1}B \tag{22}\]
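Before turning to the controllers, it is useful to see the nonlinear model (8)-(16) assembled end to end. The following sketch is our own illustration: it reuses the `cp` helper sketched above, and all numerical parameters are placeholders rather than the Table 1 values. Integrating the two-mass WECS under fixed pitch and generator-torque commands qualitatively reproduces the open-loop behaviour shown later in Fig. 8.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (NOT Table 1): inertias, shaft stiffness and
# damping, gearbox ratio, actuator and generator time constants.
p = dict(Jr=1.6e5, Jg=60.0, Ks=6.0e6, Ds=1.0e5, Ng=23.75,
         tau_beta=0.1, tau_T=0.02, R=14.5, rho=1.225)

def wecs_rhs(t, x, beta_r, Tg_r, v):
    delta, wr, wg, beta, Tg = x
    lam = wr * p['R'] / v
    Pr = 0.5 * p['rho'] * np.pi * p['R'] ** 2 * v ** 3 * cp(lam, beta)
    Tr = Pr / wr                                    # Eq. (2)
    return [wr - wg / p['Ng'],                      # shaft twist rate
            (-p['Ks'] * delta - p['Ds'] * wr
             + p['Ds'] * wg / p['Ng'] + Tr) / p['Jr'],          # Eq. (13)
            (p['Ks'] * delta + p['Ds'] * wr
             - p['Ds'] * wg / p['Ng']) / (p['Jg'] * p['Ng'])
            - Tg / p['Jg'],                                     # Eq. (14)
            (beta_r - beta) / p['tau_beta'],                    # Eq. (12)
            (Tg_r - Tg) / p['tau_T']]                           # Eq. (10)

x0 = [0.0, 4.3, 4.3 * p['Ng'], 0.0, 1.0e3]
sol = solve_ivp(wecs_rhs, (0.0, 20.0), x0, args=(0.0, 1.0e3, 15.0),
                max_step=0.005)
print("electrical power P_e = %.0f W" % (sol.y[4, -1] * sol.y[2, -1]))
```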
## 3 Controller Design
The robustness of a control system to disturbances has always been the main issue in feedback control systems; indeed, there would be no need for feedback control if there were no disturbances, [20]. Robust control aims to achieve both robust performance and robust stability of the closed-loop system. In this section, an LPV controller based on mixed-sensitivity H\({}_{\infty}\) control is designed and presented for the above-rated region of the wind energy conversion system. The primary objective of the proposed controller is to regulate the rotational rotor speed and the generator torque, accordingly maintaining the generator output power of the WECS at the rated power. Moreover, turbulent wind velocity with large variations is introduced in this research to assess the stability and robustness of the suggested controller.
### LPV Controller Design
The nonlinear WECS can be modeled as a linearized state-space system whose parameters vary with its states, [21], [22], [23]. The varying parameters (\(\theta\)) of the model are presumed to be measurable, bounded in a polytopic system, and slowly varying in real-time:
\[\dot{x}=A(\theta)x+B_{1}(\theta)v+B_{2}(\theta)u \tag{23}\]
\(y=C(\theta)x\)
The LPV model matrices are,
\[A(\theta)=\begin{bmatrix}0&1&-\frac{1}{N_{g}}&0&0\\ -\frac{K_{s}}{J_{r}}&\frac{K_{\omega}(\theta)-D_{s}}{J_{r}}&\frac{D_{s}}{J_{r}N_{g}}&\frac{K_{\beta}(\theta)}{J_{r}}&0\\ \frac{K_{s}}{J_{g}N_{g}}&\frac{D_{s}}{J_{g}N_{g}}&-\frac{D_{s}}{J_{g}N_{g}^{2}}&0&-\frac{1}{J_{g}}\\ 0&0&0&-\frac{1}{\tau_{\beta}}&0\\ 0&0&0&0&-\frac{1}{\tau_{T}}\end{bmatrix},\]

\[B_{1}(\theta)=\begin{bmatrix}0&\frac{K_{v}(\theta)}{J_{r}}&0&0&0\end{bmatrix}^{T},\]

\[B_{2}=\begin{bmatrix}0&0&0&\frac{1}{\tau_{\beta}}&0\\ 0&0&0&0&\frac{1}{\tau_{T}}\end{bmatrix}^{T},\]

\[C=\begin{bmatrix}0&0&1&0&0\\ 0&0&0&0&1\end{bmatrix}.\]
The LPV controller is intended to control the WECS in the full-load regime, which covers wind speeds ranging from 11 to 24 m/s. As a result, the wind speed (\(v\)) is the scheduling parameter. The rotor rotational speed \(\omega_{r}\) of the wind turbine is held constant at a rated value of 4.3 rad/s, where the main goal is to regulate the generator's output power around its rated value. The time-varying parameter varies in a polytopic system whose vertices are \(\psi_{1}(v_{max})\) and \(\psi_{2}(v_{min})\), in which the parameters are assumed to be measurable and slowly varying. The WECS LPV model can be realized with two vertices as follows:
\[\dot{x}=\left[\sum_{j=1}^{2}\alpha_{j}(\theta)\,A\left(\psi_{j}\right)\right]x+ \left[\sum_{j=1}^{2}\alpha_{j}(\theta)\,B\left(\psi_{j}\right)\right]u \tag{25}\]
The controller \(K(\theta)\) state space matrices at the vertices can be given by:
\[\begin{pmatrix}A_{c}(\theta)&B_{c}(\theta)\\ C_{c}(\theta)&D_{c}(\theta)\end{pmatrix}=\sum_{j=1}^{2}\alpha_{j}(\theta)\,K_{j}=\sum_{j=1}^{2}\alpha_{j}(\theta)\begin{pmatrix}A_{c,j}&B_{c,j}\\ C_{c,j}&D_{c,j}\end{pmatrix} \tag{26}\]
\[\text{where}\qquad\alpha_{1}=\frac{v-v_{\min}}{v_{\max}-v_{\min}},\ \ \alpha_{2}=1-\alpha_{1}\qquad\text{and}\qquad\sum_{j=1}^{2}\alpha_{j}(\theta)=1.\]
According to Eq. (26), the LPV controller is obtained as a convex combination of the vertex controllers designed at \(\psi_{1}\) and \(\psi_{2}\).
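A minimal sketch of this scheduling logic follows (our own illustration; the vertex controllers themselves would come from the synthesis carried out at \(\psi_{1}\) and \(\psi_{2}\)):

```python
import numpy as np

v_min, v_max = 11.0, 24.0   # full-load wind speed range

def scheduled_controller(v, K1, K2):
    """Blend vertex controllers per Eq. (26).

    K1, K2: tuples (Ac, Bc, Cc, Dc) of controller matrices designed
    at the vertices psi_1 (v_max) and psi_2 (v_min), respectively.
    """
    a1 = np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0)
    a2 = 1.0 - a1            # alpha_1 + alpha_2 = 1 (convex weights)
    return tuple(a1 * M1 + a2 * M2 for M1, M2 in zip(K1, K2))
```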
### Mixed-weight \(\mathbf{H}_{\infty}\) Control Design
The mixed-weight \(\mathbf{H}_{\infty}\) control technique shapes the closed-loop frequency responses of the system for noise attenuation and disturbance rejection. The controller design involves incorporating additional weighting functions in the original system, carefully chosen to reflect the system's performance and robustness specifications, [24]. As mentioned in Shaqarin _et al._, [25, 26], the generalized form of the mixed-weight \(\mathbf{H}_{\infty}\) problem can be formulated as explained in Fig. 4, where \(w\), \(y\), \(z\), \(u\) and \(e\) are the exogenous input, the measured plant output, the performance output, the control input, and the tracking error, respectively. The general expanded plant \(P(s)\) can be provided by:
\[z=\begin{bmatrix}z_{1}\\ z_{2}\\ z_{3}\end{bmatrix}=\begin{bmatrix}W_{1}e\\ W_{2}u\\ W_{3}y\end{bmatrix} \tag{27}\]
\[\begin{bmatrix}z\\ e\end{bmatrix}=P\begin{bmatrix}w\\ u\end{bmatrix}\ \rightarrow\ \begin{bmatrix}z\\ e\end{bmatrix}=\begin{bmatrix}P_{11}&P_{12}\\ P_{21}&P_{22}\end{bmatrix}\begin{bmatrix}w\\ u\end{bmatrix} \tag{28}\]
\[P=\begin{bmatrix}P_{11}&P_{12}\\ P_{21}&P_{22}\end{bmatrix}=\begin{bmatrix}W_{1}&-W_{1}G\\ 0&W_{2}\\ 0&W_{3}G\\ I&-G\end{bmatrix} \tag{29}\]

\[\begin{bmatrix}z_{1}\\ z_{2}\\ z_{3}\\ e\end{bmatrix}=\begin{bmatrix}W_{1}&-W_{1}G\\ 0&W_{2}\\ 0&W_{3}G\\ I&-G\end{bmatrix}\begin{bmatrix}w\\ u\end{bmatrix} \tag{30}\]
where \(W_{1}\), \(W_{2}\) and \(W_{3}\) are weighting functions. The closed-loop transfer function from \(w\) to \(z\) can be formulated using Eq. (28) and (29) as follows:
\[T_{\text{zw}}=P_{11}\left(s\right)+P_{12}(s)\,K(s)[I-P_{22}(s)\,K(s)]^{-1}\,P_{21}\left(s\right) \tag{31}\]
\[T_{\text{zw}}=\begin{bmatrix}W_{1}S\\ W_{2}KS\\ W_{3}T\end{bmatrix} \tag{32}\]
where \(S\) is the sensitivity function, \(T\) is the complementary sensitivity function. The primary goal is to define a controller (\(K\)) that reduces the infinity norm of \(T_{\text{zw}}\) in the polytope (\(\psi_{1},\psi_{2}\)), such that \(\|T_{\text{zw}}\|_{\infty}<\gamma\), where \(\gamma\) is the upper bound of \(\|T_{\text{zw}}\|_{\infty}\). The selection of frequency-dependent weights (\(W_{1}\), \(W_{2}\), and \(W_{3}\)) substantially enhances the control design. Generally, at low frequencies, the sensitivity function \(S\) is made small. This results in excellent disturbance rejection and a low tracking error. The complementary sensitivity function \(T\) is also reduced in the high-frequency domain. As a result, there is high noise rejection and a broad stability margin. Further details on the weight selection can be found in Shaqarin _et al._[25, 26].
Figure 4: Mixed-weight closed-loop system
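The shaping in (32) is easy to inspect numerically. The sketch below is our own illustration: the toy SISO plant, PI controller, and weights are arbitrary and unrelated to the WECS model. It evaluates \(|W_{1}S|\), \(|W_{2}KS|\) and \(|W_{3}T|\) on a frequency grid; their peaks approximate the weighted \(\mathbf{H}_{\infty}\) norms that the synthesis bounds by \(\gamma\).

```python
import numpy as np

w = np.logspace(-3, 3, 2000)
s = 1j * w

G = 1.0 / (s ** 2 + 2 * s + 1)           # toy plant
K = (20 * s + 5) / s                     # a PI controller
L = G * K
S = 1.0 / (1.0 + L)                      # sensitivity
T = L / (1.0 + L)                        # complementary sensitivity

W1 = (0.5 * s + 1) / (10 * s + 0.01)     # presses S down at low frequency
W2 = 0.01 * (s + 1) / (s + 100)          # mildly penalizes fast actuation
W3 = (s + 1) / (s + 100)                 # presses T down at high frequency

for name, tf in [("W1*S", W1 * S), ("W2*K*S", W2 * K * S), ("W3*T", W3 * T)]:
    print(name, "peak magnitude:", np.abs(tf).max())
```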
### PI-Fuzzy Logic Control (PIFLC) Design
The Fuzzy Logic Control (FLC) system has become one of the most common intelligent techniques utilized in many applications in current control systems. FLC can cope with various types of systems, ranging from linear processes to highly complex systems, such as nonlinear processes or time-varying systems. FLC versatility is attributed to its parameter tunability, such as the membership function type and number, rule base, scaling factor, inference techniques, fuzzification, and defuzzification. The process of fuzzy logic is shown in Fig. 5(A), which consists of a fuzzifier, inference engine, rule base, and defuzzifier. The process can be explained along these lines: the crisp inputs from the input data are initially fuzzified into fuzzy inputs, which then trigger the inference engine and the rule base to generate the fuzzy output. The inference engine provides an input/output map just after blending the activated rules. Then, the inference engine outputs are sent to the defuzzifier, which produces the crisp outputs.
In the developed controller shown in Fig. 5(B), two input fuzzy variables, the error (\(e\)) and the change in error (\(\Delta e\)), are used together with the output \(\Delta u\). With a sampling period \(T_{s}\), the signal \(e\) is sampled and \(\Delta e\) is calculated as:

\[\Delta e(k)=e(k)-e(k-1) \tag{33}\]

where \(k\) denotes the sample number, and \(z^{-1}\) denotes the unit time delay. As depicted in Fig. 5(B), the PIFLC controller output \(u(k)\) can be found as:
\[u(k)=\Delta u(k)+u(k-1) \tag{34}\]
It is worth mentioning that the continuous control output \(u(t)\) is obtained by assuming a zero-order hold between samples.
The membership functions of the inputs and the output are depicted in Fig. 6(A), where \(\mu\) is the membership value. The performance of the PIFLC can be tuned via the error gain (\(\text{K}_{\text{e}}\)), the change-in-error gain (\(\text{K}_{\text{w}}\)), and the change-in-control-output gain (\(\text{K}_{\text{u}}\)), as shown in Fig. 5(B).
The FLC used in this work is of the Mamdani type, where the structure of a fuzzy rule is formulated as:
\[\text{IF}\;e(k)\;\text{is}\;\text{A}\;\;\text{AND}\;\Delta e(k)\;\text{is}\; \text{B}\;\;\text{THEN}\;\Delta u(k)\;\text{is}\;\text{C} \tag{35}\]
Fig. 5: Internal structure of the fuzzy controller(A), PIFLC control structure (B).
Fig. 6: Membership functions of the inputs e and \(\Delta\)e and the output \(\Delta\)u (A), Input-output surface relationship (B).
where A, B and C are fuzzy sets. The system has 49 fuzzy rules, and the surface input-output relationship is shown in Fig. 6 (B).
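As an illustration of this inference pipeline, the Python sketch below implements a minimal incremental PI-fuzzy controller with seven symmetric triangular membership functions and the standard anti-diagonal 7x7 rule table. The membership shapes and rule table are common textbook choices, assumed here rather than reproduced from Fig. 6, and the gains default to the values in Table 3; the weighted-average defuzzification is a common simplification of the Mamdani centroid method.

```python
import numpy as np

# Seven symmetric triangular sets (NB, NM, NS, ZE, PS, PM, PB) on [-1, 1].
CENTERS = np.linspace(-1.0, 1.0, 7)
STEP = CENTERS[1] - CENTERS[0]

def tri_mu(x):
    """Membership grades of scalar x in all 7 triangular sets."""
    return np.clip(1.0 - np.abs(x - CENTERS) / STEP, 0.0, 1.0)

# Standard anti-diagonal 7x7 rule table: RULES[i, j] is the index of the
# output set fired when e belongs to set i and delta-e belongs to set j.
RULES = np.clip(np.arange(7)[:, None] + np.arange(7)[None, :] - 3, 0, 6)

class PIFLC:
    """Incremental (PI-like) fuzzy controller: u(k) = u(k-1) + Ku * dU(k)."""
    def __init__(self, Ke=2.0, Kde=2.0, Ku=-2.0):  # gains as in Table 3
        self.Ke, self.Kde, self.Ku = Ke, Kde, Ku
        self.e_prev = 0.0
        self.u = 0.0

    def step(self, e):
        de = e - self.e_prev                       # Eq. (33)
        self.e_prev = e
        mu_e = tri_mu(np.clip(self.Ke * e, -1.0, 1.0))
        mu_de = tri_mu(np.clip(self.Kde * de, -1.0, 1.0))
        w = np.minimum.outer(mu_e, mu_de)          # min as the AND operator
        # Weighted average over singleton consequents -- a common
        # simplification of Mamdani centroid defuzzification.
        dU = float(np.sum(w * CENTERS[RULES]) / (np.sum(w) + 1e-12))
        self.u += self.Ku * dU                     # Eq. (34), incremental form
        return self.u

ctrl = PIFLC()
print([round(ctrl.step(e), 3) for e in (0.4, 0.3, 0.1, 0.0)])
```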
## 4 Results and Discussions
This section aims to assess the performance and stability of the suggested LPV-based H\({}_{\infty}\) control system for the WECS. This is accomplished by simulating the closed-loop response of the suggested controller in MATLAB/SIMULINK. Moreover, the suggested controller is compared with the responses of the PIFLC and of the open-loop WECS. In this work, the nominal parameters of the Vestas V29-225 kW WECS shown in Table 1 were used in the simulation. The linearization coefficient values (\(K_{\omega}\), \(K_{\varphi}\) and \(K_{\beta}\)) for both vertices are presented in Table 2. The weights are selected as follows: \(W_{1}\) was chosen to yield better disturbance rejection through the shaping of the sensitivity function, which leads to a small tracking error. The weighting function \(W_{2}\) was designed to shape the control sensitivity function, aiming at limiting the actuator effort. More precisely, \(W_{2}^{-1}\) is responsible for limiting the pitch actuator effort to cope with the limited actuator bandwidth.
### Nonlinear Versus Linearized WECS Simulation
The dynamic model of the WECS was obtained for both the nonlinear and the linearized cases, as shown in Sections 2.2 and 2.3. The linearization is carried out around operating points, which vary in a polytopic system whose vertices are
\[\psi_{1}(v_{max}=24\ m/s,\ \beta_{max}=24^{o})\]
and
\[\psi_{2}\ (v_{min}=11\ m/s,\ \beta_{min}=0^{o})\]
The rotational speed of the rotor (\(\omega_{r}\)) is assumed constant at the rated value.
Figure 7 shows the step response of the non-linear and the linearized systems under four wind speed values. The figure shows that the steady-state responses of the non-linear and linearized systems are identical at the linearization operating points with slight differences in the transients. However, the discrepancy between the two systems increases as the operating points change and move away from the linearization points.
### Open-loop Response of the WECS Subjected to Wind Speed Changes
The open-loop response of WECS is simulated to evaluate the wind turbine performance when exposed to various wind speeds at a fixed pitch angle. The variable-speed wind turbine in Fig. 8 started with a smooth wind speed ranging from 11 m/s to 24 m/s at a minimum pitch angle of zero degrees. The figure shows that the rotor rotational speed and the speed of the generator increase as the free-stream velocity increases.
As a result, the generator's output power increases up to four times its nominal value. This motivates the need for closed-loop control of the WECS for power regulation, since the generator's speed also rises to four times its rated value, which jeopardizes the wind turbine's safety and complicates the connection with the grid.
### Controlled WECS Subjected to a Step-Ramp Change in Wind Speed
To evaluate the closed-loop system of the suggested LPV controller with the variable-speed WECS, step-ramp changes in free-stream velocity with a white-noise variance of 0.0102 are introduced. These steps force the controller to modify the blades' pitch angle and, consequently, the generator's rotational speed. Figure 9 illustrates the simulation of the WECS, which started with a noisy free-stream velocity of 24 m/s from t = 0 to 25 sec. Then, the free-stream velocity stepped down with a slope of -0.6 m/s\({}^{2}\) until it reached a mean free-stream velocity of 17.5 m/s after 35 sec. This free-stream velocity was held constant from t = 35 sec to 60 sec. Another step-down in free-stream velocity occurred with the same slope until it reached the minimum free-stream velocity of 11 m/s after 70 sec, after which it remained constant. The figure shows that the response of the closed-loop system does not
Figure 8: Open-loop response of WECS subjected to a smoothly varying wind speed.
Figure 7: Comparison of the nonlinear and linearized WECS open-loop step responses.
introduce high oscillations over the entire operating region.
The generator speed varies only slightly around its nominal value of 105.78 rad/s. This indicates the effectiveness of the LPV-based H\({}_{\infty}\) controller in maintaining the generator's output power very close to the rated value of 225 kW in the presence of a noisy, varying free-stream velocity, as seen in Fig. 9. Regarding the mechanical safety aspects, the figure shows that only slight oscillations (\(\pm\) 0.0005 \(rad\)) occurred in the rotor shaft, as seen in the response of the shaft twist angle over the whole operating range.
The suggested controller adequately maintained the generator speed and output power at their rated levels while keeping the shaft's angle of twist nearly constant. It is worth noting that the smaller the peak twist angle, the lower the mechanical stress, which keeps the mechanical loads within the acceptable range.
### Closed-loop Response of the WECS with LPV and PI-Fuzzy Controllers in the Presence of Turbulent Wind Speed
In this work, the Von Karman turbulence spectrum model is used with an average velocity of 17.5 m/s, with a turbulence intensity of the incoming wind flow greater than \(\pm\) 10%, a turbulence length scale of 170 m, and a 2 m/s standard deviation. The wind speed used in the turbulence model shown in Fig. 10 varies in a range between 11 and 24 m/s, which lies in the full load region.
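A turbulent wind record with these statistics can be synthesized, for example, by spectral sampling of the standard von Karman longitudinal spectrum. The Python sketch below is a minimal illustration in which the sampling rate and duration are assumptions, and the exact filter realization used in the paper's simulation may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

U_mean, sigma_u, L_u = 17.5, 2.0, 170.0   # mean speed, std dev, length scale (paper values)
fs, T = 20.0, 120.0                       # sampling rate [Hz] and duration [s] (assumed)
n = int(fs * T)
f = np.fft.rfftfreq(n, d=1.0 / fs)

# Standard von Karman longitudinal velocity spectrum:
#   S_u(f) = 4 sigma^2 (L/U) / (1 + 70.8 (f L / U)^2)^(5/6)
Su = 4.0 * sigma_u**2 * (L_u / U_mean) \
     / (1.0 + 70.8 * (f * L_u / U_mean) ** 2) ** (5.0 / 6.0)

df = f[1] - f[0]
amp = np.sqrt(2.0 * Su * df)              # amplitude of each spectral line
phase = rng.uniform(0.0, 2.0 * np.pi, f.size)
spectrum = amp * np.exp(1j * phase) * n / 2.0
spectrum[0] = 0.0                          # zero-mean fluctuation
turbulence = np.fft.irfft(spectrum, n)

wind = np.clip(U_mean + turbulence, 11.0, 24.0)   # keep within the full-load region
print(f"mean={wind.mean():.2f} m/s, fluctuation std={turbulence.std():.2f} m/s")
```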
The proposed control system handled these wind speed variations with high efficiency, as depicted in Fig. 10. The generator speed and the electromagnetic torque were controlled in the full-load region of the WECS around their nominal values, which stabilized and maintained the generator output power around its rated value without violating the pitch angle range (0 \(<\beta<\) 24\({}^{o}\)) or the pitch actuator constraints (\(-12<\dot{\beta}<12\)). A PIFLC was designed and implemented in this paper for comparison with the proposed LPV control system. The gains were selected as presented in Table 3.
| Gain | Value |
| --- | --- |
| \(K_{e}\) | 2 |
| \(K_{de}\) | 2 |
| \(K_{u}\) | -2 |

Table 3: PIFLC gains of the pitch controller
Figure 10: Closed-loop simulation of the WECS with both LPV and PIFLC Controllers Subjected to Turbulent Wind Speed.
Figure 9: Closed-loop simulation of the WECS with LPV controller subjected to step-ramp change in wind speed with white noise.
The closed-loop simulations for both controllers are presented in Fig. 10. The figure shows the robustness of the proposed controller in enhancing the performance and stability of the WECS against free-stream velocity variations. The proposed controller's response showed far fewer fluctuations in the generator outputs. This proves that the suggested LPV controller was superior at maintaining the generator's output power around its nominal value without causing large fluctuations in the output power of the WECS. In contrast, the fuzzy controller's response exhibited significant power spikes and fluctuations that exceeded the permissible design limits. Notably, the suggested LPV controller, although scheduled on only one varying parameter (v), was capable of satisfying the required control objectives.
The severe wind turbulence conditions, with large mean wind speed variations, applied to the wind turbine clearly demonstrate the main benefits of the LPV controller over the PIFLC controller, as shown in Fig. 10. The closed-loop response of the LPV controller shows very small fluctuations in the generator speed (\(\pm\)4%), whereas the PIFLC controller produces significantly larger peaks (\(\pm\)24%). This translates to \(\pm\)4% and \(\pm\)24% peak fluctuations around the mean generated power for the LPV and PIFLC controller cases, respectively. The fluctuations in the twist angle of the wind turbine shaft are comparable for both cases, although the variance of the twist-angle fluctuation is two times smaller for the LPV controller. This is highly beneficial in reducing destructive mechanical loads on the wind turbine shaft. It is worth noting that the aforementioned analysis neglects the startup conditions.
## 5 Conclusion
In this paper, the suggested LPV-based H\({}_{\infty}\) controller was employed to control a WECS by manipulating the blades' pitch angle in the full-load regime. The proposed controller was able to maintain and regulate the turbine shaft angular velocity, the electromagnetic torque, and thus the generator output power of the WECS at their nominal values. The proposed control design demonstrated proper performance and robustness when applied to a 225-kW WECS under turbulent free-stream velocity conditions. In comparison with the PIFLC, the suggested LPV controller was more effective in coping with turbulent wind speed with a turbulence intensity of \(\sim\pm\)10%, which improved the wind turbine performance in terms of minimizing fluctuations and smoothing the generator power. Although the proposed LPV controller was scheduled on only a single varying parameter (v), it met the desired control objectives of regulating and stabilizing the output power around its rated value while complying with the pitch angle range (\(0<\beta<24^{o}\)) and the pitch actuator constraints (\(-12<\dot{\beta}<12\)).
|
2307.06426 | Scientific mobility, prestige and skill alignment in academic
institutions | Scientific institutions play a crucial role in driving intellectual, social,
and technological progress. Their capacity to innovate depends mainly on their
ability to attract, retain, and nurture scientific talent and ultimately make
it available to other organizations, industries, or the economy. As researchers
change institutions during their careers, their skills are also transferred.
The extent and mechanisms by which academic institutions manage their internal
portfolio of scientific skills by attracting and sending researchers are far
from being understood. We examine 25 million publication histories of 9.2
million scientists extracted from a large-scale bibliographic database covering
thousands of research institutions worldwide to understand how the skills of
mobile scientists align with those present in-house. We find a clear
association between top-ranked institutions and greater skill alignment, i.e.,
the degree to which skills of incoming academics match those of their
colleagues at the institution. We uncover similar high-alignment for scientists
leaving top-ranked institutions. This type of academic alignment is more
pronounced in engineering and life, health, earth, and physical sciences than
in mathematics, computer science, social sciences, and the humanities. We show
that over the past two decades, institutions generally have become more closely
aligned in their overall skill profiles. We interpret these results in terms of
levels of proactive management of the composition of the scientific workforce,
diversity, and internal collaboration strategies at the institutional level. | Marcia Ferreira, Rodrigo Costas, Vito Servedio, Stefan Thurner | 2023-07-12T19:32:45Z | http://arxiv.org/abs/2307.06426v1 | # Scientific mobility, prestige and skill alignment in academic institutions
###### Abstract
Scientific institutions play a crucial role in driving intellectual, social, and technological progress. Their capacity to innovate depends mainly on their ability to attract, retain, and nurture scientific talent and ultimately make it available to other organizations, industries, or the economy. As researchers change institutions during their careers, their skills are also transferred. The extent and mechanisms by which academic institutions manage their internal portfolio of scientific skills by attracting and sending researchers are far from being understood. We examine 25 million publication histories of 9.2 million scientists extracted from a large-scale bibliographic database covering thousands of research institutions worldwide to understand how the skills of mobile scientists align with those present in-house. We find a clear association between top-ranked institutions and greater skill alignment, i.e., the degree to which skills of incoming academics match those of their colleagues at the institution. We uncover similar high-alignment for scientists leaving top-ranked institutions. This type of academic alignment is more pronounced in engineering and life, health, earth, and physical sciences than in mathematics, computer science, social sciences, and the humanities. We show that over the past two decades, institutions generally have become more closely aligned in their overall skill profiles. We interpret these results in terms of levels of proactive management of the composition of the scientific workforce, diversity, and internal collaboration strategies at the institutional level.
## Introduction
Scientific discovery requires the capacity to seek, nurture, and combine internal and external sources of knowledge. Universities, in particular, serve as vital "containers" for the advancement and integration of this knowledge. However, because of the "tacit" nature of knowledge[1], knowledge synergies do not emerge automatically. They rely on transfer mechanisms such as collaboration, networks, and labor mobility[1, 2, 3, 4]. Academic mobility is a particularly important mechanism for knowledge to flow effectively across people, organizations, locations, and time[5, 2, 6].
Scientists are moving between different institutions with increasing frequency[7]. According to some estimates, in 1990, about 2% of scientists worked outside their country of origin[5]. By 2000, this proportion increased to 14%[5], and by 2015, it was estimated that about one-third of scientists were working outside their country of origin[8]. A similar trend has been observed in Europe, where it has been reported that 7% of hired researchers were from abroad[9]. However, the presence of mobile researchers varies considerably
by region and institution [7, 9, 10]. At Cambridge University, for example, it has been reported that more than 40% of the faculty were foreign-born [9].
The attraction of mobile individuals to institutions has been studied for many decades [5, 6, 11], and several analyses have shown that external talent is essential for innovation [2, 5, 6, 7, 12]. Attracting individuals trained in various research contexts is critical for frontier research [5, 7, 13, 14, 15, 16], as it enables institutions to explore new areas of knowledge. However, it is also a challenge faced by most research institutions worldwide [13]. The exchange of talent is increasingly concentrated in a handful of universities [12]. In the United States, for example, the most prestigious institutions attracted and trained most of the available faculty before sending them to other mid- and upper-level research institutions [17, 18]. Education systems also differ dramatically, with more prominent, well-funded universities offering more facilities, funding opportunities, and research diversity than smaller, specialized universities [13, 19, 20, 21]. This unequal access to knowledge has significant implications for knowledge sharing across academic institutions and, more importantly, within
Figure 1: A graphical representation of the skill and workforce structures of a scientific institution. We study three types of scientists (A, B, and C): _institutional newcomers_\(\nu\) (A), _institutional natives_\(\sigma\) (B), and _institutional outgoers_\(\omega\) (C). Every individual has a skill vector; every component in it, \(S_{k}\), represents a particular skill \(k\). We aggregate the individual skill vectors for each population (A), (B) and (C) of researchers and compute the cosine similarities between them, as shown in the grey boxes in panels G and H. We denote these similarities by \(\theta\) (G) and \(\eta\) (H). For illustration purposes, we took angles larger than \(90\deg\). For definitions of the measures of skill alignment, see Materials and Methods.
the organization [22, 23].
It has been argued that most academic institutions pursue the overarching goal of profile continuity and that the long-term sustainability of institutions can only be achieved through various forms of alignment [24, 25]. Knowledge institutions typically invest and strengthen knowledge in their established research areas over time [26], as leveraging on existing competencies can create alignments that improve performance, learning, and knowledge transfer [2, 27]. When scientists within the institution have a common knowledge base, they can better learn from each other [2], leading to improved productivity and reduced barriers to collaboration. Yet, top institutions with large endowments are those that can experiment more in new emerging scientific fields.
An important driver of attracting talent that matches the internal profiles of institutions is the current reward system of science [28]. This system often and increasingly discourages the pursuit of novel research areas because the returns from new ideas and topics are seen as uncertain, distant, and often risky [24]. In contrast, the benefits of refining and expanding existing expertise and technologies are positive, immediate, and predictable [25]. Other studies have suggested that too much similarity in knowledge can also limit innovation [29], which at the institutional level means that as academic organizations exceed optimal levels of alignment, they potentially 'lock' into dominant thematic profiles.
The mobility of academic talent, collaboration, and the alignment of skill profiles between institutions and incoming and outgoing researchers can play a critical role in shaping research dynamics within and across institutions. However, on a quantitative basis, there is limited understanding of the processes behind the alignment of knowledge and skills within institutions and the alignment of the skills of institutions and new hires. To gain a better understanding of these alignment strategies, we study academic institutions from the perspective of the composition of their workforce and the internal skill profiles they generate as the composition of their workforce changes over time.
We use the _Dimensions_ database (see Materials and Methods) to compare the internal skill profiles of millions of mobile individuals with the skill structures of the institutions they move to or leave. We quantify the academic skills of individuals by using their publications mapped into a high-resolution classification scheme of scientific topics across all disciplines [30]. This classification is the basis for defining the skill vector, \(S^{j}\), for every individual, \(j\). Every component of that binary vector represents a skill of the researcher: if the k-th component is \(S^{j}_{k}=1\), researcher \(j\) has competency \(k\); if \(S^{j}_{k}=0\), \(j\) has no skill in \(k\). If an author publishes in many different research areas, they have many 'skills'; if they publish on only one specific topic, the author has only a single non-zero component in the skill vector; see Materials and Methods. The skills of an institution are defined as the superposition (sum of all vectors) of all the members of the existing workforce. These aggregated vectors are indicated as \(S_{\Sigma}\), with a subscript \(\Sigma\). The _Dimensions_ database allows us not only to quantify skills but also to observe the flows of researchers around the globe.
However, measuring scientific skill profiles by bibliographic means is not an easy task. This partly depends on the level of resolution we use to determine researchers' skills. Data limitations have also been an obstacle to the study of scientists' knowledge pathways [7, 10, 31], leading to a prevalence of findings from self-reported information, small-scale studies, or studies limited to researchers from specific fields or countries [32, 33, 34, 5, 6]. The situation is particularly problematic at the institutional level, as it relies on clear institutional identification and robust author-name disambiguation algorithms [10, 36]. This situation has changed recently as more databases improve author and affiliation metadata [7, 10, 31]. In what follows, we focus on harmonized data on research-intensive institutions [37, 38] for which extensive metadata on author-affiliation transitions exists [10].
In Fig. 1, we present a schematic view of how we approach the problem of skill assignment. We define three types of researchers: _Newcomers_ (A), _Natives_ (B), and _Outgoers_ (C). The natives represent
the non-mobile workforce at a given institution, \(i\). In the figure, we represent them as two scientists, \(\sigma_{1}\) and \(\sigma_{2}\) (green), both of whom have different skills that are given by a skill vector, \(S\), with \(n=4,163\) components that mark the different individual categories in the science classification scheme. Native scientist \(\sigma_{1}\) has three skills \(S_{1}\), \(S_{2}\), and \(S_{4}\), hence \(S_{1}^{\sigma_{1}}=S_{2}^{\sigma_{1}}=S_{4}^{\sigma_{1}}=1\), whereas \(\sigma_{2}\) has only two, \(S_{2}\) and \(S_{3}\), \(S_{2}^{\sigma_{2}}=S_{3}^{\sigma_{2}}=1\), all other components being zero. Their combined skills are given by the sum of their skill vectors, indicated by the small arrows. The skills present at the institution are seen in panel E. In this example, there are \(r=42\) native researchers present; their skills are collected in the table. The sum of all their skills is called \(S_{\Sigma}^{\sigma}\) and represents the current skill vector of the institution, \(i\). We now assume that in the next time period, a set of researchers will join the institution (newcomers, \(\nu\)) (A), and some will leave (the outgoers, \(\omega\)) (C). The collective skill vectors of these groups are called \(S_{\Sigma}^{\nu}\) and \(S_{\Sigma}^{\omega}\), respectively. In this example, we have 25 newcomers and 20 outgoers, with their skills captured in the tables in D and F. Data shows that the native population and newcomers make up the largest fraction in most institutions, while the outgoing population makes up the smallest fraction.
With this notion, we can now quantify the _newcomer skill alignment_ between the natives (plus outgoers) and the incoming workforce as the cosine of the angle, \(\theta\), between the incoming skill vector, \(S_{\Sigma}^{\nu}\) and the sum of natives and outgoers, \(S_{\Sigma}^{\sigma}+S_{\Sigma}^{\omega}\); see panel (G). With this measure, we can analyze whether internally trained authors (natives and outgoers) and external authors (newcomers) generate aligned or divergent skill profiles at the institutional level. Similarly, we define the _outgoer skill alignment_ by calculating the cosine of the angle, \(\eta\), between the skill vector of outgoers, \(S_{\Sigma}^{\omega}\) and the combined skill vectors of natives and newcomers, \(S_{\Sigma}^{\sigma}+S_{\Sigma}^{\nu}\); see panel (H).
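Computationally, both alignment measures reduce to cosine similarities between aggregated binary skill vectors. The following Python sketch illustrates this on random toy data; the sparsity level is an illustrative assumption, the population sizes mirror the example above, and real skill vectors come from mapping each author's publications onto the 4,163 micro-clusters:

```python
import numpy as np

def aggregate(skill_matrix):
    """Sum binary per-researcher skill vectors into an institutional profile."""
    return np.sum(skill_matrix, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

n_skills = 4163                              # micro-clusters in the classification
rng = np.random.default_rng(1)
# Toy binary skill matrices (rows = researchers); the 0.002 density is an
# arbitrary choice for illustration only.
S_new = rng.random((25, n_skills)) < 0.002   # newcomers
S_nat = rng.random((42, n_skills)) < 0.002   # natives
S_out = rng.random((20, n_skills)) < 0.002   # outgoers

S_new, S_nat, S_out = (aggregate(m.astype(int)) for m in (S_new, S_nat, S_out))
cos_theta = cosine(S_new, S_nat + S_out)     # newcomer skill alignment
cos_eta   = cosine(S_out, S_nat + S_new)     # outgoer skill alignment
print(f"cos(theta)={cos_theta:.3f}  cos(eta)={cos_eta:.3f}")
```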
## Results
### Skill Alignment in Research Institutions
To what extent does the skills profile of externally trained incoming scientists match that of institutional natives? Do their skills align, or are they different? Figures 2A and B show the cosine similarity between the skills profile of the institutions and their newcomer and outgoing workforces. We find a substantial similarity with a median of 0.84 and 0.86 for the newcomers and outgoers, respectively. The fact that the skill alignment is slightly lower for the newcomers than for the outgoers suggests that the outgoers have become more similar in their skills while they stayed at the institutions. Panels A (purple) and B (light blue) also show the similarity between the existing workforce skills and the skills profile generated by those newcomers and outgoers who did not interact with the rest of the institution's workforce during their stay. For these cases, we find much less similarity (median 0.54 and 0.61). This indicates that internal collaboration is a potential driver of intra-institutional skill alignment. The regression analysis shown in SI text 2 confirms that collaboration within institutions is an important predictor of intra-institutional skill alignment.
To illustrate that the observed alignments are a significant and genuine effect that does not simply emerge as a statistical consequence of the definition of the cosine-similarity measure, we devise a simple "null model". We preserve the skill profiles of the in- and outgoers but remove the correlations with the profiles of the institutions. We do this by randomly assigning newcomer and outgoer skill profiles to institutions. The distributions are shown as transparent lines in Figures 2A and B. The skill alignments practically vanish as a result.
There is a clear relation between institutional prestige, as captured by the PP\({}_{\text{top1}\%}\) indicator (for definition, see Materials and Methods), and skill alignment. Figures 2C and E show that institutions that have substantially more than one percent of their publications in the top 1% most cited papers worldwide tend to have similar skill profiles across the different types of workforce. Newcomers who move to an
institution with top-cited publications tend to have more similar skill profiles than newcomers who move to a less prestigious institution, see C. The situation is similar for departing scientists, see E. In other words, talent flowing to and from organizations with high institutional prestige is associated with greater skill alignment. A greater skill dispersion is also observed at universities with lower prestige. If we compare C
Figure 2: Alignments of the skill profile of the native faculty and newcomers to- and outgoers from all academic institutions. Panels A and B show the distributions for newcomers, \(\cos\theta\), and outgoers, \(\cos\eta\), respectively. The purple (panel A) and blue (panel B) distributions show the alignment of skills between those newcomers and outgoers that were not collaborating with their peers at the institution before they joined or during their stay, respectively. The transparent lines represent a reference distribution of skill alignments obtained by shuffling the target (source) affiliations of newcomers (outgoers) (\(n=10,000\) random assignments). Clearly, skill similarity is absent in the shuffled data. Panels C, D, E, and F show the influence of the institution’s reputation. The alignment is shown for the quartiles within the top 1% most impactful institutions, \(\text{PP}_{\text{top1}\%}\). The more impact, the more alignment, regardless of existing collaborations (compare C with D and E with F). Panel G captures the influence of institution type. It gives the median alignment scores by institution type. The shaded bars represent the percentage and number of institutions by organization type in our sample.
with D and E with F, we see that the prestige effect is independent of whether there are collaboration ties (i.e., co-authorship) between newcomers and outgoers with local researchers.
Figure 2G shows the median alignment between newcomers and outgoers (colors correspond to those in panels A and B) and institutional natives by organization type. More generalist educational institutions (e.g., universities) tend to have lower median levels of similarity than more narrowly
Figure 3: Scatter plot of the alignments of newcomers, \(\cos\theta\), and outgoers, \(\cos\eta\) (A). The line represents the diagonal (same in- and outgoer alignment). The grey quadrant lines are positioned at the median values of \(\cos\theta\) (\(\tilde{x}=0.84\)) and \(\cos\eta\) (\(\tilde{x}=0.86\)). Every circle represents an institution. Color indicates the scientific impact of the institutions (\(PP_{\rm top1\%}\) indicator). Grey institutions are below the global average (\(PP_{\rm top1\%}\leq 0.01\)) of institutions with the same skills profile and years of production, as explained in Materials and Methods. Orange, blue, and yellow circles represent institutions with medium (\(0.01\leq PP_{\rm top1\%}\leq 0.05\)), high (\(0.06\leq PP_{\rm top1\%}\leq 0.09\)), and very high impact (\(PP_{\rm top1\%}\geq 0.10\)), respectively. Top institutions tend to have generally high alignments and a slightly higher out-alignment. The scatter plots in panels B and C show the relation between the number of skills present at an institution (as a % of all skills in the sample) and the skill alignments \(\cos\theta\) (B) and \(\cos\eta\) (C) for Education, Healthcare, and Facility research institutions, respectively. Healthcare and Facilities tend to have high in- and out-alignments; see also SI Figure 6.
focused institutions such as Facilities, Archives, Companies, Non-profits, or Governmental institutions. Interestingly, Healthcare research institutions also show comparatively low median scores of alignment, which may indicate that while they are considered specialized, their skill sets are broad enough to encompass a greater diversity of skill profiles between in- and outgoing researchers and natives.
Figure 3A shows the alignment of newcomers (x-axis) versus the alignment of outgoers (y-axis). The color indicates the degree of the citation's impact of institutions. The solid line marks the regression result. We segment the plot into four quadrants (gray lines at the median alignment values) associated with strategic patterns of talent attraction and training. Institutions (41%) in the first quadrant (top right) attract and send the same skills at a rate above the median. Most notably, the U.S. Ivy Leagues, top European universities, and prominent molecular biology and cancer research institutes are in the first quadrant. There we also find several university hospitals and medical centers. This indicates that the institutions' strategy in the first quadrant is thematic continuity [24, 25] and homogeneity in their recruitment and training practices.
In the third quadrant (bottom left), we observe the opposite trend for about 45% of institutions. Here, the profiles of newcomer hiring and outgoing researchers within an institution diverge and fall below the overall median scores. An example of this pattern is the Institute for Advanced Study (IAS) at Princeton (blue circle). This institute has a remarkably low alignment between outgoers and the rest of the institution, as well as between newcomers and the rest of the institution. Historically, the IAS has been a place where scientists retreat for sabbaticals and exchange ideas, encouraging unexpected discoveries and interdisciplinary thinking. This suggests that it is not always necessary to have a high-skill alignment of newcomers and outgoers to conduct high-impact research. Our method captures their (non) alignment strategy. However, IAS is an exception since this strategy seems prevalent at most institutions whose citation performance is below or close to the global average (grey).
The second quadrant (upper left) shows institutions (8%) that attract researchers working in potentially complementary research areas: their outgoing researchers have skill profiles aligned with the rest of the institution above the median, while their newcomers produce differentiated work below the median. In the fourth quadrant (bottom right), we find the opposite situation; about 6% of institutions bring in more of the same skills and send out more researchers who exhibit different skills.
Figures 3B and C show the number of skills present at an institution (in % of all possible 4,163 skills in our classification; see Materials and Methods) versus the skill alignment of newcomers (B) and outgoers (C). We find considerable heterogeneity. Colors highlight three types of organizations: Education, Healthcare, and Facility, which account for 93% of the institutions in our sample. The education category includes general and specialized universities, while the healthcare category includes university hospitals and medical research centers. Facilities, typically established by the government or academic stakeholders, often specialize in one particular field, such as agriculture, high-energy physics, specific technologies, and others. We see that research facilities tend to be more specialized (small percentage of skills) and have higher skill alignments of both their incoming (B) and outgoing (C) workforce. Educational institutions tend to have a larger number of skills and show a large spread in both number of skills and alignment. Healthcare institutions fall between the two regarding skill diversity and show relatively high alignment values.
In SI text 2, we conduct a multivariate regression analysis that examines the relationship between alignment and scientific impact measures and various controls while considering the different sizes of institutions. Our findings indicate that the level of internal collaboration within an institution and citation impact are important factors in determining skill alignment within academic institutions.
### Skill Alignment in Different Science Fields
Various degrees of skill alignment are found in the five major areas of science. Figure 4 shows a breakdown of the distribution of alignment scores for newcomers (A) and outgoers (B). We find that the profiles for academics in the social sciences, humanities, mathematics, and computer science show lower levels of alignment. This is especially true for lower impact institutions, as captured by the quartiles of the proportion of papers in the top one percent (PPtop1% indicator). Finally, in the fields of biomedical and health sciences, life and earth sciences, and physical sciences and engineering, there is a comparatively
Figure 4: Skill alignment of newcomers (A) and outgoers (B) in five main scientific fields as a function of institutional prestige as captured by impact quartiles of PPtop1% (same figure setup as in Fig 2 C and E). Again, top-cited institutions have higher newcomer and outgoer alignment than average organizations (two-sample \(t\) test, \(P\) value <0.001). Disciplines are arranged alphabetically from top to bottom. Social sciences, humanities, mathematics, and computer science show lower levels of alignment than biomedical and health sciences, life and earth sciences, physical sciences and engineering. For the case controlled for collaborations, see SI Figure 7.
slightly higher degree of similarity between the skills of newcomers with their institution as well as between outgoers and the rest of their institution.
### Skill Alignment Between Academic Institutions Over Time
Finally, we turn to the alignment of skills between the institutions in the sample. We analyze the inter-institutional skill alignment, i.e., the similarity of the skill profiles of the entire workforce across institutions. This similarity is denoted by \(\cos\phi\) (see Materials and Methods for the definition) and is computed for all pairs of institutions.
Figure 5A shows the average inter-institutional skill alignment during four time periods from 2000 to 2019. The red error bars mark standard errors of the mean. We see that the average skill alignment is generally low; however, it has doubled in the past twenty years, i.e., across the globe, the skill profiles of institutions have become more similar.
Figure 5B shows the distribution of the differences in skill alignment between pairs of institutions over time. A value of \(\Delta\cos\phi\) below zero means that institutions have become more similar; when \(\Delta\cos\phi>0\), institutions become more dissimilar. The dashed red line indicates the zero line of the x-axis. The three sub-panels capture the changes between the time periods P1-P2, P2-P3, and P3-P4. It is visible that the peak of the distribution moves toward the left over time, indicating an acceleration toward greater similarity in more recent years.
## Discussion
By quantifying the alignment of the skills present at \(3,965\) institutions, which include the publication records of \(9,299,250\) disambiguated authors affiliated with \(108\) countries, with the \(4,163\) skill types of the incoming and outgoing workforce, we can show strong quantitative signatures of academic skill alignment - the degree to which mobile scholars (newcomers or outgoers) publish on topics that are in line with those of their colleagues at the institutions they join or leave. In particular, newcomers tend to publish on topics that align with those of their colleagues already at the new institution. Alignment, as measured by skill-profile similarity, is more pronounced at the most prestigious (i.e., top-cited) institutions than at average institutions. Even within the top \(1\%\) of highly cited institutions, there is a correlation between skill alignment and institutional citation performance. Research institutions with moderate levels of citation impact tend to have significantly less aligned skill profiles between natives and in- and outgoers. The greater alignment of skill profiles at top institutions is not surprising, as it indicates a strategic, specific, and targeted hiring policy that may not be present at more moderate institutions.
Highly aligned skill profiles potentially realize synergies between newcomers and existing faculty [2, 27] and can reinforce already strong research portfolios. However, this also may lead to selection pressures for those hired and the hiring institutions themselves and eventually lead to the under-representation of relevant research expertise, and important topics [39].
Two likely mechanisms could explain the origin of the observed similarities. One is that newly hired scholars adapt their publication behavior (and scholarly interests) to the existing academic interests of the new institution. In this work, we see evidence that this may be the case: the outgoing researchers are more similar to the natives than the incoming researchers. This is reflected in a shift toward higher alignment and a narrower alignment distribution among outgoers. The other mechanism is the preference of institutions to hire scholars with skills similar to their current knowledge base, or the preference of scholars to move to universities that are established in their fields. Also, preferential dynamics of researchers going to places where much expertise already exists, as described, e.g., in [40], might explain part of the observed effects.
We also assessed the role of collaboration within the institution. We found a significant difference in the skill alignments of newcomers who have collaborative relationships (co-authorship on publications)
Figure 5: Inter-institutional skill alignment over time. Panel A shows the average of pairwise cosine similarity for all pairs of institutions for four non-overlapping time periods: P1:2000-2004, P2:2005-2009, P3:2010-2014, and P4:2015-2019. The red error bars represent standard errors. Panel B shows the differences between the overall skill alignment of pairs of institutions, \(\Delta\cos\phi\), over time. \(\Delta<0\) means that institutions become increasingly similar in the composition of their overall skill profiles. The sub-panels capture the changes between the time periods. The tendency of becoming more similar is visible (over time, the peak is moving to the left).
with colleagues (natives or outgoers) at hiring institutions. Our results suggest that collaboration is the most natural approach for newcomers and natives to align, combine, and complement their skills at the institution. Both newcomers and outgoers who did not engage in collaborations are virtually unmatched by the local workforce, i.e., the cosine \(\sim 0.5\). Note that skill differences between natives and newcomers are small when the degree of cooperation between them is high. The role of collaboration in skill alignment may also explain the disciplinary differences observed in our study, where Social Sciences, Humanities, Mathematics, and Computer Science show systematically lower levels of skill alignment. These disciplines traditionally have lower collaboration rates than the Natural Sciences or Engineering[41].
A shortcoming of the present work is that individual preferences and motivations for mobility cannot be assessed. It would be interesting to supplement these results in future work with appropriately designed surveys and controlled experiments to uncover the relative importance of the individual-level mechanisms that lead to the observed profile alignments at the institutional level.
The presented results on the extent of skill alignments in scientific hiring can be considered steps toward a better understanding of talent flows in science. Future research would be important to determine how the alignment of institutions' skills profile interacts with different dimensions of workforce diversity, such as gender and seniority. This could provide policymakers with analytical tools to uncover the latent capabilities of different kinds of newcomers and assess their ability to influence (or not) the skill profile of their institutions.
Quantitative measures, such as those presented here, can inform and evaluate university (and unit) policy regarding mobility, recruitment, and talent acquisition strategies, particularly with respect to existing competency profiles and those desired in the future. University leadership, funding agencies, and science policymakers in general may benefit from a quantitative assessment of the degree of alignment within their respective areas or organizations and can use it to develop interventions aimed at reaching desired alignment levels (e.g., by promoting internal collaboration networks).
## Materials and Methods
Investigating the alignment between the skills profile of mobile scientists (newcomers or outgoers) and that of resident faculty at the institutional level requires data that describes the skills of individuals and captures the temporal information of the affiliations of every scientist. Such information is typically unavailable in surveys on a country's labor force. Even if available, the categories of highly skilled workers are often too ambiguous to identify specific groups of scientists[5]. Therefore, we use data from the _Dimensions1_ database, which we accessed through the Centre for Science and Technology Studies (CWTS) at Leiden University. _Dimensions_ covers local journals more comprehensively than other large-scale bibliographic databases such as _Web of Science_ or _Scopus_. Its broader scope allows our analysis to be more inclusive of organizations with a more local focus, and thus, we also reduce mainstream effects[10, 37].
Footnote 1: _Dimensions_ is produced by _Digital Science_ and was launched in January 2018. For more references, see Dimensions.ai website
We examine the publication patterns of disambiguated authors from 108 countries between 2000 and 2020. Our analysis relies on three major improvements to the data: algorithmic disambiguation of author-names, improved consistency of organizations' metadata[37], and a highly detailed field classification system[30], the latter also provided by the CWTS. We focus on disambiguated publications by authors with harmonized affiliation links. High-precision author disambiguation and institutional harmonization allow us to track the publication history of individual scientists across research institutions[10]. This provides us with 9,299,250 disambiguated author names.
Individual authors were disambiguated using the author-name disambiguation algorithm developed by _Dimensions_[37], which uses the public ORCID2 as the basis for validating each author and their publication history. The disambiguation of organization names is based on the GRID3 system [38]. For these authors, we retrieve their affiliations and publication history. We only consider publication-intensive institutions with at least 2,000 indexed publications, resulting in 3,965 institutions. Associated with these authors are 25,310,742 distinct documents indexed in the _Dimensions_ database. The disambiguation of author names and the harmonization of research organization procedures allow us to produce a consistent overview of changes in researchers' affiliations with different institutions [10] and topics in large-scale bibliometric analysis.
Footnote 2: For more information, see ORCID documentation
Footnote 3: For more information, see GRID website
### Measures of skill alignment for scientific institutions
The skill sets of an institution's workforce are defined using the publication-level classification system of science developed by Waltman and van Eck [42]. The classification is done using the Leiden algorithm[30] that clusters publications based on direct citation relations. With this method, we obtain a detailed classification system for scientific literature that covers all scientific fields. It provides several features. First, it identifies the relatedness between pairs of 38.4 million publications indexed in _Dimensions_ that are directly linked by 513 million citation relations. This step processes publications such as articles, reviews, book chapters, and proceedings from 2006 to 2020. In the second step, the publications are clustered into research areas using a clustering procedure and the areas are organized in a hierarchical structure[30]. Finally, the methodology results in a hierarchical clustering system: i) a top level with 22 broad disciplines, ii) the second level with 824 areas, and iii) the third level with 4,163 micro-clusters. For more details on this approach, we refer to[30, 42].
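For readers who want to reproduce the clustering step on their own citation data, a minimal sketch using the python-igraph and leidenalg packages is shown below. The toy graph and the resolution parameter are illustrative assumptions, not the settings used to build the 4,163 micro-clusters:

```python
import igraph as ig
import leidenalg as la

# Toy direct-citation graph: vertices are publications, edges are citation
# links. The real pipeline runs on ~38.4M publications with 513M citation
# relations and builds a three-level hierarchy.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
g = ig.Graph(edges=edges, directed=False)

partition = la.find_partition(
    g, la.CPMVertexPartition, resolution_parameter=0.5, seed=42
)
micro_cluster = partition.membership   # one cluster id ("skill") per publication
print(micro_cluster)                   # e.g. [0, 0, 0, 1, 1, 1]

# An author's binary skill vector then has a 1 at every micro-cluster that
# contains at least one of their publications.
```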
In this paper, we consider the third classification level of 4,163 micro-clusters to define the skills profile vectors of institutions, as shown in Figure 1. In the figure, quantities with a subscript \(\Sigma\) refer to the aggregate quantities of institutions. Formally, for an institution, \(i\) (for which we have omitted the index \(i\) in the figure for simplicity),
\[S_{\Sigma,i}^{\nu}=\sum_{r=1}^{N_{i}}S_{r,i}^{\nu}\,,\qquad S_{\Sigma,i}^{\sigma}=\sum_{r=1}^{S_{i}}S_{r,i}^{\sigma}\,,\qquad S_{\Sigma,i}^{\omega}=\sum_{r=1}^{O_{i}}S_{r,i}^{\omega}\]
where \(S_{r,i}^{\nu},\ S_{r,i}^{\sigma},\ S_{r,i}^{\omega}\) represent the \(n\)-component skill cluster vector of the newcomer, native, and outgoer scientists, \(r\), respectively. The institution where these scientists are hosted is labeled by the index, \(i\). The values \(N_{i},\ S_{i},\ O_{i}\) represent the total number of newcomers, natives, and outgoers, respectively, in institution, \(i\). Components in the skill vectors are always binary, 1 if the skill is present, 0 if it is not present in an individual or at the institutional level.
The angles \(\theta_{i}\) and \(\eta_{i}\) are defined according to the cosine similarity expressions with the Euclidean dot product:
\[\cos\theta_{i}=\frac{S_{\Sigma,i}^{\nu}\cdot(S_{\Sigma,i}^{\omega}+S_{\Sigma, i}^{\sigma})}{|S_{\Sigma,i}^{\nu}|\,|S_{\Sigma,i}^{\omega}+S_{\Sigma,i}^{\sigma}|}. \tag{1}\]
\[\cos\eta_{i}=\frac{S_{\Sigma,i}^{\omega}\cdot(S_{\Sigma,i}^{\nu}+S_{\Sigma,i}^ {\sigma})}{|S_{\Sigma,i}^{\omega}|\,|S_{\Sigma,i}^{\nu}+S_{\Sigma,i}^{\sigma}|}. \tag{2}\]
These definitions quantify the skill profile alignment of newcomers, \(\cos\theta_{i}\), and the skill profile alignment of outgoers, \(\cos\eta_{i}\), relative to the remaining researchers at academic institutions. A value of \(\cos\theta_{i}=1\) (or \(\cos\eta_{i}=1\)) indicates that the two vectors of skill clusters are identical for a given institution. This reflects the fact that the institution attracts new scientists (or sends outgoing scientists) and retains native scientists with the same micro-clusters or 'skills', but also that these scientists produce publication outputs that express these skills with the same weight. Conversely, a value of 0 means that these different types of scientists do not have the same skills profile.
We introduce a reference or "null" model for these two measures that retains the size of the institutions in terms of their total number of competencies and the number of authors. This is done to remove correlations between newcomers (outgoers) and the profile of the natives. This way, we recalculate \(\cos\theta\) and \(\cos\eta\) by randomly assigning newcomers and outgoers to institutions. This procedure allows us to disentangle actual "local" matching between scientists from statistical effects due to collaborative activities and the institutional size; see Figure 2A and B.
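A minimal Python sketch of this permutation null model is given below; it assumes one aggregated newcomer profile and one native-plus-outgoer profile per institution, and reshuffles the newcomer profiles across institutions before recomputing \(\cos\theta\):

```python
import numpy as np

def null_alignments(newcomer_profiles, host_profiles, n_shuffles=10_000, seed=0):
    """Recompute cos(theta) after randomly re-assigning the aggregated
    newcomer profiles to institutions. Every profile (and hence the size of
    every institution) is preserved; only the newcomer-institution pairing
    is destroyed."""
    rng = np.random.default_rng(seed)
    N = np.asarray(newcomer_profiles, dtype=float)  # one row per institution
    H = np.asarray(host_profiles, dtype=float)      # natives + outgoers rows
    out = []
    for _ in range(n_shuffles // len(N) + 1):
        perm = rng.permutation(len(N))
        num = np.einsum("ij,ij->i", N[perm], H)     # row-wise dot products
        den = np.linalg.norm(N[perm], axis=1) * np.linalg.norm(H, axis=1) + 1e-12
        out.extend(num / den)
    return np.array(out[:n_shuffles])

# Toy demonstration with random binary profiles (30 institutions):
rng = np.random.default_rng(3)
N = (rng.random((30, 4163)) < 0.01).astype(float)
H = (rng.random((30, 4163)) < 0.01).astype(float)
print(null_alignments(N, H, n_shuffles=1000).mean())  # random baseline level
```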
Using the same workforce components shown in Figure 1, we additionally calculate a measure of inter-institutional skill alignment between institution \(i\) and \(j\), \(\cos\phi_{ij}(t)\), to track whether institutions' profiles are aligning or diverging over time. Here, \(\phi_{ij}(t)\) is the angle between total (native, incoming, and outgoing) skill vectors of institutions \(i\) and \(j\) at time period, \(t\) (i.e., 2000-2004, 2005-2009, 2010-2014, and 2015-2019). This indicator estimates and accounts for an institution's entire workforce (i.e., no distinction is made between newcomers, natives, and outgoers) and reflects overall profile alignment across all pairs of institutions over four non-overlapping time periods, \(t\).
We define inter-institutional alignment, \(\cos\phi_{ij}(t)\), as
\[\cos\phi_{ij}(t)=\frac{T_{i}\cdot T_{j}}{\left|T_{i}\right|\left|T_{j}\right|}\,. \tag{3}\]
where \(T_{i}=S_{\Sigma,i}^{\nu}+S_{\Sigma,i}^{\omega}+S_{\Sigma,i}^{\sigma}\) is the combined total skill vector of all scientists at institution, \(i\). Finally, we capture the change in pairwise institutional alignment, which we refer to as inter-institutional alignment, as
\[\Delta\cos\phi_{ij}(t)=\cos\phi_{ij}(t+1)-\cos\phi_{ij}(t)\,. \tag{4}\]
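Computationally, the inter-institutional analysis amounts to a pairwise cosine-similarity matrix per time period and a per-pair difference between consecutive periods, as in Eq. (4). A minimal Python sketch with random placeholder profiles (the real \(T_{i}\) vectors aggregate natives, newcomers, and outgoers):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def period_alignment(total_profiles):
    """Pairwise cos(phi) between institutions' total skill profiles
    for one time period (unique unordered pairs only)."""
    C = cosine_similarity(total_profiles)   # (n_inst, n_inst)
    iu = np.triu_indices_from(C, k=1)
    return C[iu]

# Illustrative random data in place of the real aggregated skill vectors
# for two consecutive periods (e.g., 2000-2004 and 2005-2009).
rng = np.random.default_rng(2)
T_p1 = rng.poisson(0.05, size=(50, 4163))
T_p2 = rng.poisson(0.08, size=(50, 4163))

delta = period_alignment(T_p2) - period_alignment(T_p1)   # change per pair
print(f"mean alignment P1={period_alignment(T_p1).mean():.3f}, "
      f"P2={period_alignment(T_p2).mean():.3f}, mean delta={delta.mean():.3f}")
```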
Clustering relatively homogeneous publication sets into high-resolution clusters allows us to compare the aggregate capabilities of scientists within and across institutions. As we explain in the following section, this allows us to compute the citation impact of organizations in a similar research context[42, 43].
### Citation impact indicators and normalization
In recent years, several developments in scientometrics have continued to shape the formal analysis of scientific dynamics. A growing awareness of the need to account for differences across and within disciplines when assessing the impact of research has spurred work on innovative indicators. In particular, field-normalized indicators based on bibliometric analyses have become increasingly important for evaluating citation impact. For example, the average number of citations per publication varies significantly across scientific fields, institutions, and countries. The average number of citations per publication also varies with the age of the publication[44]: older publications are cited more frequently than more recent ones[45]. Because of this uneven distribution of citations across different fields and years, citation counts or averages cannot be compared across research units[44]. This is also important for the life and earth sciences, biomedical and health sciences, physical sciences and engineering, mathematics and computer
science, and social sciences and humanities because these fields encompass different sub-disciplines and the sub-disciplines vary widely.
Taking these issues into account, we use the same high-resolution micro-clusters of topics that were used to define the institutional skill vectors (the third level of the classification, denoted by \(S\) in Fig. 1) to calculate the normalized citation indicators for each institution in our sample. These micro-clusters contain publications from multiple years (2000-2020), and each publication is assigned to a cluster based only on its citation relationships with other publications [30]. We use a full-counting approach at the institutional level to calculate citation impact. That is, if two institutions contribute co-authors to a publication, the publication is counted as a full publication for both institutions. With a full-counting approach, collaborative publications receive more weight than non-collaborative ones [42]. We use a publication window from 2006 to 2020 and a fixed citation window of four years to count citations to these papers through 2020. The authors' self-citations are not included in the calculation of impact indicators.
Before proceeding with the formal definition of citation-based indicators, we first consider a set of \(n\) publications denoted by \(1,\cdots,n\). Let \(c_{i}\) denote the number of citations of a publication \(i\), and \(e_{i}\) denote the expected number of citations of publication \(i\) given micro-cluster \(S\) and year \(t\) in which publication \(i\) was published. In other words, \(e_{i}\) is the average number of citations of all publications published in the same micro-cluster and year as publication \(i\). We define two indicators of citation impact for each research institution: the _total normalized citation score_, TNCS, and the _proportion of publications in the top \(n^{th}\%\)_, \(\text{PP}_{\text{topth}\%}\). The TNCS indicator captures an institution's total normalized citation rate of the produced publication volume. It is similar to what [46] calls the total field normalized citation score indicator and is defined as
\[\text{TNCS}=\sum\nolimits_{i=1}^{n}\frac{c_{i}}{e_{i}}\,. \tag{5}\]
The \(\text{PP}_{\text{topth}\%}\) uses percentile rank classes instead of mean-based indicators to normalize the citation impact of publications [42, 47, 48]. It measures the proportion of articles among the top \(n^{th}\%\) most cited papers in the same skill or micro-cluster, of the same age and document type. We first assign each publication a percentile based on its position in the citation distribution of articles in the same micro-cluster. We use the approach described in [48] to calculate the percentile rank of each publication. In our analysis, we compute three variations of this metric. Specifically, we use the 99\({}^{\text{th}}\), 95\({}^{\text{th}}\), and 90\({}^{\text{th}}\) percentile ranks, which assign papers with a percentile equal to or greater than the 99\({}^{\text{th}}\), 95\({}^{\text{th}}\), and 90\({}^{\text{th}}\) percentile to the top 1%, 5%, and 10% of frequently cited papers, respectively. The percentile rank measures \(\text{PP}_{\text{top}\,1\%}\), \(\text{PP}_{\text{top}5\%}\), and \(\text{PP}_{\text{top}\,10\%}\) for each institution were calculated using
\[\text{PP}_{\text{top}\,\times\%}=\frac{\sum_{i=0}^{\infty}n_{i}c_{i}}{\sum_{i =0}^{\infty}n_{i}}\,, \tag{6}\]
where \(n_{i}\) denotes the number of publications from a scientific institution with \(i\) citations. The score of a publication with \(i\) citations is indicated by \(c_{i}\). According to this definition, the \(\text{PP}_{\text{topth}\%}\) is simply the average score of the publications of the unit of analysis. For simplicity, the definition assumes that all publications of a given unit belong to the same cluster. For more details, see [44, 48, 49].
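As a minimal illustration of these indicators, the Python sketch below computes TNCS and a percentile-based PP\({}_{\text{top}10\%}\) on a toy publication table. It uses a plain pandas percentile rank rather than the more careful tie-handling of [48], and it omits the full-counting and self-citation rules described above:

```python
import pandas as pd

# Toy table: one row per publication, with citations counted within the
# fixed 4-year window, its micro-cluster ("skill"), and publication year.
pubs = pd.DataFrame({
    "institution": ["A", "A", "A", "B", "B", "C"],
    "cluster":     [17, 17, 903, 17, 903, 903],
    "year":        [2010, 2010, 2012, 2010, 2012, 2012],
    "citations":   [12, 3, 0, 45, 7, 2],
})

grp = pubs.groupby(["cluster", "year"])["citations"]
pubs["expected"] = grp.transform("mean")            # e_i: cluster/year baseline
pubs["ncs"] = pubs["citations"] / pubs["expected"]  # c_i / e_i

# TNCS per institution: sum of normalized citation scores (Eq. 5).
tncs = pubs.groupby("institution")["ncs"].sum()

# PP_top10% per institution: share of papers at or above the 90th
# percentile of their (cluster, year) citation distribution.
pubs["pct"] = grp.rank(pct=True)
pp_top10 = pubs.assign(top=pubs["pct"] >= 0.9).groupby("institution")["top"].mean()
print(tncs, pp_top10, sep="\n")
```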
|
2307.10244 | Evaluating and Enhancing Robustness of Deep Recommendation Systems
Against Hardware Errors | Deep recommendation systems (DRS) heavily depend on specialized HPC hardware
and accelerators to optimize energy, efficiency, and recommendation quality.
Despite the growing number of hardware errors observed in large-scale fleet
systems where DRS are deployed, the robustness of DRS has been largely
overlooked. This paper presents the first systematic study of DRS robustness
against hardware errors. We develop Terrorch, a user-friendly, efficient and
flexible error injection framework on top of the widely-used PyTorch. We
evaluate a wide range of models and datasets and observe that the DRS
robustness against hardware errors is influenced by various factors from model
parameters to input characteristics. We also explore 3 error mitigation methods
including algorithm based fault tolerance (ABFT), activation clipping and
selective bit protection (SBP). We find that applying activation clipping can
recover up to 30% of the degraded AUC-ROC score, making it a promising
mitigation method. | Dongning Ma, Xun Jiao, Fred Lin, Mengshi Zhang, Alban Desmaison, Thomas Sellinger, Daniel Moore, Sriram Sankar | 2023-07-17T05:35:30Z | http://arxiv.org/abs/2307.10244v1 | # Evaluating and Enhancing Robustness of Deep Recommendation Systems Against Hardware Errors
###### Abstract.
Deep recommendation systems (DRS) are designed to offer customized content with multi-modal contexts such as user profile, interaction history, and item information. To handle sophisticated inputs, DRS usually incorporate heterogeneous architectures that may include multi-layer perceptrons (MLPs), embedding tables, and attention mechanisms. Large-scale DRS heavily depend on specialized high performance computing (HPC) hardware and accelerators to optimize energy, cost efficiency, and recommendation quality. However, with the increasing workload of large-scale AI jobs like DRS, modern data center fleets have become more heterogeneous and have scaled with a growing number of computing accelerators. This significantly increases the risk of hardware failures, which can lead to wrong results and degraded service. Therefore, there is a critical need to study and enhance the robustness of widely deployed AI models with error injection campaigns across software and hardware stacks. This paper presents the first systematic study of DRS robustness against hardware errors. We develop _PyTEI_, a user-friendly, efficient and flexible error injection framework on top of the widely-used PyTorch framework. _PyTEI_ enables extensive error injection campaigns on various DRS models and datasets. We first use dummy models to identify potential factors affecting DRS robustness, and then evaluate 5 realistic models on 3 benchmark datasets. We find that DRS robustness against hardware errors is influenced by various factors, from model parameters to input characteristics. The MLP components inside DRS are particularly vulnerable to hardware errors, and other factors such as the sparse/dense feature ratio and input sparsity also account for the drastic differences in robustness among DRS models and datasets. Additionally, we explore enhancing DRS robustness with 3 error mitigation methods, including algorithm based fault tolerance (ABFT), activation clipping and selective bit protection (SBP). In particular, applying activation clipping can recover up to 30% of the degraded AUC-ROC score, making it a promising method for enhancing DRS robustness.
## 1. Introduction

Silent data corruption (SDC) is a type of hardware error that often goes undetected by the system's error detection mechanisms, leading to inaccurate or incorrect results that can have significant consequences for HPC systems and application services. In cloud services, SDC can result in data loss or corruption, causing service disruptions and loss of revenue. Technology companies such as Google and Meta have reported instances of SDC in their large-scale infrastructure fleets (Gomez et al., 2017; Gomez et al., 2018). Google's study of their production systems found that SDC occurred more frequently than previously believed, with an average rate of a few per several thousand machines (Gomez et al., 2018). The existence of SDC in HPC systems with millions of machines can significantly impact application quality and cause errors or incorrect results leading toward irremediable loss. More importantly, traditional data corruption usually leads to noticeable system crashes or data loss, whereas SDC can occur without visible symptoms or warnings, making it challenging to detect and diagnose.
In this paper, we propose to systematically evaluate the impact of hardware errors during DRS model inference. We first delve into the architecture of a DRS and perform an analytical study on a benchmark model to evaluate the impact of hardware errors on the output quality. We then sweep across models with different (hyper-)parameters and input workload characteristics to investigate the model robustness. To expand the analytical study, we further inject hardware errors into 5 widely acknowledged DRS models with 3 benchmark datasets. We also discuss and examine the effectiveness of 3 error mitigation methods including algorithm based fault tolerance (ABFT), activation clipping and selective bit protection (SBP). The main contributions of the paper are as follows:
* This paper presents the first study of evaluating DRS robustness against hardware errors. Specifically, we systematically explore multiple model hyper-parameters and input characteristics using a dummy model to identify potential factors that can impact DRS model robustness under hardware errors. The use of dummy models and synthetic data drastically facilitates the design space exploration process while still providing insightful results to guide further experiments.
* We develop a user-friendly, efficient and flexible hardware error injection framework, called _PyTEI_, on top of the widely-used PyTorch framework. Using _PyTEI_, we conduct an extensive error injection campaign on various DRS models and application datasets. We then inject hardware errors into 5 realistic DRS models on 3 benchmark datasets to evaluate their corresponding robustness. We show that the robustness of DRS models can be affected by factors from both architecture and input features, including **DRS-specific features** such as dense/sparse feature ratio and input sparsity of data.
* We explore enhancing DRS robustness with three error mitigation methods including ABFT, activation clipping and SBP. In particular, activation clipping can recover up to 30% of the degraded AUC-ROC score, which paves a promising way toward robustness enhancement for DRS.
## 2. Background
Deep learning algorithms are introduced into recommendation systems with the objective of providing high-quality and accurate personalized content to users (Wang et al., 2019). A typical task is click-through-rate (CTR) prediction, where a DRS is developed to estimate the probability of a user's interaction (click, bookmark, purchase, etc.) with a specific item in a given context (Wang et al., 2019). A diverse range of DRS has been proposed and implemented. For example, Wide & Deep (WD) from Google considers the joint advantage of neural networks and factorization models (FM) for enhanced performance (Wang et al., 2019). Further, DeepFM (DFM) enables the learning of both high- and low-order feature interactions as well as a sharing strategy for feature embeddings, which shows increased performance and efficiency (Huang et al., 2019). Other experimental and/or commercial architectures such as the attentional FM (AFM) (Wang et al., 2019), deep cross networks (DCN) (Wang et al., 2019) and the deep learning recommendation model (DLRM) from Meta (Wang et al., 2019) have also been introduced for higher performance and efficiency.
Without loss of generality, in Fig. 1 we present a typical DRS model that tries to predict the purchase probability of an item during a user visit. The input to the DRS model consists of multimodal features related to the user and the item, which mainly fall into two groups: dense features such as the age and gender of the user and the time of day of the visit, and sparse features such as user and item information. Typically, the dense features are passed to MLPs for feature extraction. The sparse categorical features, on the other hand, are handled by embedding tables and converted into latent embeddings. After the sparse/dense interaction, the features are then used to predict the probability of user purchase of the item via another MLP. Such probabilities can be computed over a set of items to select relevant candidates to be recommended to the user.
Processing the dense features using neural networks and processing the sparse features with embedding tables exhibit drastically different workload characteristics at run-time: the neural networks inside a DRS (rendered by MLPs or attentional layers with matrix operations) are usually compute-intensive, while the embedding processes (rendered by indexing embedding tables) are usually memory-intensive (Huang et al., 2019; Wang et al., 2019).
## 3. Methodology
### Error Model
Figure 1. An example DRS model for purchase prediction.

In this paper, we focus on the random bit flip error model for its wide acknowledgement in describing hardware errors (e.g., SDCs) in HPC systems [27; 44; 43; 39; 29]. In this model, each bit position can flip based on a probability referred to as the bit error rate (BER). The bit flip errors are also statistically independent from each other. Although there are other error models, such as permanent errors where the erroneous positions are stuck at a faulty value [41], we note that they are not the focus of this work since they are less difficult to mitigate compared with random bit flip errors, as permanent errors can be detected and/or mitigated via error correction and detection codes [42].
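To make the error model concrete, the following is a minimal sketch of independent per-bit flips on a float32 tensor at a given BER, written directly against the PyTorch API; it is our illustration of the model above, not any framework's actual code. The sign bit (bit 31) is handled via float negation, which flips exactly that bit in IEEE-754, to avoid 32-bit integer overflow in the shift.

```python
import torch

def flip_bits(t: torch.Tensor, ber: float) -> torch.Tensor:
    """Flip each bit of a float32 tensor independently with probability `ber`."""
    as_int = t.contiguous().view(torch.int32)      # reinterpret the raw bits
    mask = torch.zeros_like(as_int)
    for bit in range(31):                          # bits 0..30: mantissa + exponent
        hit = (torch.rand_like(t) < ber).to(torch.int32)
        mask = mask | (hit << bit)
    out = (as_int ^ mask).view(torch.float32)
    sign_hit = torch.rand_like(t) < ber            # bit 31: flipping the sign bit
    return torch.where(sign_hit, -out, out)        # ...is exactly float negation
```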
### Error Injection Framework: _PyTEI_
To enable systematic analysis of DRS robustness against SDC, we require efficient error injection and mitigation so that designs inside the vast design space can be emulated and iterated over in a reasonable amount of time. We have searched through the available open-sourced error injection frameworks, including PyTorchFI [31], Ares [36], BinFI [7], GoldenEye [32] and FIdelity [23]. However, those frameworks share some common handicaps that prevent their use in this work. The most challenging obstacle is efficiency. As floating point tensors in PyTorch do not natively support bit-level operations (such as bit-wise XOR), many frameworks leverage additional packages such as _struct_ or _bitstring_ for error injection. These packages require frequent conversions between torch tensors and other types of data containers, with back-and-forth movements between devices. This accounts for most of the time spent on error injection based on our evaluation, particularly for larger models and higher BERs. Another issue is the tedious environment setup required to use those frameworks. Most of the available frameworks rely on specific environments and/or dockers with numerous additional packages. Some even require users to compile or build CUDA code themselves. Moreover, many of them do not provide error mitigation methods or lack flexible options for hooking in customized error mitigation. In summary, we need a framework that meets the following critical requirements:
* The framework should be able to perform bit-level flips on parameters of PyTorch models to inject emulated hardware errors based on a pre-defined BER. The framework should also provide an interface to implement error mitigation methods accordingly.
* The framework should generalize easily to models with different architectures or components, be convenient to set up, and require minimal dependencies beyond PyTorch.
* The emulation of error injection and mitigation should be fast enough that large numbers of different designs can be evaluated efficiently.
To realize these requirements, we develop the framework _PyTEI_, a user-friendly, efficient and flexible hardware error injection framework using no dependencies besides PyTorch. It has the following key features and contributions, which essentially enable us to evaluate DRS robustness at large scale efficiently:
* **User-friendly.**_PyTEI_ does not strictly require any dependency other than PyTorch. Note that users may use other packages such as _timeit_ and _tqdm_ for auxiliary purposes such as tracking progress and time. Additionally, _PyTEI_ does not require users to compile or build as other frameworks do, nor use custom environments.
* **Efficient.**_PyTEI_ uses PyTorch's built-in data type conversion (_torch.view()_) in bit flip operations, which significantly (by about 100X) accelerates the error injection effort. _PyTEI_ can inject bit flip errors at a considerably high BER (\(10^{-3}\)) into a model with about 19M parameters within a few seconds, even using just a commodity CPU. This also enables _PyTEI_ to work on any PyTorch model, as long as it is implemented as a _torch.nn.Module_ with named parameters.
* **Flexible.**_PyTEI_ provides flexible interfaces for customization of both the error model and the mitigation. _PyTEI_ allows users to define customized error models, such as random value errors, beyond the provided bit flip error. _PyTEI_ also implements two mitigation methods for emulation, activation clipping and SBP (introduced in Sec. 4.2), and allows users to customize the mitigation methods.
An overview of our approach to inject bit errors into DRS models using _PyTEI_ is presented in Fig. 2. Before error injection, we use a set of test data to obtain golden outputs (from error-free models) and/or a test score (e.g., AUC-ROC). Meanwhile, the user provides information such as the BER (e.g., 1e-7), the target (e.g., MLP) and the parameters (e.g., weights) for error injection. Based on the BER, the injection target and the model architecture, we generate an error map indicating which parameters and which bit positions of each parameter receive errors. The model and the error map are then iterated together to update the model parameters. With the model after injection, we use the same test data again to observe outputs, which are compared with the golden outputs and/or labels to obtain the output deviation and/or the test score degradation.

Figure 2. Overview of injecting bit errors in DRS models in PyTorch using _PyTEI_.
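As a usage illustration of this workflow (our sketch, not _PyTEI_'s actual API), the error map can be realized implicitly by reusing the `flip_bits` helper from Sec. 3.1 over the model's named parameters; the `inject` function and the `target` substring convention are hypothetical names chosen for this example.

```python
import copy
import torch

@torch.no_grad()
def inject(model: torch.nn.Module, ber: float, target: str = "") -> torch.nn.Module:
    """Return a deep copy of `model` with bit flips injected into every
    parameter whose name contains `target` (empty string = entire model)."""
    corrupted = copy.deepcopy(model)
    for name, param in corrupted.named_parameters():
        if target in name:
            param.copy_(flip_bits(param, ber))
    return corrupted

# Compare golden and corrupted outputs, as in Fig. 2:
# golden = model(x); noisy = inject(model, 1e-6, "mlp")(x)
# rmse = torch.sqrt(torch.mean((noisy - golden) ** 2))
```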
## 4. Evaluation Approach
### Evaluation of DRS Robustness
An overview of the experiments in this work to evaluate the robustness of DRS against hardware errors is shown in Fig. 3. It features a two-stage evaluation, using dummy and realistic models respectively, with the error injection framework _PyTEI_. The first stage injects errors into dummy models, where the objective is to explore a broad design space to provide insights, such as identifying the impact of model hyper-parameters and analyzing which components inside a DRS are less robust against hardware errors. The second stage injects errors into realistic models, guided by the observations from the dummy models. The realistic models are trained and tested with realistic datasets as well. Three error mitigation methods are also discussed and/or evaluated for their effectiveness.
For dummy models, we first provide hyper-parameters such as the depth of the MLP layers, the size of the MLP hidden layers and the size of the latent embedding to configure the model. The model parameters are initialized uniformly (Krizhevsky et al., 2014) and **are not further trained**. The motivation for using dummy models without training is to reduce the effort of model training, which usually takes an unrealistic amount of GPU hours (even just for fine-tuning). Additionally, further experiments using realistic (trained) models and data align with the observations using the dummy models, as discussed in Sec. 6. Therefore, we think it is neither plausible nor necessary to explicitly train each of the models across the entire huge design space. The dummy model is then injected with errors to obtain the error-injected model. Random synthetic data are input to both models, and an RMSE score is calculated based on the outputs of the two models to describe the output deviation after error injection, which is used to evaluate the hardware error robustness. The detailed results on dummy models are presented in Sec. 5.
For realistic models, the architecture is implemented based on the corresponding literature. Realistic datasets are used to train and test the model to obtain the baseline scores. After error injection, the same test data are input to the error-injected realistic model to obtain the scores after error injection. The score differences are used to evaluate the DRS robustness. The detailed results on realistic models are presented in Sec. 6. The components inside realistic models can be slightly different from those of dummy models due to architectural differences. For example, the attentional mechanisms inside AFM are also considered as MLPs due to their similar implementation using fully connected layers, and the major computations inside them are also matrix multiplications.
### Evaluation of Error Mitigation
In this work, we also present preliminary studies on some error mitigation schemes that are in general implemented for other deep learning models: ABFT (Krizhevsky et al., 2014), activation clipping (Krizhevsky et al., 2014) and selective bit protection (Krizhevsky et al., 2014). Analysis and results of the evaluation on error mitigation are presented in Sec. 7.
**ABFT.** ABFT has recently been introduced to DRS for soft error detection (Krizhevsky et al., 2014). For general matrix multiply (GEMM) operations like \(\mathbf{C}=\mathbf{A}\times\mathbf{B}\), an extra column \(\mathbf{S_{B}}\) is appended to \(\mathbf{B}\) of which each element is the corresponding row-sum, i.e., \(\mathbf{S_{B}}[i]=\sum_{j=0}^{n-1}\mathbf{B}[i][j]\). Therefore, with the appended column, we can obtain the equality check vector \(\mathbf{AS_{B}}\) of which each element **should be** the corresponding row-sum of \(\mathbf{C}\). If such equality does not hold, an error is detected. The result of this GEMM operation is then discarded and the operation is repeated. For embedding operations, an additional column \(\mathbf{C_{T}}\) is similarly added alongside the embedding table \(\mathbf{T}\), of which each element is the corresponding row-sum as well. Assume indices \(\mathbf{F}\) are hit during an embedding operation; then the output embedding should be \(\mathbf{R}=\sum_{f\in F}\mathbf{T}[f]\). The same indices are also used to index the elements in the additional column to obtain the equality check value \(r=\sum_{f\in F}\mathbf{C_{T}}[f]\), and the value \(r\)**should be** equal to the sum of \(\mathbf{R}\). Similarly, if such equality does not hold, an error is detected.
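The GEMM-side check can be sketched in a few lines; this is a minimal illustration of the checksum idea above, with a hypothetical tolerance parameter added to absorb floating point round-off, and not the evaluated ABFT implementation.

```python
import torch

def abft_gemm(A: torch.Tensor, B: torch.Tensor, tol: float = 1e-3) -> torch.Tensor:
    """C = A @ B with an ABFT equality check on an appended checksum column."""
    s_b = B.sum(dim=1, keepdim=True)            # S_B[i] = sum_j B[i][j]
    C_ext = A @ torch.cat([B, s_b], dim=1)      # last column = A @ S_B
    C, check = C_ext[:, :-1], C_ext[:, -1]
    # Each element of the check column should equal the row-sum of C.
    if not torch.allclose(C.sum(dim=1), check, rtol=tol, atol=tol):
        raise RuntimeError("ABFT check failed; re-execute the GEMM")
    return C
```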
**Activation Clipping.** Clipping the network activations has been investigated to improve fault tolerance (Krizhevsky et al., 2014) or to enable ultra-low precision designs (Krizhevsky et al., 2014). For example, the ReLU activation function in the model is revised as in Eq. (1) (Krizhevsky et al., 2014) or Eq. (2) (Krizhevsky et al., 2014), where activations are confined within a range. \(T\) is the pre-defined threshold for clipping.
\[f(x)=\begin{cases}x&\text{if }0\leq x\leq T\\ 0&\text{otherwise}\end{cases} \tag{1}\]
\[f(x)=\begin{cases}0&\text{if }x<0\\ x&\text{if }0\leq x\leq T\\ T&\text{if }x>T\end{cases} \tag{2}\]
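Eq. (2) translates directly into a drop-in activation module. Here is a minimal sketch; the default threshold of 6 is an adjustable assumption, chosen to match the \([-6,6]\)-style clipping used later in Sec. 7.

```python
import torch

class ClippedReLU(torch.nn.Module):
    """Eq. (2): confine activations to [0, T] so that a corrupted upstream
    value cannot propagate an arbitrarily large activation."""
    def __init__(self, threshold: float = 6.0):
        super().__init__()
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, min=0.0, max=self.threshold)
```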
**Selective Bit Protection.** Preliminary studies have illustrated that certain bit positions in specific data types may have a dominating effect on the error behavior (Krizhevsky et al., 2014). For example, in the IEEE-754 floating point format, the value of a 32-bit float is determined by 3 fields of bits: the sign bit, the 8-bit exponent field and the 23-bit mantissa field; for normalized values the encoded number is given by Eq. (3). The most significant bits in the exponent field are observed to have the most critical contribution to the value and are prioritized for protection from bit errors (Krizhevsky et al., 2014) when it is impossible to protect all bit positions due to resource limitations. In Sec. 7, we provide analytical and/or experimental results on those error mitigation schemes and compare their effectiveness and efficiency.

\[\text{val}=(-1)^{\text{sign}}\times 2^{\text{exponent}-127}\times(1+\text{mantissa}) \tag{3}\]
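Under SBP, emulated injection simply never flips the protected positions. Below is a minimal sketch assuming the sign and exponent bits (bits 31-23) form the protected set, so random flips can only land in the 23 mantissa bits; this is our illustration, not the framework's actual code.

```python
import torch

def flip_bits_sbp(t: torch.Tensor, ber: float) -> torch.Tensor:
    """Bit-flip injection with the sign and exponent fields protected:
    only the mantissa bits (22..0) of each float32 value may flip."""
    as_int = t.contiguous().view(torch.int32)
    mask = torch.zeros_like(as_int)
    for bit in range(23):                         # mantissa bits only
        hit = (torch.rand_like(t) < ber).to(torch.int32)
        mask = mask | (hit << bit)
    return (as_int ^ mask).view(torch.float32)
```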
For a concise summary of our methodology, we aim to explore and answer the following research questions in this work:
* **RQ1**: How will DRS models generally be impacted by hardware errors? What components inside a DRS model are more vulnerable, or vice versa?
* **RQ2**: How will hyper-parameters such as the hidden layer size of MLPs and the embedding dimensions affect the robustness against hardware errors?
* **RQ3**: Will error mitigation methods for other deep learning models such as ABFT, activation clipping and SBP still be effective for DRS models?
## 5. Experiments: Dummy Models
### Dummy Models and Data Generation
For simplicity, we build a dummy DRS model with a minimal configuration. The inputs to the model are straightforwardly synthesized; each input has two vectors representing the dense and sparse features, respectively. For simplicity, the dense features are floating point numbers generated from a standard Gaussian distribution, and the sparse features are binary, drawn from a Bernoulli distribution with a given probability (the sparsity). The model consists of an MLP handling the dense features, an embedding table handling the sparse features, and another MLP as the predictor, as shown in Fig. 3. The hyper-parameters of the dummy model and the parameters for generating the dataset establish the design space to explore, as listed in Tab. 1. Specifically, we explore the impact of the depth and hidden layer size of the MLP, the embedding dimension, and the input sparsity, and sweep the BER from \(10^{-9}\) to \(10^{-2}\). We show below that such dummy models with synthetic data already perform remarkably well in identifying DRS robustness vulnerabilities.
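A plausible realization of this dummy model is sketched below. Treating the embedding lookup as a bias-free linear layer over the multi-hot sparse vector is our modeling shortcut (it is mathematically equivalent to summing the indexed embedding rows); the sizes follow Tab. 1, and the class and variable names are illustrative.

```python
import torch

class DummyDRS(torch.nn.Module):
    """Dense MLP + embedding table + MLP predictor, as in Fig. 3 / Tab. 1."""
    def __init__(self, dense_dim=128, sparse_dim=8192, emb_dim=64, hidden=64):
        super().__init__()
        self.dense_mlp = torch.nn.Sequential(
            torch.nn.Linear(dense_dim, hidden), torch.nn.ReLU())
        # Multi-hot lookup: equivalent to summing rows of an embedding table.
        self.table = torch.nn.Linear(sparse_dim, emb_dim, bias=False)
        self.predictor = torch.nn.Sequential(
            torch.nn.Linear(hidden + emb_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, dense, sparse):
        z = torch.cat([self.dense_mlp(dense), self.table(sparse)], dim=1)
        return self.predictor(z)

# Synthetic inputs per Tab. 1: Gaussian dense features, Bernoulli sparse ones.
dense = torch.randn(32, 128)
sparse = torch.bernoulli(torch.full((32, 8192), 0.001))
```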
### Robustness Analysis of Dummy Models
As briefly described in Sec. 4.1 and Fig. 3, for dummy models we use the RMSE between the outputs of error-free and error-injected models to evaluate the deviation and thereby characterize the impact of hardware errors. In Fig. 4 to Fig. 6 we show the RMSE metrics of all the design points explored. We omit the RMSEs for BERs below \(10^{-8}\) and above \(10^{-6}\) because they are either all zeros or all _inf_/_nan_ values. According to our observation, bit errors can happen at critical positions within a number, such as the exponent field of an IEEE-754 32-bit floating point number (Kang et al., 2016), and result in extremely large deviations (e.g., flipping the 2nd bit of the number 0.625 yields \(2.13\times 10^{38}\)), which, after propagation through the network, can exceed the range of floats and become _inf_ or _nan_. Therefore, if there is more than one such invalid number in the output, we mark the RMSE as _inf_ or _nan_ accordingly in the figure. On the other hand, we mark any RMSE less than 0.005 as 0.0 in the figure, indicating that no noticeable error is observed. For example, the 0.0 at the upper left slot of the row "Entire Model" indicates that when both the **MLP hidden size** and the **embedding dimension** are 64, injecting errors into the weights of the **entire model** does not induce any noticeable error in the output.
We can make several observations from the figures. Overall, from Fig. 4 to Fig. 6, as the BER increases the RMSE metric also increases accordingly. This is intuitive and aligns with observations on other machine learning models in general, since more errors injected into the model will likely translate into higher output deviations. However, we observe that tuning the hyper-parameters of different components inside a DRS uncovers different robustness characteristics. For example, although increasing the MLP hidden size (from top to bottom in each sub-figure) and the embedding dimension (from left to right in each sub-figure) both lead to increasing error, the MLP hidden size induces more impact, with higher RMSE or more invalid numbers.
Based on the results, we infer that this difference between MLP and embedding originates from their corresponding architectures.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{3}{*}{model hyper-parameters} & MLP depth & 1, 2 \\ \cline{2-3} & MLP hidden layer size & 64, 128, 256, 512 \\ \cline{2-3} & embedding dimension & 64, 128, 256, 512 \\ \hline \multirow{3}{*}{input characteristics} & dense dimension & 128 \\ \cline{2-3} & sparse dimension & 8192 \\ \cline{2-3} & sparsity & 0.001, 0.01 \\ \hline \multicolumn{2}{|c|}{bit error rate} & from 1e-9 to 1e-2 \\ \hline \end{tabular}
\end{table}
Table 1. Hyper-parameters of the dummy model, the parameters for generating the input and the evaluated BERs.
Figure 3. The methodology of experiments: A two-stage robustness evaluation using the dummy model and the realistic models respectively. Each stage features error injection using _PyTEI_. Three error mitigation methods are also discussed and/or evaluated in the stage of realistic models.
For the MLP, even with drop-out schemes, each weight parameter contributes more or less to the prediction results in the forward pass. Therefore, when increasing the hidden layer size of the MLP, the outputs become more susceptible. For the embedding, however, only a limited number of entries in the embedding table are indexed for each sample due to the sparsity of the inputs, so there is a high chance that the indexed entries are not affected by the error. Thus, increasing the embedding dimension does not exacerbate the output error as significantly as increasing the MLP hidden size.
**Impact of MLP depth.** In Fig. 5, we show the results when increasing the depth of the MLPs in the DRS model by adding another hidden layer. It can be observed that the added hidden layer deteriorates the output quality regardless of whether the errors are injected into the MLP, the embedding or the entire model. According to our analysis, the reason behind the increasing error is twofold: first, an additional hidden layer in the MLP means more parameters, so with the same BER the absolute number of bit errors is also higher. Second, if a bit error in the first hidden layer results in an erroneous value, it can propagate to the second layer and cause more erroneous values due to the full connectivity.
**Impact of input sparsity.** In Fig. 6, we further increase the sparsity of the sparse vectors in the input set by 10X, from 0.001 to 0.01. Comparing the last row "Embedding" in Fig. 6 and Fig. 5, we observe that since more entries are indexed within the embedding table, more errors are incorporated into the latent embeddings. The impact of the error-injected embedding table subsequently becomes more pronounced, manifesting as higher RMSE and/or more invalid numbers.
In summary, the experimental results on dummy models yield the following insights. First, similar to other machine learning models, DRS models follow the general trend that output quality degrades as the BER increases. However, the hyper-parameters of different components inside a DRS can pose different impacts on robustness, which results from the inherent architectural differences of those components. When the BER increases to around \(10^{-6}\), nearly all the results show invalid values in their outputs. We can therefore infer that in the experiments on realistic models and data, the AUC-ROC score will experience noticeable degradation for BERs above \(10^{-6}\).
## 6. Experiments: Realistic Models
### Realistic Models and Datasets
We target 5 widely acknowledged recommendation systems: Factorization Machine (FM) (Wang et al., 2016), Deep Factorization Machine (DFM) (Wang et al., 2016), Attentional Factorization Machine (AFM) (Wang et al., 2016), Deep Cross Network (DCN) (Wang et al., 2016), and Wide and Deep (WD) (Wang et al., 2016). The models are implemented from _torchfm_1, and the details of the evaluated models are listed in Tab. 2. As it is very difficult to find pre-trained PyTorch models for these DRS models, since different models use different frameworks for implementation and different datasets for training, we train all the models afresh using the default parameters in _torchfm_.
Footnote 1: [https://github.com/rixwew/pytorch-fm](https://github.com/rixwew/pytorch-fm)
We use the AUC-ROC score as our quality metric for consistency of comparison, since it is the metric most commonly used across the papers on the DRS models, and we use an early stopper: training terminates when the AUC-ROC score does not improve for 3 consecutive epochs. The baseline AUC-ROCs reported in Tab. 2 mostly align with what the original research papers present, though we note that they can still be slightly different. Since the main objective of this paper is not to compete on scores but to evaluate robustness, we consider such differences acceptable for the research in this work.
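The early-stopping rule is simple enough to state as code; the following is a minimal sketch of the rule as described, not the actual training script.

```python
class EarlyStopper:
    """Stop when the validation AUC-ROC has not improved for `patience` epochs."""
    def __init__(self, patience: int = 3):
        self.patience, self.best, self.bad_epochs = patience, float("-inf"), 0

    def step(self, auc: float) -> bool:
        if auc > self.best:
            self.best, self.bad_epochs = auc, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True means: stop training
```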
We use 3 benchmark datasets for CTR prediction: Movielens-1M2, Movielens-20M and Criteo3 DAC dataset. The Movielens-1M dataset contains 1 million ratings from 6,000 users on 4,000 movies and the Movielens-20M dataset contains 20 million ratings and 465,000 tag applications applied to 27,000 movies by 138,000 users. The Criteo dataset contains the click records of 45 million users
Figure 4. Root mean square error (RMSE) between outputs of error-free models and outputs from error-injected models under different BERs, MLP hidden layer size and embedding dimensions. Errors are injected into different components inside the model as outlined by the title of each row.
with 13 continuous and 26 categorical features. The datasets are randomly split by 8:1:1 for training, validation and testing.
### Robustness Analysis of Realistic Models
In Fig. 7 to Fig. 9 we present the AUC-ROC scores of the evaluated realistic DRS models under BERs from \(10^{-9}\) to \(10^{-2}\). The lines show the average AUC-ROC scores, while the shading represents the range of scores observed over 10 individual experiments. We first observe that the general trend is consistent with our experiments on dummy models: with higher BER, the model scores suffer higher quality degradation. Moreover, for most of the models, the scores exhibit noticeable quality loss beyond a BER of \(10^{-6}\), which also aligns with our inference from the dummy models. Severe score drops start from \(10^{-5}\) or \(10^{-4}\), and most of the models degrade completely to random guessing (an AUC-ROC score of around 0.5) at BERs higher than \(10^{-3}\).
Using dummy models, we observed that different components inside a DRS model have different robustness characteristics. We have similar observations for realistic models: the embedding tables are more robust against errors, as the significant score drops start only after a BER of \(10^{-3}\), as shown in Fig. 8. The range of scores over 10 runs is also more concentrated near the average, with smaller shaded areas. Moreover, models with more MLP architectures such
Figure 5. Root mean square error (RMSE) between outputs of error-free models and outputs from error-injected models under different BERs, MLP hidden layer size and embedding dimensions _when the number of hidden layers inside MLP is increased from 1 to 2._ Errors are injected into different components inside the model as outlined by the title of each row.
Figure 6. Root mean square error (RMSE) between outputs of error-free models and outputs from error-injected models under different BERs, MLP hidden layer size and embedding dimensions _when the number of hidden layers inside MLP is increased from 1 to 2 and the input sparsity is increased by 10X to 0.01._ Errors are injected into different components inside the model as outlined by the title of each row.
as DCN, DFM and WD are less robust against hardware errors compared with embedding-dominant architectures such as FM (which only has an MLP for the final predictor), based on Fig. 9.
We have also observed that the datasets can impact the model robustness. For the dummy models, we used randomly synthesized data with fixed dimensions of dense and sparse features as well as a fixed sparsity. However, it can be observed that models developed for the Criteo dataset are less robust against errors, as the BER at which they exhibit noticeable score degradation is lower than that for the two MovieLens datasets. This can result from dataset-specific attributes; for example, the models developed for Criteo have in general a higher proportion of MLP components compared with those for the other datasets. The Criteo dataset also has less sparsity, which also influences the robustness of DRS models, as analyzed using the dummy models.
In summary, the experimental results on realistic models and datasets yield the following insights. In general, the robustness of DRS against hardware errors intuitively follows the trend of other deep learning models, i.e., the score gradually decreases as the BER increases. However, the robustness of DRS varies drastically (by up to 2 orders of magnitude) between different models and datasets. Such huge variation, based on our analysis and observations, originates from two major aspects: architectural differences and the characteristics of the input data. A DRS model with more MLP architectures, a higher amount of dense input features and less sparsity in the sparse input features is likely to be more susceptible to hardware errors, and therefore exhibits lower robustness and requires more intensive error mitigation.
## 7. Experiments: Error Mitigation
**ABFT.** ABFT has been shown to be effective at detecting soft errors in DRS, with 99% effectiveness and a 10% false-positive rate for bit flip error detection, at an overhead below 26% on various DRS architectures (Zhou et al., 2017). However, most such error correction methods are only capable of single-error correction and double-error detection (SEC-DED), and simply re-execute when an error is detected but not correctable. When we attempted to implement ABFT, the experiments hardly finished or output any meaningful results. In particular, when the BER is higher than \(10^{-4}\), almost all the parameter matrices contain multiple errors, which always forces re-execution of operations under the ABFT scheme. Therefore, we conclude that such error-correction-code based methods work for intermittent, low-rate soft and transient errors caused by, e.g., cosmic rays and radiation effects, yet are infeasible for scenarios with higher BERs, which can come from logic and data-path errors, or voltage and frequency scaling.
\begin{table}
\begin{tabular}{l l c c} \hline \hline
**Model** & **Dataset** & **\#Parameters** & **Baseline AUC-ROC** \\ \hline \multirow{3}{*}{fm (Wang et al., 2018)} & MovieLens-1M & 171K & 0.813 \\ & MovieLens-20M & 4.59M & 0.838 \\ & Criteo & 18.5M & 0.779 \\ \hline \multirow{3}{*}{dfm (Wang et al., 2018)} & MovieLens-1M & 170K & 0.842 \\ & MovieLens-20M & 4.59M & 0.838 \\ & Criteo & 18.5M & 0.785 \\ \hline \multirow{3}{*}{afm (Wang et al., 2018)} & MovieLens-1M & 170K & 0.862 \\ & MovieLens-20M & 4.58M & 0.827 \\ & Criteo & 18.5M & 0.801 \\ \hline \multirow{3}{*}{dcn (Wang et al., 2018)} & MovieLens-1M & 161K & 0.824 \\ & MovieLens-20M & 4.32M & 0.818 \\ & Criteo & 17.4M & 0.799 \\ \hline \multirow{3}{*}{wd (Wang et al., 2018)} & MovieLens-1M & 170K & 0.828 \\ & MovieLens-20M & 4.59M & 0.815 \\ \cline{1-1} & Criteo & 18.5M & 0.791 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Models, Datasets and Baseline AUC-ROC Scores
Figure 7. Performance of recommendation systems under different bit error rates. Errors are injected into the weights in _both embedding tables and neural networks_ in each model. Results are averaged from 10 different runs of error injection with shaded areas covering the range of maximum and minimum scores observed.
**Activation Clipping and SBP.** For activation clipping, we manually clip the activations to the range \([-6,6]\) (inspired by ReLU6 (LeCun et al., 2015)). For SBP, we protect the sign and exponent bits. Although this work, which aims to provide a preliminary evaluation of the error mitigation methods for DRS, does not focus on exploring the optimal parameters for activation clipping or the bit positions to protect for specific DRS models, we provide functionality for users to customize those parameters in _PyTEI_. We present the results of error mitigation using activation clipping and SBP in Fig. 10. The BERs start from \(10^{-5}\), since any lower BER does not incur significant degradation in score. We observe that both error mitigation schemes are mostly effective at regaining some AUC-ROC score, by 0.05 to 0.3, across all the DRS models and datasets.
However, activation clipping is more effective than SBP at recovering the score loss, particularly for models that have deep MLP architectures such as DCN, WD and DFM. The effectiveness of error mitigation also varies across datasets. Generally, models that show stronger robustness (FM and AFM), as discussed in Sec. 6, also experience better recovery. These observations align with our analysis that the MLP components inside DRS models, when corrupted, contribute most of the output quality degradation. Thus, clipping the activations of MLPs enables better error recovery compared with generally protecting
Figure 8. Performance of recommendation systems under different bit error rates. Errors are injected into the weights in the _embedding tables_. Results are averaged from 10 different runs of error injection with shaded areas covering the range of maximum and minimum scores observed.
Figure 9. Performance of recommendation systems under different bit error rates. Errors are injected into the weights in the _neural networks_ in each model. Results are averaged from 10 different runs of error injection with shaded areas covering the range of maximum and minimum scores observed.
selective bit positions of all the parameters. Another advantage of activation clipping is its smaller overhead. Protecting selected bit positions requires dedicated mechanisms or hardware, whereas clipping the activations of MLP layers is less resource-intensive since it can be implemented simply by modifying the activation functions.
## 8. Discussion
In this section, we discuss the research questions raised at the end of Sec. 4:
**RQ1:** How will DRS models generally be impacted by hardware errors? What components inside a DRS model are more vulnerable to hardware errors, or vice versa?
**A1:** Similar to other machine learning models, DRS models are negatively impacted by hardware errors. However, different components inside a DRS exhibit different hardware error robustness due to the drastic heterogeneity of the incorporated architectures. According to the observations from Sec. 5 and Sec. 6, MLP architectures are usually less robust, while embedding tables are more robust against hardware errors. Therefore, we recommend that future system designers prioritize MLPs for protection to mitigate the impact of hardware errors.
**RQ2:** How will hyper-parameters such as the hidden layer size of MLPs and the embedding dimensions affect the robustness against hardware errors?
**A2:** Tuning the hyper-parameters influences the hardware error robustness of DRS models. Likewise, different components inside a DRS exhibit different sensitivity to hyper-parameter tuning. In general, increasing the hidden layer size of the MLPs in a DRS degrades robustness more severely than increasing the embedding dimension. Additionally, the dense/sparse feature ratio and the input sparsity are robustness characteristics unique to DRS compared with other deep learning models, originating from the architectural difference between the MLP and embedding components, as analyzed in Sec. 5.
**RQ3:** Will error mitigation methods for other deep learning models such as ABFT, activation clipping and SBP still be effective for DRS models?
**A3:** According to our analysis and observations, conventional ABFT is infeasible for enhancing the robustness of DRS under our error scenario due to its theoretical limitations. Activation clipping and SBP are effective for the evaluated DRS models in recovering the degraded score. However, activation clipping is observed to be more effective than SBP, since it specifically protects the MLP components inside the DRS model, which are less robust against hardware errors based on our analysis. Activation clipping also requires less overhead than SBP.
## 9. Related Works
Hardware errors can originate from various sources. Beam tests have revealed that cosmic rays and radiation can induce soft transient errors in GPU memories (Wang et al., 2016; Wang et al., 2017). Additionally, manufacturing defects and device aging can induce permanent faults, which are particularly common in deep learning accelerators with systolic arrays (Wang et al., 2016; Wang et al., 2017; Wang et al., 2017). In addition to faults, hardware errors can also be intentional, e.g., arising from the trade-off between accuracy and efficiency known as approximate computing (Wang et al., 2016; Wang et al., 2017). For example, voltage scaling can induce logic or timing errors in memories (Wang et al., 2016; Wang et al., 2017; Wang et al., 2017) or computational units (Wang et al., 2017; Wang et al., 2017), which can degrade deep learning model output quality. Approximate or inexact logic in circuit design can also be applied to functional units such as multipliers for accuracy and power trade-offs (Wang et al., 2016; Wang et al., 2017; Wang et al., 2017). Due to the diverse sources of error, the corresponding error rates can also vary drastically, which motivates us to evaluate across a broad range of BERs to accurately assess deep learning model robustness in this paper.
Figure 10. Comparison between no protection, activation clipping, and selective bit protection under different BERs. Results are averaged from 10 individual runs.
There are methods to mitigate the impact of hardware errors in deep learning systems and enhance their robustness. For example, limiting the activations has been explored to improve the resilience of DNNs to hardware faults by confining the output of activation functions within a valid range (Kumar et al., 2018). For specific data types, protection of critical bits has also been investigated to mitigate the impact of hardware bit flip errors (Kumar et al., 2018). Error correction codes are also commonly used to identify and mitigate the impact of hardware errors; e.g., single-bit and double-bit error correction using ECC requires about 12.5% redundancy for HBM2 memory on compute-class GPUs (Kumar et al., 2018). ABFT has recently been evaluated on the MLP and embedding components of a DRS model for soft errors (Kumar et al., 2018), but it assumes that error correction is simply re-execution of the impacted operation, which becomes implausible under our error schemes (as detailed in Sec. 7). In addition, there are other frameworks for fault or resilience analysis, such as Thales (Thales, 2018) and FIdelity (Kumar et al., 2018), which consider hardware factors such as flip-flop reuse, but those frameworks still largely focus on single-bit soft and/or transient errors. This gap motivates us to present results with realistic models and datasets in this paper.
## 10. Conclusion
With the burgeoning deployment of DRS on HPC with various architectures from GPUs to ASICs, the associated hardware errors from diverse sources have become a more pronounced threat to quality of service. In this paper, we systematically analyze the robustness of DRS against SDC from hardware errors and evaluate the factors that significantly contribute to degraded output quality under hardware errors. We develop _PyTEI_, a user-friendly, efficient and flexible framework for error injection. We identify that the MLP components inside DRS models are the most susceptible to hardware errors, while other major factors such as input sparsity account for the drastic robustness differences between DRS models on different datasets. We also evaluate 3 error mitigation methods, including ABFT, activation clipping and SBP. Activation clipping, based on our observations, is the most promising error mitigation method, regaining up to 0.3 AUC-ROC score with minimal overhead. This paper provides a systematic effort toward evaluating the hardware error robustness of emerging DRS models, providing insights into promising future research directions.
|
2306.07411 | On the Absoluteness of Rotation | We argue that in the general relativistic calculation of planetary orbits,
the choice of a reference frame which is an obligatory condition in the
Newtonian approach is replaced by an appropriate boundary condition on the
solution of Einstein equation. Implications of this observation on the nature
of rotation and the physical interpretation of the metric tensor are discussed. | P. Hraskó, D. Szepessy | 2023-06-12T20:38:41Z | http://arxiv.org/abs/2306.07411v1 | # On the Absoluteness of Rotation.
###### Abstract
We argue that in the general relativistic calculation of planetary orbits, the choice of a reference frame which is an obligatory condition in the Newtonian approach is replaced by an appropriate boundary condition on the solution of Einstein equation. Implications of this observation on the nature of rotation and the physical interpretation of the metric tensor are discussed.
In _Principia_[1] we read the following lines about Newton's famous bucket experiment:
If a vessel, hung by a long cord, is so often turned about that the cord is strongly twisted, then filled with water, and held at rest together with the water; after, by the sudden action of another force, it is whirled about in the contrary way, and while the cord is untwisting itself, the vessel continues for some time this motion; the surface of the water will at first be plain, as before the vessel began to move; but the vessel by gradually communicating its motion to the water, will make it begin sensibly to revolve, and recede by little and little, and ascend to the sides of the vessel, forming itself into a concave figure... This ascent of the water shows its endeavour to recede from the axis of its motion; and the true and absolute circular motion of the water, which is here directly contrary to the relative, discovers itself, and may be measured by this endeavour.... And therefore, this endeavour |
2305.01530 | On cubic-line arrangements with simple singularities | In the present note we study combinatorial and algebraic properties of
cubic-line arrangements in the complex projective plane admitting nodes,
ordinary triple and $A_{5}$ singular points. We deliver a Hirzebruch-type
inequality for such arrangement and we study the freeness of such arrangements
providing an almost complete classification. | Przemysław Talar | 2023-05-02T15:36:35Z | http://arxiv.org/abs/2305.01530v2 | # On cubic-line arrangements with simple singularities
###### Abstract
In the present note we study combinatorial and algebraic properties of cubic-line arrangements in the complex projective plane admitting nodes, ordinary triple and \(A_{5}\) singular points. We deliver a Hirzebruch-type inequality for such arrangement and we study the freeness of such arrangements providing an almost complete classification.
**Keywords** cubic-line arrangements, freeness, Hirzebruch-type inequalities
**Mathematics Subject Classification (2020)** 14N20, 14C20
## 1 Introduction
The main aim of the present note is to start a systematic study of arrangements of plane curves consisting of elliptic curves and lines admitting some simple singularities. Our motivation comes from recent papers devoted to arrangements consisting of conics and lines in the plane, or, generally speaking, arrangements of rational curves in the plane, see for instance [5, 11]. From that perspective, it is worth recalling a recent paper by Dimca and Pokora [5] devoted to conic-line arrangements admitting nodes, tacnodes, and ordinary triple points as singularities. In the aforementioned paper, the authors deliver a Hirzebruch-type inequality for weak combinatorics of such arrangements and provide a complete characterization of free conic-line arrangements with nodes, tacnodes, and ordinary triple points. Here our aim is to extend the current research by looking at arrangements admitting curves of positive genus. Our first choice is to study arrangements of elliptic curves and lines in the complex projective plane such that these arrangements admit nodes, \(A_{5}\) singular points, and ordinary triple points. This selection is based on a very recent paper by Dimca, Ilardi, Pokora and Sticlaru [4] where, among many things, the authors want to understand configurations of flex points and the associated arrangements of curves. In particular, we want to explain certain aspects revolving around configurations of flex points associated to elliptic curves and the arrangements constructed as unions of elliptic curves and lines tangent at flex points. Let us briefly recall the main results of the present note.
First of all, let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\} \subset\mathbb{P}^{2}_{\mathbb{C}}\) be an arrangement consisting of \(k\geqslant 1\) smooth cubic curves and \(d\geqslant 1\) lines admitting \(n_{2}\) nodes, \(t_{5}\) singular points of type \(A_{5}\), and \(n_{3}\) ordinary triple points. Then for such arrangements we have the following combinatorial count:
\[9{k\choose 2}+3kd+{d\choose 2}=n_{2}+3t_{5}+3n_{3}. \tag{1}\]
Obviously the above naive count is very coarse. In the course of the paper, by the weak combinatorics of a given arrangement of elliptic curves and lines \(\mathcal{EL}\) we mean the vector \((d,k;n_{2},n_{3},t_{5})\in\mathbb{Z}_{\geqslant 0}^{5}\). In the theory of line arrangements we have certain inequalities involving weak combinatorics, for instance the celebrated Hirzebruch inequality [1] for complex arrangements. In the setting of our paper, we show the following result.
Theorem **A** (see Theorem 2.2).: Let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\} \subset\mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement consisting of \(k\geqslant 1\) smooth cubic curves and \(d\geqslant 1\) lines such that \(3k+d\geqslant 6\), admitting only \(n_{2}\) nodes, \(t_{5}\) points of type \(A_{5}\), and \(n_{3}\) triple intersection points. Then we have
\[27k+n_{2}+\frac{3}{4}n_{3}\geqslant d+5t_{5}.\]
Our proof uses an orbifold version of the Bogomolov-Miyaoka inequality for log pairs.
Next, we focus on the freeness of our cubic-line arrangements. Let us recall basic definitions. Denote by \(S:=\mathbb{C}[x,y,z]\) the coordinate ring of \(\mathbb{P}_{\mathbb{C}}^{2}\) and for a homogeneous polynomial \(f\in S\) we denote by \(J_{f}\) the Jacobian ideal associated with \(f\), i.e., the ideal generated by the partial derivatives of \(f\).
**Definition 1.1**.: We say that \(C\subset\mathbb{P}_{\mathbb{C}}^{2}\) of degree \(d\) given by \(f\in S_{d}\) is a free curve if the \(S\)-module \(\operatorname{Syz}(J_{f})\) is minimally generated by \(2\) homogeneous syzygies \(\{r_{1},r_{2}\}\) of degrees \(d_{i}=\deg r_{i}\), ordered such that
\[1\leqslant d_{1}\leqslant d_{2}\quad\text{ and }\quad d_{1}+d_{2}=d-1.\]
The multiset \((d_{1},d_{2})\) is called the exponents of \(C\) and \(\{r_{1},r_{2}\}\) is said to be a minimal set of generators for the \(S\)-module \(\operatorname{Syz}(J_{f})\).
In the setting of the above definition, the minimal degree of the Jacobian relations among the partial derivatives of \(f\) is
\[\operatorname{mdr}(f):=d_{1}.\]
It is somehow complicated to check the freeness of a given curve using the above definition. However, by a result of du Plessis and Wall [8] we have the following effective criterion.
**Theorem 1.2** (du Plessis-Wall).: _A reduced curve \(C\subset\mathbb{P}_{\mathbb{C}}^{2}\) given by \(f\in S_{d}\) with \(\operatorname{mdr}(f)\leqslant(d-1)/2\) is free if and only if_
\[(d-1)^{2}-d_{1}(d-d_{1}-1)=\tau(C), \tag{2}\]
_where \(\tau(C)\) denotes the total Tjurina number of \(C\)._
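In practice the criterion is straightforward to apply once \(\operatorname{mdr}(f)\) and \(\tau(C)\) are known. A short Python sketch (using the Tjurina contributions \(\tau=1\) for a node, \(\tau=4\) for an ordinary triple point, and \(\tau=5\) for an \(A_{5}\) point) illustrates the freeness test that we use repeatedly below:

```python
def is_free_dpw(d, d1, tau):
    """du Plessis-Wall: a reduced plane curve of degree d with
    mdr(f) = d1 <= (d-1)/2 is free iff
    (d-1)**2 - d1*(d - d1 - 1) equals the total Tjurina number tau."""
    assert 2 * d1 <= d - 1, "criterion applies only when mdr(f) <= (d-1)/2"
    return (d - 1) ** 2 - d1 * (d - d1 - 1) == tau

# tau contributions: node -> 1, ordinary triple point -> 4, A_5 point -> 5.
# Degree 6, mdr = 2, with t5 = 3 and n3 = 1 (the arrangement EL_6 of Section 3):
print(is_free_dpw(6, 2, 3 * 5 + 1 * 4))            # True
# Degree 7, mdr = 3, with n2 = 3, n3 = 1, t5 = 4 (the arrangement EL_7):
print(is_free_dpw(7, 3, 3 * 1 + 1 * 4 + 4 * 5))    # True
```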
First of all, in the light of the above discussion, we can show the following result.
Theorem **B** (see Theorem 3.1).: Let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\} \subset\mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement consisting of \(k\geqslant 1\) smooth cubic curves and \(d\geqslant 1\) lines admitting only \(n_{2}\) nodes, \(t_{5}\) singularities of type \(A_{5}\), and \(n_{3}\) triple intersection points. If \(\mathcal{EL}\) is free, then \(3k+d\leqslant 9\).
Then we focus our efforts on the classification problem.
**Problem 1.3**.: For a fixed \(3k+d\in\{4,5,6,7,8,9\}\), does there exist a free arrangement \(\mathcal{EL}\) consisting of \(k\geqslant 1\) elliptic curves and \(d\geqslant 1\) lines with nodes, singular points of type \(A_{5}\), and ordinary triple points?
In the context of the above question, it turns out that for \(3k+d\in\{4,5,8\}\) there is not a single example of a free arrangement. Then, for \(3k+d\in\{6,7\}\), we can construct examples of free arrangements using the Fermat cubic and its inflection lines. In the case \(3k+d=9\) we are able to extract four admissible weak combinatorics, but we do not know whether they can be realized as cubic-line arrangements. To sum up this brief discussion, we have proved the following.
Theorem **C** (see Theorem 3.3).: Let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\} \subset\mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement consisting of \(k\geqslant 1\) smooth cubic curves and \(d\geqslant 1\) lines admitting only nodes, \(A_{5}\) singular points, and ordinary triple points. If \(\mathcal{EL}\) is free, then \(3k+d\in\{6,7,9\}\), where the existence of free arrangements in the case \(3k+d=9\) remains open.
## 2 On the weak combinatorics of cubic-line arrangements
Our approach towards showing a Hirzebruch-type inequality for cubic-line arrangements with nodes, singularities of type \(A_{5}\), and ordinary triple points is based on Langer's variation on the Miyaoka-Yau inequality [9] which uses local orbifold Euler numbers \(e_{orb}\) of singular points. We work with **log pairs**\((X,D)\), where \(X\) is a complex smooth projective surface and \(D\) is a boundary divisor - it is an effective \(\mathbb{Q}\)-divisor whose coefficients are \(\leqslant 1\) and such that \(K_{X}+D\) is \(\mathbb{Q}\)-Cartier. We say that a log pair \((X,D)\) is effective if \(K_{X}+D\) is effective.
**Definition 2.1**.: Let \((X,D)\) be a log pair and let \(f:Y\to X\) be a proper birational morphism from a normal surface \(Y\). Write
\[K_{Y}+D_{Y}=f^{*}(K_{X}+D)\]
with \(f_{*}D_{Y}=D\). If the coefficients of \(D_{Y}\) are less than or equal to one for every \(f:Y\to X\), then \((X,D)\) is called a log canonical surface.
In our setting we look at log pairs \((\mathbb{P}_{\mathbb{C}}^{2},\alpha D)\), where \(D\) is a boundary divisor consisting of \(k\) smooth cubic curves and \(d\) lines admitting only the singularities prescribed above, and \(\alpha\in[0,1]\cap\mathbb{Q}\). We also need to recall the local orbifold Euler numbers \(e_{orb}\) which appear in the context of our arrangements, namely:
* if \(q\) is a node, then \(e_{orb}(q,\mathbb{P}_{\mathbb{C}}^{2},\alpha D)=(1-\alpha)^{2}\) with \(0\leqslant\alpha\leqslant 1\);
* if \(q\) is a point of type \(A_{5}\), then \(e_{orb}(q,\mathbb{P}_{\mathbb{C}}^{2},\alpha D)=\frac{(4-6\alpha)^{2}}{12}\) with \(\frac{1}{3}<\alpha\leqslant\frac{2}{3}\);
* if \(q\) is an ordinary triple point, then \(e_{orb}(q,\mathbb{P}_{\mathbb{C}}^{2},\alpha D)\leqslant\left(1-\frac{3\alpha }{2}\right)^{2}\) with \(0\leqslant\alpha\leqslant\frac{2}{3}\).
Now we are ready to show our result devoted to weak combinatorics of our cubic-line arrangements.
**Theorem 2.2**.: _Let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\}\subset \mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement consisting of \(k\geqslant 1\) elliptic curves and \(d\geqslant 1\) lines such that \(3k+d\geqslant 6\). Assume that \(\mathcal{EL}\) admits only \(n_{2}\) nodes, \(t_{5}\) points of type \(A_{5}\), and \(n_{3}\) triple intersection points. Then we have_
\[27k+n_{2}+\frac{3}{4}n_{3}\geqslant d+5t_{5}.\]
Proof.: We will follow the path presented in [10]. Let \(D=\mathcal{E}_{1}+...+\mathcal{E}_{k}+\ell_{1}+...+\ell_{d}\) be a divisor and \(\deg(\mathcal{EL}):=m=3k+d\) with \(k\geqslant 1\) and \(d\geqslant 1\). We will work with the pair \(\left(\mathbb{P}_{\mathbb{C}}^{2},\frac{1}{2}D\right)\) which is log-canonical and effective since \(3k+d\geqslant 6\) and we can choose any \(\alpha\in\left(\frac{1}{3},\frac{2}{3}\right]\), so \(\alpha=\frac{1}{2}\) is obviously admissible. We are going to use inequality from [9], namely
\[\sum_{p\in\mathrm{Sing}(\mathcal{C})}3\bigg{(}\frac{1}{2}\bigg{(}\mu_{p}-1 \bigg{)}+1-e_{orb}\bigg{(}p,\mathbb{P}_{\mathbb{C}}^{2},\frac{1}{2}D\bigg{)} \bigg{)}\leqslant\frac{5}{4}m^{2}-\frac{3}{2}m, \tag{3}\]
where \(\mu_{p}\) is the Milnor number of a singular point \(p\in\mathrm{Sing}(\mathcal{C})\). The left-hand side of the inequality can be bounded from below by
\[\frac{9}{4}n_{2}+\frac{117}{16}n_{3}+\frac{35}{4}t_{5},\]
which is an easy computation. Now we look at the right-hand side. Since
\[m^{2}=(3k+d)^{2}=9k+d+2n_{2}+6n_{3}+6t_{5}\]
we have
\[\frac{5}{4}m^{2}-\frac{3}{2}m=\frac{5}{4}(9k+d+2n_{2}+6n_{3}+6t_{5})-\frac{3} {2}(3k+d).\]
Now we plug data into (3) and we get
\[36n_{2}+117n_{3}+140t_{5}\leqslant 108k-4d+40n_{2}+120n_{3}+120t_{5}\]
and after re-arranging we finally obtain
\[27k+n_{2}+\frac{3}{4}n_{3}\geqslant d+5t_{5},\]
which completes the proof.
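The final rearrangement can also be verified symbolically; the following SymPy sketch checks that \(4(\mathrm{RHS}-\mathrm{LHS})\) of the inequality obtained from (3) equals \(27k+n_{2}+\frac{3}{4}n_{3}-d-5t_{5}\):

```python
from sympy import Rational, simplify, symbols

k, d, n2, n3, t5 = symbols('k d n2 n3 t5')

# Combinatorial count gives m^2 = 9k + d + 2*n2 + 6*n3 + 6*t5 for m = 3k + d.
m = 3 * k + d
m_sq = 9 * k + d + 2 * n2 + 6 * n3 + 6 * t5

lhs = Rational(9, 4) * n2 + Rational(117, 16) * n3 + Rational(35, 4) * t5
rhs = Rational(5, 4) * m_sq - Rational(3, 2) * m

# 4*(rhs - lhs) should equal 27k + n2 + (3/4)*n3 - d - 5*t5.
print(simplify(4 * (rhs - lhs)
               - (27 * k + n2 + Rational(3, 4) * n3 - d - 5 * t5)))  # -> 0
```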
**Remark 2.3**.: Our Hirzebruch-type inequality is rather tight. If we take the Fermat cubic and its \(9\) inflectional lines, i.e., the lines tangent to the Fermat cubic at its inflection points, we obtain an arrangement having \(n_{2}=27\), \(n_{3}=3\), and \(t_{5}=9\). We then have

\[27+27+\frac{3}{4}\cdot 3=54+\frac{9}{4}\geqslant 9+5\cdot 9=54.\]
## 3 Freeness of cubic-line arrangements
We start with our bound on the degree of cubic-line arrangements.
**Theorem 3.1**.: _Let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\}\subset \mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement consisting of \(k\geqslant 1\) smooth cubic curves and \(d\geqslant 1\) lines admitting only \(n_{2}\) nodes, \(t_{5}\) singularities of type \(A_{5}\), and \(n_{3}\) ordinary triple points. If \(\mathcal{EL}\) is free, then \(3k+d\leqslant 9\)._
Proof.: We are going to use a result due to Dimca and Sernesi [7] which tells us that for our cubic-line arrangements \(\mathcal{EL}\) of degree \(m=3k+d\) given by \(f\in S_{3k+d}\) one has
\[\mathrm{mdr}(f)\geqslant\alpha_{\mathcal{EL}}\cdot m-2, \tag{4}\]
where \(\alpha_{\mathcal{EL}}\) is the Arnold exponent of \(\mathcal{EL}\). Following discussions contained in [5, 6], we can verify that
\[\alpha_{\mathcal{EL}}=\min\biggl{\{}1,\frac{2}{3},\frac{2}{3}\biggr{\}}=\frac {2}{3}.\]
To clarify, above we have taken the minimum among the log canonical thresholds of the singular points that \(\mathcal{EL}\) admits. Then we have
\[\frac{m-1}{2}\geqslant\mathrm{mdr}(f)\geqslant\alpha_{\mathcal{EL}}\cdot m-2 =\frac{2}{3}m-2,\]
which gives us that \(3k+d=m\leqslant 9\), and this completes our proof.
It means that if \(\mathcal{EL}\) is an arrangement of \(k\geqslant 1\) elliptic curves and \(d\geqslant 1\) lines admitting nodes, singularities of type \(A_{5}\), and ordinary triple points, then \(\deg(\mathcal{EL})=3k+d\in\{4,5,6,7,8,9\}\). In the next part of the paper we focus on a naive classification problem, i.e., we would like to check for which degrees \(3k+d\in\{4,5,6,7,8,9\}\) one can find a free cubic-line arrangement.
**Remark 3.2**.: Theorem 3.1 explains the disappointment of the authors of [4], where an arrangement, denoted there by \(C^{{}^{\prime\prime\prime}}\), consisting of one elliptic curve and \(9\) inflection lines turns out not to be free.
Now we proceed with our classification of cubic-line arrangements. We will do it case by case, checking each admissible degree.
**(3k+d=4)**: In this situation we have one elliptic curve and one line. By the combinatorial count, one has
\[3=n_{2}+3n_{3}+3t_{5},\]
and we have the following list of possible weak combinatorics:
\[(n_{2},n_{3},t_{5})\in\{(3,0,0),(0,0,1)\}.\]
If we have a reduced plane curve \(C\) of degree \(4\) which is free, then its total Tjurina number has to be equal to \(7\). If we have an arrangement with \(3\) nodes, then the total Tjurina number is equal to \(3\), and if our arrangement has one \(A_{5}\) point, then the total Tjurina number is equal to \(5\). This shows that for degree \(4\) we do not have free arrangements.
**(3k+d=5)**: In this scenario we have one elliptic curve and two lines. Our combinatorial count gives us that
\[7=n_{2}+3n_{3}+3t_{5}.\]
We can easily check that we have \(6\) possibilities for the weak combinatorics, namely \[(n_{2},n_{3},t_{5})\in\{(1,0,2),(1,1,1),(1,2,0),(4,0,1),(4,1,0),(7,0,0)\}.\] Recall that a reduced plane curve \(C\) of degree \(5\) is free if \(d_{1}\in\{1,2\}\) and its total Tjurina number satisfies \(\tau(C)\in\{12,13\}\). Computing naively the total Tjurina number for each of the weak combinatorics presented above, we see that this number is less than or equal to \(11\). This shows that for degree \(5\) we do not have free arrangements.
**(3k+d=6)**: Let us consider the arrangement \(\mathcal{EL}_{6}=\{\mathcal{E}_{1},\ell_{1},\ell_{2},\ell_{3}\}\) in \(\mathbb{P}_{\mathbb{C}}^{2}\) which is given by \[Q(x,y,z)=(x^{3}+y^{3}+z^{3})\cdot(x^{3}+y^{3}).\] For this arrangement we have \(t_{5}=3\) and \(n_{3}=1\). Using Singular we can check that \(d_{1}=\operatorname{mdr}(Q)=2\) since one has \[x^{2}\cdot\frac{\partial\,Q}{\partial_{y}}-y^{2}\cdot\frac{\partial\,Q}{ \partial_{x}}=0.\] Using Theorem 1.2 we obtain that \[19=25-d_{1}(5-d_{1})=\tau(\mathcal{EL}_{6})=3\cdot 5+1\cdot 4=19,\] which means that \(\mathcal{EL}_{6}\) is free.
**(3k+d=7)**: Let us consider the arrangement \(\mathcal{EL}_{7}=\{\mathcal{E}_{1},\ell_{1},\ell_{2},\ell_{3},\ell_{4}\}\) in \(\mathbb{P}_{\mathbb{C}}^{2}\) which is given by \[H(x,y,z)=(x^{3}+y^{3}+z^{3})\cdot(x^{3}+y^{3})\cdot(y+z).\] For this arrangement we have \(t_{5}=4\), \(n_{2}=3\) and \(n_{3}=1\). Using Singular we can check that \(d_{1}=\operatorname{mdr}(H)=3\). Using Theorem 1.2 we obtain \[27=36-d_{1}(6-d_{1})=\tau(\mathcal{EL}_{7})=3\cdot 1+1\cdot 4+4\cdot 5=27,\] which means that \(\mathcal{EL}_{7}\) is free.
**(3k+d=8)**: Recall that by [6] a reduced curve \(C\subset\mathbb{P}_{\mathbb{C}}^{2}\) of degree \(8\) given by \(f\in S_{8}\) admitting only ADE singularities satisfies the condition that \(d_{1}=\operatorname{mdr}(f)\geqslant 3\), and \(C\) is free if and only if it is maximizing, i.e., \(\tau(C)=37\) and \(d_{1}=3\). Now we are going to use Theorem 3.1 suitably adapted to our scenario, i.e., \(C\) is a cubic-line arrangement with nodes, \(A_{5}\) singularities, and ordinary triple points, then we have \[\frac{7}{2}\geqslant\operatorname{mdr}(f)\geqslant\frac{2}{3}\cdot 8-2=\frac{10 }{3}.\] Since \(\operatorname{mdr}(f)\) is an integer, we have a contradiction.
**(3k+d=9)**: Using the same argument as above, we have \[4=\frac{8}{2}\geqslant\operatorname{mdr}(f)\geqslant\frac{2}{3}\cdot 9-2=4,\] so the only case to consider here is \(d_{1}=4\). Assuming the freeness of an arrangement, we can use Theorem 1.2 obtaining \[48=64-d_{1}(8-d_{1})=n_{2}+4n_{3}+5t_{5}.\] (5)
If the degree of our arrangements is \(9\), we have \((k,d)\in\{(1,6),(2,3)\}\). Let us start with the case \((k,d)=(2,3)\). By the combinatorial count
\[30=n_{2}+3n_{3}+3t_{5}. \tag{6}\]
Our problem boils down to finding nonnegative integer solutions of the following system of equations:
\[\begin{cases}48=n_{2}+4n_{3}+5t_{5}\\ 30=n_{2}+3n_{3}+3t_{5}.\end{cases}\]
We have only two possible solutions, namely
\[(n_{2},n_{3},t_{5})\in\{(3,0,9),(0,2,8)\}.\]
At this moment we do not know whether these weak combinatorics can be realized over the complex numbers as cubic-line arrangements and we hope to come back to this issue soon.
Let us now pass to the case \((k,d)=(1,6)\). By the combinatorial count we have
\[33=n_{2}+3n_{3}+3t_{5}. \tag{7}\]
Again, our problem boils down to finding nonnegative integer solutions of the following system of equations:
\[\begin{cases}48=n_{2}+4n_{3}+5t_{5}\\ 33=n_{2}+3n_{3}+3t_{5}.\end{cases}\]
It turns out that we have four solutions, namely
\[(n_{2},n_{3},t_{5})\in\{(0,7,4),(3,5,5),(6,3,6),(9,1,7)\}.\]
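Both systems can be solved by direct enumeration; the following short Python sketch reproduces the two solution lists above (before the Hirzebruch-type inequality is applied):

```python
def weak_combinatorics(tjurina_total, count_total):
    """Enumerate nonnegative integer (n2, n3, t5) with
    n2 + 4*n3 + 5*t5 = tjurina_total  (freeness via du Plessis-Wall, d1 = 4)
    n2 + 3*n3 + 3*t5 = count_total    (combinatorial intersection count)."""
    sols = []
    for t5 in range(count_total // 3 + 1):
        for n3 in range((count_total - 3 * t5) // 3 + 1):
            n2 = count_total - 3 * n3 - 3 * t5
            if n2 >= 0 and n2 + 4 * n3 + 5 * t5 == tjurina_total:
                sols.append((n2, n3, t5))
    return sols

print(weak_combinatorics(48, 30))  # (k,d) = (2,3): [(0, 2, 8), (3, 0, 9)]
print(weak_combinatorics(48, 33))  # (k,d) = (1,6): [(0,7,4), (3,5,5), (6,3,6), (9,1,7)]
```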
Now our Hirzebruch inequality comes into play since we can easily check that weak combinatorics \((k,d;n_{2},n_{3},t_{5})=(1,6;6,3,6)\) and \((k,d;n_{2},n_{3},t_{5})=(1,6;9,1,7)\) do not satisfy the inequality
\[27k+n_{2}+\frac{3}{4}n_{3}\geqslant d+5t_{5}.\]
On the other hand, we cannot decide whether the first two weak combinatorics can be realized as cubic line arrangements over the complex numbers. We hope to return to this question as soon as possible with more effective methods.
We can summarize our discussion by the following classification result, which is the main result of our note.
**Theorem 3.3**.: _Let \(\mathcal{EL}=\{\mathcal{E}_{1},...,\mathcal{E}_{k},\ell_{1},...,\ell_{d}\} \subset\mathbb{P}^{2}_{\mathbb{C}}\) be an arrangement consisting of \(k\geqslant 1\) smooth cubic curves and \(d\geqslant 1\) lines admitting only nodes, \(A_{5}\) singular points, and ordinary triple points. If \(\mathcal{EL}\) is free, then \(3k+d\in\{6,7,9\}\), where the existence of free arrangements in the case \(3k+d=9\) remains open._
## Acknowledgment
This note is part of the author's Master's thesis, written under the supervision of Piotr Pokora. Moreover, the author is partially supported by The Excellent Small Working Groups Programme DNWZ.711/IDUB/ESWG/2023/01/00002 at the Pedagogical University of Cracow.
|
2302.14824 | Auditing Lustre file system | With the increasing demand for data storage and the exponential growth of
data, traditional single-server architectures are no longer sufficient to
handle the massive amounts of data storage, transfer, and various file system
events. As a result, distributed file systems have become a necessity to
address the scalability challenges of file systems. One such popular
distributed file system is Lustre, which is extensively used in
high-performance computing environments. Lustre offers parallel file access,
allowing multiple clients to access and store data simultaneously. However, in
order to ensure the security and integrity of data, auditing plays a crucial
role. Lustre auditing serves as a proof of security and enables the
implementation of robust security features such as authentication with
Kerberos, mandatory access control with SELinux, isolation, and more. Auditing
helps track and monitor file system activities, providing valuable insights
into user actions, system events, and potential security breaches. The
objective of this project is to explore Lustre auditing using CentOS, a popular
Linux distribution, within a Lustre architecture. By implementing Lustre
auditing, we aim to enhance the security and reliability of the file system.
Additionally, we plan to develop a graphical interface that presents the
auditing features in a user-friendly and visually appealing manner. This
interface will provide administrators and users with a convenient way to
monitor and analyze auditing logs, view access patterns, detect anomalies, and
ensure compliance with security policies. By combining the power of Lustre's
parallel file system architecture with comprehensive auditing capabilities and
an intuitive graphical interface, we aim to provide a robust and user-friendly
solution for managing and securing large-scale data storage and access. | Sayed Erfan Arefin | 2023-01-21T06:38:42Z | http://arxiv.org/abs/2302.14824v2 | # Auditing Lustre file system
###### Abstract
Over time, we are facing massive demand for ever-increasing amounts of data storage. Traditional single-server architectures fail to meet the demands of handling enormous amounts of data storage, transfer, and file system events. Distributed file systems have therefore become a necessity in order to address file system scalability. The Lustre file system is one of the most popular parallel file systems and is used by most high-performance computers today. Lustre auditing plays a vital role as a proof of security, supporting rich security features such as authentication with Kerberos, mandatory access control with SELinux, isolation, etc. In this project, we explore Lustre auditing using CentOS in a Lustre architecture and present the auditing features with a graphical interface.
lustre, centos, auditing, parallel file system
## 1 Introduction
We are heading towards a network-centric advancement of computer technology in which reliable and high-performance storage systems have become a must. Conventional file system models with a single-server architecture tend to fail to achieve the scalability that current demand requires. Therefore, distributed file systems are more common nowadays, as they ensure availability, scalability and secure performance.
The Lustre file system is one of them, and it is also one of the most common and influential parallel file systems, used by most high-performance computers today. Lustre is continuously expanding in scalability and complexity, which opens the door to hardware and software vulnerabilities, inconsistent metadata, security threats and administrative issues [4], [2]. Lustre auditing helps to check whether there is a possibility of inconsistencies, failures and administrative errors.
The Lustre system gives the user the means to read, write and persist data using physical machines as major components, each dedicated to a specific role. It is used to scale file system performance for massive data. Lustre file systems consist of three main components: one or more clients, a metadata service and one or more object storage services, shown in Figure 1. These three components each run their own Lustre logic, but for networking they converge to the same point. For example, the MDS handles allocation of storage objects on the OSSs, the OSSs provide data storage to clients, and clients use the file system and request filesystem events. In this project, we gained a thorough understanding of the Lustre file system, built it using four virtual machines acting as the three Lustre components, and presented the Lustre auditing information using MongoDB.
## 2 Experimental Setup
This section describes how we set up the virtual machines, how we installed Lustre on those VMs, and their configuration, along with how we used the changelog and stored the records in MongoDB.
### Setting up the Virtual Machines
Four virtual machines were used to establish the main components of the Lustre system: one VM for the client, one VM for the MDS and two VMs for the OSSs. We used Oracle VirtualBox to set up the VMs, where each VM has 2 GB of memory, 20 GB of disk space and CentOS version 8.3 as the operating system with kernel version 4.18.0-240.1.1.el8_x86_64.
On the MDS, one Metadata Target (MDT) was created. Six Object Storage Targets (OSTs) were created on each of the OSSs. This information can be fetched from the client by using the "lfs df -h" command. The result after running this command on the client can be seen in Figure 2.
Figure 1: Lustre parallel file system
### Lustre Installation
For the installation, we copied the Lustre packages to an HTTP server on the network and integrated them into local YUM repositories. We used the following instructions to establish a web server as a YUM repository host for the Lustre packages.
1. At first we created a temporary YUM repository definition (/tmp/lustre-repo.conf), which is used to assist the initial acquisition of the Lustre packages. For example, its e2fsprogs entry points at the upstream repository:
```
[e2fsprogs-wc]
name=e2fsprogs-wc
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el8
gpgcheck=0
```
2. Next, we used the reposync command (distributed in the yum-utils package) to download mirrors of the Lustre repositories to the manager server using the following commands:
```
mkdir -p /var/www/html/repo
cd /var/www/html/repo
reposync -c /tmp/lustre-repo.conf -n \
--repoid=lustre-server \
--repoid=lustre-client \
--repoid=e2fsprogs-wc
```
3. Then, we created the repository metadata with the following instructions:
```
yum install createrepo -y
cd /var/www/html/repo
for i in e2fsprogs-wc lustre-client lustre-server; do
createrepo $i
done
```
4. Afterwards, we created a file containing repository definitions for the Lustre packages and stored it in the web server's static content directory, which makes it easier to distribute to the Lustre servers and clients. The commands we used are as follows:
```
hn=`hostname --fqdn`
cat >/var/www/html/lustre.repo <<__EOF
[lustre-server]
name=lustre-server
baseurl=https://$hn/repo/lustre-server
enabled=0
gpgcheck=0
proxy=_none_
sslverify=0

[lustre-client]
name=lustre-client
baseurl=https://$hn/repo/lustre-client
enabled=0
gpgcheck=0
sslverify=0

[e2fsprogs-wc]
name=e2fsprogs-wc
baseurl=https://$hn/repo/e2fsprogs-wc
enabled=0
gpgcheck=0
sslverify=0
__EOF
```
5. We installed the Lustre server software by installing e2fsprogs, installing and upgrading the kernel, and installing the ldiskfs kmod and Lustre packages; we then loaded Lustre into the kernel. The commands we used are given below:
```
yum --nogpgcheck --disablerepo=* --enablerepo=e2fsprogs-wc install e2fsprogs
yum --nogpgcheck --disablerepo=base,extras,updates \
--enablerepo=lustre-server install \
kernel \
kernel-devel \
kernel-headers \
kernel-tools \
kernel-tools-libs \
kernel-tools-libs-devel
```
6. Finally, we installed the lustre-client packages on the client VM.
### Lustre Configuration
The Lustre file system configuration steps should always be performed in the following order:
* Metadata Server
* Object Store Servers
* Client
After the client is configured, the Lustre file system is usable. SELinux and the firewall were disabled in all virtual machines. The configuration for each component is as follows.
### Metadata Server
The following steps were executed for the MDS.

1. The baseline for the MDS is creating a file system named lustre on the server, including the metadata target (--mdt) and the management server (--mgs).

2. A mount point was created.

3. The MDS was started by mounting the MDT with the following command: mount -t lustre /dev/vg00/mdt /mdt
### Object Store Servers
The following steps were executed for each OSS.
1. Mount points were created for the OSTs; this example uses six, named ost1 through ost6.

3. The OSS was started by mounting the OSTs to the corresponding mount points created in the previous steps. The following commands were used for all the mount points on all the OSSs.
### Client
The client was set up by mounting the Lustre mount point using the following commands.

1. First, a mount point was created using the following command.

2. The client completed setup by mounting the client mount point using the following command.
## 3 File system auditing
File system auditing is an important part of data organization and data integrity. From an organizational perspective, file system auditing enables the system to provide proof that the data stored in the system is valid and has not been modified. Certain criteria of file system auditing are as follows.

* It evaluates the organization's ability to protect its information assets. An organization in this century can hold several terabytes of information regarding its policies, organization, employee information and data owned by the organization. It is the organization's responsibility to protect its data from being stolen or corrupted. Thus, file system auditing enables the organization to evaluate whether the data it owns and stores is protected.
* It also checks the ability to properly dispense information to authorized parties. An organization can have multiple levels of employees, each assigned different tasks and different sorts of access to the organization's data. Providing proper access to this data according to policy is required.
* Auditing consists of verifying that all data accesses were made according to the access control policy in place.
* Auditing can be used as proof of the security in place.
## 4 Implementation
Lustre has a "changelog" facility which can be used to audit the file system. Changelog records contain all information necessary for auditing purposes. They identify the object of an action with file identifiers (FIDs) and target names, and the subject of an action with UID/GID and NID information. Every entry in the changelog output carries a timestamp, which identifies the time of the action.
### Prepare for changelog
In order to prepare our implementation to be able to get changelogs from the MDS virtual machine, the following commands were executed.
1. It is required to register a user to receive changelogs. To register a new changelog user for a device (example: lustre-MDT0000), the following command was executed: lctl --device lustre-MDT0000 changelog_register
2. It is also required to enable all changelog entry types, which was done with the following command: lctl set_param mdd.lustre-MDT0000.changelog_mask=ALL
An example of a changelog entry of the "OPEN" type can be seen in the following example.
This command was used in a Python program to fetch the changelog every 5 seconds. The changelogs are then pushed to a MongoDB collection, where the collection is named after the node name, in this case MDT-0000. The documents are stored in the MongoDB collection, and afterwards the changelogs are cleared in order to avoid redundancy.
Python 3.8 was used to collect changelog information from the MDS. The output of the command was formatted properly and stored in MongoDB. A snapshot of the MongoDB collection after running the MDS and creating 100 dummy 10 MB files from the client can be seen in Figure 3. All possible changelog record types can be observed in Table 1 [1].
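A minimal sketch of such a collector (the changelog consumer id, the parsed field names, and the database layout are illustrative assumptions) is shown below:

```python
import subprocess
import time

from pymongo import MongoClient

DEVICE = "lustre-MDT0000"
CL_USER = "cl1"  # changelog consumer registered via changelog_register
coll = MongoClient("mongodb://localhost:27017")["lustre_audit"]["MDT-0000"]

# Leading fields of a changelog record; the remainder (FID, names, ...)
# is kept as one raw string.
FIELDS = ["rec_id", "type", "time", "date", "flags", "extra"]

while True:
    proc = subprocess.run(["lfs", "changelog", DEVICE],
                          capture_output=True, text=True)
    last_rec = None
    for line in proc.stdout.splitlines():
        parts = line.split(maxsplit=len(FIELDS) - 1)
        if not parts:
            continue
        coll.insert_one(dict(zip(FIELDS, parts)))
        last_rec = parts[0]
    if last_rec is not None:
        # Acknowledge consumed records so they are purged from the MDT
        # and not returned again on the next poll.
        subprocess.run(["lfs", "changelog_clear", DEVICE, CL_USER, last_rec])
    time.sleep(5)
```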
## 6 Related works
Dong et al. conducted a performance study of Lustre file systems. They focused on the LFSCK operation and found that there can be cascading errors which may be unrecoverable, which can be a severe problem for HPC systems [3].
In the work of Arnab et al., a study on file system monitoring was carried out. While many tools are available for desktop file system monitoring, there is no proper tool to monitor the Lustre parallel file system; they proposed a monitoring tool for scalable file systems [5].
## 7 Conclusion and future works
In this study, Lustre installation, configuration and file auditing functions were implemented. Several queries can be executed on the data that was collected from the Lustre file system and stored in MongoDB. Future work includes creating an interactive dashboard with Node.js in order to improve the usability of the Lustre auditing system. Another direction for future work is to detect security vulnerabilities based on the audit data collected and stored in MongoDB using machine learning classifiers.
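For illustration, one such query (assuming the record-type field name used by the collector sketch above) counts the audited events per changelog type:

```python
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["lustre_audit"]["MDT-0000"]

# Count audited events per changelog record type (e.g. CREAT, OPEN, UNLNK).
pipeline = [
    {"$group": {"_id": "$type", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in coll.aggregate(pipeline):
    print(row["_id"], row["count"])
```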
###### Acknowledgements.
Mr Ahmadian Misha was the mentor for this project.
|
2305.15545 | Reconstructing Transit Vehicle Trajectory Using High-Resolution GPS Data | High-resolution location ("heartbeat") data of transit fleet vehicles is a
relatively new data source for many transit agencies. On its surface, the
heartbeat data can provide a wealth of information about all operational
details of a recorded transit vehicle trip, from its location trajectory to its
speed and acceleration profiles. Previous studies have mainly focused on
decomposing the total trip travel time into different components by vehicle
state and then extracting measures of delays to draw conclusions on the
performance of a transit route. This study delves into the task of
reconstructing a complete, continuous and smooth transit vehicle trajectory
from the heartbeat data that allows for the extraction of operational
information of a bus at any point in time into its trip. Using only the
latitude, longitude, and timestamp fields of the heartbeat data, the authors
demonstrate that a continuous, smooth, and monotonic vehicle trajectory can be
reconstructed using local regression in combination with monotonic cubic spline
interpolation. The resultant trajectory can be used to evaluate transit
performance and identify locations of bus delay near infrastructure such as
traffic signals, pedestrian crossings, and bus stops. | Yuzhu Huang, Awad Abdelhalim, Anson Stewart, Jinhua Zhao, Haris Koutsopoulos | 2023-05-24T20:19:07Z | http://arxiv.org/abs/2305.15545v2 | # Reconstructing Transit Vehicle Trajectory Using High-Resolution GPS Data
###### Abstract
High-resolution location ("heartbeat") data of transit fleet vehicles is a relatively new data source for many transit agencies. On its surface, the heartbeat data can provide a wealth of information about all operational details of a recorded transit vehicle trip, from its location trajectory to its speed and acceleration profiles. Previous studies have mainly focused on decomposing the total trip travel time into different components by vehicle state and then extracting measures of delays to draw conclusions on the performance of a transit route. This study delves into the task of reconstructing a complete, continuous and smooth transit vehicle trajectory from the heartbeat data that allows for the extraction of operational information of a bus at any point in time into its trip. Using only the latitude, longitude, and timestamp fields of the heartbeat data, the authors demonstrate that a continuous, smooth, and monotonic vehicle trajectory can be reconstructed using local regression in combination with monotonic cubic spline interpolation. The resultant trajectory can be used to evaluate transit performance and identify locations of bus delay near infrastructure such as traffic signals, pedestrian crossings, and bus stops.
## I Introduction
### _Background_
As more sensors and devices are installed on transit vehicles for monitoring various aspects of transit fleet operations, higher quality and more granular data becomes available to transit agencies. One recent data source addition is the high-resolution GPS data of vehicle location, often called "second-by-second data" (referred to as "heartbeat data" hereinafter), that records the historical location and other metadata of each transit vehicle at almost every second.
While analysts can use operational metrics obtained from analyzing stop-level AVL data to identify areas with poor performance, it is difficult to determine what exactly contributes to the poor performance in each area. As an example, an analyst may find that the average speed of buses is particularly low within one stop-to-stop segment, but the low average speed could be caused by slow-moving traffic due to congestion, or by stopping delays incurred by traffic signals. The average speed information alone is not enough to determine which one of the two is the primary source of delay, thus making it challenging to pinpoint the most effective traffic intervention or transit improvement strategy.
In comparison, the heartbeat data offers timestamps of vehicles not only at bus stops, but also at locations in between them. Therefore, the heartbeat data contains a richer set of data than AVL and would allow analysts to understand the exact behaviors of vehicles at any location along its route. Such detailed information offers an opportunity to uncover the interactions between buses and other road infrastructure and points of conflict besides bus stops that could impede bus movement such as traffic signals, pedestrian crossings, etc.
### _Motivation_
At first glance, the heartbeat data seems to provide information about the exact location of each vehicle in such detail that it should be able to tell everything an analyst would wish to know about a vehicle's trip. However, closer examination of the data would show that the coordinate data can be noisy and vehicles may appear to straddle the road network, and the data recorded can have inconsistent frequency, making it difficult to directly read the exact location and speed of the vehicle from the heartbeat data at every point of interest.
The task of extracting operational information from the heartbeat data would be easier if a complete transit vehicle trajectory could be reconstructed from raw heartbeat data. Such a trajectory would provide reliable information about the location, speed, and acceleration of the transit vehicle at any time and distance into the trip, thus offering valuable information to analysts trying to understand how transit vehicles interact with the built environment at a very granular level.
This research features the following contributions:
* A concise definition for an ideal transit vehicle trajectory is proposed;
* A process for how noisy and intermittent heartbeat data can be converted to a series of discrete timestamped location data is demonstrated;
* Several smoothing algorithms that convert the discrete data to a continuous trajectory function are explored and evaluated.
To the authors' best knowledge, this research is the first of its kind that attempts to understand bus operations by reconstructing complete vehicle trajectories from heartbeat data.
### _Related Work_
Researchers have explored various uses of transit heartbeat data to understand bus operations. Hall and Vyas calculated average segment speeds of transit vehicles by dividing segment length by non-dwell travel times, and evaluated
how well transit vehicle speeds can represent the congestion state of general traffic [1]. Colghlan et al. calculated the average speed of the entire trip by dividing the total trip length by total trip time and compared the result to an ideal speed to draw conclusion about queueing delay [2]. Aemmer et al. used high-resolution GTFS-RT data to calculate vehicle speeds by taking the slope between two time-distance data points and used the difference between the said speed and free-flow speed to categorize vehicle delays [3]. Lind and Reid calculated vehicle speeds similarly and made conclusions about local and rapid bus services by comparing the speed and reliability patterns [4]. These researchers have relied on drawing conclusions about bus operations by calculating an average vehicle speed within a segment or between data records rather than attempting to reconstruct a complete speed profile of the vehicle. Cathey and Deiley constructed speed profiles of transit vehicles on freeways using the Kalman Filter and concluded that granular AVL data gathered from buses can serve as an additional data source for freeway performance monitoring, but did not address the issue of negative speeds resulted from the filtering process [5].
As reviewed by Li et al., researchers in the non-transit space have proposed smoothing algorithms and imputation methods to reconstruct complete vehicle trajectories using vehicle location data of general-purpose vehicles [6]. Toledo et al. proposed the use of local regression as the smoothing algorithm to minimize the measurement errors of vehicle location data, and to construct a smooth and monotonic high-order polynomial as the vehicle trajectory. Venthuruthiyil et al. further discussed the strategies for selecting the optimal window size and polynomial order in reconstructing the trajectory function [7]. Although the researchers demonstrated the applicability of local regression in retrieving complete vehicle trajectories, they did not address the lack of guarantee for the differentiability of high-order polynomials. Rafati Fard et al. presented a method for reconstructing the trajectories of vehicles captured in aerial photography using a two-step wavelet analysis technique that reduces measurement errors, but did not discuss whether the reconstructed location trajectory is guaranteed to be non-decreasing [8]. Wu et al. explored reconstructing traffic-related spatial-temporal dataset by imputing missing values with Graphical Neural Networks, but presented applications only in reconstructing complete speed profiles rather than location profiles [9].
Therefore, this research attempt to address the gap in literature by presenting methods to reconstruct a continuous, smooth, and monotonic transit vehicle trajectory directly from high-resolution GPS coordinate data to enable the extract of the location, speed, and acceleration of a bus at any point in time into its trip.
## II From Raw Coordinates to Time-Distance Data
### _Notation and Formulation_
The goal in reconstructing the vehicle trajectory using heartbeat data is to convert the series of timestamped coordinates into a continuous function that maps every point in time into a bus trip, i.e. "time into trip", to a "distance into trip" value. With classic vehicle kinematics equations, the time-to-distance mapping offered by the trajectory function will then allow for derivation of the vehicle speed and acceleration profiles.
Suppose the heartbeat data of a bus trip contains a series of \(n\) timestamped coordinates, i.e.
\[C=[C_{1},C_{2},...,C_{n}]^{T} \tag{1}\]
recorded at timestamps
\[S=[S_{1},S_{2},...,S_{n}]^{T}, \tag{2}\]
where for each \(i\in\{1,2,...,n\}\), \(C_{i}\) is a pair of latitude and longitude values \((lat_{i},lon_{i})\) and timestamp \(S_{i}\) is the date and time at which the location of the bus is recorded as coordinate \(C_{i}\) by the onboard device.
Through a map matching process, the series of raw coordinates, \(C\), is converted to a series of map-matched coordinates, \(M\). Using information about the characteristics of the road network, the series of timestamps \(S\) and the map-matched coordinates \(M\) can be converted to time into trip values
\[T=[t_{1},t_{2},...,t_{n}]^{T}, \tag{3}\]
and distance into trip values
\[D=[d_{1},d_{2},...,d_{n}]^{T}. \tag{4}\]
Detailed information about how the map matching process can be carried out is described in Section II-B.
The series of discrete time-distance data can be used to infer the location of the vehicle at time points in the complement set \(T^{C}=\{t\in T_{R}:t\notin T\}\), where \(T_{R}\) is the set of real values within the time range \([0,t_{n}]\). To do so, an interpolation method or smoothing method is needed in order to impute the distance into trip values at time points in \(T^{C}\). The algorithm takes \(T\) and \(D\) as input, and depending on the assumption around the error term associated with each value in \(D\), outputs a function \(f:T_{R}\to X\), where \(X\) is a real set of true distance into trip values corresponding to each time into trip value in \(T_{R}\).
The continuous function \(f\) provides a mapping from time \(T_{R}\) to distance \(X\), and can be alternatively expressed as
\[x(t)=f(t). \tag{5}\]
The speed profile of the vehicle can then be derived as
\[v(t)=\frac{d}{dt}x(t)=f^{\prime}(t), \tag{6}\]
and the acceleration profile is
\[a(t)=\frac{d^{2}}{dt^{2}}x(t)=f^{\prime\prime}(t). \tag{7}\]
The entire process of reconstructing a complete vehicle trajectory from timestamped raw coordinates given by the heartbeat data is illustrated in the flow chart in Figure 1.
### _Map Matching_
The raw coordinates recorded in the heartbeat data sometimes land in areas beyond the road network rather than within actual road segments. To mitigate the issue, a map-matching process is needed to "snap" each coordinate point to the closest road segment to restore the location information as much as possible.
The map-matching tool used by the authors is the Valhalla map engine, which offers the "trace attributes" service that allows for the conversion of raw coordinates \(C\) to map-matched coordinates \(M\)[10, 11]. The service also returns the ID of the specific road segment, \(r_{i}\), along the inferred path that each matched coordinate \(M_{i}\) lies on, as well as the most probable normalized position, \(p_{i}\), along the road segment \(r_{i}\). The normalized position value \(p_{i}\) is given so that \(0\leq p_{i}\leq 1\). \(p_{i}=0\) means the coordinate \(M_{i}\) is at the beginning of the segment \(r_{i}\), while \(p_{i}=1\) means the end.
For each bus trip, the vehicle is expected to run on a predefined and fixed route, so the ordered list of possible road segments that make up the pattern of the bus trip is known and denoted by
\[\mathbf{R}=\langle R_{1},R_{2},...,R_{s}\rangle, \tag{8}\]
where \(s\) is the total number of road segments in the list, and segment \(R_{b}\) is downstream of \(R_{a}\) for \(b>a\). The map matched coordinate \(M_{i}\) is considered valid only if \(r_{i}\in\mathbf{R}\).
When analyzing the heartbeat data of a single bus trip, the trip is recorded to start at timestamp \(S_{1}\) and ends at timestamp \(S_{n}\). Therefore, each timestamp \(S_{i}\) can be converted to a "time into trip" value \(t_{i}\) through the simple calculation:
\[t_{i}=S_{i}-S_{1}. \tag{9}\]
For every road segment in \(\mathbf{R}\), its features, including length, are available from OpenStreetMap [11]. With this information, each matched coordinate of the bus can be converted to a distance value relative to the starting location of the trip. Denote the length of segment \(R_{j}\) by \(L_{j}\) and the index of the road segment \(r_{i}\) in the list \(\mathbf{R}\) by \(id_{i}\) (i.e. \(r_{i}=R_{id_{i}}\)); then a "distance into trip" value can be calculated as follows:
\[d_{i}=\sum_{j=1}^{id_{i}-1}L_{j}+L_{id_{i}}*p_{i}-L_{id_{1}}*p_{1}. \tag{10}\]
Combining the time into trip and distance into trip information calculated using Equations 9 and 10, a discrete series of time-distance data can be obtained from the map-matched coordinates.
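As an illustration of Equations 9 and 10, the following Python sketch (with hypothetical segment ids and lengths; it subtracts the full offset of the first matched point, so trips starting mid-pattern are also handled) converts matched positions to distance into trip values:

```python
import numpy as np

def distance_into_trip(seg_ids, positions, route_segments, seg_lengths):
    """Convert map-matched (segment id, normalized position p_i) pairs
    into cumulative distance-into-trip values d_i (Equations 9-10)."""
    index = {seg: j for j, seg in enumerate(route_segments)}
    # cum[j] = total length of all segments strictly upstream of segment j.
    cum = np.concatenate([[0.0],
                          np.cumsum([seg_lengths[s] for s in route_segments])])
    d = np.array([cum[index[r]] + seg_lengths[r] * p
                  for r, p in zip(seg_ids, positions)])
    return d - d[0]  # offset so the first matched point is at distance 0

# Hypothetical three-segment pattern R with lengths in meters.
route = ["e1", "e2", "e3"]
lengths = {"e1": 120.0, "e2": 80.0, "e3": 200.0}
print(distance_into_trip(["e1", "e2", "e2"], [0.5, 0.25, 1.0], route, lengths))
# [  0.  80. 140.]
```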
### _Properties of an Ideal Bus Trajectory_
The goal of data smoothing is to obtain a trajectory that resembles the real-world behavior of the bus as realistically as possible. Therefore, a few properties must be considered for the result to be representative of an actual transit vehicle trip. These properties include:
* The trajectory should be non-decreasing, i.e. the distance into trip at a later time point should not be smaller than the distance into trip at a previous time point.
* The trajectory is made up of composite cubic polynomials. The position of the vehicle is a result of control inputs and follows vehicle kinematics, and as Nagy et al. reported, cubic polynomials are the lowest order curves that can be used to generate the trajectory of car-like robots [12].
* The trajectory function should be continuous and at least once differentiable so that the speed profiles are also continuous and can be easily retrieved from the function.
Since a suitable smoothing algorithm is the backbone of the process of constructing a continuous trajectory function from discrete time-distance data points, the exploration and evaluation of candidate algorithms are discussed separately in the following section.
## III Trajectory Smoothing
### _Linear Interpolation_
The simplest method to construct a continuous curve on the time-space diagram using the discrete time-distance data obtained from the previous steps is through linear interpolation, or equivalently, connecting adjacent data points using line segments (LSEG).
An example snippet of the trajectory obtained from linear interpolation is shown in Figure 2(a). Considering the three properties of an ideal bus trajectory introduced above, it can be noticed that although the trajectory is continuous and monotonic, it is not smooth and therefore not differentiable. The practical consequence of this drawback is that a continuous speed profile cannot be easily obtained from such a trajectory function.
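A minimal sketch of LSEG using NumPy (with hypothetical data); np.interp evaluates the piecewise-linear trajectory at any queried time.

```python
import numpy as np

T = np.array([0.0, 3.0, 6.0, 10.0])     # time into trip (s)
D = np.array([0.0, 22.0, 60.0, 115.0])  # distance into trip (m)

t_query = np.linspace(0.0, 10.0, 101)
x_lseg = np.interp(t_query, T, D)  # piecewise-linear trajectory values
```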
### _Polynomial Cubic Interpolation_
To address the non-differentiability issue of linear interpolation, the Piecewise Cubic Hermite Interpolant (PCHIP) algorithm developed by Fritsch et al. is explored [13]. The PCHIP algorithm connects adjacent data points in \(T\) and \(D\) with a cubic polynomial function and enforces a smooth trajectory by ensuring that the first derivative is equal for the
Fig. 1: Flow chart of data processing.
connecting polynomial functions on each side of a data point [14]. This maintains the cubic polynomial and monotonic properties of the trajectory.
Figure 2(b) shows an example of the trajectory obtained using PCHIP. While the PCHIP algorithm guarantees a continuous first derivative, it does not ensure a continuous second derivative, resulting in a continuous speed profile but not a continuous acceleration profile.
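SciPy ships an implementation of this interpolant; the sketch below (hypothetical data) builds the PCHIP trajectory and differentiates it analytically to obtain the continuous speed profile. Differentiating a second time illustrates the discontinuous acceleration profile noted above.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

T = np.array([0.0, 3.0, 6.0, 10.0])
D = np.array([0.0, 22.0, 60.0, 115.0])

f = PchipInterpolator(T, D)  # monotone, once-differentiable trajectory
v = f.derivative(1)          # continuous speed profile v(t)
a = f.derivative(2)          # acceleration: piecewise, not continuous

speeds = v(np.linspace(0.0, 10.0, 101))
```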
### _Local Regression_
Although the PCHIP algorithm allows for the construction of a trajectory that satisfies all three properties of an ideal trajectory, a fundamental assumption it requires is that all discrete distance into trip data points supplied as input are the true distances, i.e., that no error is associated with any distance value. Such an assumption may not hold, however, once the measurement error of the onboard device is taken into consideration.
A local regression process (LOCREG) can be used to estimate the true distance into trip of the bus vehicle at each recorded timestamp by weighing the importance of nearby data points based on their distance from the timestamp of interest, consequently smoothing out the transit vehicle trajectory [15].
Suppose the true distance of the vehicle at each time point \(t_{i}\) is \(x_{i}\) such that
\[x_{i}=d_{i}+\varepsilon_{i}, \tag{11}\]
where \(d_{i}\) is the measured distance at time \(t_{i}\), and \(\varepsilon_{i}\) is the measurement error for \(d_{i}\). The objective of the local regression algorithm is to find a function \(f:t\to x\) such that it solves the following minimization problem at each data point \((t_{i},d_{i})\):
\[\min\sum_{j=1}^{n}w_{i,j}(d_{j}-f(t_{j}))^{2}, \tag{12}\]
where the value of each weight term \(w_{i,j}\) is determined by the selected bandwidth and kernel function. In this study, cubic polynomials, the tricube kernel and a bandwidth of 20 data points are selected empirically in the estimation of each true distance \(x_{i}\).
An example snippet of the trajectory obtained from local regression is shown in Figure 2(c). LOCREG is superior to LSEG and PCHIP in that it produces a continuous trajectory that takes into account the measurement error of each measured distance, but it does not necessarily guarantee differentiability or monotonicity of the overall function. This can result in negative speed values when taking the first derivative of the LOCREG trajectory, and presents challenges in deriving speed and acceleration profiles.
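One accessible realization is the LOWESS smoother in statsmodels, shown below on synthetic noisy data. Note the caveats: statsmodels' lowess fits local degree-1 (linear) regressions with a tricube kernel rather than the local cubic fits used in this study, and the frac argument only approximates the 20-point bandwidth.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
T = np.arange(0.0, 300.0, 3.0)               # time into trip (s)
D = 8.0 * T + rng.normal(0.0, 15.0, T.size)  # noisy measured distances (m)

frac = 20 / T.size  # bandwidth of roughly 20 data points
x_hat = lowess(D, T, frac=frac, return_sorted=False)
# x_hat estimates the true distances x_i at each t_i; monotonicity is
# not guaranteed, which motivates LOCREG-PCHIP in the next subsection.
```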
### _Interpolation and Regression_
To address the monotonicity issue of local regression, an exploratory algorithm is proposed. This algorithm, named LOCREG-PCHIP, combines the desired properties of both PCHIP and LOCREG: it removes the non-monotonic sections of the LOCREG output and fills in the now-missing intervals by passing the remaining data through a monotonic interpolation algorithm such as PCHIP.
A complete workflow that starts from data smoothing and ends with filling in data to obtain a smooth and monotonic trajectory is detailed in Algorithm 1.
```
1:X = {}
2:\(f_{locreg}=LOCREG(T,D)\)
3:for i = 1, 2,..., n do
4:\(x_{i}=f_{locreg}(t_{i})\)
5:if\(i>1\) and \(x_{i}<x_{i-1}\)then
6:\(x_{i}=x_{i-1}\)
7:endif
8:\(X.append(x_{i})\)
9:endfor
10:\(f=PCHIP(T,X)\)
11:return \(f\)
```
**Algorithm 1** LOCREG-PCHIP
An example snippet of the trajectory obtained from LOCREG-PCHIP is shown in Figure 2(d). The trajectory obtained using LOCREG-PCHIP preserves the true distance values estimated from local regression at all time points except those where the monotonicity principle is violated, in which case the distance value is set equal to the largest distance value observed prior to that time point. Since the algorithm ends with constructing a PCHIP function using the modified distance values, the resulting trajectory function is both monotonic and differentiable.
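Algorithm 1 translates directly into a short Python function. The sketch below uses statsmodels' LOWESS as the local-regression step (again a degree-1 stand-in for the local cubic fits used in the paper) and SciPy's PchipInterpolator for the monotone interpolation step; np.maximum.accumulate implements the clamp in lines 5-7 of Algorithm 1.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from statsmodels.nonparametric.smoothers_lowess import lowess

def locreg_pchip(T, D, bandwidth_pts=20):
    """Smooth with local regression, enforce monotonicity, then fit a
    monotone, differentiable PCHIP trajectory through the result."""
    T = np.asarray(T, dtype=float)
    D = np.asarray(D, dtype=float)
    # Step 1: estimate the true distances x_i via local regression.
    x = lowess(D, T, frac=bandwidth_pts / len(T), return_sorted=False)
    # Step 2: clamp non-monotonic values (lines 5-7 of Algorithm 1).
    x = np.maximum.accumulate(x)
    # Step 3: monotone interpolation through the corrected points.
    return PchipInterpolator(T, x)
```

The returned interpolant can then be differentiated, e.g. f = locreg_pchip(T, D) followed by v = f.derivative(1), to recover the speed profile.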
Fig. 2: Trajectory constructed from different algorithms.
## IV Evaluation and Validation
Ground-truth location, speed, and acceleration data of a bus are often unavailable. The following sections offer some alternative validation strategies. The discussion uses a sample trajectory that is considered representative of a typical bus trip; the conclusions are therefore assumed to be applicable to other bus trips as well.
### _Validation of the Speed Profile_
One way to validate the accuracy of the speed profile is to check if the speed from the calculated trajectory is indeed zero when the transit vehicle was recorded as having doors open at a bus stop in the AVL data. The speed profiles obtained from all aforementioned algorithms can be displayed over color bands of door-open intervals in the AVL data as shown in Figure 3.
In order to measure how well the speed trajectory aligns with the AVL stop events, the number of integer seconds within AVL door-open intervals that correspond to a zero speed on the speed profile can be calculated. The notion of "zero speed" is loosely defined as a speed below which a vehicle is considered not traveling, to account for additional errors in the distance values that were not accounted for in the smoothing process.
As summarized in Table I, if the "stop speed" is defined as below 5 mph, the speeds at over 90% of the integer timestamps labeled as door-open in the AVL data are correctly captured using the LOCREG, LOCREG-PCHIP and PCHIP algorithms. A more intuitive plot comparing the percentage of stop speeds correctly captured by each algorithm is shown in Figure 4, which shows that the LOCREG and LOCREG-PCHIP algorithms outperform the other two algorithms at most threshold levels.
### _Validation of the Acceleration Profile_
Another way to check the performance of the algorithms is to examine the percentage of accelerations that are beyond a reasonable threshold. Based on the research by Kirchner et al., the maximum bus acceleration is \(0.17g=3.7\) mphps and the maximum deceleration is \(-0.24g=-5.3\) mphps [16]. As indicated by the acceleration profiles shown in Figure 5, LOCREG-PCHIP produces a more reasonable acceleration profile than all other methods. This conclusion is supported by the percentage of unreasonable accelerations summarized in Table II, where only 1.3% of the accelerations from LOCREG-PCHIP are beyond the reasonable threshold, whereas 5.3% of accelerations from PCHIP are unreasonable. Both, however, are major improvements over linear interpolation, which would yield about 54% unreasonable acceleration instances.
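Both validation checks reduce to a few lines once the speed and acceleration callables v and a from a fitted trajectory are available. The sketch below is a hedged illustration: the door-open intervals are hypothetical placeholders, and the trajectory is assumed to be in meters against seconds.

```python
import numpy as np

MPS_TO_MPH = 2.23694  # m/s to mph conversion factor

def pct_stops_captured(v, door_open_intervals, threshold_mph=5.0):
    """Share of integer seconds inside AVL door-open intervals whose
    speed falls below the 'stop speed' threshold."""
    secs = np.concatenate([np.arange(int(np.ceil(s)), int(np.floor(e)) + 1)
                           for s, e in door_open_intervals])
    return 100.0 * np.mean(v(secs) * MPS_TO_MPH < threshold_mph)

def pct_unreasonable_accel(a, t, lo=-5.3, hi=3.7):
    """Share of sampled accelerations outside [-5.3, 3.7] mphps."""
    acc = a(t) * MPS_TO_MPH
    return 100.0 * np.mean((acc < lo) | (acc > hi))
```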
Given the discussion above, the trajectories produced by both LOCREG-PCHIP and PCHIP are better than those from LSEG or LOCREG when validated against stop-level AVL data and reasonable acceleration thresholds.
### _Selection of the Best Algorithm_
The "best algorithm" should produce trajectories that not only perform reasonably well in the evaluation of speed and acceleration profiles, but also satisfy all three characteristics of an ideal trajectory. As shown in Table III, the algorithms that satisfy these criteria are LOCREG-PCHIP and PCHIP.
Comparing the LOCREG-PCHIP algorithm with PCHIP, LOCREG-PCHIP is preferable because it is able to predict the
\begin{table}
\begin{tabular}{c c} \hline \hline Algorithm &
\begin{tabular}{c} \% unreasonable \\ acceleration \\ \end{tabular} \\ \hline LSEG & 53.6 \\ LOCREG & 0.0 \\ LOCREG-PCHIP & 1.3 \\ PCHIP & 5.3 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Percentage of trajectory data of which accelerations are beyond the \([-5.3,3.7]\) mphps threshold.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{Stop-Speed Threshold (mph)} \\ \cline{2-4} & 0 & 3 & 5 \\ \hline LSEG & 0 & 77 & 86 \\ LOCREG & 7 & 77 & 100 \\ LOCREG-PCHIP & 0 & 84 & 92 \\ PCHIP & 0 & 92 & 98 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Percentage of AVL door-open integer timestamps at which speeds are correctly captured by each algorithm.
Fig. 4: Percentage of AVL door-open integer timestamps at which speeds are correctly captured vs. the “stop speed” threshold.
Fig. 3: Comparison of all speed profiles.
true distances from the observed distances using the information provided by adjacent data points, by taking advantage of the LOCREG algorithm, rather than trusting each observation completely. Therefore, LOCREG-PCHIP is selected as the best overall algorithm for reconstructing bus trajectories.
## V Case Study
To demonstrate how a complete bus trajectory can be reconstructed using heartbeat data, a case study using real-world data is provided in the following section. The case study provides a concrete example of the data processing procedures discussed in the previous sections and offers insight into how analysts can utilize complete bus trajectories, constructed using the methods provided in this research, to study transit operations.
### _Data Source_
The heartbeat data analyzed in this study are archived snapshots of the GTFS-RT data taken from the public-facing API made available by the Massachusetts Bay Transportation Authority (MBTA). Each heartbeat data point records a timestamped location of the transit vehicle during its trip. An examination of the heartbeat data recorded over 12 weekdays of outbound Route 1 operated by the MBTA reveals that most data points are recorded at intervals shorter than 10 seconds, with the mode being 3 seconds.
For the purpose of this case study, the heartbeat data of one outbound Route 1 trip operated on a weekday morning (Monday April 25, 2022 at 8 am) is analyzed.
### _Reconstructing Vehicle Trajectory_
The heartbeat data of the analyzed trip contains 328 timestamped raw coordinates, a subset of which is shown in the \(S\) and \(C\) columns in Table IV. The raw coordinates are passed into the map-matching engine Valhalla [10], which returns the corresponding matched coordinates shown in column \(M\), the ID of the road segment in column \(r\), and the position of each matched coordinate along its segment in column \(p\).
Using Equations 9 and 10, the time into trip \(T\) and distance into trip \(D\) values of each data point can be calculated and passed to the LOCREG-PCHIP algorithm; the resulting continuous vehicle trajectory, color-coded by vehicle speed, is shown in the time-space diagram in Figure 6. The locations of road infrastructure such as bus stops, traffic signals, and pedestrian crossings can also be identified through the map matching process and plotted in the same figure.
The time-space diagram of a complete vehicle trajectory, in combination with AVL data, can provide rich
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(i\) & \(S\) & \(C\) & \(M\) & \(r\) & \(p\) \\ \hline
0 & 2022-04-25 & (42.372642, & (42.372660, & 0 & 0.639 \\ & 08:24:45 & -71.119048) & -71.119108) & & \\
1 & 2022-04-25 & (42.372365, & (42.372373, & 0 & 0.940 \\ & 08:24:50 & -71.119241) & -71.119267) & & \\
2 & 2022-04-25 & (42.372246, & (42.372252, & 2 & 0.463 \\ & 08:24:57 & -71.119324) & -71.119319) & & \\... & & & & & \\ \hline \hline \end{tabular}
\end{table} TABLE IV: An example map-matched coordinate table.
Fig. 5: Comparison of all acceleration profiles.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Algorithm} & \multicolumn{7}{c}{Evaluation Criteria} \\ \cline{2-8} & MON\({}^{1}\) & CUB\({}^{2}\) & DIF\({}^{3}\) & ERR\({}^{4}\) & AVL (\%)\({}^{5}\) & ACC (\%)\({}^{6}\) & Best \\ \hline LSEG & ✓ & ✗ & ✗ & ✗ & 86 & 46 & \\ LOCREG & ✗ & ✗ & ✗ & ✓ & 100 & 100 & \\ LOCREG-PCHIP & ✓ & ✓ & ✓ & ✓ & 92 & 99 & ✓ \\ PCHIP & ✓ & ✓ & ✓ & ✗ & 98 & 95 & \\ \hline \hline \end{tabular}
* \({}^{1}\)MON: the trajectory is non-decreasing
* \({}^{2}\)CUB: the trajectory is made up of cubic polynomials
* \({}^{3}\)DIF: the trajectory is at least once differentiable
* \({}^{4}\)ERR: minimizes measurement error
* \({}^{5}\)AVL (\%): the percentage of AVL dwell activities correctly captured
* \({}^{6}\)ACC (\%): the percentage of accelerations within a reasonable threshold
\end{table} TABLE III: Evaluation of algorithms.
Fig. 6: A snippet of a LOCREG-PCHIP trajectory used for the operational analysis of a sample weekday AM outbound trip of Route 1 operated by the MBTA.
information regarding the bus operation and lend insight into the movement of the bus throughout its trip.
### _Qualitative Analysis of A Vehicle Trajectory_
Several observations of the operations of the sample bus trip are described below to showcase the information that a complete bus trajectory can provide.
#### V-C1 Stop Dwelling Activities
As shown by the sections of the trajectory labeled \(A\) in Figure 6, the trajectory captures the bus stopping at the far-side bus stop at Massachusetts Ave & Albany St after it waits at the traffic signal just upstream of the bus stop.
The sections labeled \(B\) show that the bus opened its doors once upstream of the bus stop at 84 Massachusetts Avenue and once at the bus stop. Interestingly, this behavior could be due to the Massachusetts State regulation which requires bus drivers to stop and open the doors before crossing a railroad track [17]. Both of these stopping activities are validated by the door-open intervals recorded in the AVL data.
#### V-C2 Vehicle Speed
The section labeled \(C\) in Figure 6 shows that the bus traveled at a speed of approximately 25-30 mph at 1.9-2.2 miles into the trip (Harvard Bridge), faster than its speed over most other sections of the trip.
#### V-C3 Stopping at a Pedestrian Crossing
From the section labeled \(D\) in Figure 6, one can see that the vehicle stopped before a pedestrian crossing prior to stopping at the bus stop at Massachusetts Ave opposite the Christian Science Center.
#### V-C4 Stopping at a Traffic Signal
The section labeled \(E\) shows that the bus stopped at a traffic signal upstream of the far-side stop at Massachusetts Ave & Tremont St. Besides clearly showing the existence of the stopping activity at the traffic signal, the trajectory also shows the duration for which the bus stopped at the signal. Such information about how long buses stop at a signal can be valuable for decision-making related to transit signal priority projects.
## VI Conclusion and Discussion
This study developed methodologies to reconstruct continuous, monotonic, and differentiable bus trajectories from noisy heartbeat data. The trajectory smoothing algorithm LOCREG-PCHIP was identified as the best algorithm that produces a trajectory satisfying ideal properties while also performing well against expected speed and acceleration data. The continuous bus trajectories allow for the extraction of the bus location, speed, and acceleration at any point in time into its trip.
Several limitations are present with the methodologies that could be addressed in future research. First, this study only explored heartbeat data recorded at intervals mostly below 10 seconds; further research may be needed to determine how the algorithms perform on data with larger time intervals. Secondly, none of the smoothing algorithms presented in this study guarantees that the trajectory is twice differentiable; therefore, the acceleration profile is not guaranteed to be smooth. Lastly, the validation method used in this study to evaluate the performance of the algorithms merely checks whether the speed and acceleration values fall within acceptable thresholds. An ideal method, however, would be to compare the location, speed, and acceleration of vehicles with ground-truth data collected from telematics devices.
Further research by the authors will explore how multiple bus trajectories can be analyzed in batch to allow for the categorization and quantification of different types of delays encountered by buses on specific routes or corridors.
|
2301.08785 | ntLink: a toolkit for de novo genome assembly scaffolding and mapping
using long reads | With the increasing affordability and accessibility of genome sequencing
data, de novo genome assembly is an important first step to a wide variety of
downstream studies and analyses. Therefore, bioinformatics tools that enable
the generation of high-quality genome assemblies in a computationally efficient
manner are essential. Recent developments in long-read sequencing technologies
have greatly benefited genome assembly work, including scaffolding, by
providing long-range evidence that can aid in resolving the challenging
repetitive regions of complex genomes. ntLink is a flexible and
resource-efficient genome scaffolding tool that utilizes long-read sequencing
data to improve upon draft genome assemblies built from any sequencing
technologies, including the same long reads. Instead of using read alignments
to identify candidate joins, ntLink utilizes minimizer-based mappings to infer
how input sequences should be ordered and oriented into scaffolds. Recent
improvements to ntLink have added important features such as overlap detection,
gap-filling and in-code scaffolding iterations. Here, we present three basic
protocols demonstrating how to use each of these new features to yield highly
contiguous genome assemblies, while still maintaining ntLink's proven
computational efficiency. Further, as we illustrate in the alternate protocols,
the lightweight minimizer-based mappings that enable ntLink scaffolding can
also be utilized for other downstream applications, such as misassembly
detection. With its modularity and multiple modes of execution, ntLink has
broad benefit to the genomics community, from genome scaffolding and beyond.
ntLink is an open-source project and is freely available from
https://github.com/bcgsc/ntLink. | Lauren Coombe, René L. Warren, Johnathan Wong, Vladimir Nikolic, Inanc Birol | 2023-01-20T20:08:00Z | http://arxiv.org/abs/2301.08785v1 | # ntLink: a toolkit for _de novo_ genome assembly scaffolding and mapping using long reads
###### Abstract
With the increasing affordability and accessibility of genome sequencing data, _de novo_ genome assembly is an important first step to a wide variety of downstream studies and analyses. Therefore, bioinformatics tools that enable the generation of high-quality genome assemblies in a computationally efficient manner are essential. Recent developments in long-read sequencing technologies have greatly benefited genome assembly work, including scaffolding, by providing long-range evidence that can aid in resolving the challenging repetitive regions of complex genomes. ntLink is a flexible and resource-efficient genome scaffolding tool that utilizes long-read sequencing data to improve upon draft genome assemblies built from any sequencing technologies, including the same long reads. Instead of using read alignments to identify candidate joins, ntLink utilizes minimizer-based mappings to infer how input sequences should be ordered and oriented into scaffolds. Recent improvements to ntLink have added important features such as overlap detection, gap-filling and in-code scaffolding iterations. Here, we present three basic protocols demonstrating how to use each of these new features to yield highly contiguous genome assemblies, while still maintaining ntLink's proven computational efficiency. Further, as we illustrate in the alternate protocols, the lightweight minimizer-based mappings that enable ntLink scaffolding can also be utilized for other downstream applications, such as disassembly detection. With its modularity and multiple modes of execution, ntLink has broad benefit to the genomics community, from genome scaffolding and beyond. ntLink is an open-source project and is freely available from [https://github.com/bcgsc/ntLink](https://github.com/bcgsc/ntLink).
In recent years, long-read sequencing platforms from Oxford Nanopore Technologies (ONT, Oxford, UK) and Pacific Biosciences of California, Inc. (PacBio, Menlo Park, CA) have gained in popularity. Although long reads still have a higher base error rate than typical short read sequencing technologies, such as those from Illumina, their lengths can range from kilobases to megabases, orders of magnitude longer than the typical read length of 150-300 bp for short reads. This length distribution enables the long reads to span over the numerous repetitive elements present in complex genomes, therefore allowing for repeats to be resolved in a draft assembly. With the continued improvement and accessibility of long-read genome sequencing, there is great opportunity to harness the rich information of this data type for facilitating and improving _de novo_ genome assemblies.
While current state-of-the-art _de novo_ long-read assemblers such as Flye (Kolmogorov et al., 2019) are generating highly contiguous assemblies, we observe that they still do not fully exhaust or necessarily correctly use the long-range evidence inherent in long reads (Coombe et al., 2021). Therefore, there is great value in stand-alone genome assembly scaffolders, such as LINKS (Warren et al., 2015), OPERA-LG (Gao et al., 2016), LRScaf (Qin et al., 2019), and our alignment-free scaffolder ntLink (Coombe et al., 2021). ntLink is a lightweight, minimizer-based long-read scaffolding tool that was previously published as a central step in the correction and scaffolding pipeline LongStitch (Coombe et al., 2021). More recently, ntLink was integrated as a key step in our _de novo_ long-read assembler GoldRush (Wong et al., 2022). ntLink uses long-read evidence to further contiguate draft assemblies from any sequencing technology. Instead of using alignments of long reads to the draft assembly, like many state-of-the-art long-read scaffolding tools, ntLink uses minimizer mappings (leveraging particular subsets or sketches of the sequence \(k\)-mers), which we have shown to be effective for assembly scaffolding as it confers considerable computational benefit.
ntLink is a flexible toolkit which can be run in various modes depending on the desired user output (Figure 1), with multiple new functionalities introduced since the published version. For each basic protocol, the input files provided by the users include the long reads and a draft assembly to be improved, and the main output file is a scaffolded assembly in FASTA format. The core functionality of ntLink uses the long-read evidence and generated minimizers to infer how the input contigs (draft assembly sequences) should be ordered and oriented, and subsequently performs these joins. The basic protocols differ in additional steps that are performed to more accurately join contigs together, fill ambiguous sequences (i.e. gap-filling), and enhance the final contiguity of the assembly. In earlier versions of ntLink, contigs were joined together end-to-end naively, whether they overlapped in the genomic space or not. The overlap detection feature of ntLink identifies cases where adjacent contigs overlap, and trims them to remove the overlapping regions prior to concatenation (Basic Protocol 1, Figure 1A). Furthermore, scaffolding assemblies generally introduces gaps, or ambiguous nucleotide bases ("N"), between joined contigs. The gap-filling feature of ntLink instead fills gaps with bases from a representative read supporting the join (Basic Protocol 2, Figure 1B). Finally, running additional iterations of ntLink can further improve the contiguity of the final output assembly. To take advantage of these contiguity gains efficiently, ntLink can run in-code scaffolding iterations (also termed rounds hereon) powered by a coordinate liftover module (Basic Protocol 3, Figure 1C).
Although ntLink was developed as a scaffolding tool, the initial minimizer-guided step of mapping long reads to the draft assembly can also benefit other bioinformatics utilities that require approximate mapping information, including the ntEdit+Sealer polishing component of the GoldRush assembler (L. X. Li et al., 2022; Paulino et al., 2015; Warren et al., 2019; Wong et al., 2022) and Tigmint-long (Coombe et al., 2021; Jackman et al., 2018), an assembly correction tool also integrated in the LongStitch pipeline. In Alternate Protocols 1 and 2, we highlight the usage of ntLink as a mapping tool, and, as an example of using ntLink mappings in a different downstream application, showcase how these mappings can be utilized in Tigmint-long.
With its numerous modes and available protocols, ntLink is a flexible and wide-reaching tool for improving _de novo_ genome assemblies and helping researchers better leverage their long-read sequencing data.
## Strategic Planning
### Hardware
ntLink is a command-line tool, which can be run on 64-bit Linux or MacOS operating systems with sufficient available RAM (random-access memory). The amount of RAM and disk space required for running ntLink varies with the draft genome size and coverage of the long-read dataset. See Table 1 for the peak memory and disk space usage for representative ntLink scaffolding runs using four different species with varying genome sizes. The wall-clock time for each ntLink run is also included in Table 1. For the basic and alternate protocols, each step uses 5 threads, but this can be adjusted depending on the specifications of the user's machine. The parameter t controls the number of threads used (where applicable), and is shown in each corresponding command.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Species** & **Approximate** & **Fold coverage of** & **Peak memory** & **Peak disk space** & **Wall-clock time** \\ & **genome size** & **long reads** & **usage (GB)** & **usage (GB)** & **(min)** \\ & **(Mbp)** & & & & \\ \hline _Caenorhabditis elegans_ & 100 & 93 & 0.9 & 0.9 & 6.5 \\ \hline _Oryza sativa_ & 373 & 62 & 3.4 & 2.8 & 21.4 \\ _Solanum lycopersicum_ & 824 & 72 & 5.4 & 5.6 & 43.4 \\ _Homo sapiens_ & 3,055 & 50 & 20.8 & 25.0 & 136.0 (2h16m) \\ \hline \hline \end{tabular}
*Default ntLink parameters were used for each example run, except for _Solanum lycopersicum_, which used k=64, w=250. One round of ntLink, including the gap-filling option, was run with each dataset.
\end{table}
Table 1: Example peak memory usage, disk space usage and wall-clock times when running ntLink scaffolding on draft genomes of various sizes. The disk space usage of the input draft assembly and long reads files are not included in the benchmarks*. See Supplementary Tables 1 and 2 for more information about the data used for these example ntLink runs.
Figure 1: Flowchart showing the various available features for ntLink scaffolding. ntLink uses minimizers (indicated by coloured circles) to map the input long reads to the draft genome. Identical minimizers are represented by the same colour. ntLink uses these mappings to infer how the input contigs (draft genome sequences) should be ordered and oriented to produce genome scaffolds. (A) ntLink can also use minimizers to detect when adjacent contigs have an overlapping region, and resolves these overlaps (indicated by the vertical black line), as described in Basic Protocol 1. (B) In addition, as demonstrated in Basic Protocol 2, ntLink can use the input long-read information and minimizers to fill gaps between joined contigs (dark grey box). (C) Finally, running multiple in-code rounds, or iterations, of ntLink scaffolding, as shown in Basic Protocol 3, can maximize the contiguity of the final output scaffolds. The ntLink rounds can be run with or without gap-filling, as indicated by the dashed transitive arrow.
#### Software
ntLink is available from the conda package manager for a more straightforward installation. Users can also install ntLink from the provided source code on GitHub. Detailed instructions for installing ntLink are available in the Support Protocol.
#### Files
Each ntLink protocol requires input long sequencing reads (in FASTA or FASTQ format) and an input draft genome assembly to be scaffolded (in FASTA format). Both files can be in single-line or multi-line (standard) FASTA format.
## Basic Protocol 1: ntLink scaffolding using overlap detection
Basic Protocol 1 describes running ntLink to scaffold an input draft assembly using long reads, with overlap detection enabled. The overlap detection functionality identifies when adjacent contigs (draft assembly sequences) overlap in genomic space, and trims the contigs to ensure the sequences are merged without duplicating the overlapping sequence.
ntLink leverages the long-range information inherent in long-read sequencing data to scaffold an input draft assembly. First, ntLink maps the long reads to a draft assembly using a minimizer-based approach. Long-read mappings that span multiple contigs provide evidence that these contigs should be joined together. These mappings are also used to estimate gap sizes between the contigs. After ntLink determines the sequences of oriented contigs to be joined together as scaffolds, the overlap detection feature identifies adjacent contigs that have a putative overlap (indicated by a negative estimated gap size), and resolves the overlaps. Finally, the ordered and oriented scaffolds are output in FASTA format.
### Necessary Resources:
#### Hardware
This protocol requires a 64-bit Linux or MacOS operating system with sufficient RAM and available disk space (See Strategic Planning for more information).
#### Software
The following software must be installed and available in your PATH environment variable:
- SRA toolkit (v3.0.0+): _([https://github.com/ncbi/sra-tools](https://github.com/ncbi/sra-tools))_
- curl: _([https://curl.se/](https://curl.se/))_
- Python 3.7+: _([https://www.python.org/](https://www.python.org/))_
- ntLink (v1.3.7+): _([https://github.com/bcgsc/ntLink](https://github.com/bcgsc/ntLink))_
- ABySS (v2.3.0+): _([https://github.com/bcgsc/abyss](https://github.com/bcgsc/abyss))_
- QUAST (v5.2.0+): _([https://github.com/ablab/quast](https://github.com/ablab/quast))_
#### Files
The input files for ntLink are long genome sequencing reads and a draft genome assembly. The long sequencing reads can be provided in FASTA or FASTQ format, either compressed with gzip or uncompressed. The input draft assembly to be scaffolded should be in FASTA format (multi-line or single-line).
#### Sample Files
To demonstrate the usage of ntLink in Basic Protocol 1, we will scaffold a _C. elegans_ draft assembly with a corresponding _C. elegans_ Oxford Nanopore long-read dataset. The _C. elegans_ long reads are available from the Sequence Read Archive (SRA) under accession SRR10028109. The draft assembly is a Flye (Kolmogorov et al., 2019) assembly of the same _C. elegans_ long reads, and is available from [https://doi.org/10.5281/zenodo.7526395](https://doi.org/10.5281/zenodo.7526395). To assess the genome assemblies generated from ntLink, a _C. elegans_ N2 (Bristol strain) reference genome (accession GCA_000002985.3) will be used. There are detailed steps in the protocol to guide the user in downloading these files.
### Protocol steps:
1. Install ntLink _See Support Protocol for detailed instructions and options for installing ntLink._
2. Install protocol-specific dependencies curl, SRA toolkit, and QUAST. _Option A: Use conda package manager_
If Option A of the Support Protocol was used to install ntLink, the protocol-specific dependencies can be installed in the same conda environment.
conda activate ntlink_env
conda install -y -c bioconda -c conda-forge curl quast 'sra-tools>=2.10.2'
_Option B: Install from source_
Install curl _Many servers will already have curl installed. To check if curl is available:_
which curl _If you see the path to a curl installation, curl is already installed and you can continue to part (ii) to install QUAST. Otherwise, follow the next steps._
_Go to [https://curl.se/download.html](https://curl.se/download.html), and find the tarball for the latest released version. Version 7.86.0_
_is used below to illustrate the steps. Change your terminal's current directory to the location where you would like curl installed, download the tarball, extract the tarball and change your directory into the downloaded curl directory._
cd /path/to/new/curl/installation
wget [https://curl.se/download/curl-7.86.0.tar.gz](https://curl.se/download/curl-7.86.0.tar.gz)
tar zvxf curl-7.86.0.tar.gz
cd curl-7.86.0/
_._ _Compile the source code_
mkdir curl_install
./configure --without-ssl --prefix=/path/to/new/curl/installation/curl-7.86.0/curl_install
make
make install
_c._ _Add the curl installation directory to your PATH_
export PATH=/path/to/new/curl/installation/curl-7.86.0/curl_install/bin:$PATH
ii. Install QUAST (Mikheenko et al., 2018)
_Go to [https://github.com/ablab/quast/releases](https://github.com/ablab/quast/releases), download the latest release and extract the tarball._
_Version 5.2.0 is shown as an example in the following commands._
cd /path/to/new/quast/installation
curl -L --output quast-5.2.0.tar.gz [https://github.com/ablab/quast/releases/download/quast_5.2.0/quast-5.2.0.tar.gz](https://github.com/ablab/quast/releases/download/quast_5.2.0/quast-5.2.0.tar.gz)
tar xvzf quast-5.2.0.tar.gz
cd quast-5.2.0/
./install.sh
_b._ _Add the QUAST installation directory to your PATH_
export PATH=/path/to/new/quast/installation/quast-5.2.0:$PATH
iii. Install the SRA toolkit
_Go to [https://github.com/ncbi/sra-tools/wiki/02-Downloading-SRA-Toolkit](https://github.com/ncbi/sra-tools/wiki/02-Downloading-SRA-Toolkit), and find the pre-built binary appropriate for your system. Download the archive and extract it. We use the CentOS Linux release v3.0.2 as an example below._
cd /path/to/new/sratools/installation
curl -L --output sratoolkit.3.0.2-centos_linux64.tar.gz [https://ftp-trace.ncbi.nlm.nih.gov/sra/sdk/3.0.2/sratoolkit.3.0.2-centos_linux64.tar.gz](https://ftp-trace.ncbi.nlm.nih.gov/sra/sdk/3.0.2/sratoolkit.3.0.2-centos_linux64.tar.gz)
tar xvzf sratoolkit.3.0.2-centos_linux64.tar.gz
_Add the sratoolkit installation directory to your PATH_
export PATH=/path/to/new/sratools/installation/sratoolkit.3.0.2-centos_linux64/bin:$PATH
3. Navigate to the directory where you want to run the ntLink tests, and download the sample long-read data.
cd /path/to/ntLink/test
fasterq-dump SRR10028109
_Once the command has finished, the reads will be available in the file_ SRR10028109.fastq_. These reads are ~93-fold coverage C. elegans Oxford Nanopore long reads._
4. Download the sample draft long-read assembly. _This is a Flye (Kolmogorov et al., 2019) assembly of the long reads downloaded in the previous step._ curl -L --output celegans_flye.fa [https://zenodo.org/record/7526395/files/celegans_flye.fa](https://zenodo.org/record/7526395/files/celegans_flye.fa)
5. Download a reference genome assembly for the _C. elegans_ Bristol N2 strain. _This assembly will be used in a later step when assessing the final assembly scaffolds using QUAST._ curl -L --output celegans_reference.fa.gz [https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/002/985/GCA_000002985.3_WBcel235/GCA_000002985.3_WBcel235_genomic.fna.gz](https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/002/985/GCA_000002985.3_WBcel235/GCA_000002985.3_WBcel235_genomic.fna.gz)
6. Run ntLink using the ntLink Makefile. _The specified values of k, w and overlap are the default values, but are included in the command to demonstrate how to set these parameters using the ntLink Makefile._ ntLink scaffold target=celegans_flye.fa reads=SRR10028109.fastq k=32 w=100 t=5 overlap=True
7. Check the logs and output files to ensure that the run executed successfully. _If ntLink completed successfully, the ntLink log should contain the message "Done ntLink! Final post-ntLink scaffolds can be found in: celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa". In addition, the final output scaffolds file "celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa" should be in the current working directory._
8. Assess the final output scaffolds using abyss-fac (_de novo_ approach) and QUAST (reference-based approach). See Table 2 for the expected statistics generated from these steps.
a. Run abyss-fac using the input draft genome assembly and the post-ntLink genome assembly.
abyss-fac -G100e6 --count-ambig celegans_flye.fa celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa
_The "-G" option specifies the genome size, which is approximately 100 Mbp for C. elegans. The "--count-ambig" option counts any ambiguous bases (ex. "N"s) in the output statistics. After running this command, you will see that the NG50 length (at least half of the genome is in pieces at least this length) increases after ntLink scaffolding, and the number of sequences ("n") decreases._
b. Run QUAST to assess the input draft genome assembly and the post-ntLink genome assembly using the previously downloaded _C. elegans_ reference assembly.
assembly compared to the reference. Therefore, after scaffolding, we want to minimize the number of contigs and misassemblies, while maximizing the NG50 and NGA50 lengths. Note that the QUAST executable will be_ quast.py _if the tool was installed manually._
## Basic Protocol 2: ntLink scaffolding with gap-filling
Basic Protocol 2 describes how to run ntLink scaffolding with a gap-filling step. In this protocol, instead of simply introducing ambiguous bases, ntLink fills gaps with sequence from the input long-read sequencing data. The initial steps of ntLink are executed as described in Basic Protocol 1. Then, an additional step is performed which fills in the ntLink-induced scaffold gaps with bases from a representative read that supports the given join (the read that has the highest average number of minimizers in common with the incident contigs). Following this gap-filling step, the scaffolds are output in FASTA format. Because the gaps are filled with raw long-read sequence, we recommend polishing the output assembly using a long-read polishing tool such as ntEdit+Sealer (L. X. Li et al., 2022; Wong et al., 2022), Racon (Vaser et al., 2017) or Medaka (_Medaka: Sequence Correction Provided by ONT Research_, n.d.) following the ntLink scaffolding and gap-filling.
### Necessary Resources:
#### Hardware
This protocol requires a 64-bit Linux or MacOS operating system with sufficient RAM and available disk space (See Strategic Planning for more information).
#### Software
The following software must be installed and available in your PATH environment variable:
- SRA toolkit (v3.0.0+): (_[https://github.com/ncbi/sra-tools](https://github.com/ncbi/sra-tools)_)
- curl: (_[https://curl.se/](https://curl.se/)_)
- Python 3.7+: (_[https://www.python.org/](https://www.python.org/)_)
- ntLink (v1.3.7+): (_[https://github.com/bcgsc/ntLink](https://github.com/bcgsc/ntLink)_)
- ABySS (v2.3.0+): (_[https://github.com/bcgsc/abyss](https://github.com/bcgsc/abyss)_)
- minimap2: (_[https://github.com/lh3/minimap2](https://github.com/lh3/minimap2)_)
- Racon (v1.5.0+): (_[https://github.com/lbcb-sci/racon](https://github.com/lbcb-sci/racon)_)
- QUAST (v5.2.0+): (_[https://github.com/ablab/quast](https://github.com/ablab/quast)_)
#### Files
The input files for ntLink are long genome sequencing reads and a draft genome assembly. The long sequencing reads can be provided in FASTA or FASTQ format, either compressed with gzip or uncompressed. The input draft assembly to be scaffolded should be in FASTA format (multi-line or single-line).
#### Sample Files
The sample files used for this protocol are the same as used in Basic Protocol 1.
#### Protocol steps:
1. Install the required software.
_For more information about installing ntLink and all other dependencies other than minimap2 and Racon, please see the detailed instructions in Basic Protocol 1, steps 1-2._
_Installing protocol-specific dependencies minimap2 and Racon:_
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Assembly**} & **Number of sequences** & **NG50 length** & **NGA50 length** & \multirow{2}{*}{**Number of misassemblies**} \\ & \textgreater{}= **3 kbp** & **(Mbp)** & **(Mbp)** & \\ \hline Baseline assembly & 63 & 3.6 & 2.3 & 75 \\ \hline Baseline assembly + ntLink & 33 & 6.8 & 3.7 & 66 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Expected results from scaffolding the _C. elegans_ Flye assembly using ntLink with the steps documented in Basic Protocol 1. Compared to the baseline, scaffolding using ntLink with overlap detection increases the assembly NGA50 length approximately 1.7-fold, while also reducing the number of misassemblies.
_Option A: Using conda package manager_
If ntLink was installed using Option A of the Support Protocol, minimap2 and Racon can be installed in the same environment.
conda activate ntlink_env
conda install -y -c bioconda -c conda-forge minimap2 racon
_Option B: Installing from source_
1. Install minimap2
a. For Linux, minimap2 provides pre-compiled binaries. Go to _[https://github.com/lh3/minimap2/releases_](https://github.com/lh3/minimap2/releases_)
to find the most recent pre-compiled binary. Here, we show downloading the v2.24 binary as an example:
curl -L [https://github.com/lh3/minimap2/releases/download/v2.24/minimap2-2.24_x64-linux.tar.bz2](https://github.com/lh3/minimap2/releases/download/v2.24/minimap2-2.24_x64-linux.tar.bz2) | tar -jxvf -
b. For MacOS, review the minimap2 dependencies (_[https://github.com/lh3/minimap2_](https://github.com/lh3/minimap2_)), download the most recent release tarball from _[https://github.com/lh3/minimap2_](https://github.com/lh3/minimap2_), extract it and compile the code. We show downloading and compiling the v2.24 release as an example.
curl -L --output minimap2-2.24.tar.bz2 [https://github.com/lh3/minimap2/releases/download/v2.24/minimap2-2.24.tar.bz2](https://github.com/lh3/minimap2/releases/download/v2.24/minimap2-2.24.tar.bz2) tar -jxvf minimap2-2.24.tar.bz2 cd minimap2-2.24 make
c. Append the path to the minimap2 installation to your PATH environment variable.
export PATH=/path/to/minimap2/installation:$PATH
ii. Install Racon (See _[https://github.com/lbcb-sci/racon](https://github.com/lbcb-sci/racon)_ for information about dependencies), and add the path to the Racon installation to your PATH environment variable.
git clone [https://github.com/lbcb-sci/racon](https://github.com/lbcb-sci/racon) && cd racon
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release..
make
export PATH=/path/to/racon/install/build/bin:$PATH
2. Download the sample data. As the sample data for this protocol is the same as used for Basic Protocol 1, please see steps 3-5 of Basic Protocol 1 for full details about downloading the long reads, draft genome assembly and reference genome.
3. Change to the directory with the downloaded data, and run ntlink with the gap-filling option specified. The ntLink steps are powered by the ntLink Makefile.
cd /path/to/ntlink/test
ntLink scaffold gap_fill target=celegans_flye.fa reads=SRR10028109.fastq
k=32 w=100 t=5 _Note that the k and w values specified are the default values, but are included in the command to illustrate how they can be set when running ntLink. The target gap_fill being specified in the command triggers the gap-filling stage after the initial scaffolding steps of ntLink._
4. Check the logs and output files to ensure that the run executed successfully.
_If ntLink completed successfully, this message will be found in the logs: "Done ntLink! Final post-ntLink and gap-filled scaffolds can be found in: celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa". In addition, the final output scaffolds file "celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa" will be in the current working directory. The intermediate scaffold file before gap-filling is "celegans_flye.fa.k32.w100.z1000.stitch.abyss-scaffold.fa"._
Polish the gap-filled ntLink scaffolds. For illustrative purposes, we demonstrate polishing using Racon, but any long-read polishing tool can be utilized. This is an optional step in the pipeline, and can be bypassed if the integration of raw long reads (with a lower base quality) into the draft assembly is not a concern.
First, align the long reads to the draft assembly, and output the alignments in SAM format.
minimap2 -a -t 5 -x map-ont -o celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.SRR10028109.sam celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa SRR10028109.fastq
Next, run Racon, supplying the SAM file generated in step 5a.
racon -u -t 5 SRR10028109.fastq celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.SRR10028109.sam celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa 1> celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.racon-polished.fa
Check the Racon output files to ensure that the run executed successfully. The final, polished assembly will be in the file "celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.racon-polished.fa", and the final log message in a successful Racon run will include "[racon::Polisher::] total =", along with the runtime.
6. Assess the final, polished output scaffolds using abyss-fac (reference-free) and QUAST (reference-based). See Table 3 for the expected results.
Run abyss-fac using the input draft genome assembly, the ntLink intermediate scaffolds file before gap-filling, and the final output scaffolds file after gap-filling and polishing.
abyss-fac -G100e6 --count-ambig celegans_flye.fa celegans_flye.fa.k32.w100.z1000.stitch.abyss-scaffold.fa celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.racon-polished.fa
See Basic Protocol 1, Step 8a for detailed information describing the abyss-fac output.
Run QUAST to assess the input draft genome assembly, the ntLink intermediate scaffolds file before gap-filling, the output scaffolds after gap-filling and the final output scaffolds file after gap-filling and polishing.
quast -t 5 -o quast_ntlink_bp2 -r celegans_reference.fa.gz --fast --large --split-scaffolds celegans_flye.fa celegans_flye.fa.k32.w100.z1000.stitch.abyss-scaffold.fa celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.fa celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.racon-polished.fa
See Basic Protocol 1, Step 8b for detailed information about the QUAST output. In addition to the QUAST statistics described in the previous protocol, for this protocol we are also interested in distinguishing between "Scaffold NG50/NGA50" and "Contig NG50/NGA50", which are available from the QUAST output in the quast_ntlink_bp2 directory. The "Scaffold NG50" defines the sequence length where at least half of the genome is in sequences at least the NG50 length, with the "Scaffold NGA50" being the equivalent statistic, but calculated using alignment block lengths instead of sequence lengths. The "Contig NG50/NGA50" statistics are similar, except that the sequences are broken at ambiguous bases ("N"s) prior to the calculation. When the --split-scaffolds option is specified for QUAST, it will output the statistics for the full input assembly ("Scaffold" statistics), and the assembly after breaking the sequences at regions of >= 10 Ns ("Contig" statistics, "_broken" added to the filename). Therefore, the "Contig NG50/NGA50" statistics are a measure of contiguity as well as the number and distribution of gaps in the assembly. Furthermore, the QUAST statistic "# N's per 100 kbp" gives a direct measure of the number of ambiguous bases in the assembly. With effective gap-filling, the "Contig" statistics will approach the "Scaffold" statistics, and the "# N's per 100 kbp" will decrease.
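For reference, NG50 can be computed directly from scaffold lengths; the short generic sketch below is not part of the ntLink toolkit, and NGA50 is obtained the same way after replacing sequence lengths with alignment block lengths.

```python
def ng50(lengths, genome_size):
    """Smallest length L such that sequences of length >= L together
    cover at least half of the (estimated) genome size."""
    total = 0
    for length in sorted(lengths, reverse=True):
        total += length
        if total >= genome_size / 2:
            return length
    return 0

# Example with C. elegans (~100 Mbp): 40M + 30M >= 50M, so NG50 = 30M.
print(ng50([40_000_000, 30_000_000, 20_000_000, 10_000_000], 100_000_000))
```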
## Basic Protocol 3: Running in-code iterations of ntLink scaffolding
Basic Protocol 3 describes how to run multiple iterations, or rounds, of ntLink using a liftover-based approach. When scaffolding assemblies, the goal is to achieve the highest possible contiguity without sacrificing the correctness of the assembly. While running a single round of ntLink, as described in Basic Protocols 1 and 2, is very effective in improving upon a draft genome assembly from any technology, further gains are possible with additional rounds of ntLink. Using the in-code round capability of ntLink allows a user to maximize the contiguity of the final assembly without needing to manually run ntLink multiple times. To avoid re-mapping the reads at each round, ntLink lifts over the mapping coordinates from the input draft assembly to the output post-ntLink scaffolds, which can then be used for the next round of ntLink. The same process can be repeated as many times as needed, thus enabling multiple rounds of ntLink to be powered by a single instance of long-read mapping.
### Necessary Resources:
#### Hardware
This protocol requires a 64-bit Linux or MacOS operating system with sufficient RAM and available disk space (See Strategic Planning for more information).
#### Software
The following software must be installed and available in your PATH environment variable:
- SRA toolkit (v3.0.0+): _([https://github.com/ncbi/sra-tools](https://github.com/ncbi/sra-tools))_
- curl: _([https://curl.se/](https://curl.se/))_
- Python 3.7+: _([https://www.python.org/](https://www.python.org/))_
- ntLink (v1.3.7+): _([https://github.com/bcgsc/ntLink](https://github.com/bcgsc/ntLink))_
- ABySS (v2.3.0+): _([https://github.com/bcgsc/abyss](https://github.com/bcgsc/abyss))_
- QUAST (v5.2.0+): _([https://github.com/ablab/quast](https://github.com/ablab/quast))_
_For detailed instructions describing downloading the sample data, see Basic Protocol 1, steps 3-5._
3. Run 3 rounds of ntLink scaffolding. _Change into a new directory, and create soft links so that the input files are accessible in the current working directory._
cd /path/to/ntLink/test
mkdir -p run_rounds && cd run_rounds
ln -s ../celegans_flye.fa && ln -s ../SRR10028109.fastq
ln -s ../celegans_reference.fa.gz
_Option A: Run rounds of ntLink without gap-filling_
ntLink_rounds run_rounds target=celegans_flye.fa reads=SRR10028109.fastq k=32 w=100 t=5 rounds=3 dev=True
_Option B: Run rounds of ntLink with gap-filling_
ntLink_rounds run_rounds_gaps target=celegans_flye.fa reads=SRR10028109.fastq k=32 w=100 t=5 rounds=3 dev=True
_The dev=True option will retain all intermediate files. Although this is useful for seeing all the file types generated by ntLink while working through this protocol, this option can be omitted for most runs. When omitted, some intermediate files will be automatically deleted to save disk space._
4. Check the logs and output files to ensure that the ntLink run executed successfully. _After the ntLink command has completed, check the log for this final message, which indicates a successful run: "Done ntLink rounds! Final scaffolds found in celegans_flye.fa.k32.w100.z1000.ntLink.3rounds.fa". This message also indicates the FASTA file which contains the final, scaffolded assembly sequences._
5. Use abyss-fac (_de novo_ approach) and QUAST (reference-based approach) to assess the genome assembly after each round of ntLink scaffolding, and compare the results to the initial baseline assembly. See Figure 2 for a summary of the expected assembly statistics.
a. Reference-free analysis of the ntLink output scaffolds using abyss-fac.
_If Option A was followed in Step 3:_
abyss-fac --count-ambig -G100e6 celegans_flye.fa celegans_flye.fa.k32.w100.z1000.ntLink.fa celegans_flye.fa.k32.w100.z1000.ntLink.ntLink.fa celegans_flye.fa.k32.w100.z1000.ntLink.3rounds.fa
_If Option B was followed in Step 3:_
abyss-fac --count-ambig -G100e6 celegans_flye.fa celegans_flye.fa.k32.w100.z1000.ntLink.gap_fill.fa celegans_flye.fa.k32.w100.z1000.ntLink.ntLink.gap_fill.fa celegans_flye.fa.k32.w100.z1000.ntLink.3rounds.fa
_See Basic Protocol 1, Step 8a for detailed information describing the abyss-fac output._
b. Reference-based analysis of the ntLink output scaffolds using QUAST, supplying the same assembly files as in step 5a for the chosen option. For example, if Option A was followed in Step 3:
quast -t 5 -o quast_ntlink_bp3 -r celegans_reference.fa.gz --fast --large celegans_flye.fa celegans_flye.fa.k32.w100.z1000.ntLink.fa celegans_flye.fa.k32.w100.z1000.ntLink.ntLink.fa celegans_flye.fa.k32.w100.z1000.ntLink.3rounds.fa
See Basic Protocol 1, Step 8b and Basic Protocol 2, Step 6b for detailed information about the QUAST output. The QUAST output will be written to the directory named quast_ntlink_bp3. Note that if QUAST was installed from source, the executable will be named quast.py. Note that the final scaffolds file name will be the same whether Option A or Option B was followed, but the names of the files from intermediate rounds differ slightly.
## Alternate Protocol 1: Generating long-read to contig mappings with ntLink
Although ntLink is most commonly used as a scaffolding tool, the minimizer-based mapping functionality that enables assembly scaffolding can also be run in isolation. In any mode, ntLink first maps the input long reads to the input draft genome. When using ntLink in the default scaffolding mode, these mappings are parsed to infer evidence that supports ordering and orienting contigs into scaffolds. However, this mapping information can also be simply output to a file in a standard format and used for other downstream applications. In Alternate Protocol 1, we demonstrate the mapping mode of ntLink, which outputs the mappings in the standard Pairwise mApping Format (PAF).
### Necessary Resources:
#### Hardware
This protocol requires a 64-bit Linux or MacOS operating system with sufficient RAM and available disk space (See Strategic Planning for more information).
#### Software
The following software must be installed and available in your PATH:
- SRA toolkit (v3.0.0+): _([https://github.com/ncbi/sra-tools](https://github.com/ncbi/sra-tools))_
- curl: _([https://curl.se/](https://curl.se/))_
- Python 3.7+: _([https://www.python.org/](https://www.python.org/))_
- ntLink (v1.3.7+): _([https://github.com/bcgsc/ntLink](https://github.com/bcgsc/ntLink))_
- miller: _([https://github.com/johnkerl/miller](https://github.com/johnkerl/miller))_
#### Files
The input files for mapping with ntLink are long genome sequencing reads and a draft genome assembly. The long sequencing reads can be provided in FASTA or FASTQ format, either compressed with gzip or uncompressed. The input draft assembly should be in FASTA format (multi-line or single-line).
#### Sample Files
In this protocol, we will map the same Oxford Nanopore _C. elegans_ long reads as used in the basic protocols to a short-read _C. elegans_ ABySS (Jackman et al., 2017) assembly. The ABySS assembly is available from [https://zenodo.org/record/7526395/files/celegans_abyss.fa](https://zenodo.org/record/7526395/files/celegans_abyss.fa).
### Protocol steps:
1. Install the required software. _For more information about installing ntLink and other dependencies, please see detailed instructions in Basic Protocol 1, steps 1-2._
2. Install the protocol-specific dependency, miller.

_Option A: Use conda package manager_

i. If Option A of the Support Protocol was used to install ntLink, miller can be installed in the same conda environment.

conda activate ntlink_env

conda install -y -c conda-forge miller

_Option B: Install from source_

i. Go to the miller releases page (https://github.com/johnkerl/miller/releases) and find the pre-built binary that is appropriate for your system.

ii. Download the binary from the releases page, and extract the compressed tarball. A Linux pre-built binary is shown for illustration purposes, but the path to any pre-built miller binary can be used in this step.

curl -L --output miller_download.tar.gz https://github.com/johnkerl/miller/releases/download/v6.5.0/miller-6.5.0-linux-amd64.tar.gz

tar xvzf miller_download.tar.gz

iii. Append the path to the miller installation to your PATH environment variable.

export PATH=/path/to/miller/installation/miller/:$PATH
3. Download the sample long reads. This file is the same as used for Basic Protocols 1, 2 and 3. _For detailed instructions describing downloading the sample long reads, see Basic Protocol 1, step 3._
4. Download the sample draft ABySS (Jackman et al., 2017) short-read assembly.

curl -L --output celegans-abyss.fa https://zenodo.org/record/7526395/files/celegans_abyss.fa
5. Run ntLink to map the sample long reads to the draft assembly. Ensure that the downloaded sample data files are in your current working directory.

ntLink pair target=celegans-abyss.fa reads=SRR10028109.fastq t=5 sensitive=True paf=True
6. Check the logs and output files to ensure that the run executed successfully. _If the ntLink mapping completed successfully, the log messages from ntLink should finish with a time stamp and "DONE!". Furthermore, a PAF-formatted mapping file called "celegans-abyss.fa.k32.w100.z1000.paf" should be in your current working directory._
7. Assess the mapping file output from ntLink. See Table 4 for the expected mapping statistics.

a. To make the generation of summary statistics more straightforward with miller, add column labels to the PAF file.

cat celegans-abyss.fa.k32.w100.z1000.paf | mlr --tsv --implicit-csv-header label read,read_len,read_start,read_end,strand,contig,contig_len,contig_start,contig_end,num_minimizers,len_mapping,mapping_qual > celegans-abyss.fa.paf.mlr.tsv

b. Count the total number of mappings of the long reads to the query contigs.

mlr --tsv stats1 -a count -f read celegans-abyss.fa.paf.mlr.tsv

c. Calculate the average mapping block length.

cat celegans-abyss.fa.paf.mlr.tsv | mlr --tsv stats1 -a mean -f len_mapping

d. Calculate the average number of read mappings per draft assembly contig.

cat celegans-abyss.fa.paf.mlr.tsv | mlr --tsv cut -f read,contig then uniq -g read,contig then stats1 -a count -f contig -g contig then stats1 -a mean -f contig_count
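The labelled TSV produced in step 7a also makes it easy to compute filtered summaries with miller. As a minimal sketch (the 1 kbp threshold is an arbitrary illustration, not part of the protocol), the following counts only mappings whose block length exceeds 1 kbp:

cat celegans-abyss.fa.paf.mlr.tsv | mlr --tsv filter '$len_mapping > 1000' then stats1 -a count -f read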
## Alternate Protocol 2: Using ntLink mappings for genome assembly correction with Tigmint-long
As described in Alternate Protocol 1, the mapping functionality in ntLink can be used to inform scaffolding, the most common use of ntLink, or separately to provide mapping information that can be used by other downstream pipelines. One such alternate application is Tigmint-long (Coombe et al., 2021), a _de novo_ genome assembly correction tool which utilizes information in long reads to detect and cut at putative misassemblies. In the default mode, Tigmint-long simulates pseudo-linked reads from the long reads. This involves breaking the long reads into tiles, which represent short-read fragments, then generating paired-end reads from the fragments. Each read pair from the same long read is assigned the same barcode, adhering to the expected format for linked reads. These reads are then mapped to the draft assembly using minimap2 (H. Li, 2018), and these mappings are parsed to look for regions of the draft assembly that are not well-supported by the reads. However, as only approximate mappings are required, ntLink mapping can be used in place of minimap2. As ntLink uses more streamlined mapping logic, the reads do not need to be broken into pseudo-linked reads prior to mapping, thus eliminating a step in the pipeline. The output of Tigmint-long is a contigs file in FASTA format, where the sequences are broken at putative misassemblies.
### Necessary Resources:

#### Hardware
This protocol requires a 64-bit Linux or MacOS operating system with sufficient RAM and available disk space (See Strategic Planning for more detail).
#### Software
The following software must be installed and available in your PATH:
- SRA toolkit (v3.0.0+): (https://github.com/ncbi/sra-tools)
- curl: (https://curl.se)
- Python 3.7+: (https://www.python.org/)
- ntLink (v1.3.7+): (https://github.com/bcgsc/ntLink)
- Tigmint (v1.2.9+): (https://github.com/bcgsc/tigmint)
- QUAST (v5.2.0+): (https://github.com/ablab/quast)
\begin{table}
\begin{tabular}{l c} \hline \hline
**Total number of read mappings** & 622,975 \\ \hline
**Average mapping block length (bp)** & 5,647.7 \\ \hline
**Average number of distinct mapped reads per contig** & 105.2 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Expected results from assessing the ntLink mappings of the sample _C. elegans_ long reads to the draft ABySS assembly in Alternate Protocol 1.
#### Files
The input files for ntLink are long genome sequencing reads and a draft genome assembly. The long sequencing reads can be provided in FASTA or FASTQ format, either compressed with gzip or uncompressed. The input draft assembly to be corrected should be in FASTA format (multi-line or single-line).
### Protocol steps:
1. Install the required software used in the earlier protocols, if not already installed. _For more information about installing ntLink and other common dependencies, please see detailed instructions in Basic Protocol 1, steps 1-2._
2. Install the protocol-specific dependency, Tigmint.

_Option A: Use conda package manager_

i. If Option A of the Support Protocol was used to install ntLink, Tigmint can be installed in the same conda environment.

conda activate ntlink_env

conda install -y -c bioconda -c conda-forge tigmint 'samtools>=1.10'

_Option B: Install from source_

i. Consult the README in the Tigmint GitHub repository (https://github.com/bcgsc/tigmint) to ensure that the required dependencies are installed.

ii. Go to the releases page for Tigmint (https://github.com/bcgsc/tigmint/releases) and find the most recent release tarball. Download and extract this tarball in the directory where you would like Tigmint to be installed. To demonstrate the required commands, Tigmint v1.2.9 is shown below, but the URL can be substituted for any later release of Tigmint.

curl -L --output tigmint-1.2.9.tar.gz https://github.com/bcgsc/tigmint/releases/download/v1.2.9/tigmint-1.2.9.tar.gz

tar xvzf tigmint-1.2.9.tar.gz

cd tigmint-1.2.9

iii. Compile the required binaries.

cd src

make

iv. Append the path to the Tigmint installation to your PATH environment variable.

export PATH=/path/to/tigmint/install/tigmint-1.2.9/bin:$PATH
3. Download the sample _C. elegans_ long reads and reference genome. These files are the same as used in Basic Protocols 1-3. _For detailed instructions describing downloading this sample data, see Basic Protocol 1, steps 3 and 5._
4. Download the sample draft _C. elegans_ ABySS short-read assembly. This draft assembly FASTA is the same as used in Alternate Protocol 1. _For detailed instructions describing downloading the ABySS short-read assembly, see Alternate Protocol 1, step 4._
5. Run Tigmint-long on the draft _C. elegans_ ABySS assembly using ntLink mapping and the downloaded long reads to detect and cut at putative misassemblies. Ensure that the downloaded sample data files are in your current working directory.

tigmint-make tigmint-long draft=celegans-abyss reads=SRR10028109 mapping=ntLink t=5 span=2
6. Check the logs and output files from Tigmint-long to ensure that the run executed successfully. _A successful run of Tigmint will finish with the following log messages: "Cutting assembly at breakpoints... DONE!". There will also be a file named "celegans-abyss.cut500.tigmint.fa" in your current working directory which contains the corrected draft assembly sequences._
7. Use the reference-based assessment tool QUAST to compare the contiguity and correctness of the corrected genome assembly and the initial baseline assembly. See Table 5 for results from running QUAST on these assemblies.
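A minimal sketch of this comparison is shown below; the reference FASTA is the one downloaded in Basic Protocol 1, and the output directory name (quast_tigmint) is an illustrative choice:

quast -t 5 -o quast_tigmint -r <reference genome FASTA downloaded in Basic Protocol 1> celegans-abyss.fa celegans-abyss.cut500.tigmint.fa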
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Assembly** & **Number of sequences >= 3 kbp** & **Scaffold NG50 length (bp)** & **Scaffold NGA50 length (bp)** & **Number of misassemblies** \\ \hline
**ABySS baseline** & 4,552 & 30,293 & 27,015 & 433 \\ \hline
**ABySS + Tigmint-long** & 4,793 & 27,564 & 27,015 & 162 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Results from analyzing the baseline and Tigmint-long corrected _C. elegans_ ABySS assembly. Following Tigmint-long misassembly correction using mappings from ntLink, the number of misassemblies decreases by more than 2-fold.
## Support Protocol: Installing ntLink
ntLink can be installed using the conda package manager or from the source code. For a more straightforward installation process, and to ensure that all dependencies are properly installed, we recommend installing ntLink using conda.
### Necessary Resources:
#### Hardware
ntLink requires a 64-bit Linux or MacOS operating system with sufficient RAM and available disk space (See Strategic Planning for more detail).
#### Software
Miniconda ([https://docs.conda.io/en/latest/miniconda.html](https://docs.conda.io/en/latest/miniconda.html))
### Protocol steps:
#### Option A: Installing ntLink using the conda package manager
1. If miniconda is not already installed, download and install it by following the instructions at https://docs.conda.io/en/latest/miniconda.html.

2. Create a conda environment, install ntLink from the bioconda channel, and activate the environment.

conda create -y -n ntlink_env -c conda-forge -c bioconda ntlink

conda activate ntlink_env

#### Option B: Installing ntLink from source
1. Install the following dependencies, and ensure that each is available in your PATH environment variable. We recommend installing the dependencies using a package manager such as conda. Otherwise, visit the tool homepages for information about installing from source.

- Python 3.7+
- Python modules:
  - Numpy: (https://numpy.org/)
  - Python-igraph: (https://igraph.org/python/)
- btllib: (https://github.com/bcgsc/btllib)
- ABySS (v2.3.0+): (https://github.com/bcgsc/abyss)
- Zlib: (https://zlib.net/)
- Make: (https://www.gnu.org/software/make/)
2. Change your directory to the desired folder for the ntLink installation, then clone the ntLink repository from GitHub. cd /path/to/desired/location/for/ntlink git clone [https://github.com/bcgsc/ntLink.git](https://github.com/bcgsc/ntLink.git)
3. Append the location of the ntLink installation to your PATH environment variable export PATH=/path/to/desired/location/for/ntlink/ntLink:$PATH
### Checking your installation
To verify that your installation is working properly, you can follow any of the basic protocols, or run the small demo provided on GitHub.
#### Running test demo
1. If you haven't already cloned ntLink during the installation process, clone the GitHub repository to download the small test demo. git clone [https://github.com/bcgsc/ntLink.git](https://github.com/bcgsc/ntLink.git)
2. Change your working directory to the cloned ntLink repository, then to the directory containing the test demo script. cd ntLink/tests
3. Run the provided demo shell script.

./test_installation.sh
4. If the test was successful, indicating that your installation is working as expected, you will see this message: "Done tests! Compare your generated files with the files in the expected_outputs folder to ensure the tests were successful.".
### Guidelines for understanding results
For all ntLink runs, it is important to look through the log messages to ensure that there are no errors. If there are error messages at any stage, the results are not reliable, and the error(s) need to be resolved prior to any downstream genome analysis. See Table 6 for some common errors and suggested solutions.
Running ntLink for scaffolding or mapping will generate various intermediate files. For scaffolding runs, the most important output file is the FASTA file containing the final, improved scaffolds. However, the other intermediate files contain useful information about both the evidence used for generating the final scaffolds as well as the composition of the output scaffolds themselves. The constructed scaffold graph is output in DOT format (".scaffold.dot"). In this graph, the nodes are contigs, and the directed edges represent long-read evidence between the incident contigs. When running ntLink scaffolding, this graph is traversed using abyss-scaffold (Jackman et al., n.d.) to produce the final ordered and oriented scaffolds. The ntLink output files with the suffixes "trimmed_scafs.path" and "trimmed_scafs.agp" each describe the composition of the output scaffolds in different formats. These files allow the user to deduce the order and orientation of the input contigs in the output scaffolds, as well as any gap sequences between the contigs. The ".path" format describes one scaffold per tab-separated line, with the first column denoting the scaffold name, and the second listing the order and orientation of the contigs, with gap sizes indicated by "<number>N". The ".agp" file follows the standard AGP specifications. The AGP file with the suffix "gap_fill.fa.agp" is only
generated when the gap-filling step of ntLink is performed, and additionally reports the identity and coordinates of the input long reads used to fill gaps.
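Because the AGP files follow the standard specification, per-scaffold gap content can be summarized directly with standard command-line tools. A minimal sketch (the input file name is illustrative; in the AGP format, a value of "N" or "U" in column 5 denotes a gap record) that counts gap records per scaffold:

awk -F'\t' '$5 == "N" || $5 == "U" {gaps[$1]++} END {for (s in gaps) print s "\t" gaps[s]}' celegans_flye.fa.k32.w100.z1000.ntLink.trimmed_scafs.agp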
Following successful ntLink scaffolding, it is expected that there will be fewer sequences in the scaffolded assembly compared to the baseline assembly, since input contigs will be joined together to form scaffolds. Consequently, the contiguity of the scaffolded assembly (as assessed by abyss-fac, QUAST or other assembly assessment tools) is expected to increase. If there is no change in the contiguity or number of sequences, it is possible that parameters such as \(k\) and \(w\) (controlling the generation of the minimizers) need to be optimized (See Critical Parameters). For example, if using a long-read dataset with a high error rate, a smaller \(k\) value may be needed to increase the sensitivity of the long-read mapping. When running rounds of ntLink, it is expected that the contiguity will not increase after several rounds, as demonstrated in Basic Protocol 3. This is not a cause for concern, but just an indication that the long-read evidence leveraged by ntLink may be exhausted.
As demonstrated in the protocols, it is important to analyse the output scaffold FASTA files with tools such as abyss-fac or QUAST to assess the scaffolding success. While abyss-fac analysis does not require a reference, QUAST is reference-based, and is thus not suitable for all studies. For assembly projects without an available reference genome, or if many structural variants are expected, BUSCO (Manni, Berkeley, Seppey, Simao, et al., 2021; Manni, Berkeley, Seppey, & Zdobnov, 2021) is a useful tool for reference-free assessment of the assembly quality. BUSCO, or Benchmarking Universal Single-Copy Orthologs, searches the input genome assembly for genes that are evolutionarily expected to be found in single copy. Since BUSCO assesses the assembly completeness in the gene space, it provides complementary information to the reference-free contiguity metrics.
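As a minimal sketch of such an assessment (assuming BUSCO v5+ is installed; the lineage dataset and output directory name are illustrative choices for the _C. elegans_ example):

busco -i celegans_flye.fa.k32.w100.z1000.ntLink.3rounds.fa -l nematoda_odb10 -m genome -o busco_ntlink -c 5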
Following ntLink scaffolding and quality control of the resulting assembly, there are a variety of downstream analyses that can be performed, from comparative genomics to annotation. The direction that these analyses take will be guided by the particular research lab and study focus, making this assembly stage broadly important.
## 6 Supplementary
### Background Information
Scaffolding tools, such as ntLink, can play important roles in _de novo_ assembly pipelines through further improving upon draft assemblies. Multiple new features and modes have been integrated into ntLink to help users obtain the best possible assemblies and results from their sequencing data. The efficiency of the new and existing features of ntLink is largely attributable to the use of minimizer sketches for the various mapping tasks.
As described in Roberts et al. (2004) and implemented in btllib (Nikolic et al., 2022), ntLink generates ordered minimizer sketches by first breaking the input sequences into their constituent \(k\)-mers (substrings of length \(k\)), and computing a hash value for each \(k\)-mer using ntHash2 (Kazemi et al., 2022). Then, for each window of \(w\) consecutive \(k\)-mers, the \(k\)-mer with the smallest hash value is chosen as the minimizer for that window. Sliding this window across the entire sequence generates the ordered minimizer sketch, a particular subset of \(k\)-mers (represented by hash values) which is much smaller than the entire \(k\)-mer spectrum. Using this sketching approach for sequence mapping provides a great computational advantage for ntLink in both memory usage and time efficiency, enabling ntLink to scale to large genomes (Table 1).
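One common way to formalize this sketch (our notation, not taken from the ntLink papers) is the following: for a sequence \(S=s_{1}s_{2}\cdots s_{n}\) and a hash function \(h\) applied to each \(k\)-mer, the minimizer sketch is the set

\[\mathcal{M}(S)=\left\{\operatorname*{arg\,min}_{i\,\leq\,j\,<\,i+w}h\!\left(s_{j}s_{j+1}\cdots s_{j+k-1}\right)\;:\;1\leq i\leq n-k-w+2\right\},\]

i.e., one minimizing \(k\)-mer per window of \(w\) consecutive \(k\)-mers, with duplicates across overlapping windows stored only once.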
The newly developed overlap detection, gap-filling and liftover-based rounds functionalities of ntLink benefit the final quality of the assemblies and allow the scaffolder to be more flexible to the specific needs of the users. Prior to the integration of overlap detection, ntLink would simply join sequences end-to-end with an intervening gap, whether the sequences had a putative overlap or not. This could lead to small insertion misassemblies being introduced at the join point, which could have a negative impact on such downstream applications as annotation, if the insertion is in a gene region, for example. The overlap detection feature resolves these overlaps, avoiding the introduction of small misassemblies at the join point and allowing for a cleaner join. When using ntLink to scaffold assemblies without the gap-filling feature, the regions between joined contigs that have a gap between them, or missing genomic sequence, are filled with ambiguous bases ("N"s). While it is valuable to have sequences ordered and oriented relative to one another, there is also genomic information in those gaps that will then be missing in the assembly. Gap-filling can be performed as a downstream, often computationally intensive, assembly step (Chu et al., 2019; Paulino et al., 2015), but performing gap-filling within ntLink is efficient and effective in recovering these missing regions. Finally, running liftover-based rounds of ntLink enables additional improvements to the draft assembly by fully leveraging the long-read evidence, while also avoiding the computational burden of re-mapping the reads at each round.
Other state-of-the-art long-read scaffolding tools, such as LRScaf and OPERA-LG, rely on sequence alignments instead of mapping and do not provide users with the same features and flexibility as ntLink. Neither LRScaf nor OPERA-LG provides gap-filling functionality, nor an in-code approach for running rounds of scaffolding. Therefore, if a user wants to run multiple rounds of scaffolding, they would have to do so in a naive manner (manually executing the tool multiple times). Furthermore, while some long-read scaffolding tools such as LRScaf do also have logic to deal with overlapping joined sequences, their algorithms use alignments, while ntLink uses a more lightweight minimizer-mapping guided approach. OPERA-LG is not currently maintained
(the last release was in 2016), so may not properly leverage more recent improvements in both sequencing and bioinformatics technologies.
Finally, we also demonstrate the flexibility of using the mapping functionality of ntLink for other applications in Alternate Protocols 1 and 2. Sequence mapping is a foundational process in bioinformatics, and often exact coordinates are not needed for the desired application. In this case, ntLink is a great resource for lightweight mapping, which can find numerous applications such as in misassembly correction (as demonstrated in Alternate Protocol 2), targeted assembly and targeted polishing, to name a few.
### Critical Parameters
_k [k-mer size] and w [window size]_
The \(k\) and \(w\) parameters control the generation of minimizers for mapping the long reads to the draft assembly in ntLink, and are therefore the most influential parameters. Generally, the default settings (k=32, w=100) produce good results for a variety of input assemblies and reads, but in order to obtain the best final scaffolds, these parameters can be optimized using a grid search. If undertaking a grid search, the approximate recommended ranges of \(k\) and \(w\) to test would be k=[24-80] and w=[75-1000]. Generally, we recommend a lower \(k\) and \(w\) setting when the long reads and/or draft assembly are more erroneous. However, if the draft assembly is very contiguous and/or the base quality of the input data is high, higher values can be successful.
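A minimal sketch of such a grid search for the _C. elegans_ example is shown below; the parameter values are an arbitrary subset of the recommended ranges, and the exact output file suffix may differ slightly between ntLink versions:

for k in 24 32 40; do
  for w in 100 250 500; do
    ntLink scaffold target=celegans_flye.fa reads=SRR10028109.fastq t=5 k=$k w=$w
  done
done

abyss-fac --count-ambig -G100e6 celegans_flye.fa.k*.w*.z1000.ntLink.scaffolds.fa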
### Troubleshooting
The ntLink Makefile should complete with an exit code of 0 and a message indicating a successful run. If an error occurs, the pipeline should stop running and output an error message. Some common errors are documented in Table 6. If you encounter additional errors not discussed in Table 6, please create a new issue at the ntLink GitHub repository (https://github.com/bcgsc/ntLink).
### Advanced Parameters
There are several ntLink parameters that can be tweaked in addition to \(k\) and \(w\) that may provide benefits for more advanced users. The default settings of each of these parameters have been found to work well for most assemblies.
_z [minimum contig length]_
By default, only sequences greater than 1 kb (z=1000) will be considered for integration into an output scaffold. Depending on the contiguity of the input draft assembly, the user may want to adjust this parameter if the input assembly is very contiguous (increase z) or very fragmented (decrease z).
_a [minimum number of anchoring reads]_
When ntLink parses the long-read evidence to create edges in the scaffold graph, it requires (by default) at least one 'anchoring read' for an edge to be retained. An 'anchoring read' is defined as a read that has at least 2 mapped minimizers on each contig in the putative pair. If more stringent scaffold pairing is desired, this parameter can be increased to require more 'anchoring reads' before retaining an edge.
_v [verbose benchmarking mode]_
If the user specifies v=1 in their ntLink command, the time and peak memory will be tracked for each step, and output to separate files. This option can be useful when benchmarking the execution of ntLink.
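As an illustration, these advanced parameters are passed in the same key=value form as \(k\) and \(w\); a minimal sketch with the _C. elegans_ sample files is shown below (the chosen values are arbitrary examples, not recommendations):

ntLink scaffold target=celegans_flye.fa reads=SRR10028109.fastq t=5 z=500 a=2 v=1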
_soft_mask [soft mask filled gaps]_
If soft_mask=True is specified in the ntLink command when gap-filling is enabled, the gaps will be filled with lowercase bases instead of uppercase bases. This soft masking could be useful for downstream analyses such as targeted polishing, for example.
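If soft masking was enabled, the amount of gap-filled sequence can be checked with standard command-line tools; a minimal sketch (the input file name is illustrative) counts the lowercase bases in the output scaffolds:

grep -v '^>' celegans_flye.fa.k32.w100.z1000.ntLink.scaffolds.gap_fill.fa | tr -cd 'acgtn' | wc -c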
### Conflict of interest
The authors declare that they have no conflicts of interest.
### Data availability
The sample assemblies that support the protocol are available from [https://doi.org/10.5281/zenodo.7526395](https://doi.org/10.5281/zenodo.7526395). The sample long reads and reference genome are publicly available under SRA accession SRR10028109 and GenBank accession GCA_000002985.3, respectively.
## Acknowledgements
This study is supported by the Canadian Institutes of Health Research (CIHR) [PJT-183608] and the National Institutes of Health [2R01HG007182-04A1]. The content of this article is solely the responsibility of the authors, and does not necessarily represent the official views of the National Institutes of Health or other funding organizations. The funding organizations did not have a role in the design of the study, the collection, analysis and interpretation of the data, or in writing the manuscript.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Problem** & **Possible Cause** & **Solution** \\ \hline
Error “make: *** No rule to make target” & Input files are not in the current working directory, or full paths to files are used. & Make soft links to ensure that input files are available in the current working directory, and do not use absolute or relative paths to input files. \\ \hline
Error “zsh: no such option: pipefail” & An older version of zsh is installed, or zsh is missing. & Install zsh or update to the newest version. \\ \hline
“UnsatisfiableError” when installing with conda & Incompatible versions of tools were previously installed in the conda environment. & Install ntLink in a fresh conda environment. \\ \hline
“source file is in invalid format!” when running ntLink & Input long reads or draft assembly files are in an unexpected format. & Ensure that the input long reads are in correctly-formatted FASTA or FASTQ format (gzipped or uncompressed), and the draft assembly is in FASTA format (uncompressed). \\ \hline
“ModuleNotFoundError: No module named <module>” & A required python module is not installed properly. & Ensure that the expected python version is being used (ex. with which python3), and install the missing package (using conda or pip). \\ \hline
“Error 127” in ntLink log file after abyss-scaffold step & ABySS is not installed, or not found in your PATH & Install ABySS if needed, and ensure that the ABySS executables are found in your PATH. \\ \hline
Running ntLink -h prints the make help page instead of the ntLink help page & ntLink requires parameters to be specified in the form “variable\_name=variable\_value”, so options of the form “-letter” are passed to make itself, not to the ntLink program. & Specify ntLink options as “variable\_name=variable\_value” pairs; see the ntLink README for the full list of parameters. \\ \hline
Contiguity gains post-ntLink are minimal & Incorrect selection of k/w & If ntLink makes minimal joins, it is likely that the \(k\) and \(w\) parameters specified are not optimal. See the _Critical Parameters_ section for more details about setting \(k\) and \(w\). \\ \hline
Error when running Tigmint-long: “samtools: error while loading shared libraries” & An older version of samtools is installed. & Update the samtools installation. \\ \hline
\end{tabular}
\end{table}
Table 6: Sources and Solutions to Potential Errors
2307.02898 | The JWST view of the barred galaxy population in the SMACS0723 galaxy
cluster | The cosmic evolution of the barred galaxy population provides key information
about the secular evolution of galaxies and the settling of rotationally
dominated discs. We study the bar fraction in the SMACSJ0723.37323 (SMACS0723)
cluster of galaxies at z = 0.39 using the Early Release Observations obtained
with the NIRCam instrument mounted on the JWST telescope. As already found in
nearby galaxy samples, we find that the bar fraction distribution of SMACS0723
is a strong function of the galaxy stellar luminosity/mass. The analogy with
local clusters, such as Virgo and Coma, reveals a similar distribution among
the three clusters for low-mass galaxies (log(M_star/M_sun) \leq 9.5). The
comparison with a sample of local galaxies in a field environment shows a
remarkable lack of bars in this low-mass regime for the SMACS0723 cluster (and
therefore in Virgo and Coma) with respect to the field. At high masses
(log(M_star/M_sun) \geq 10.25), galaxies in SMACS0723 show a slightly lower bar
fraction than those in Coma. Our results support a scenario where cluster
environment affects the formation of bars in a mass-dependent way. At high
masses, the mild increase in the bar fraction of local clusters (Coma) with
respect to both SMACS0723 and local field galaxies suggests a weak effect of
cluster environment possibly triggering bar formation. On the other hand,
low-mass galaxies show the same bar fraction in the three clusters (different
redshifts) and a significant drop with respect to field galaxies at z=0,
therefore suggesting that: i) the bar fraction of low-mass galaxies in clusters
is not evolving during the last 4~Gyr, and ii) bar formation is severely
inhibited in low-mass galaxies living in clusters (Abridged). | Jairo Méndez-Abreu, Luca Costantin, Sandor Kruk | 2023-07-06T10:11:28Z | http://arxiv.org/abs/2307.02898v2 | # The JWST view of the barred galaxy population in the SMACS0723 galaxy cluster
###### Abstract
Context:The cosmic evolution of the barred galaxy population provides key information about the secular evolution of galaxies and the settling of rotationally dominated discs.
Aims:We study the bar fraction in the SMACSJ0723.3-7323 (SMACS0723) cluster of galaxies at \(z=0.39\) using the Early Release Observations obtained with the NIRCam instrument mounted on the JWST telescope.
Methods:We visually inspected all cluster member galaxies using the images from the NIRCam F200W filter. We classified the galaxies into ellipticals and discs and determine the presence of a bar. The cluster member selection was based on a combined method using both the available spectroscopy and the color-magnitude relation.
Results:As has previously been found in nearby galaxy samples, we find that the bar fraction distribution of SMACS0723 is a strong function of the galaxy stellar luminosity (or stellar mass). The analogy with local clusters, such as Virgo and Coma, reveals a similar distribution among the three clusters for low-mass galaxies (\(\log(M_{\star}/M_{\odot})\leq 9.5\)). The comparison with a sample of local galaxies in a field environment shows a remarkable lack of bars in this low-mass regime for the SMACS0723 cluster (and, therefore, in Virgo and Coma) with respect to the field. At high masses (\(\log(M_{\star}/M_{\odot})\geq 10.25\)), galaxies in SMACS0723 show a slightly lower bar fraction than those in Coma. At these high masses, we find a much larger bar fraction in SMACS0723 than previous works on field galaxies at \(z\sim 0.4\). Nevertheless, the difference is only marginal when we compare with a sample of well-resolved local field galaxies. Thus, we suggest that the improved capabilities of JWST with respect to HST in terms of both spatial resolution and image depth are responsible for the higher bar fraction we obtained.
Conclusions:Our results support a scenario where cluster environment affects the formation of bars in a mass-dependent way. At high masses, the mild increase in the bar fraction of local clusters (Coma) with respect to both SMACS0723 and local field galaxies suggests a weak effect coming from the cluster environment possibly triggering bar formation. On the other hand, low-mass galaxies show the same bar fraction in the three clusters (different redshifts) and a significant drop with respect to field galaxies at \(z=0\), thus suggesting that: i) the bar fraction of low-mass galaxies in clusters is not evolving during the last 4 Gyr; and ii) bar formation is severely inhibited in low-mass galaxies residing in clusters.
## 1 Introduction
The central role of stellar bars in the secular evolution of disc galaxies is widely accepted. They represent the main structure modifying the morphology of galaxies in the central \(\sim\)10 kpc (Hubble 1926; Buta et al. 2015) and influence the angular momentum redistribution between the baryonic and dark matter components of the galaxy (Debattista & Sellwood 2000; Martinez-Valpuesta et al. 2007; Sellwood 2014). Moreover, they have the ability to funnel material towards the galaxy center where starbursts can ignite (Martinet & Friedli 1997; Sheth et al. 2005; Ellison et al. 2011), contributing to the formation of bulge-like structures (Kormendy & Kennicutt 2004; Athanassoula 2005; Bittner et al. 2020; Gadotti et al. 2020), inner star-forming rings (Buta 1995; Munoz-Tunon et al. 2004), and inner bars (Erwin 2004; de Lorenzo-Caceres et al. 2019; Mendez-Abreu et al. 2019; de Lorenzo-Caceres et al. 2020), thus feeding the central black hole (Shlosman et al. 1990).
The importance of bars in shaping our understanding of galaxy evolution is also supported by their ubiquity in disc galaxies in the local Universe (\(z<0.1\)). The general consensus indicates that bars are present in \(\sim\)50% of disc galaxies if observed at optical wavelengths (Aguerri et al. 2009; Barazza et al. 2008) and this fraction is slightly increased when using infrared images (Eskridge et al. 2000; Marinova & Jogee 2007; Menendez-Delmestre et al. 2007; Erwin 2018). Nevertheless, large differences on the bar fraction are still found when analysing different samples. Some authors claim that these differences can be accounted for once the mass dependence of the bar fraction is taken into account (Mendez-Abreu et al. 2010a, 2012), others refer to the gas fraction as the culprit of these variations (Masters et al. 2011; Skibba et al. 2012); also, the effect of spatial resolution in detecting the smallest bars might also have some influence (Erwin 2018). Numerical simulations predict that bars spontaneously form due to instabilities in dynamically cold discs
(Ostriker & Peebles 1973), so the answer to the question of why not all local spirals have a bar is still unclear.
The role of the environment in triggering the formation of bars has been a matter of discussion for a long time. Thompson (1981) claimed that the bar fraction of Coma galaxies increases toward the core of the cluster. Similar results were found for the Virgo and Fornax Clusters (Andersen 1996; Eskridge et al. 2000) and for clusters at intermediate redshifts (Barazza et al. 2009). In addition, observations seem to favor an increase of the bar fraction in galaxy pairs (Kumai et al. 1986; Elmegreen et al. 1990; Giuricin et al. 1993; Varela et al. 2004). Tidal interactions in galaxy pairs have been suggested to induce off-center bars in low-mass galaxies (Pardy et al. 2016), but the observational evidence is still inconclusive (Kruk et al. 2017). On the other hand, according to van den Bergh (2002), Aguerri et al. (2009), and Li et al. (2009) the bar fraction strongly depends on the properties of the host galaxies but not on their environment. Additionally, Lee et al. (2012) claimed that the bar fraction does not depend on the environment once color and central velocity dispersion are fixed. Martinez & Muriel (2011) found that the bar population does not significantly depend on either group mass or on the distance to the nearest neighbour. Giordano et al. (2011) compared two carefully selected samples that are representative of isolated and cluster galaxies, whereas Marinova et al. (2012) investigated the bar fraction in lenticular galaxies across different environments which span two orders of magnitude in galaxy density. Neither of them found significant differences. In Mendez-Abreu et al. (2012), they found that the effect of the environment on the bar formation depends on the mass of the galaxy. They proposed that interactions trigger bar formation in massive galaxies, which are stable enough to keep their cold discs even in galaxy clusters. In contrast, the discs of low-mass galaxies are heated by interactions inhibiting the bar formation.
Numerical simulations have also addressed the influence of the environment in the formation of bars. In addition to the spontaneous bar formation occurring during the secular evolution of galaxies, interactions with other galaxies represent another path to the formation of bars (Noguchi 1987; Lokas 2018). These tidally induced bars might be the result of a minor merger (Gerin et al. 1990) or a fly-by interaction (Martinez-Valpuesta et al. 2017; Peschken & Lokas 2019). Bars formed by interaction-driven mechanisms present a different evolution with respect to those that are spontaneously formed, and their properties will depend on several internal (mass surface density, stellar velocity dispersion, gas fraction) and external (mass of the perturber, impact orbit) properties.
Still, most of our knowledge about the formation and evolution of bars has been produced using local galaxy samples. The Universe at high redshift (\(z>\)1) was much more violent and turbulent than nowadays. Thus, since a dynamically cold disc is a necessary condition to form a bar, the evolution of the bar fraction is directly related to the evolution of discs. Previous studies carried out using the Hubble Space Telescope (HST) have shown a mixed bag of results regarding the bar fraction evolution with redshift. Abraham et al. (1999) and van den Bergh (2002) argued in favour of a decreasing bar fraction with increasing redshift up to \(z\sim\)1. Later, the analyses carried out by Elmegreen et al. (2004) and Jogee et al. (2004) measured a constant bar fraction up to \(z\sim\) 1. The situation moves back to a decrease with redshift when the work by Sheth et al. (2008) was published. Since then, a possible solution to this discrepancy was presented by Cameron et al. (2010) and later by Melvin et al. (2014). They found that this trend is more acute for low-mass galaxies (\(\log(M_{\star}/M_{\odot})<10.34\)) than for high-mass galaxies (\(\log(M_{\star}/M_{\odot})\geq 10.64\)). This is consistent with a scenario where more massive, dynamically cold, stellar discs are already settled during the first Gyrs of the history of the Universe, and therefore they have had enough time to develop bars and reach the observed low-redshift bar fraction early in time. The recent discovery of six massive barred spirals at \(z>1\) by Guo et al. (2023) also points in this direction, setting the formation of cold discs very early in the history of the Universe.
Three common caveats are always associated with the identification of bars at high redshift: i) limited spatial resolution and evolution of the physical scale with redshift; ii) the morphological K-correction; and iii) surface brightness dimming. The impact of spatial resolution in the detection of bars is a widely discussed topic even when comparing galaxy samples in the local Universe. The common agreement is that bars can be detected if they are at least about two times the full width at half maximum (\(\sim\)2 \(\times\) FWHM) of the point spread function (PSF) in terms of the radius (Aguerri et al. 2009). Erwin (2018) showed that detecting the short end of the bar length distribution might be a critical point to compare bar fractions from different studies, but the vast majority of HST studies have explored bars in the rest-frame optical light. This might also have implications when comparing bar fractions since bars are stellar structures (easily detected at redder wavelengths) and dust effects can obscure small bars. Finally, a generally untouched problem is the cosmological surface brightness dimming. This might affect the observability of the outer disc making it more difficult to separate stellar bars from elliptical galaxies (Melvin et al. 2014). Nevertheless, the effects of the surface brightness dimming on the detection of bar-built structures such as boxy/peanut bulges was studied by Kruk et al. (2019), without finding significant discrepancies between the local Universe and \(z\sim\)0.4.
This paper represents the first attempt to use the new capabilities of JWST to measure the fraction of barred galaxies in a cluster at \(z\)=0.39. The characteristics of the NIRCam imaging of the Early Release Observations (ERO) of the SMACS0723 cluster overcome previous issues related to spatial resolution, bar identification at rest-frame wavelengths, and depth of the observations. Therefore, we are able to provide a robust estimation of the bar fraction and pave the way for future studies of bar evolution.
This paper is organized as follows. Section 2 describes our fiducial sample of galaxy cluster members. Section 3 shows the process of visually classifying whether (or not) cluster member galaxies host a bar. Section 4 highlights the main results of our study. Section 5 places our results in the context of bar fraction evolution with redshift and environment. Finally, Section 6 provides a summary of our main conclusions. Throughout the paper, we assume a flat cosmology with \(\Omega_{\rm m}=0.3\), \(\Omega_{\rm d}=0.7\), and a Hubble constant of \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\).
## 2 SMACS0723 and the cluster membership
The galaxy cluster SMACSJ0723.3-7323 (hereafter, SMACS0723) is part of the southern extension of the MACS sample (Ebeling et al. 2010; Repp & Ebeling 2018). Mahler et al. (2022), using the ROSTAT package (Beers et al. 1990), derived a cluster redshift of \(z\)=0.3877 using a sample of 26 spectroscopically confirmed members of the cluster. They also derived a cluster velocity dispersion of \(\sigma\sim\)1180\(\pm\)160 km s\({}^{-1}\). The cluster total mass estimated by Planck is 8.39\(\times\)10\({}^{14}\) M\({}_{\odot}\) (Coe et al. 2019). Using this value and the equations given by Coe (2010) we derived the cluster virial radius (conventionally
defined as the radius within which the mean density is 200 times the background density) as \(r_{200}=1.95\) Mpc = 6.15 arcmin.
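As a sketch of the standard relation behind this estimate (Coe 2010 provides the full set of conversions; here \(\bar{\rho}(z)=\Omega_{\rm m}\,\rho_{\rm crit,0}\,(1+z)^{3}\) denotes the mean background matter density at the cluster redshift),

\[M_{200}=\frac{4}{3}\,\pi\,r_{200}^{3}\times 200\,\bar{\rho}(z)\quad\Longrightarrow\quad r_{200}=\left(\frac{3\,M_{200}}{800\,\pi\,\bar{\rho}(z)}\right)^{1/3},\]

which, for \(M_{200}=8.39\times 10^{14}\) M\({}_{\odot}\) at \(z=0.39\), gives a radius of order 2 Mpc, consistent with the value quoted above.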
The SMACS0723 cluster has been observed as part of the RELICS program (Coe et al. 2019). They observed a sample of 41 massive galaxy clusters with HST (PI: D. Coe) and the Spitzer Space Telescope (PI: M. Bradac). Deep observations (26.5 AB mag) were obtained for these clusters in 7 HST bands: F435W, F606W, F814W with the Advanced Camera for Surveys (ACS), and F105W, F125W, F140W, and F160W with the Wide Field Camera Three (WFC3). The RELICS data products for SMACS0723 include reduced and color images, photometric catalogs generated with SExtractor (Bertin & Arnouts 1996), and photometric redshifts computed with the Bayesian Photometric Redshifts code (Benitez 2000, BPZ). These are publicly available through the RELICS website1.
Footnote 1: [https://relics.stsci.edu/data.html](https://relics.stsci.edu/data.html)
In order to select the galaxy cluster members we first used a criterion based on the color-magnitude relation. Galaxy clusters are known to display a well-defined red-sequence that can be used to photometrically identify cluster members (Gladders & Yee 2005). After inspecting all possible combinations of colors available in the RELICS catalogue, we found that the red-sequence was better defined when using the F606W and F814W filters. Using a similar analysis, Golubchik et al. (2022) identified a sample of 130 cluster members with magnitudes brighter than 23 in the F814W band. Using the same magnitude criteria we found 116 cluster members, which we consider a good agreement since we might have applied a different color cut. Then, we used the E-MILES library of single stellar population models (Vazdekis et al. 2016) to derive the colors of an old (14 Gyr), metal-rich ([M/H]=0.4), and high [\(\alpha\)/Fe]=0.4 model (red galaxy) and a young (1 Gyr), metal-poor ([M/H]=-0.25) model with solar [\(\alpha\)/Fe]=0 abundance (blue galaxy) at the redshift of the cluster (\(z=0.39\)). Figure 1 shows the color-magnitude relation for all galaxies in the ACS/RELICS field of view (FoV), with the two colors defining the red sequence of SMACS0723 (0.6134 \(<\)F606W\(-\)F814W\(<\)1.2478). Both the reddest and bluest galaxies can also be hosted by the cluster due to dust reddening (former) or recent star formation (latter), but both effects are not very common in massive clusters such as SMACS0723. This preliminary selection provided us with an initial sample of 590 galaxies with absolute magnitudes F814W \(<-16\) mag (\(m_{\rm F814W}\) = 25.6). This low magnitude limit was set to avoid cluster membership confusion at the dwarf end of the luminosity function and because in previous works we find no bars at fainter magnitudes (Mendez-Abreu et al. 2012). We also imposed a limit at the bright end of the color-magnitude relation. This was set to \(m_{\rm F814W}=18.35\) mag which corresponds to the magnitude of the brightest cluster galaxy.
In order to check the reliability of our red sequence selection process, we show in Fig. 1 the position of the 22 spectroscopically confirmed galaxies as cluster members in Mahler et al. (2022, Table 1). We also included in Fig. 1 a sample of 61 spectroscopically selected member galaxies from the recent database of Noirot et al. (2022). These were chosen by imposing a simple redshift cut \(0.36<z<0.42\). We found that all SMACS0723 spectroscopic members are selected as possible cluster members following our color selection.
In this work, we use the Early Release Observations (ERO) of the SMACS0723 cluster (proposal ID: 2736; Pontoppidan et al. 2022). Observations were taken in June 2022, using nine dither positions to optimize image quality, exposures of a total of 7 537 s per filter to achieve a point source sensitivity of AB \(\sim\) 29.8 mag (\(\sim\) 3 magnitudes deeper than RELICS), and the MEDIUM8 readout pattern to minimize detector read noise. The public release includes calibrated mosaics in six broad-band NIRCam filters (i.e., F090W, F150W, F200W, F277W, F356W, and F444W), available on the Mikulski Archive for Space Telescopes (MAST). Since the analysis of the visual morphology of galaxies (bar detection) does not require absolute flux calibration or high-precision astrometry, a careful visual inspection of the public dataset reveals a good quality of the automatic reduction. In particular, we created postage stamps of each galaxy member of the SMACS0723 cluster using the F200W filter, since it provides the best spatial resolution (0.031 arcsec/px; PSF FWHM of 0.066 arcsec) and sensitivity. We notice here that all photometric information about the galaxies used in this paper was derived from HST data mainly because a more robust photometric calibration and catalogue selection was available when this work was in progress.
The JWST/NIRCam FoV observes a 9.7 arcmin\({}^{2}\) field with a \(\sim\)44 arcsec gap separating two 2.2 arcmin \(\times\) 2.2 arcmin areas. NIRCam observations of the SMACS0723 cluster were taken with one camera centered on the cluster, and another on an adjacent field. Therefore, they cover a smaller area of the cluster with respect to the RELICS HST/ACS imaging (3.36 arcmin \(\times\) 3.36 arcmin). We found that 300 galaxies out of the initial 590 were present in the NIRCam imaging of SMACS0723. This final number already includes the removal of some obvious stars and duplicated objects in the initial RELICS sample. The final number of cluster members analysed in this study also includes the following cuts: i) in order to avoid non-resolved point sources we imposed a stellarity parameter lower than 0.9 (see RELICS catalogue for details) and the condition that they are not visually classified as point source (see Sect. 3); ii) galaxies should be relatively face-on (\(\epsilon<0.5\)) to avoid projection problems and to be comparable with previous studies (e.g., Mendez-Abreu et al. 2012); and iii) galaxies should have a photometric redshift (see RELICS catalogue for details) \(z<1\) to avoid contamination
Figure 1: Color (F606W-F814W) vs. magnitude (F814W) diagram of all sources present in the RELICS catalogue (Black points). Possible cluster members selected using both the red sequence colors (0.6134 \(<\)F606W-F814W\(<\)1.2478 mag; orange horizontal lines) and the magnitude limits (25.62\(<m_{\rm F814W}<\)18.35 mag; orange vertical lines) are shown in red. The subsample of 22 spectroscopically confirmed cluster members by Mahler et al. (2022) are shown in blue. The subsample of 61 spectroscopically confirmed cluster members by Noirot et al. (2022) are shown in orange. The brightest cluster galaxy (BCG) of the cluster is represented with a grey star.
from background galaxies. We checked that all spectroscopically confirmed members satisfy this condition. Our final sample of SMACS0723 cluster members consists of 188 galaxies.
Figure 2 shows the stellar mass distribution (as computed in Sect. 4) of both all SMACS0723 cluster members and only those galaxies classified as discs (see Sect. 3). The stellar mass distribution for the nearby Coma and Virgo clusters, obtained by Mendez-Abreu et al. (2012), are also represented for comparison. The spatial distribution of the SMACS0723 cluster members is shown in Fig. 3.
## 3 Galaxy morphological classification
To identify bars in the JWST cutout images of the galaxies more efficiently, we set up a private project using the Zooniverse Panoptes Project Builder.
We created FITS cutouts of the cluster members and converted them to 424 \(\times\) 424 pixels jpeg postage stamps, applying an arc-sinh stretch to the images. Following Costantin et al. (2023), we also derived parametric and non-parametric morphology of the sample galaxies using statmorph (Rodriguez-Gomez et al. 2019). We focused on the F200W image (CAS parameters, 1-component Sersic model and residuals) and show this output to the classifiers. All of these images were informative on whether the galaxy hosts a bar or not.
We set up a simple workflow in the Zooniverse Panoptes framework for classification. First, we filtered on whether the galaxy has been classified before (to remove potential duplicates due to shredding or multiple identifications of the same object in the photometric catalogue). Secondly, we classified the orientation of the objects, in order to remove edge-on cases, where the bar identification is difficult. Then we classified the global morphology of the galaxies into four classes: spheroid, disc, irregular or point source. In cases where the galaxy was identified to be a disc or irregular type, we classified whether a bar was present in the galaxy image. In total, 188 galaxies were classified by all three authors for the presence of a bar, with a total of 564 classifications.
We then aggregated the classifications. If all three classifiers agreed that there was a bar present in the galaxy, we classified the galaxy as having a secure bar. If at least one of the three classifiers identified the galaxy as being barred, the galaxy was classified as hosting an uncertain bar. Examples of galaxies with secure, uncertain, and unbarred discs classified in this work are shown in Figure 4, in comparison with the HST ACS F814W images. In total, there are 20 secure bars and 15 uncertain bars out of 188 galaxies in the sample. To account for both secure and uncertain bars in the sample, in the following analysis the lower error bars include the secure bars and binomial errors, while the higher error bars include the secure and uncertain bars, as well as binomial errors.
## 4 The bar fraction in the SMACS0723 cluster
Our analysis of the bar fraction in the SMACS0723 galaxy cluster includes two different definitions: we derived the ordinary bar fraction, f\({}_{\rm D}\) (as it is usually calculated using only disc galaxies), and the overall bar fraction, f\({}_{\rm T}\) (calculated using all galaxies independently of their Hubble type). Since bars can only be triggered in discs, f\({}_{\rm D}\) has been historically deemed as the correct way of computing the bar fraction. However, the visual morphological separation between massive non-barred lenticulars and elliptical galaxies is very difficult and introduces a large uncertainty in f\({}_{\rm D}\). On the other hand, f\({}_{\rm T}\) avoids this problem and allows us to probe a larger range of luminosities and masses than f\({}_{\rm D}\), but it assumes that the luminosity/mass distribution of elliptical versus disc galaxies is the same in the different samples under comparison, which might not be the case when comparing clusters with very different masses or when comparing different galactocentric regions of the clusters. We used both in our analysis to provide a more complete picture of the bar fraction and to compare them with local studies using similar quantities.
Figure 5 shows f\({}_{\rm D}\) and f\({}_{\rm T}\) as functions of both the SDSS \(r\)-band absolute magnitude and stellar mass of the galaxies in the SMACS0723 cluster. We transformed the magnitudes obtained from the RELICS catalogue using the HST/ACS F814W filter to the SDSS \(r\)-band system by using the E-MILES library of single stellar population (SSP) models (Vazdekis et al. 2016). To this aim, we first assumed our galaxies to be represented with four extreme SSP properties (as described in Sect. 2): i) an old (14 Gyr) and metal-rich ([M/H]=0.4); ii) an old (14 Gyr) and metal-poor ([M/H]=-0.25); iii) a young (1 Gyr) and metal-rich ([M/H]=0.4); and iv) a young (1 Gyr) and metal-poor ([M/H]=-0.25). All of these represent extreme cases of the possible galaxy population in our cluster. Then, we computed the magnitudes of these SSP models for both the F814W and \(r-\)band filters at the redshift of the cluster \(z\)=0.39 and \(z\)=0, respectively. The differences obtained between the magnitudes at different bands (redshift) provide us with the typical correction for each particular SSP. We finally computed the mean difference of the 4 SSP models to be 0.155 mag. This correction was then applied to transform F814W magnitudes into \(r\)-band ones. The same procedure was carried out to transform the F606W filter into the SDSS \(g-\)band, obtaining a correction factor of 0.058 mag. This was necessary to compute the galaxy stellar masses using the prescriptions given by Zibetti et al. (2009). Figure 5 also shows the bar fractions for the Virgo and Coma clusters as derived in Mendez-Abreu et al. (2012). The three clusters are now directly comparable since the magnitudes, colors, and stellar masses have been computed in the same way. To avoid issues related to the bin size, we applied a moving-average (boxcar) smoothing over the histograms using box widths of both 1 mag and 0.5 dex and steps of 0.5 mag and 0.25 dex in magnitude and mass, respectively. The number of galaxies in each bin is
Figure 2: Total number of galaxies (solid lines) and disc galaxies (dashed lines) as a function of the stellar mass for the Coma (red), Virgo (blue), and SMACS0723 (violet) clusters. The values for both Coma and Virgo clusters were obtained from Méndez-Abreu et al. (2012) and they are described in Sect. 4.
shown at the top of each panel in Fig. 5. It is worth noting that due to our smoothing method some galaxies can be counted in two adjacent bins. The bar fraction errors are calculated by considering only the secure bars and both the secure and uncertain bars, respectively, and including their statistical uncertainties. The latter were computed by estimating the confidence intervals on binomial population proportions following the prescriptions by Cameron (2011).
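For reference, the prescription of Cameron (2011) amounts to taking quantiles of a beta distribution: for a bin containing \(n\) galaxies of which \(k\) are barred, the bounds of a confidence interval at level \(c\) are

\[p_{\rm lo}=B^{-1}\!\left(\frac{1-c}{2};\,k+1,\,n-k+1\right),\qquad p_{\rm hi}=B^{-1}\!\left(\frac{1+c}{2};\,k+1,\,n-k+1\right),\]

where \(B^{-1}\) denotes the quantile function of the Beta(\(k+1\), \(n-k+1\)) distribution.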
Figure 5 shows that, independently of how we compute it, the bar fraction for the three clusters is a strong function of galaxy luminosity (stellar mass), as already discussed by several authors (Mendez-Abreu et al. 2010a; Nair & Abraham 2010; Mendez-Abreu et al. 2012; Erwin 2018; Kruk et al. 2018). The overall bar fraction (f\({}_{T}\)) in all clusters shows a peak around M\({}_{r}\)\(\sim-20.5\) in absolute magnitude and log(\(M_{\star}/M_{\odot}\))\(\sim\)10.5, followed by a similar decrease towards fainter (low-mass) galaxies. The observed trends are similar when using the ordinary bar fraction (f\({}_{D}\)), but at higher luminosities and masses, the Virgo cluster presents a lack of discs in our sample that makes the computation of the bar fraction uncertain. We calculated the weighted mean, peak value, and corresponding errors in magnitude (and mass) of the bar fraction distributions of the three clusters by performing a series of 1000 Monte Carlo simulations taking into account the confidence intervals. These results are shown in Table 1.
In the low luminosity and mass range (M\({}_{r}>-18.5\) mag; log(\(M_{\star}/M_{\odot}\)) \(\leq 9.5\)) the bar fraction distribution in the three clusters is essentially the same. At M\({}_{r}\)\(\sim-18.5\) mag (log(\(M_{\star}/M_{\odot}\)) \(\sim\) 9.55), the typical bar fraction in the three clusters is \(\sim 30\%\), dropping to \(0\%\) at M\({}_{r}\)\(\sim-16\) mag. The mean bar fractions in this luminosity/mass range are shown in Table 2. At intermediate luminosities and masses (\(-18.5\geq{\rm M}_{r}\geq-20\) mag; 9.5\(\leq{\rm log}(M_{\star}/M_{\odot})\leq 10.25\)), the bar fraction in the Virgo cluster shows a secondary peak in all distributions (f\({}_{T}\) and f\({}_{D}\)) which is not clear in either SMACS0723 or Coma. Actually, the Coma cluster shows a dip in the bar fraction when looking at the magnitude distributions of both f\({}_{T}\) and f\({}_{D}\); however, it does not appear in the mass distribution, so we believe it might be a statistical fluctuation due to low number statistics. At high luminosities and masses (M\({}_{r}<-20\) mag; log(\(M_{\star}/M_{\odot}\)) \(>10.25\)), the disc-based bar fraction of SMACS0723 is lower than in Coma. When considering all galaxies, the differences are even more acute, with the bar fraction of SMACS0723 being lower than Virgo, and Coma showing the highest values. The mean bar fractions in this luminosity/mass range are shown in Table 2. The differences observed at high luminosities and masses between f\({}_{D}\) and f\({}_{T}\) might indicate a different fraction of ellipticals versus disc galaxies in
Figure 3: JWST color image of SMACS0723 showing the spatial distribution of cluster members confirmed spectroscopically (green) and using our color-magnitude cuts (white). This image was produced from our reduced data products via a composite of data in 3 bands: F090W, F150W, and F200W; F090W was assigned blue colours, F150W green, and F200W red.
the three clusters, rather than an actual difference in the bar fraction. In this case, the ellipticals-to-discs fraction should be larger in SMACS0723 and Virgo than in Coma.
One possible caveat when comparing the results of the SMACS0723 cluster with those of Virgo and Coma is the different spatial coverage. Our results on the SMACS0723 cluster have been obtained using the NIRCam photometry centred on the brightest cluster galaxy. Therefore, we are mapping a clustercentric radius of \(\sim 0.3\times\) r\({}_{200}\). Considering values of r\({}_{200}\)=2.86 Mpc (Lokas & Mamon 2003) and r\({}_{200}\)=2.86 Mpc (Ferrarese et al. 2012) for Coma and Virgo, respectively, we limited their samples to match the r \(\sim 0.3\times\)r\({}_{200}\) spatial coverage in all three clusters. The ordinary bar fraction of these restricted samples is shown in Fig. 7. The results discussed previously do not change when considering the same spatial coverage, despite the more limited ranges they probe.
A potential bias that might also affect the comparison between the bar fractions of the nearby clusters (Virgo and Coma) and SMACS0723 is the different wavelength range used to identify the bars. In order to take advantage of the full capabilities of NIRCam, we used the F200W images, which best represent the combination of both image quality and depth of the JWST observations. The F200W band at \(z\)=0.39 corresponds to an intermediate wavelength between the J-band and H-band at rest-frame; therefore, we are mapping the near-infrared population of bars in SMACS0723. However, the studies of the Virgo and Coma clusters were carried out using optical imaging in the SDSS r-band and ACS-HST F814W images, respectively. In order to quantify the impact of the different wavelength range on the identification of bars, we performed a further visual inspection of the barred galaxies detected in the F200W images, but this time using the F090W NIRCam filter. The F090W band at \(z\)=0.39 corresponds approximately to the SDSS r-band at rest
\begin{table}
\begin{tabular}{l l l c c c} \hline \hline Bar Fraction Distribution & Statistical Parameter & Galaxy Property & SMACS0723 & Virgo & Coma \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline Ordinary & Mean & Luminosity & \(-19.4\pm\)0.3 & \(-18.5\pm\)0.2 & \(-19.4\pm\)0.2 \\ & & Mass & 9.8\(\pm\)0.1 & 9.7\(\pm\)0.1 & 9.9\(\pm\)0.1 \\ & Peak & Luminosity & \(-20.2\pm\)0.6 & \(-18.9\pm\)0.4 & \(-20.2\pm\)0.6 \\ & & Mass & 10.1\(\pm\)0.4 & 9.9\(\pm\)0.3 & 10.2\(\pm\)0.4 \\ \hline Overall & Mean & Luminosity & \(-19.7\pm\)0.3 & \(-19.5\pm\)0.2 & \(-19.7\pm\)0.2 \\ & & Mass & 10.0\(\pm\)0.1 & 9.9\(\pm\)0.1 & 10.1\(\pm\)0.1 \\ & Peak & Luminosity & \(-20.4\pm\)0.7 & \(-20.4\pm\)0.8 & \(-20.6\pm\)0.4 \\ & & Mass & 10.3\(\pm\)0.4 & 10.4\(\pm\)0.4 & 10.5\(\pm\)0.2 \\ \hline \end{tabular}
\end{table}
Table 1: Weighted mean and peak value of the luminosity and mass bar fraction distributions.
Figure 4: Examples of JWST SMACS0723 galaxies in the F200W filter, classified in this study: secure bars (first row), uncertain bars (second row), unbarred disc galaxies (third row) compared to the HST F814W images. The galaxies are ordered from left to right by increasing stellar mass. The postage stamps are 4.65 \(\times\) 4.65 arcsec in size.
frame. We found 10 secure bars and 18 uncertain bars in the sample, corresponding to a global bar fraction of 25%, instead of the 31% derived using the F200W. The lower bar fraction in the rest-frame optical with respect to the rest-frame infrared is expected and has also been reported in low-redshift studies (e.g., Erwin 2018). Figure 6 displays the comparison of the bar fraction derived with both NIRCam filters (F200W and F090W) as a function of the stellar mass. The trend of the bar fraction with stellar mass is the same as discussed before, but the bar fraction is smaller at all masses. In the low-luminosity and mass range (M\({}_{r}>-18.5\) mag; log(\(M_{\star}/M_{\odot})\leq 9.5\)), the bar fractions from the two filters are similar and therefore our results do not change. At intermediate luminosities and masses (\(-18.5\geq\) M\({}_{r}\geq-20\) mag; 9.5\(\leq\) log(\(M_{\star}/M_{\odot})\leq 10.25\)), we found the largest differences depending on the filter, even if the overall shape of the distribution is the same. At high luminosities and masses (M\({}_{r}<-20\) mag; log(\(M_{\star}/M_{\odot})>10.25\)), the bar fraction is lower in the F090W; this reinforces our previous result that the bar fraction for high-mass galaxies in SMACS0723 is lower than in the Coma cluster.
Another critical aspect when comparing the bar fractions of different samples is the spatial resolution of the observations. At
Figure 5: Bar fraction for the Coma (red), Virgo (blue), and SMACS0723 (violet) clusters as a function of both the absolute \(r\)-band magnitude (left panels) and the stellar mass (right panels). The bar fractions have been computed using both only disc galaxies (ORDINARY, upper panels) and all galaxies (OVERALL, lower panels). The number of galaxies in each bin is shown at the top of each panel coloured accordingly. As explained in Sect.4, bins are represented either every 0.5 mag or 0.25 dex, but the fraction (and therefore the number of galaxies) is averaged over bins of 1 mag and 0.5 dex for the magnitudes and masses, respectively.
Figure 6: Bar fraction for the SMACS0723 cluster as a function of stellar mass computed using the F090W band (dark blue) and the F200W band (violet) images to identify bars. The bar fractions have been computed using only disc galaxies (ORDINARY). The labels and binning scheme are the same as in Fig. 5.
the redshift of the SMACS0723 cluster (\(z\)=0.39), and using a cosmology such that \(\Omega_{M}=0.3\), \(\Omega_{\Lambda}=0.7\), and a Hubble parameter H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), the physical scale is 5.290 kpc/arcsec. Therefore, assuming a NIRCam F200W PSF FWHM \(\sim\) 0.066 arcsec, our spatial resolution will be 370 pc. Previous studies on the detectability of bars as a function of the spatial resolution have found a limiting resolution of \(\sim\)2 \(\times\) PSF FWHM for a robust bar detection (Aguerri et al., 2009; Erwin, 2018), so we should be able to detect bars \(\geq\) 740 pc in size in the NIRCam observations of SMACS0723.
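The scale computation is straightforward to reproduce; a minimal sketch with astropy is given below (our own illustration; small differences with respect to the quoted values can arise from rounding of the PSF FWHM):

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in the text
z = 0.39                                # redshift of SMACS0723

# physical scale in kpc per arcsec (~5.29 kpc/arcsec at z = 0.39)
scale = cosmo.kpc_proper_per_arcmin(z).value / 60.0

psf_fwhm = 0.066                  # NIRCam F200W PSF FWHM in arcsec
resolution = scale * psf_fwhm     # physical resolution in kpc
bar_limit = 2.0 * resolution      # ~2 x FWHM for robust bar detection

print(f"{scale:.3f} kpc/arcsec; bars >= {1e3 * bar_limit:.0f} pc detectable")
```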
It is worth mentioning that the galaxy images used for both the Virgo and Coma samples (SDSS and HST-ACS, respectively) have a spatial resolution corresponding to \(\sim\) 75 pc (see Mendez-Abreu et al., 2012, for details) at the corresponding distances of Virgo and Coma, so they would allow us to resolve bars down to sizes of \(\sim\) 150 pc. This difference in spatial resolution with respect to SMACS0723 might have an impact on the low-luminosity and low-mass end of the bar distribution, since smaller bars are hosted in smaller galaxies (Aguerri et al., 2009). However, Erwin (2018) has recently shown, using a sample of galaxies from the Spitzer Survey of Stellar Structure in Galaxies (S\({}^{4}\)G; Sheth et al., 2010), that only 0.02% and 0.3% of their bars have sizes smaller than 370 pc and 740 pc, respectively. Therefore, we expect a minimal effect on our ability to detect bars in SMACS0723 due to the spatial resolution of the observations. We note here that we are not seeking to detect inner/nuclear bars in the sample, since these can have sizes as short as 11% of the main (outer) bar (de Lorenzo-Caceres et al., 2020).
Finally, another possibility for missing bars in our SMACS0723 sample, with respect to Virgo and Coma, is the fact that bars at high redshift are expected to be shorter, since they should grow in size over time (Debattista & Sellwood, 2000; Martinez-Valpuesta et al., 2007). Simulations predict that this growth in size can be significant (50%-100%), but it is not yet clear which galaxy parameters control the growth rate of the bars.
## 5 Discussion
### Bar fraction evolution with redshift
The evolution of the bar fraction with cosmic time has been a matter of several studies due to its implications for the settlement of the first rotationally dominated discs. As numerical simulations predict, bars can form spontaneously in cold discs. Since bars develop in a relatively quick phase (\(\leq\)1 Gyr; Sellwood, 2014), and assuming that they are long lived, the presence of a bar can be used as a clock to time the formation of discs. Observationally, the studies carried out using the HST suggest a decrease of the bar fraction towards higher redshifts (Sheth et al., 2008; Cameron et al., 2010). However, the strength of this trend, as well as its dependence on galaxy properties, bar characteristics, and observational effects, is still not clear (Melvin et al., 2014; Simmons et al., 2014; Erwin, 2018). The theoretical perspective is not much different. Earlier studies based on zoom-in numerical simulations showed a clear decrease of the bar fraction with redshift (Kraljic et al., 2012). However, recent analyses using IllustrisTNG cosmological simulations have shown this trend might be milder when considering similarly massive discs at different redshifts (Zhao et al., 2020) or even inverted (higher bar fractions at higher redshifts; Rosas-Guevara et al., 2022). Figure 8 shows the comparison of the bar fraction derived for the disc galaxies in the SMACS0723 cluster with state-of-the-art observational and theoretical studies. Although performing quantitative comparisons is not straightforward due to technical and sample selection biases, some interesting trends can be seen. From the observational side, it is clear that the measurements carried out by Melvin et al. (2014) in the redshift range 0.4 \(\leq z\leq\) 0.6 provide a much lower bar fraction at all masses. One possible explanation for this difference could be the different sample selection used in Melvin et al. (2014), mainly field galaxies, with respect to this work, a massive cluster. However, we go on to demonstrate in Sect. 5.2 that this does not seem to be the case. Thus, we suggest that the improved capabilities of JWST with respect to HST in terms of both spatial resolution and image depth are responsible for our higher bar fraction. The comparison with numerical simulations shows in general a different trend, with simulations predicting a larger bar fraction than observations at this redshift range (\(z\sim\)0.4). The comparison with the IllustrisTNG50 (Pillepich et al., 2019; Nelson et al., 2019) analysis by Rosas-Guevara et al. (2022) shows a similar bar fraction in their lower mass bins (\(\log(M_{\star}/M_{\odot})<10.5\)), but it continuously grows at high masses (\(\log(M_{\star}/M_{\odot})>10.5\)), reaching values as high as f\({}_{D}\sim\)80%. This high-mass end is not covered by our SMACS sample (we do not observe such massive discs) and therefore a direct comparison cannot be made. The study by Zhao et al. (2020) using the IllustrisTNG100 simulations obtained lower bar fractions than Rosas-Guevara et al. (2022), and is therefore closer to the observational results, though still higher. In general, it seems that numerical simulations are able to identify very massive disc galaxies (\(\log(M_{\star}/M_{\odot})>10.5\)) which are not present in the clusters (neither in SMACS0723, Coma, nor Virgo) and that contain a higher fraction of bars with respect to the observations. A possible explanation of this difference might be the selection criteria of disc galaxies.
Observational studies (including ours) consider a morphology-based classification between disc galaxies and ellipticals, whereas most numerical simulations define discs with a certain threshold on the angular momentum of the particles. This is the case of the IllustrisTNG results. A different approach was followed in the analysis of the EAGLE simulations by Cavanagh et al. (2022), who used a machine-learning approach to morphologically classify
\begin{table}
\begin{tabular}{l c c c} \hline \hline Range & Coma & Virgo & SMACS0723 \\ \hline \multicolumn{4}{c}{OVERALL} \\ Low luminosity & 11\% & 11\% & 7\% \\ Low mass & 8\% & 7\% & 6\% \\ Interm. luminosity & 28\% & 34\% & 24\% \\ Interm. mass & 32\% & 31\% & 25\% \\ High luminosity & 58\% & 36\% & 27\% \\ High mass & 49\% & 37\% & 31\% \\ \hline \multicolumn{4}{c}{ORDINARY} \\ Low luminosity & – & 11\% & 11\% \\ Low mass & – & 8\% & 10\% \\ Interm. luminosity & 28\% & 39\% & 37\% \\ Interm. mass & 34\% & 36\% & 39\% \\ High luminosity & 60\% & – & 43\% \\ High mass & 50\% & – & 50\% \\ \hline \end{tabular}
\end{table}
Table 2: Mean bar fraction in the three luminosity and mass intervals
galaxies, mimicking the way it is done in observations. The work by Cavanagh et al. (2022) provides overall lower bar fractions and a declining trend towards massive galaxies, thus providing a better match to the observations.
### Bar fraction vs. environment
Figure 5 shows the comparison of the SMACS0723 cluster located at \(z\)=0.39 with respect to the Virgo and Coma clusters, located at \(z\)=0.0044 and \(z\)=0.023, respectively. As discussed in Sect. 4, at low luminosities/masses (M\({}_{r}\geq-18.5\) mag; \(\log(M_{\star}/M_{\odot})\leq 9.5\)) the bar fraction in the different clusters is essentially the same, whereas at high luminosities/masses (M\({}_{r}<-20\) mag; \(\log(M_{\star}/M_{\odot})>10.25\)) the disc-based (f\({}_{D}\)) bar fraction of SMACS0723 is slightly lower than in Coma, a trend that is enhanced when considering all galaxies (f\({}_{T}\)), where the bar fraction of SMACS0723 is smaller than that of Virgo, with Coma showing the highest values. Figure 9 shows the bar fraction (f\({}_{D}\)) as a function of the stellar mass for the SMACS0723 cluster and the sample of field galaxies described in Mendez-Abreu et al. (2012). This field galaxy sample includes the galaxies analysed in Aguerri et al. (2009), selected from the SDSS-DR5 (Adelman-McCarthy et al. 2006) in the redshift range \(0.01<z<0.04\), and a sample of fainter field galaxies containing all the galaxies in the SDSS-DR7 (Abazajian et al. 2009) within \(2500<cz<3000\) km s\({}^{-1}\). Figure 9 shows a remarkable lack of bars in the low-mass regime (\(\log(M_{\star}/M_{\odot})<9.75\)) of the SMACS0723 cluster (and also in Virgo and Coma) with respect to the field. The bar fraction in the field peaks at \(\log(M_{\star}/M_{\odot})\sim 9.4\), which roughly coincides with the minimum thickness of discs for galaxies in the field (see Sanchez-Janssen et al. 2010). The field bar fraction at its peak is \(\sim 52\)%, whereas at the same mass (\(\log(M_{\star}/M_{\odot})\sim 9.4\)) the bar fraction in the SMACS0723 cluster is \(\sim 22\)%. This clearly indicates a strong influence of the environment on the low-mass discs of cluster galaxies already at \(z=0.39\). The combined information that the bar fractions of the SMACS0723, Virgo, and Coma clusters are the same at these galaxy masses, but at the same time different from the field, indicates that the mechanism inhibiting the formation of bars in the cluster environment must be acting in early phases of the cluster assembly. At high masses (\(\log(M_{\star}/M_{\odot})>10.25\)), Fig. 9 shows that the bar fraction (f\({}_{D}\)) in SMACS0723 is only marginally larger than in the local field. This reinforces the idea that previous works on field galaxies at \(z\sim 0.4\) are hindered by observational biases. The combined facts that SMACS0723 and Coma have comparable cluster masses, and that the bar fractions among the local field, SMACS0723 (z=0.39), and Coma show a slight increase, point towards a scenario with only a mild evolution in the bar fraction of high-mass galaxies during the last \(\sim 4\) Gyr of evolution.
This mass-dependent influence of the environment in the bar fraction can be explained by a scenario in which interactions affect differently the structure of massive and faint discs. On the massive side, we suggest that these discs are stable enough
Figure 8: Comparison of the bar fraction distribution as a function of the stellar mass of the SMACS0723 cluster (violet) with different theoretical and observational studies. The numerical simulations by Rosas-Guevara et al. (2022) at \(z=0.5\), Cavanagh et al. (2022) at \(z=0.4\), and Zhao et al. (2020) are shown as black, blue, and green circles, respectively. The observational results by Melvin et al. (2014) at \(0.4\leq z\leq 0.6\) are shown with orange stars.
Figure 7: Bar fraction for the Coma (red), Virgo (blue), and SMACS0723 (violet) clusters as a function of both the absolute \(r-\)band magnitude (left panel) and the stellar mass (right panel). The bar fractions have been computed using only disc galaxies (ORDINARY) within a clustercentric radius of \(0.3\times\) r\({}_{200}\) (see text for details). The number of galaxies in each bin is shown at the top of each panel coloured accordingly. As explained in the text, bins are represented either every 0.5 mag or 0.25 dex, but the fraction (and therefore the number of galaxies) is averaged over bins of 1 mag and 0.5 dex for the magnitudes and masses, respectively.
against (tidal or galactic) interactions to keep their cold structure; thus, the increasingly larger fraction of barred galaxies in clusters with time might be explained by interactions triggering bar formation (Lokas et al., 2016; Martinez-Valpuesta et al., 2017; Lokas, 2020). For faint galaxies, the same interactions due to the cluster environment might have a more destructive role. We speculate that interactions in these systems become strong enough to heat up (or destroy) the discs, thereby inhibiting bar formation and producing a lower bar fraction with respect to the field, as observed in Fig. 9.
## 6 Conclusions
In this work, we study the bar fraction distribution in the SMACS0723 galaxy cluster using JWST ERO observations with the NIRCam instrument. This is the first statistical analysis of the barred population of galaxies using JWST data, and it demonstrates the unique capabilities of JWST/NIRCam imaging for this kind of study at high redshift.
We find that the bar fraction distribution in SMACS0723 is a strong function of galaxy mass, as previously shown for low redshift clusters and field galaxies (Mendez-Abreu et al., 2010, 2012; Erwin, 2018). The comparison with both the Virgo and Coma clusters shows that, at low luminosities and masses (M\({}_{r}\geq-18.5\) mag; \(\log(M_{\star}/M_{\odot})\leq 9.5\)), the bar fraction distribution is similar in the three cases. At high luminosities (M\({}_{r}<-20\) mag; \(\log(M_{\star}/M_{\odot})>10.25\)), the bar fraction distribution (computed using only disc galaxies; f\({}_{D}\)) of SMACS0723 is only marginally lower than in Coma, with this trend getting stronger when using the overall bar fraction (computed using all cluster member galaxies; f\({}_{T}\)). We suggest this is due to a different relative fraction of ellipticals and discs at these luminosities/masses between the clusters. We demonstrate that our results depend neither on the spatial coverage of the observations for the different clusters nor on the spatial resolution of our JWST/NIRCam observations at the distance of SMACS0723 (\(z=0.39\)).
We compared our results with state-of-the-art observational and theoretical studies on the bar fraction. Numerical simulations only cover the high-mass end (\(\log(M_{\star}/M_{\odot})>10.25\)) of the galaxy distribution and generally show larger bar fractions with respect to observations. We suggest this is due to the different selection criteria used in simulations (based on angular momentum) with respect to observations (based on morphology). At these high galaxy masses, we find a much larger bar fraction in SMACS0723 than previous works on field galaxies at the same redshift (\(z\sim 0.4\); Melvin et al. 2014). Nevertheless, the difference is only marginal when we compare SMACS0723 with a sample of well-resolved local field galaxies (Mendez-Abreu et al., 2012). Thus, we suggest that the improved capabilities of JWST with respect to HST in terms of both spatial resolution and image depth are responsible for our higher bar fraction.
The comparison between the SMACS0723 bar fraction and that of field galaxies at \(z=0\) highlights the influence of environment on the formation of bars. We find a strong drop in the bar fraction distribution of SMACS0723 low-mass galaxies (M\({}_{r}\geq-18.5\) mag; \(\log(M_{\star}/M_{\odot})\leq 9.75\)) with respect to local field galaxies. This behaviour is also found when using local clusters (Virgo and Coma), thus indicating that the mechanism inhibiting the formation of bars in clusters must act relatively quickly after the galaxy enters the cluster potential. On the other hand, at high luminosities and masses (M\({}_{r}<-20\) mag; \(\log(M_{\star}/M_{\odot})>10.25\)), the bar fraction in SMACS0723 is slightly higher than for local (\(z=0\)) field galaxies. This points towards a weaker influence of the environment in triggering the formation of bars at these luminosities and masses.
Our results support a scenario where the cluster environment affects the formation of bars in a mass-dependent way. At high masses, the mild increase in the bar fraction of local clusters (Coma) with respect to both SMACS0723 and local field galaxies suggests a weak effect of the cluster environment, possibly triggering bar formation. On the other hand, low-mass galaxies show the same bar fraction in the three clusters (at different redshifts) and a significant drop with respect to field galaxies at \(z=0\), therefore suggesting that: i) the bar fraction of low-mass galaxies in clusters has not evolved during the last \(\sim\)4 Gyr and ii) bar formation is severely inhibited in low-mass galaxies living in clusters.
The work presented in this paper is the first step towards a better characterization of the bar fraction (and bar properties) as a function of redshift and environment. The error bars computed on the bar fraction of an individual cluster are difficult to narrow down, mainly due to the fixed or otherwise limited number of cluster members. Therefore, similar analyses on a statistical number of clusters are necessary to confirm our mass-dependent scenario. Similarly, a better characterization of the bar fraction in field galaxies at different redshifts with the new JWST/NIRCam capabilities is necessary to further understand the effect of environment on the formation of bars.
###### Acknowledgements.
J.M.A. acknowledges the support of the Viera y Clavijo Senior program funded by ACIISI and ULL. J.M.A. acknowledges support from the Agencia Estatal de Investigacion del Ministerio de Ciencia e Innovacion (MCIN/AEI/10.13039/501100011033) under grant (PID2021-12813N-100) and the European Regional Development Fund (ERDF) "A way of making Europe". L. would like to thank P. G. Perez-Gonzalez for the expertise acquired dealing with JWST observations. LC acknowledges support from the Agencia Estatal de Investigacion del Ministerio de Ciencia e Innovacion (MCIN/AEI/10.13039/501100011033) under grant (PGC2018-093499-B-I00) and by "European Union NextGenerationEU/PRTR". LC acknowledges financial support from Comunidad de Madrid under Atraccion de Talento grant 2018-T2/TIC-11612. This publication uses data generated via the Zooniverse.org platform, development of which is funded by generous support, including a Global Impact Award from Google, and by a grant from the Alfred P. Sloan Foundation.
Figure 9: Comparison of the bar fraction distribution with stellar mass of the SMACS0723 cluster (violet) and in the field at \(z\)=0 (Mendez-Abreu et al., 2012), i.e., low-density environments (green). The arrow at (\(\log(M_{\star}/M_{\odot})\sim 9.3\)) marks the minimum thickness of discs for galaxies in the field (see Sánchez-Janssen et al., 2010) |
2310.10853 | Manta Ray Inspired Flapping-Wing Blimp | Lighter-than-air vehicles or blimps, are an evolving platform in robotics
with several beneficial properties such as energy efficiency, collision
resistance, and ability to work in close proximity to human users. While
existing blimp designs have mainly used propeller-based propulsion, we focus
our attention to an alternate locomotion method, flapping wings. Specifically,
this paper introduces a flapping-wing blimp inspired by manta rays, in contrast
to existing research on flapping-wing vehicles that draw inspiration from
insects or birds. We present the overall design and control scheme of the blimp
as well as the analysis on how the wing performs. The effects of wing shape and
flapping characteristics on the thrust generation are studied experimentally.
We also demonstrate that the flapping-wing blimp has a significant range
advantage over a propeller-based system. | Kentaro Nojima-Schmunk, David Turzak, Kevin Kim, Andrew Vu, James Yang, Sreeauditya Motukuri, Ningshi Yao, Daigo Shishika | 2023-10-16T22:00:05Z | http://arxiv.org/abs/2310.10853v1 | # Manta Ray Inspired Flapping-Wing Blimp
###### Abstract
Lighter-than-air vehicles or blimps, are an evolving platform in robotics with several beneficial properties such as energy efficiency, collision resistance, and ability to work in close proximity to human users. While existing blimp designs have mainly used propeller-based propulsion, we focus our attention to an alternate locomotion method, flapping wings. Specifically, this paper introduces a flapping-wing blimp inspired by manta rays, in contrast to existing research on flapping-wing vehicles that draw inspiration from insects or birds. We present the overall design and control scheme of the blimp as well as the analysis on how the wing performs. The effects of wing shape and flapping characteristics on the thrust generation are studied experimentally. We also demonstrate that the flapping-wing blimp has a significant range advantage over a propeller-based system.
## I Introduction
Lighter-than-air (LTA) vehicles, or blimps, are an interesting platform for aerial autonomy and have been attracting interest in the robotics community [1, 2]. Some advantages of working with blimps come from the non-rigid airship design, which provides safety around human users and robustness to collisions [3, 4]. Another feature of LTA vehicles is their naval relevance; LTA vehicles demonstrate fluid mechanics that enable an aquatic-style design without dependence on water. Recent research includes control methods [5, 6, 7], Human-Robot Interaction [8, 9, 10], and localization [11, 12]. However, the means of locomotion in existing autonomous blimp research is simplified to a combination of propellers that provide sufficient control authority. In this paper we focus on the locomotion aspect and investigate the use of flapping wings as an alternative propelling mechanism.
The development of flapping-wing robots has mostly drawn inspiration from terrestrial animals. There are plenty of examples inspired by hummingbirds [13, 14], insects [15, 16, 17], and even bats [18]. Most of the actuation methods proposed for flapping-wing micro-aerial vehicles (FWMAVs) are not transferable to a larger-scale vehicle like a blimp. For our LTA vehicle, we find inspiration in one of the most efficient swimmers, the manta ray [19, 20].
There are existing robots inspired by manta rays, but they are underwater vehicles [21, 22]. As a major difference from those studies, one of the challenges in incorporating the flapping-wing mechanism into LTA vehicles is the limited payload. The buoyancy achieved by helium in air is much less than the buoyancy given by air in water. Some sophisticated undulating mechanisms built for underwater robots are realized by a series of strong actuators and mechanical components that a blimp will not be able to carry [23]. There have been some flapping-wing blimp robots in the past, most notably the Air Ray by Festo [24] and the Cornell ornithoptic blimp [25], but they require large payloads of 1.6 kg and 420 g, respectively. This paper studies a simple, minimally viable approach to reproducing efficient flapping motion inspired by manta rays at low payload requirements.
The shape and structure of the wings play a key role in the behavior of an agent. Flexibility along the manta ray's pectoral fins has been well documented as a reason for their swimming efficiency [26]. Their fins are rigid near the root, where the tissue is thicker, and become thinner and more flexible near the cartilaginous tip [27]. We replicate a flexible wing using thin carbon fiber rods.
The two main contributions of this paper are: the design of a low-cost LTA vehicle that moves with manta ray inspired flapping wings and tail; and an analysis of the wing shape and flapping parameters. We also show how our design is more efficient than a propeller-based agent, which results in a 68% increase in the range. We believe this platform serves as a baseline for the exploration of flapping-wing blimps. Furthermore, due to its simplicity and low cost (each vehicle can be made for less than 100 USD using off-the-shelf components), the platform is also appropriate for STEM education and outreach.
Fig. 1: The flapping-wing LTA vehicle: “Flappy”. |
2302.01597 | Lead perovskites as CE$ν$NS detectors | The recent discovery of Coherent Elastic neutrino-Nucleus Scattering
(CE$\nu$NS) has created new opportunities to detect and study neutrinos. The
interaction cross-section in CE$\nu$NS scales quadratically with the number of
neutrons, making heavy-nuclei targets such as active lead-based detectors
ideal. In this Letter, we discuss for the first time the potential of
semiconductor lead perovskites for building neutrino detectors. Lead
perovskites have emerged in the last decade as revolutionary materials for
radiation detection due to their heavy and flexible element composition and
their unique optoelectronic properties that result in an excellent energy
resolution at an economic cost. While dedicated research and development will
be necessary, we find great benefits and no inherent obstacles for the
development of lead perovskites as CE$\nu$NS detectors. | César Jesús-Valls, Federico Sánchez | 2023-02-03T08:39:22Z | http://arxiv.org/abs/2302.01597v2 | # Lead perovskites as CE\(\nu\)NS detectors
###### Abstract
The recent discovery of Coherent Elastic neutrino-Nucleus Scattering (CE\(\nu\)NS) has created new opportunities to detect and study neutrinos. The interaction cross-section in CE\(\nu\)NS scales quadratically with the number of neutrons, making heavy-nuclei targets such as active lead-based detectors ideal. In this Letter, we discuss for the first time the potential of semiconductor lead perovskites for building neutrino detectors. Lead perovskites have emerged in the last decade as revolutionary materials for radiation detection due to their heavy and flexible element composition and their unique optoelectronic properties that result in an excellent energy resolution at an economic cost. While dedicated research and development will be necessary, we find great benefits and no inherent obstacles for the development of lead perovskites as CE\(\nu\)NS detectors.
## I Introduction
Neutrinos are the only known fermions carrying exclusively weak charge and therefore are clean probes of the weak interaction and unique messengers of dense matter environments, unaffected by the strong and electromagnetic interactions. These appealing properties, however, come with notably suppressed interaction cross-sections, hampering the study of neutrino physics and rendering most applications impractical.
In 1974, the existence of coherent elastic neutrino-nucleus scattering (CE\(\nu\)NS) was pointed out to be a consequence of the Standard Model [1]. In CE\(\nu\)NS, a neutrino transfers momentum to a whole nucleus via the exchange of a virtual Z boson, forcing it to recoil. The interaction cross-section for this process is
\[\frac{d\sigma^{\rm CE\nu NS}}{dE_{R}}=\frac{G_{F}^{2}}{8\pi\cdot(\hbar c)^{4}}\,(N+(1-4\sin^{2}\theta_{W})Z)^{2}\cdot m_{N}\cdot(2-E_{R}m_{N}/E_{\nu}^{2})\,|f(q)|^{2}\,, \tag{1}\]
where \(G_{F}\) is the Fermi constant, N (Z) is the number of neutrons (protons), \(\theta_{W}\) is the Weinberg angle and \(m_{N}\) and \(E_{R}\) are the nucleus mass and its recoil energy respectively. The nuclear form factor \(f(q)\) characterizes the loss of coherence as a function of the transferred momentum \(q\)=\(\sqrt{2m_{N}E_{R}}/\hbar\) and it is close to unity for small \(q\), associated to typical neutrino energies \(E_{\nu}\lesssim 50\) MeV. Notably, given that \(4\sin^{2}\theta_{W}\sim 1\), \(\sigma^{\rm CE\nu NS}\lesssim N^{2}\)[2]. This remarkable interaction cross-section enhancement, however, comes with a very challenging detection signal, as the nuclear recoil needs to be identified. The maximum recoil energy scales as \(E_{R}^{\rm max}\approx 2E_{\nu}^{2}/m_{N}\), so that detectors need to be able to measure recoil energies of, at most, several tens of keV. Due to this, it has not been until recently, thanks to progress in detector technology, that CE\(\nu\)NS has been experimentally demonstrated by the COHERENT collaboration, using a CsI target [3] and an Ar target [4].
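For orientation, Eq. (1) is easy to evaluate numerically. The sketch below (our own illustration, in natural units with \(f(q)=1\); the constants are standard values and the target choice is ours) implements the differential cross-section and the maximum recoil energy:

```python
from math import pi

GF = 1.1663787e-5          # Fermi constant in GeV^-2 (hbar*c absorbed)
SIN2W = 0.2312             # sin^2 of the Weinberg angle
HBARC2 = 0.3894e-27        # (hbar*c)^2 in GeV^2 cm^2
AMU = 0.9315               # atomic mass unit in GeV

def dsigma_dER(E_nu, E_R, Z, A):
    """Eq. (1) with f(q) = 1: differential CEvNS cross-section in cm^2/GeV.

    E_nu : neutrino energy in GeV
    E_R  : nuclear recoil energy in GeV
    Z, A : proton number and mass number of the target nucleus
    """
    N = A - Z
    mN = A * AMU                          # nucleus mass in GeV (approx.)
    Qw2 = (N + (1 - 4 * SIN2W) * Z)**2    # weak charge, convention of Eq. (1)
    val = GF**2 / (8 * pi) * Qw2 * mN * (2 - E_R * mN / E_nu**2)
    return max(val, 0.0) * HBARC2

# maximum recoil energy for a 30 MeV neutrino on Cs (Z = 55, A = 133)
E_nu = 0.030
print("E_R^max ~", 2 * E_nu**2 / (133 * AMU) * 1e6, "keV")   # ~14.5 keV
```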
## II Motivations
The discovery of CE\(\nu\)NS and its enhanced cross-section has the potential to mitigate the elusiveness of neutrinos and therefore to revolutionize their study at energies of the order of a few tens of MeV, which includes geoneutrinos [5], reactor neutrinos [6], accelerator neutrinos from meson decays at rest [7; 8; 9; 10], solar neutrinos [11] and supernova neutrino bursts [12]. Characterizing the cross-section of CE\(\nu\)NS is also essential for dark matter searches, as CE\(\nu\)NS constitutes an irreducible background, the so-called neutrino floor [13]. Being mediated by flavor-insensitive neutral currents, the detection of CE\(\nu\)NS provides extended sensitivity to sterile neutrinos [14; 15; 16] and other new physics signatures [17; 18; 19; 20] and allows the study of the neutrino magnetic moment [21; 22], its effective charge radius [23] and the nuclear neutron form factor [24; 25]. Applications such as deploying neutrino detectors to increase nuclear security [26; 27] might also be possible. Moreover, CE\(\nu\)NS is relevant to theoretical astrophysics, as a key actor during stellar collapse [28; 29; 30].
## III CE\(\nu\)NS experiments
Because of all of the above, an increasing number of CE\(\nu\)NS detector technologies have been proposed [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42] and several experiments are ongoing or have been proposed: COHERENT [43], using CsI, NaI, high-purity Ge (HPGe) and liquid-Ar targets; CONUS [44], NCC-1701[45; 46] and \(\nu\)GEN [47] using cryogenic HPGe; MINER [48], using cryogenic HPGe/Si; NUCLEUS [49] using cryogenic CaWO\({}_{4}\) and Al\({}_{2}\)O\({}_{3}\); CONNIE [44], using Si charge coupled devices (CCDs); TEXONO [50] using p-type point-contact Ge; RES-NOVA using cryogenic PbWO\({}_{4}\)[51; 52]; RICOCHET [53] using cryogenic HPGe bolometers; and RED100 [54] using liquid-Xe.
To get the most from CE\(\nu\)NS, an ideal detector should
be inexpensive to produce and operate, have excellent energy resolution to measure nuclear recoils with an energy of a few keV, and be made of a heavy nuclear target to exploit the quadratic scaling of the cross-section. In this Letter, we point out for the first time the excellent prospects of lead perovskites for building future CE\(\nu\)NS detectors.
## IV Lead perovskites
Lead halide perovskites (LHP) are novel semiconductors with exceptional optoelectronic properties, versatile chemical composition and low cost synthesis. They typically consist of crystals with structure APbX\({}_{3}\), see Fig. 1, where A is CH\({}_{3}\)NH\({}_{3}^{+}\) (MA\({}^{+}\)), CH(NH\({}_{2}\))\({}_{2}^{+}\) (FA\({}^{+}\)), or Cs\({}^{+}\); the B site is Pb\({}^{2+}\); and X is Cl\({}^{-}\), Br\({}^{-}\), or I\({}^{-}\)[55].
The study of halide perovskites as photosensors was sparked about a decade ago in the context of solar cell development [56] and quickly emerged as an active field of research due to record energy conversion efficiencies [57; 58; 59; 60; 61; 62; 63; 64]. Along the process, much has been learned about the basic properties of this material, which combines a low exciton binding energy on the order of a few meV [65] with exceptionally long electron-hole diffusion lengths exceeding 1 micrometer [66], a tunable band gap in the range of 1.2-2.4 eV [67; 68], and a high bulk resistivity of 10\({}^{7-10}\Omega\)-cm at room temperature [69]. The combination above is unique, as it pairs efficient charge carrier production and mobility at low voltage bias with a high bulk resistivity, orders of magnitude higher than those of Si and Ge, suppressing dark current and noise. Moreover, LHP naturally allow the manufacture of crystals with very high atomic numbers, such as CsPbI\({}_{3}\), and the design of application-specific perovskite sensors by means of stoichiometry engineering [70; 71]. Furthermore, the synthesis of LHP is easy and flexible through techniques such as solution processing. The production cost is also low, with an estimated price of \(<0.3\$/\mathrm{cm^{3}}\)[55], namely, at a density of 4 g/cm\({}^{3}\), an inexpensive cost of 75\(\$\)/kg. Finally, LHP can be operated inexpensively at room temperature.
## V Existing metrics as \(x/\gamma\)-ray detectors
Their striking performance as solar cells and their high atomic number1 quickly attracted the interest of the medical imaging community towards lead perovskites [72; 73; 74; 75; 76; 77; 78; 79; 80]. In 2015, MAPbI\({}_{3}\) was proven to detect \(\gamma\)-rays from \({}^{137}\)Cs [81] and first x-ray images were obtained [82]. That same year, the first detection of single photons in LHP was achieved [82], resolving at room temperature the \(E_{\gamma}=59.6\) keV emission from \({}^{241}\)Am with about 35% resolution at full-width-half-maximum (FWHM). In 2016, the reported x-ray sensitivities at a small bias of 0.1 V were already four-fold those of commercial \(\alpha\)-Se detectors [83], the dominant material for x-ray imaging. In 2017, perovskites were successfully integrated with CMOS read-out circuitry, leading to an almost 30-fold improvement in x-ray sensitivity [73]. In 2017, the \({}^{137}\)Cs \(\gamma\)-ray energy spectrum was obtained, reaching a 6.5% energy resolution (ER) for the 662 keV peak [84]. This same metric improved to 3.9% in 2018 [85] using small 10 mm\({}^{2}\) crystals. In 2021, the ER at room temperature for 662 keV photons was improved to 1.4%, using large CsPbBr\({}_{3}\) crystals of 1.5 inches (3.81 cm) in diameter and achieving stable operation of the sensors for over 18 months [86]. In that study, the spectra of multiple radioactive sources were resolved, such as 22 keV \({}^{109}\)Cd x-rays at a FWHM ER of 19.7%. In 2021, promising results were also achieved with CsPbI\({}_{3}\), i.e. 20% ER for 122 keV photons from \({}^{57}\)Co. In 2022, LHP crystals of unprecedented quality were reported, leading to the best x-ray sensitivities yet achieved in any material [87; 88].
Footnote 1: Photon attenuation increases \(\propto Z^{4}\), where Z is the atomic number.
## VI Prospects as CE\(\nu\)NS detectors
Existing measurements with \(x\)-rays and \(\gamma\)-rays are the most abundant tests of lead perovskites as prospective materials for building radiation detectors. In addition, perovskite-based devices have already been demonstrated to be able to detect \(\alpha\)[89] and \(\beta\)[90] particles, as well as neutrons using a hybrid configuration [91]. For the detection of CE\(\nu\)NS it is important to note that nuclear and electronic recoils have different ionization efficiencies. For Ge it has been measured that nuclear recoils generate
Figure 1: Schematic of the perovskite ABX\({}_{3}\) crystal structure.
about a third of the ionization signal of their electronic counterparts [92]. For lead perovskites this fraction, the so-called quenching, is still unknown. Nonetheless, the demonstrated ability of perovskites to resolve energy deposits of a few tens of keV is encouraging. The observation of low energy spectral lines, such as the \({}^{109}\)Cd 22 keV \(x\)-rays with 19.7% FWHM ER, suggests that detecting nuclear recoils with energies below 100 keV might already be possible if quenching values are not drastically larger than those in Ge. If lead perovskites are proven to be successful as nuclear recoil detectors, they have enormous potential as future CE\(\nu\)NS detectors.
Firstly, as reviewed earlier, a large fraction of existing or planned CE\(\nu\)NS detectors use semiconductor materials, typically HPGe, a choice driven by its gold-standard energy resolution. Lead perovskites are rapidly approaching their ultimate energy resolution, expected to be similar to that of HPGe [86]. Consequently, in the medium term LHP might directly compete with HPGe detectors.
Secondly, producing low activity lead perovskites should be possible: e.g. CsPbI\({}_{3}\) contains Cs and I, both used in the first historical detection of CE\(\nu\)NS [3], and archaeological Pb has been recently demonstrated to be adequate for CE\(\nu\)NS detection [93]. Moreover, CsPbI\({}_{3}\) and other lead perovskites are made up of strikingly heavy elements, giving them a significant CE\(\nu\)NS interaction cross-section advantage over mainstream alternative materials, and in particular over Ge. However, the maximum recoil energy decreases as the inverse of \(m_{N}\), and therefore the ability of the detector to identify the recoiling nucleus needs to be considered. To account for it, we introduce the effective cross-section, \(\sigma_{\rm eff}\), as a figure-of-merit, defined as
\[\sigma_{\rm eff}\equiv\int_{E_{\rm threshold}^{\rm recoil}}^{E_{R}^{\rm max }}\frac{d\sigma}{dE_{R}}\,\epsilon\,dE_{R}\,, \tag{2}\]
which can be calculated from Eq. 1 if the detector efficiency, \(\epsilon\), is specified. Using it, in Fig. 2 CsPbI\({}_{3}\) and Ge targets\({}^{ii}\) are directly compared for some neutrino energies, assuming a detector with perfect (null) efficiency above (below) a certain recoil energy threshold, \(E_{\rm threshold}^{\rm recoil}\). As expected, observing interactions in CsPbI\({}_{3}\) would require a smaller \(E_{\rm threshold}^{\rm recoil}\) than in Ge, but once such a threshold is achieved, mildly lowering it results in a large enhancement of the effective cross-section. This trade-off is characterized by the ratio \(\sigma_{\rm eff}^{CsPbI_{3}}/\sigma_{\rm eff}^{Ge}\) presented in Fig. 3 as a function of the neutrino energy and the recoil energy threshold.
Footnote ii: For CsPbI\({}_{3}\), the weighted average (Cs+Pb+3I)/5 is used on the result of Eq. 2.
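The figure-of-merit comparison is straightforward to reproduce numerically. A minimal sketch follows (our own illustration, reusing dsigma_dER and AMU from the previous block, with a step-function efficiency and the per-nucleus average of footnote ii):

```python
from scipy.integrate import quad

def sigma_eff(E_nu, E_thr, Z, A):
    """Eq. (2) with a step-function efficiency: integrate Eq. (1)
    from the recoil threshold up to E_R^max (energies in GeV)."""
    ER_max = 2 * E_nu**2 / (A * AMU)
    if E_thr >= ER_max:
        return 0.0
    return quad(lambda ER: dsigma_dER(E_nu, ER, Z, A), E_thr, ER_max)[0]

def sigma_eff_CsPbI3(E_nu, E_thr):
    # weighted average (Cs + Pb + 3 I) / 5, as in footnote ii
    nuclei = [(55, 133), (82, 207)] + 3 * [(53, 127)]
    return sum(sigma_eff(E_nu, E_thr, Z, A) for Z, A in nuclei) / 5

E_nu, E_thr = 0.030, 5e-6   # 30 MeV neutrinos, 5 keV recoil threshold
ratio = sigma_eff_CsPbI3(E_nu, E_thr) / sigma_eff(E_nu, E_thr, 32, 73)
print(f"sigma_eff(CsPbI3) / sigma_eff(Ge) ~ {ratio:.1f}")
```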
Achieving \(E_{\rm threshold}^{\rm recoil}\) below 20 keV would in general result in an event rate advantage for CsPbI\({}_{3}\) detectors compared to Ge when dealing with neutrino energies of a few tens of MeV, e.g. neutrinos from pion decays at rest, useful to study sterile neutrinos and non-standard interactions (NSI), or neutrinos from the high-energy tail of supernova neutrino bursts. Reaching even lower thresholds might result in additional applications involving neutrinos from other sources, but it will also likely require longer-term R&D. Lastly, perovskites are orders of magnitude cheaper to
Figure 2: CE\(\nu\)NS interaction cross-section per nucleus, \(\sigma\), multiplied by the detector efficiency, \(\epsilon\), as a function of the recoil energy threshold, \(E_{\rm threshold}^{\rm recoil}\). Solid (dashed) lines correspond to CsPbI\({}_{3}\) (Ge).
manufacture and potentially to operate\({}^{iii}\) than existing alternatives, including HPGe. Due to their inexpensive production cost, the budget to build a large perovskite detector would be driven by the number of electronic channels. If volumes of a fraction of 1 cm\({}^{3}\) could be adequately read out by a single channel, i.e. sizes slightly above those in use for x-ray detection, a ton of perovskite might be read out by \(O(10^{6})\) electronic channels, making the technology potentially scalable.
Footnote iii: Perovskites might be able to operate without the need of cryogenic systems, as supported by existing \(x\)-ray data.
## VII Discussion and Outlook
In just one decade lead perovskites have been established as novel materials with transformative potential as radiation detectors due to their unique optoelectronic properties. In this Letter, we have pointed out their potential as neutrino detectors for the first time and discussed their suitability for the study of CE\(\nu\)NS. To bring perovskites to their ultimate detection potential and enable their full range of applications, active R&D is required. Demonstrating and characterizing their ability to detect nuclear recoils in the energy range of interest is an essential milestone.
Lastly, we note that CE\(\nu\)NS and some dark-matter models share the same signal mechanism, i.e. the detection of nuclear recoils. Therefore, any progress on that direction might benefit both the neutrino and the dark-matter research communities.
## VIII Acknowledgments
We acknowledge fruitful discussions with E. Palomares, and valuable feedback from J.I. Collar and L. Pattavina. This project was partially inspired by the ZPro project funded by the Barcelona Institute of Technology (BIST).
|
2305.04448 | LPS-Type Ramanujan Graphs from Definite Quaternion Algebras over
$\mathbb Q$ of Class Number One | In this paper we construct explicit LPS-type Ramanujan graphs from each
definite quaternion algebra over $\mathbb Q$ of class number 1, extending the
constructions of Lubotzky, Phillips, Sarnak, and later Chiu, and answering in
the affirmative a question raised by Jo and Yamasaki. We do this by showing
that for each definite quaternion algebra $\mathcal H$ over $\mathbb Q$ of
class number 1 with maximal order $\mathcal O$, if $G = \mathcal
H^\times/Z(\mathcal H^\times)$ and $p$ is prime such that $G(\mathbb Q_p) \cong
PGL_2(\mathbb Q_p)$, then there exists a congruence $p$-arithmetic subgroup of
$G$ which acts simply transitively on the Bruhat-Tits tree of $G(\mathbb Q_p)$. | Jonah Mendel, Jiahui Yu | 2023-05-08T03:56:28Z | http://arxiv.org/abs/2305.04448v2 | # LPS-type Ramanujan graphs from definite quaternion algebras over \(\mathbb{Q}\) of class number one
###### Abstract.
In this paper we construct explicit LPS-type Ramanujan graphs from each definite quaternion algebra over \(\mathbb{Q}\) of class number \(1\), extending the constructions of [10] and [1], and answering in the affirmative a question raised in [6]. We do this by showing that for each definite quaternion algebra \(\mathcal{H}\) over \(\mathbb{Q}\) of class number \(1\) with maximal order \(\mathcal{O}\), if \(G=\mathcal{H}^{\times}/Z(\mathcal{H}^{\times})\) and \(p\) is prime such that \(G(\mathbb{Q}_{p})\cong\mathrm{PGL}_{2}(\mathbb{Q}_{p})\) then there exists a congruence \(p\)-arithmetic subgroup of \(G\) which acts simply transitively on the Bruhat-Tits tree of \(G(\mathbb{Q}_{p})\).
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Ramanujan Graphs and Algebraic Groups
* 4 Construction of Explicit Maximal Orders and LPS-Type Ramanujan Graphs
* 5 The Ramanujan Property for Certain LPS-Type Graphs
* 6 Further Directions for Finding Congruence Pairs
## 1. Introduction
A \(k\)-regular graph is called a Ramanujan graph if the second largest eigenvalue of its adjacency matrix in absolute value is less than or equal to \(2\sqrt{k-1}\). Ramanujan graphs are optimal expander graphs from a spectral perspective. In their paper [10], Lubotzky, Phillips, and Sarnak explicitly construct a family of degree \(p+1\) Ramanujan graphs, for each odd prime \(p\equiv 1\pmod{4}\) (for the case \(p\equiv 3\pmod{4}\) see [2]). The LPS Ramanujan graphs are Cayley graphs of the group \(\mathrm{PSL}_{2}(\mathbb{F}_{q})\) for primes \(q\equiv 1\pmod{4}\), with respect to a well-chosen generating set coming from the Hamilton quaternion algebra over \(\mathbb{Q}\). In [1], Chiu addresses the case \(p=2\) and gives an explicit construction of a family of \(3\)-regular Ramanujan graphs, by using the definite quaternion algebra over \(\mathbb{Q}\) with discriminant \(13\).
In [6], Jo and Yamasaki generalize the construction of LPS and Chiu's Ramanujan graphs. More precisely, they consider any definite quaternion algebra over \(\mathbb{Q}\) of class number one, make use of Ibukiyama's [5] explicit construction of maximal orders for all definite quaternion algebras, and construct Cayley graphs of \(\mathrm{PSL}_{2}(\mathbb{F}_{q})\) with generating sets coming from these quaternion algebras, which they call LPS-type graphs.
The question of whether these LPS-type graphs are indeed Ramanujan was raised in [6], whose authors were able to answer it only in the special case of the definite quaternion algebra over \(\mathbb{Q}\) with discriminant \(13\). In this
paper we answer this question affirmatively, showing that for any definite quaternion algebra over \(\mathbb{Q}\) of class number one we can construct from it LPS-type Ramanujan graphs.
**Theorem 1.1**.: _For any \(P\in\{2,3,5,7,13\}\), let \(\mathcal{H}\) be a quaternion algebra over \(\mathbb{Q}\) with discriminant \(P\) and let \(\mathcal{O}\) be the maximal order of \(\mathcal{H}\). For almost any prime \(p\), there is a specific subset \(S\subseteq\{\alpha\in\mathcal{O}:N(\alpha)=p\}\) of size \(p+1\), such that the LPS-type graphs of \(\mathrm{PSL}_{2}(\mathbb{F}_{q})\), for infinitely many primes \(q\), with respect to the generating set \(S\pmod{q}\), are Ramanujan graphs._
For the precise construction of these Ramanujan graphs, see Section 5. The key ingredient in proving Theorem 1.1 is to find congruence \(p\)-arithmetic subgroups of \(G=\mathcal{H}^{\times}/Z(\mathcal{H}^{\times})\), where \(p\nmid\mathrm{disc}(\mathcal{H})\), which act simply transitively on the vertices of the Bruhat-Tits tree of \(G(\mathbb{Q}_{p})\cong\mathrm{PGL}_{2}(\mathbb{Q}_{p})\). The fact that these are congruence subgroups is crucial for the proof that the graphs are Ramanujan, see for example [9, Theorem 7.3.1].
Therefore, our second main result, which might be of independent interest, is the construction of such congruence subgroups which act simply transitively on Bruhat-Tits trees.
**Theorem 1.2**.: _Let \(\mathcal{H}\) be a quaternion algebra over \(\mathbb{Q}\) of class number one and let \(\mathcal{O}\) be the maximal order of \(\mathcal{H}\). For any commutative ring with unity \(A\), define the group \(\mathcal{G}(A)=(A\otimes_{\mathbb{Z}}\mathcal{O})^{\times}/A^{\times}\). Then there exists \(m\in\mathbb{N}\) and \(H\leq\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\) such that for any prime \(p\nmid m\cdot\mathrm{disc}(\mathcal{H})\), the congruence subgroup_
\[\Lambda^{p}=\{g\in\mathcal{G}(\mathbb{Z}[1/p]):g\pmod{m}\in H\}\leq\mathcal{G }(\mathbb{Q}_{p})\]
_acts simply transitively on the vertices of the Bruhat-Tits tree of \(\mathcal{G}(\mathbb{Q}_{p})\cong\mathrm{PGL}_{2}(\mathbb{Q}_{p})\)._
Organization of the paper: In section 2, we collect definitions and results on quaternion algebras, their orders and Bruhat-Tits trees. In section 3, we explain the construction of Ramanujan graphs as Cayley graphs of \(\mathrm{PSL}_{2}(\mathbb{F}_{q})\). In section 4, we describe the construction of LPS-type graphs and show how Theorem 1.1 is obtained from Theorem 1.2. In section 5, we prove Theorem 1.2, and show that the definite quaternion algebra over \(\mathbb{Q}\) of discriminant \(3\) does not contain a simply transitive congruence subgroup of level \(m\), for any prime \(m\).
## 2. Preliminaries
**Notation**.: _Let \(p\) be a prime number and \(R\) a ring. We will use the following conventions:_
* \(\mathbb{F}_{p}\) _denotes the field of_ \(p\) _elements._
* \(\mathbb{Z}_{p}\) _denotes the set of_ \(p\)_-adic integers and_ \(\mathbb{Q}_{p}\) _the set of_ \(p\)_-adic rationals._
* \(R^{\times}\) _denotes the multiplicative group of units in_ \(R\)_._
* \(\mathcal{M}_{n}(R)\) _denotes the ring of_ \(n\times n\) _matrices over_ \(R\)_._
* \(\mathrm{GL}_{n}(R)=\mathcal{M}_{n}(R)^{\times}\) _and_ \(\mathrm{SL}_{n}(R)=\mathrm{Ker}(\det:\mathrm{GL}_{n}(R)\to R^{\times})\)_._
* \(\mathrm{PGL}_{2}(R)=\mathrm{GL}_{2}(R)/R^{\times}\) _and_ \(\mathrm{PSL}_{2}(R)=\mathrm{SL}_{2}(R)/\left\{\pm 1\right\}\)_._
* \(S_{n},A_{n}\) _are the symmetric and alternating group on_ \(n\) _elements respectively._
* \(C_{n}\) _is the cyclic group of_ \(n\) _elements._
* \(D_{n}\) _is the dihedral group of_ \(2n\) _elements._
* \(\{1\}\) _is the trivial group._
* _For a finite group_ \(G\)_,_ \(C_{G}\) _denotes the conjugacy classes of subgroups of_ \(G\)_._
* _Let a group_ \(G\) _act on a set_ \(X\)_. For_ \(x\in X\)_,_ \(\mathrm{Stab}_{G}(x)=\{g\in G:gx=x\}\)_._
### Quaternion Algebras
Let \(k\) be a field of characteristic \(\neq 2\) and \(a,b\in k^{\times}\). The quaternion algebra \(\mathcal{H}_{a,b}(k)\) is an algebra over \(k\) generated by the elements \(1,i,j,ij\) satisfying
\[ij+ji=0,\quad i^{2}=a,\quad j^{2}=b.\]
Let \(\alpha=x+yi+zj+wij\in\mathcal{H}_{a,b}(k)\). We define the conjugate and norm of \(\alpha\) to be
\[\overline{\alpha}=x-yi-zj-wij,\quad\text{and}\quad N(\alpha)=\alpha\overline{ \alpha}=x^{2}-ay^{2}-bz^{2}-abw^{2}\]
respectively. We write \(\mathcal{H}_{a,b}\) to denote quaternion algebras over \(\mathbb{Q}\).
We say that a quaternion algebra \(\mathcal{H}_{a,b}(k)\) splits if \(\mathcal{H}_{a,b}(k)\cong\mathcal{M}_{2}(k)\). Otherwise we say that \(\mathcal{H}_{a,b}(k)\) ramifies. Similarly, we say that a quaternion algebra \(\mathcal{H}_{a,b}\) over \(\mathbb{Q}\) splits at a prime \(p\) if \(\mathcal{H}_{a,b}\otimes\mathbb{Q}_{p}\cong\mathcal{M}_{2}(\mathbb{Q}_{p})\). Otherwise, we say that \(\mathcal{H}_{a,b}\) ramifies at \(p\). We say that a quaternion algebra \(\mathcal{H}_{a,b}\) is definite if \(\mathcal{H}_{a,b}\otimes\mathbb{R}\) is a division algebra. The following proposition gives us helpful facts about splitting.
**Proposition 2.1**.: _[_13_, Corollary 7.1.2, Theorem 14.1.3]_
1. _A quaternion algebra_ \(\mathcal{H}_{a,b}(k)\) _that is not a division ring splits._
2. _For any finite field_ \(\mathbb{F}_{q}\)_, all quaternion algebras over_ \(\mathbb{F}_{q}\) _split._
3. _Two definite quaternion algebras_ \(\mathcal{H}_{a,b}\) _and_ \(\mathcal{H}_{a^{\prime},b^{\prime}}\) _over_ \(\mathbb{Q}\) _are isomorphic if and only if they ramify at exactly the same primes._
4. _A quaternion algebra_ \(\mathcal{H}\) _over_ \(\mathbb{Q}\) _ramifies at only finitely many primes._
The product of the finitely many primes at which \(\mathcal{H}_{a,b}=\mathcal{H}_{a,b}(\mathbb{Q})\) ramifies is called the discriminant of \(\mathcal{H}_{a,b}\) and is denoted \(\operatorname{disc}(\mathcal{H}_{a,b})\).
### Orders of Quaternion Algebras
An order of a quaternion algebra \(\mathcal{H}_{a,b}\) is a rank 4 module over \(\mathbb{Z}\) that is also a subring of \(\mathcal{H}_{a,b}\). An order \(\mathcal{O}\) is called a maximal order if it is not properly contained in an order of \(\mathcal{H}_{a,b}\). A right ideal of an order \(\mathcal{O}\) is a module \(M\) satisfying \(M\mathcal{O}=M\). The class number of an order \(\mathcal{O}\) is the number of inequivalent right ideals of \(\mathcal{O}\), where two ideals \(M,N\) are equivalent if \(M=\alpha N\) for some \(\alpha\in\mathcal{H}_{a,b}^{\times}\). It turns out that all maximal orders of a quaternion algebra have the same class number [13, Chapter 17] and we define the class number of the quaternion algebra to be the class number of its maximal orders. The following theorem provides a way to calculate the class number of a quaternion algebra.
**Theorem 2.2**.: _[_4_, Theorem 6]_ _The class number \(h\) of a definite quaternion algebra \(\mathcal{H}\) is given by_
\[h=\frac{1}{12}\prod_{p|\operatorname{disc}(\mathcal{H})}(p-1)+\frac{1}{4}\prod _{p|\operatorname{disc}(\mathcal{H})}\left(1-\left(\frac{-4}{p}\right)\right)+ \frac{1}{3}\prod_{p|\operatorname{disc}(\mathcal{H})}\left(1-\left(\frac{-3}{p }\right)\right)\]
_where \(\left(\frac{n}{p}\right)\) is the Legendre symbol defined as_
\[\left(\frac{a}{p}\right)=\begin{cases}1&\text{if $a$ is a quadratic residue modulo $p$}\\ -1&\text{if $a$ is not a quadratic residue modulo $p$}\\ 0&\text{$p\mid a$}.\end{cases}\]
**Corollary 2.3**.: _The class number \(h\) of \(\mathcal{H}\) is \(1\) if and only if \(\operatorname{disc}(\mathcal{H})\in\{2,3,5,7,13\}\)._
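Theorem 2.2 makes Corollary 2.3 easy to verify by direct computation. A minimal Python sketch (our own; it uses the Kronecker symbol so that the ramified prime \(2\) is handled, and exact rational arithmetic to avoid rounding):

```python
from fractions import Fraction

def kronecker(a, p):
    """Kronecker symbol (a/p) for a prime p (Legendre symbol if p is odd)."""
    if p == 2:
        return 0 if a % 2 == 0 else (1 if a % 8 in (1, 7) else -1)
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def class_number(ramified_primes):
    """Class number formula of Theorem 2.2 for a definite quaternion
    algebra over Q with squarefree discriminant."""
    h = Fraction(1, 12)
    for p in ramified_primes:
        h *= p - 1
    t4, t3 = Fraction(1, 4), Fraction(1, 3)
    for p in ramified_primes:
        t4 *= 1 - kronecker(-4, p)
        t3 *= 1 - kronecker(-3, p)
    return h + t4 + t3

for p in [2, 3, 5, 7, 11, 13, 17]:
    print(p, class_number([p]))   # h = 1 exactly for p in {2, 3, 5, 7, 13}
```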
A class number one quaternion algebra over \(\mathbb{Q}\) has only one maximal order up to conjugation. When the quaternion algebra is definite and has class number 1, the following theorem gives us a way to count the number of elements whose norm is a prime power, unique up to units, in the maximal ideal.
**Proposition 2.4**.: _[_1_, Theorem 3.2]_ _Let \(\mathcal{H}\) be a definite quaternion algebra of class number 1 over \(\mathbb{Q}\) which splits at \(p\). Let \(\mathcal{O}\) be a maximal order of \(\mathcal{H}\). Then,_
\[\#\left\{\alpha\in\mathcal{O}:N(\alpha)=p^{k}\right\}/\mathcal{O}^{\times}=1+ p+\cdots+p^{k}.\]
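As a sanity check of Proposition 2.4, one can count lattice points of a norm form directly. The sketch below uses, for concreteness, the Hurwitz order \(\mathbb{Z}\oplus\mathbb{Z}i\oplus\mathbb{Z}j\oplus\mathbb{Z}\frac{1+i+j+k}{2}\), the maximal order of the algebra ramified only at \(2\) (so it splits at every odd prime and has \(\#\mathcal{O}^{\times}=24\)); this example is ours, not taken from the text above.

```python
from itertools import product
from math import isqrt

def hurwitz_norm(x, y, z, w):
    # N(x + y*i + z*j + w*(1+i+j+k)/2); completing squares gives
    # N = (x+w/2)^2 + (y+w/2)^2 + (z+w/2)^2 + (w/2)^2.
    return x*x + y*y + z*z + w*w + (x + y + z)*w

for p, k in ((3, 1), (5, 1), (3, 2)):
    n = p**k
    B = 2*isqrt(n) + 1   # each coordinate of a norm-n element is below 2*sqrt(n)+1
    raw = sum(1 for v in product(range(-B, B + 1), repeat=4)
              if hurwitz_norm(*v) == n)
    # Proposition 2.4: raw / #O^x = 1 + p + ... + p^k.
    print(p, k, raw // 24, (p**(k + 1) - 1)//(p - 1))
```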
### The Bruhat-Tits Tree
A \(\mathbb{Z}_{p}\)-lattice in \(\mathbb{Q}_{p}\oplus\mathbb{Q}_{p}\) is a rank \(2\) \(\mathbb{Z}_{p}\)-submodule of \(\mathbb{Q}_{p}\oplus\mathbb{Q}_{p}\). Given two \(\mathbb{Z}_{p}\)-lattices \(L_{1},L_{2}\) in \(\mathbb{Q}_{p}\oplus\mathbb{Q}_{p}\), if \(L_{1}=cL_{2}\) for some \(c\in\mathbb{Q}_{p}^{\times}\), then we say that \(L_{1}\) and \(L_{2}\) are homothetic, and we write \(L_{1}\sim L_{2}\). Let \(\widetilde{V}\) be the set of \(\mathbb{Z}_{p}\)-lattices in \(\mathbb{Q}_{p}\oplus\mathbb{Q}_{p}\). It is clear that the homothety relation \(\sim\) defines an equivalence relation on \(\widetilde{V}\). We define the set of homothety classes of lattices to be \(V=\widetilde{V}/\sim\). Let \(\mathcal{T}_{p+1}\) be the graph whose vertices are elements of \(V\). Given two vertices \([L]\) and \([M]\), there is an edge between \([L]\) and \([M]\) if and only if there exists \(L_{1}\in[L]\) and \(L_{2}\in[M]\) such that
\[pL_{2}\subsetneq L_{1}\subsetneq L_{2}.\]
**Theorem 2.5**.: _[_12_, Chapter II Theorem 1]_ _The graph \(\mathcal{T}_{p+1}\) defined above is a tree of degree \(p+1\)._
We call \(\mathcal{T}_{p+1}\) the Bruhat-Tits tree of degree \(p+1\). The following proposition relates the Bruhat-Tits tree to the groups \(\mathrm{PGL}_{2}(\mathbb{Q}_{p})\) and \(\mathrm{PGL}_{2}(\mathbb{Z}_{p})\).
**Proposition 2.6**.: _[_12_, Chapter II, Sections 3, 4]_ _The group \(\mathrm{PGL}_{2}(\mathbb{Q}_{p})\) acts transitively on the Bruhat-Tits tree \(\mathcal{T}_{p+1}\) and the stabilizer of the root \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\) is \(\mathrm{PGL}_{2}(\mathbb{Z}_{p})\)._
## 3. Ramanujan Graphs and Algebraic Groups
### The Algebraic Group \(\mathcal{G}\)
Fix a prime \(p\). Let \(\mathcal{H}=\mathcal{H}_{a,b}(\mathbb{Q})\) be a definite class number \(1\) quaternion algebra over \(\mathbb{Q}\) which splits at \(p\). Let \(\mathcal{O}=\mathbb{Z}\oplus\mathbb{Z}\omega_{1}\oplus\mathbb{Z}\omega_{2} \oplus\mathbb{Z}\omega_{3}\) be the unique (up to conjugacy) maximal order of \(\mathcal{H}\). Let \(N:\mathcal{H}\to\mathbb{Q}\) be the norm. Note that \(N(\mathcal{O})\subseteq\mathbb{Z}\) and
\[\mathcal{O}^{\times}=\left\{\alpha\in\mathcal{O}:N(\alpha)=\pm 1\right\}.\]
For any commutative ring with unity \(A\), define
\[\mathcal{G}(A)=\mathcal{G}_{\mathcal{O}}(A):=\left(\mathcal{O}\otimes A \right)^{\times}/A^{\times}.\]
When the context is clear, we drop the subscript \(\mathcal{O}\). Note that \(\mathcal{G}(\mathbb{Z})=\mathcal{O}^{\times}/\left\{\pm 1\right\}\). Furthermore, \(\mathbb{Z}[1/p]^{\times}=\left\{\pm p^{k}:k\in\mathbb{Z}\right\}\). Therefore,
\[\mathcal{G}(\mathbb{Z}[1/p])\cong\left\{\gamma\in\mathcal{O}:N(\gamma)=p^{k}, \ k\in\mathbb{Z}\right\}/\left\{\pm p^{k}:k\in\mathbb{Z}\right\} \tag{1}\]
The following proposition relates \(\mathcal{G}(\mathbb{Z}[1/p])\) to the Bruhat-Tits tree \(\mathcal{T}_{p+1}\).
**Proposition 3.1**.: _The group \(\mathcal{G}(\mathbb{Z}[1/p])\) acts transitively on the vertices of \(\mathcal{T}_{p+1}\) and \(\mathcal{G}(\mathbb{Z})\) is the stabilizer of \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\)._
Proof.: Because \(\mathcal{H}\) splits at \(p\), there is an isomorphism \(\psi:\mathcal{H}\otimes\mathbb{Q}_{p}\to\mathcal{M}_{2}(\mathbb{Q}_{p})\) such that \(N(\alpha)=\det\psi(\alpha)\) for \(\alpha\in\mathcal{H}\). The map \(\psi\) induces an embedding, \(\mathcal{G}(\mathbb{Z}[1/p])\hookrightarrow\mathrm{PGL}_{2}(\mathbb{Q}_{p})\). Matrices of determinant \(p\) map any vertex of \(\mathcal{T}_{p+1}\) to a neighbor. Thus, elements in \(\mathcal{G}(\mathbb{Z}[1/p])\) of norm \(p\) map any vertex to a neighbor. By Proposition 2.4, there exist \(p+1\) elements in \(\mathcal{G}(\mathbb{Z}[1/p])\) that have norm \(p\). They map each vertex of \(\mathcal{T}_{p+1}\) to the \(p+1\) neighbors of it. Thus, we can reach any vertex of \(\mathcal{T}_{p+1}\) by a product of elements of norm \(p\) acting on the origin. Therefore, \(\mathcal{G}(\mathbb{Z}[1/p])\) acts transitively on \(\mathcal{T}_{p+1}\).
We have that \(\mathbb{Z}_{p}^{\times}\cap\left\{\pm p^{k}:k\in\mathbb{Z}\right\}=\left\{\pm 1\right\}\). Elements in \(\mathrm{GL}_{2}(\mathbb{Z}_{p})\) have determinants in \(\mathbb{Z}_{p}^{\times}\). Thus, \(\mathcal{G}(\mathbb{Z}[1/p])\cap\mathrm{PGL}_{2}(\mathbb{Z}_{p})=\mathcal{G}( \mathbb{Z})\). Therefore, when \(\mathcal{G}(\mathbb{Z}[1/p])\) acts on \(\mathcal{T}_{p+1}\), the stabilizer of the root \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\) is \(\mathcal{G}(\mathbb{Z})\).
We want to find a subgroup \(\Lambda\) of \(\mathcal{G}(\mathbb{Z}[1/p])\) which acts simply transitively on \(\mathcal{T}_{p+1}\). Let \(m\) be a positive integer. We define the reduction modulo \(m\) map to be
\[\bar{\varphi}_{m}:\mathcal{O}\to\mathcal{O}\otimes\mathbb{Z}/m\mathbb{Z},\quad w +x\omega_{1}+y\omega_{2}+z\omega_{3}\mapsto\bar{w}+\bar{x}\omega_{1}+\bar{y} \omega_{2}+\bar{z}\omega_{3}\]
where \(\bar{a}\) is the reduction modulo \(m\) of the integer \(a\). We know from (1) that elements of \(\mathcal{G}(\mathbb{Z}[1/p])\) are represented by elements of \(\mathcal{O}\), so \(\bar{\varphi}_{m}\) induces a map,
\[\varphi_{m}:\mathcal{G}(\mathbb{Z}[1/p])\to\mathcal{G}(\mathbb{Z}/m\mathbb{Z}).\]
**Definition 3.2**.: Let \(G\) be a group and let \(H,K\) be subgroups of \(G\). We call \(K\) a right complement of \(H\) if
\[H\cap K=\left\{1\right\},\quad H\cdot K=G.\]
**Definition 3.3**.: Let \((m,H)\) be a pair where \(m\) is a positive integer and \(H\) is a subgroup of \(\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\) such that
1. \(\mathcal{G}(\mathbb{Z})\) embeds into \(\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\) via \(\varphi_{m}\),
2. \(H\) is a right complement of \(\varphi_{m}(\mathcal{G}(\mathbb{Z}))\) in \(\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\).
Then we call \((m,H)\) a **congruence pair** of \(\mathcal{O}\). If \(p\nmid m\), we define the following congruence subgroup of \(\mathcal{G}(\mathbb{Z}[1/p])\),
\[\Lambda^{p}=\Lambda^{p}_{\mathcal{O}}(m,H):=\left\{\gamma\in\mathcal{G}( \mathbb{Z}[1/p]):\varphi_{m}(\gamma)\in H\right\}=\varphi_{m}^{-1}(H)\]
with a subset
\[\mathcal{S}^{p}=\mathcal{S}^{p}_{\mathcal{O}}(m,H)=\left\{\alpha\in\Lambda^{p }:N(\alpha)=p\right\}.\]
When the context is clear, we write \(\Lambda^{p}\) and \(\mathcal{S}^{p}\).
**Proposition 3.4**.: _Let \((m,H)\) be a congruence pair of \(\mathcal{O}\). Then \(\mathcal{G}(\mathbb{Z})\) and \(\Lambda^{p}\) are complementary subgroups of \(\mathcal{G}(\mathbb{Z}[1/p])\)._
Proof.: We have that \(\Lambda^{p}=\varphi_{m}^{-1}(H)\), so \(\Lambda^{p}\) is a subgroup of \(\mathcal{G}(\mathbb{Z}[1/p])\). We can deduce that \(\varphi_{m}^{-1}(H)\cap\mathcal{G}(\mathbb{Z})=\left\{1\right\}\) from the assumptions that \(H\cap\varphi_{m}(\mathcal{G}(\mathbb{Z}))=\left\{1\right\}\) and that \(\varphi_{m}|_{\mathcal{G}(\mathbb{Z})}\) is injective. Let \(\gamma\in\mathcal{G}(\mathbb{Z}[1/p])\). We have that \(\varphi_{m}(\mathcal{G}(\mathbb{Z}))\cdot H=\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\), so we can choose \(\alpha\in\mathcal{G}(\mathbb{Z})\leq\mathcal{G}(\mathbb{Z}[1/p])\) and \(\beta\in H\) such that \(\varphi_{m}(\alpha)\beta=\varphi_{m}(\gamma)\). We have that
\[\varphi_{m}(\alpha^{-1}\gamma)=\varphi_{m}(\alpha)^{-1}\varphi_{m}(\gamma)=\varphi_{m}(\alpha)^{-1}(\varphi_{m}(\alpha)\beta)=\beta\in H.\]
Therefore, \(\alpha^{-1}\gamma\in\Lambda^{p}\). Thus, \(\alpha(\alpha^{-1}\gamma)=\gamma\), so \(\gamma\in\mathcal{G}(\mathbb{Z})\cdot\Lambda^{p}\). Therefore, \(\mathcal{G}(\mathbb{Z})\cdot\Lambda^{p}=\mathcal{G}(\mathbb{Z}[1/p])\).
**Proposition 3.5**.: _Let \((m,H)\) be a congruence pair of \(\mathcal{O}\). Then the group \(\Lambda^{p}\) acts simply transitively on the Bruhat-Tits tree \(\mathcal{T}_{p+1}\)._
Proof.: We first show that \(\Lambda^{p}\) acts transitively on \(\mathcal{T}_{p+1}\). Let \(L\) be any \(\mathbb{Z}_{p}\)-lattice. There exists \(\gamma\in\mathcal{G}(\mathbb{Z}[1/p])\) such that \(\gamma\cdot[\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]=[L]\). By Proposition 3.4, we can write \(\gamma=\lambda\alpha\) for \(\alpha\in\mathcal{G}(\mathbb{Z})\) and \(\lambda\in\Lambda^{p}\). We have that
\[[L]=\gamma\cdot[\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]=(\lambda\alpha)\cdot[ \mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]=\lambda\cdot(\alpha\cdot[\mathbb{Z}_{p} \oplus\mathbb{Z}_{p}])=\lambda\cdot[\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\]
because \(\mathcal{G}(\mathbb{Z})\) is the stabilizer of \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\). Therefore, \(\Lambda^{p}\) acts transitively on \(\mathcal{T}_{p+1}\). Lastly, \(\Lambda^{p}\) acts simply on \(\mathcal{T}_{p+1}\) because
\[\operatorname{Stab}_{\Lambda^{p}}([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}])=\Lambda^{p}\cap\operatorname{Stab}_{\mathcal{G}(\mathbb{Z}[1/p])}([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}])=\Lambda^{p}\cap\mathcal{G}(\mathbb{Z})=\left\{1\right\}.\]
**Proposition 3.6**.: _For any congruence pair \((m,H)\), \(\#\mathcal{S}^{p}=p+1\) and \(\mathcal{S}^{p}\) generates \(\Lambda^{p}\)._
Proof.: The group \(\Lambda^{p}\) acts transitively on the vertices of \(\mathcal{T}_{p+1}\). Therefore, there exist at least \(p+1\) elements of \(\Lambda^{p}\) which take \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\) to each of its \(p+1\) neighbors respectively. Each of these elements has norm \(p\). Let \(\alpha\) and \(\beta\) be distinct elements of norm \(p\) such that \(\alpha\cdot[\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]=\beta\cdot[\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\). Then \(\alpha\beta^{-1}\) is a nontrivial element of \(\Lambda^{p}\) which fixes \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\). This is a contradiction because \(\Lambda^{p}\) acts simply on the vertices of \(\mathcal{T}_{p+1}\). Therefore, there are exactly \(p+1\) elements in \(\Lambda^{p}\) of norm \(p\). We now want to show that \(\mathcal{S}^{p}\) generates \(\Lambda^{p}\). Fix a vertex \([L]\) of \(\mathcal{T}_{p+1}\) of distance \(d\) from \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\). For every neighbor \([M]\) of \([L]\), there exists a unique element of \(\mathcal{S}^{p}\) which takes \([L]\) to \([M]\). Therefore, there exists a unique word of length \(d\) in the symbols of \(\mathcal{S}^{p}\) which takes \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\) to \([L]\). Therefore, there is an isomorphism of graphs \(\mathcal{T}_{p+1}\cong\operatorname{Cay}(\Lambda^{p},\mathcal{S}^{p})\) with \([\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}]\) taken as the identity. Because \(\mathcal{T}_{p+1}\) is connected, we have that \(\mathcal{S}^{p}\) generates \(\Lambda^{p}\).
The following proposition characterizes the structure of \(\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\). This proposition also helps us find congruence pairs \((m,H)\).
**Proposition 3.7**.: _Let \(q\) be a prime at which \(\mathcal{H}\) splits and let \(m=q^{k}\) for some \(k\geq 1\). Then_
\[\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\cong\operatorname{PGL}_{2}(\mathbb{Z}/m\mathbb{Z})\]
Proof.: Since \(\mathcal{H}\) splits at \(q\) and \(\mathcal{O}\) is a maximal order of \(\mathcal{H}\), we have \(\mathcal{O}\otimes\mathbb{Z}_{q}\cong\mathcal{M}_{2}(\mathbb{Z}_{q})\). Therefore, we have the following isomorphisms of rings,
\[\mathcal{O}\otimes\mathbb{Z}/m\mathbb{Z}\cong\mathcal{O}/m\mathcal{O}\cong(\mathcal{O}\otimes\mathbb{Z}_{q})/m(\mathcal{O}\otimes\mathbb{Z}_{q})\cong\mathcal{M}_{2}(\mathbb{Z}_{q})/m\mathcal{M}_{2}(\mathbb{Z}_{q})\cong\mathcal{M}_{2}(\mathbb{Z}/m\mathbb{Z}).\]
Since \(\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\cong(\mathcal{O}\otimes\mathbb{Z}/m\mathbb{Z})^{\times}/(\mathbb{Z}/m\mathbb{Z})^{\times}\), it follows that \(\mathcal{G}(\mathbb{Z}/m\mathbb{Z})\cong\operatorname{PGL}_{2}(\mathbb{Z}/m\mathbb{Z})\).
**Proposition 3.8**.: _Let \(q\neq p\) be a prime such that \(\mathcal{H}\) splits at \(q\). Let_
\[\Lambda^{p}(q)=\operatorname{Ker}(\varphi_{q})\cap\Lambda^{p},\]
_i.e., the kernel of \(\varphi_{q}\) restricted to \(\Lambda^{p}\). Then we have the following isomorphism of groups,_
\[\Lambda^{p}(q)\backslash\Lambda^{p}\cong\begin{cases}\operatorname{PGL}_{2}( \mathbb{Z}/q\mathbb{Z})&\left(\frac{p}{q}\right)=-1\\ \operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})&\left(\frac{p}{q}\right)=1. \end{cases}\]
Proof.: By Proposition 3.7, \(\varphi_{q}(\Lambda^{p})\leq\operatorname{PGL}_{2}(\mathbb{Z}/q\mathbb{Z})\). We want to show that \(\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\leq\varphi_{q}(\Lambda^{p})\), for which we use Strong Approximation. By [11, Lemma 1.2], the reduction modulo \(q\) map induces a surjection
\[\left\{\alpha\in\mathcal{G}(\mathbb{Z}[1/p]):N(\alpha)=1\right\}\to\left\{ \alpha\in\mathcal{G}(\mathbb{Z}/q\mathbb{Z}):N(\alpha)=1\right\}.\]
Let \(\beta\in\mathcal{G}(\mathbb{Z}/q\mathbb{Z})\cong\operatorname{PGL}_{2}(\mathbb{Z}/q\mathbb{Z})\) with \(N(\beta)\equiv 1\pmod{q}\). By [11, Lemma 1.1], we can find \((a_{0},a_{1},a_{2},a_{3})\in\mathbb{Z}^{4}\) such that \(\alpha=a_{0}+a_{1}\omega_{1}+a_{2}\omega_{2}+a_{3}\omega_{3}\) is of norm \(p^{k}\) where \(p^{k}\equiv 1\pmod{q}\) and \(\varphi_{q}(\alpha)=\beta\). Therefore,
\[\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\leq\Lambda^{p}(q)\backslash \Lambda^{p}\cong\varphi_{q}(\Lambda^{p}).\]
Let \(\alpha\in\Lambda^{p}\). Then, \(N(\alpha)=p^{k}\) for some \(k\in\mathbb{Z}\). Suppose that \(\left(\frac{p}{q}\right)=1\). Then there exists \(x\in\mathbb{Z}\) such that \(q\nmid x\) and \(p\equiv x^{2}\pmod{q}\). Hence
\[\det\varphi_{q}(\alpha)\equiv N(\alpha)=p^{k}\equiv(x^{k})^{2}\pmod{q}\]
is a square modulo \(q\), and an element of \(\operatorname{PGL}_{2}(\mathbb{Z}/q\mathbb{Z})\) lies in \(\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\) exactly when its determinant is a square.
Therefore, \(\varphi_{q}(\alpha)\in\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\). Now suppose that \(\left(\frac{p}{q}\right)=-1\), and let \(\alpha\in\Lambda^{p}\) with \(N(\alpha)=p\). Then \(\det\varphi_{q}(\alpha)\equiv p\pmod{q}\) is not a square modulo \(q\) (no rescaling \(\alpha\mapsto y\alpha\) can change this, since it multiplies the determinant by the square \(y^{2}\)). Therefore, \(\varphi_{q}(\alpha)\not\in\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\) and \(\varphi_{q}(\Lambda^{p})\supsetneq\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\). But \(\operatorname{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\) is an index \(2\) subgroup of \(\operatorname{PGL}_{2}(\mathbb{Z}/q\mathbb{Z})\), so \(\Lambda^{p}(q)\backslash\Lambda^{p}\cong\operatorname{PGL}_{2}(\mathbb{Z}/q\mathbb{Z})\).
Let \(q\) be an odd prime such that \(\left(\frac{p}{q}\right)=1\). Let \(\mathcal{S}^{p,q}=\varphi_{q}(\mathcal{S}^{p})\). Note that when \(\varphi_{q}\) is not injective on \(\mathcal{S}^{p}\), we consider \(\mathcal{S}^{p,q}\) to be a multi-set where each element has multiplicity of the size of its pre-image in \(\mathcal{S}^{p}\). As shown above, we can consider this set to be a generating set of \(\operatorname{PSL}_{2}(\mathbb{F}_{q})\). Therefore, we have the Cayley graph
\[\mathcal{X}^{p,q}=\mathcal{X}^{p,q}_{\mathcal{O}}(m,H)=\operatorname{Cay}( \operatorname{PSL}_{2}(\mathbb{F}_{q}),\mathcal{S}^{p,q}).\]
**Proposition 3.9**.: _When \(q\) is an odd prime such that \(\left(\frac{p}{q}\right)=1\), we have an isomorphism of graphs_
\[\mathcal{X}^{p,q}\cong\Lambda^{p}(q)\backslash\mathcal{T}_{p+1}.\]
Proof.: As a subgroup of \(\Lambda^{p}\), the group \(\Lambda^{p}(q)\) acts simply on \(\mathcal{T}_{p+1}\). By Proposition 3.8, the orbits of the action are in 1-1 correspondence with \(\operatorname{PSL}_{2}(\mathbb{F}_{q})\). We can thus view the vertices of the quotient graph as elements of \(\operatorname{PSL}_{2}(\mathbb{F}_{q})\). We see this in the sequence of bijections of sets,
\[\operatorname{PSL}_{2}(\mathbb{F}_{q})\cong\Lambda^{p}(q)\backslash\Lambda^{p }\cong\Lambda^{p}(q)\backslash\operatorname{PGL}_{2}(\mathbb{Q}_{p})/ \operatorname{PGL}_{2}(\mathbb{Z}_{p})\cong\Lambda^{p}(q)\backslash V( \mathcal{T}_{p+1})\]
where \(V(\mathcal{T}_{p+1})\) are the vertices of \(\mathcal{T}_{p+1}\). By Proposition 3.6, \(\mathcal{S}^{p}\) generates \(\Lambda^{p}\) so \(\mathcal{S}^{p,q}\) generates the quotient graph.
**Theorem 3.10**.: _[_9_, Theorem 7.3.1]_ _Let \(\mathcal{O}\) be a maximal order of class number 1 which splits at a prime \(p\). Let \(q\neq p\) be a prime such that \(\left(\frac{p}{q}\right)=1\). Let \((m,H)\) be a congruence pair of \(\mathcal{O}\). Then the Cayley graph \(\mathcal{X}_{\mathcal{O}}^{p,q}(m,H)\) is a Ramanujan graph._
Note that \(\#\operatorname{PSL}_{2}(\mathbb{F}_{q})=q(q-1)(q+1)/2\) so the graph \(\mathcal{X}_{\mathcal{O}}^{p,q}\) has \(q(q-1)(q+1)/2\) vertices.
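This vertex count is easy to confirm by direct enumeration; a small sketch for \(q=5\):

```python
from itertools import product

q = 5
# Enumerate SL_2(Z/qZ) and quotient by {+I, -I}; the number of classes
# should be q(q-1)(q+1)/2, the number of vertices of X^{p,q}.
classes = set()
for a, b, c, d in product(range(q), repeat=4):
    if (a*d - b*c) % q == 1:
        M = (a, b, c, d)
        classes.add(min(M, tuple((-x) % q for x in M)))
print(len(classes), q*(q - 1)*(q + 1)//2)   # 60 60
```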
## 4. Construction of Explicit Maximal Orders and LPS-Type Ramanujan Graphs
In this section, we explain Jo and Yamasaki's construction of LPS-type graphs as in [6]. In the construction, we need a quaternion algebra of class number 1 and a maximal order. In [5], Ibukiyama explicitly constructs a maximal order for any definite quaternion algebra over \(\mathbb{Q}\). We discuss the special case in which the quaternion algebra ramifies at exactly one prime.
**Proposition 4.1** ([5]).: _Let \(P\) be a prime number. Take a prime \(Q\) such that_
\[Q\equiv 3\mod 8,\quad\left(\frac{-Q}{P}\right)=-1\text{ unless }P=2. \tag{2}\]
_Let \(T\) be an integer such that \(T^{2}\equiv-P\mod Q\). Then, \(\mathcal{H}_{-P,-Q}\) (i.e. \(i^{2}=-P\), \(j^{2}=-Q\), \(ij=-ji=k\)) is a definite quaternion algebra which only ramifies at \(P\) and the quaternion algebra \(\mathcal{H}_{-P,-Q}\) has a maximal order \(\mathcal{O}_{-P,-Q}=\mathbb{Z}\oplus\mathbb{Z}\omega_{1}\oplus\mathbb{Z} \omega_{2}\oplus\mathbb{Z}\omega_{3}\) where_
\[\omega_{1}=\frac{1+j}{2},\quad\omega_{2}=\frac{i+ij}{2},\quad\omega_{3}=\frac {Tj+k}{Q}.\]
**Definition 4.2**.: Let \((P,Q)\) be a pair of primes such that \(Q\) satisfies (2) with respect to \(P\). We call \((P,Q)\) an **Ibukiyama Pair**.
Note that by quadratic reciprocity, we can find an appropriate integer \(T\) that satisfies the required conditions. By Corollary 2.3, the class number of \(\mathcal{H}_{-P,-Q}\) is 1 if and only if \(P\in\{2,3,5,7,13\}\). Also, if \(\alpha=x+y\omega_{1}+z\omega_{2}+w\omega_{3}\in\mathcal{O}_{-P,-Q}\) then
\[N(\alpha)=x^{2}+\frac{Q+1}{4}y^{2}+\frac{P(Q+1)}{4}z^{2}+\frac{T^{2}+P}{Q}w^{2 }+xy+Tyw+Pzw. \tag{3}\]
Note that, up to isomorphism, the order \(\mathcal{O}_{-P,-Q}\) does not depend on the choice of \(Q\) and \(T\). We now make the following adjustments to our notation:
\[\mathcal{G}_{-P,-Q}(A)=\mathcal{G}_{\mathcal{O}_{-P,-Q}}(A),\qquad\Lambda_{-P, -Q}^{p}(A)=\Lambda_{\mathcal{O}_{-P,-Q}}^{p}(A)\]
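Condition (2) is also easy to search by machine. The sketch below returns, for each \(P\), the least admissible prime \(Q\) together with a valid \(T\); it reproduces the pairs \((19,4)\), \((3,1)\), and \((11,2)\) used below for \(P=3,5,7\), while for \(P=2\) it returns \(Q=3\) rather than the \(Q=11\) chosen below (either is fine, since the choice does not matter).

```python
from itertools import count

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def legendre(a, p):  # Legendre symbol for an odd prime p
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def ibukiyama_pair(P):
    # Least prime Q = 3 (mod 8) with (-Q/P) = -1 (vacuous for P = 2),
    # together with a T solving T^2 = -P (mod Q), as in Proposition 4.1.
    for Q in count(3, 8):                      # 3, 11, 19, 27, ...
        if is_prime(Q) and (P == 2 or legendre(-Q, P) == -1):
            T = next(t for t in range(Q) if (t*t + P) % Q == 0)
            return Q, T

for P in (2, 3, 5, 7, 13):
    print(P, ibukiyama_pair(P))
```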
**Proposition 4.3**.: _[_6_, Remark V.2]_ _Let \(q\) be a prime, \(q\neq 2\), \(\left(\frac{-P}{q}\right)=\left(\frac{Q}{q}\right)=1\) and \(\left(\frac{p}{q}\right)=1\). The map \(\beta_{q}:\mathcal{O}_{-P,-Q}\to\mathcal{M}_{2}(\mathbb{F}_{q})\) induced by the reduction modulo \(q\) map on \(\mathcal{O}_{-P,-Q}\) is defined by_
\[x+y\omega_{1}+z\omega_{2}+w\omega_{3}\mapsto\frac{1}{2Q}\begin{pmatrix}A_{11} &A_{12}\\ A_{21}&A_{22}\end{pmatrix}\]
_where_
\[A_{11} =(2Qx+Qy)+Qz\sqrt{-P}\] \[A_{12} =\sqrt{Q}(Qy+2Tw+(Qz+2w)\sqrt{-P})\] \[A_{21} =-\sqrt{Q}(Qy+2Tw-(Qz+2w)\sqrt{-P})\] \[A_{22} =(2Qx+Qy)-Qz\sqrt{-P}\]
_gives an isomorphism satisfying \(\det(\beta_{q}(\alpha))=N(\alpha)\)._
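Proposition 4.3 can be spot-checked numerically. The sketch below (helper names ours) instantiates \(\beta_{q}\) for the order \(\mathcal{O}_{-2,-11}\) used below, finds the least admissible \(q\) by brute force, and verifies \(\det(\beta_{q}(\alpha))\equiv N(\alpha)\pmod{q}\) on a box of lattice points.

```python
from itertools import product

P, Q, T = 2, 11, 3        # the Ibukiyama data for O_{-2,-11}

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def sqrt_mod(a, q):       # a nonzero square root of a mod q, or None
    return next((t for t in range(1, q) if (t*t - a) % q == 0), None)

q = next(q for q in range(3, 200, 2)
         if is_prime(q) and sqrt_mod(-P, q) and sqrt_mod(Q, q))
rP, rQ, inv2Q = sqrt_mod(-P, q), sqrt_mod(Q, q), pow(2*Q, -1, q)

def beta(x, y, z, w):     # the matrix of Proposition 4.3, entries mod q
    A11 = (2*Q*x + Q*y) + Q*z*rP
    A12 = rQ*(Q*y + 2*T*w + (Q*z + 2*w)*rP)
    A21 = -rQ*(Q*y + 2*T*w - (Q*z + 2*w)*rP)
    A22 = (2*Q*x + Q*y) - Q*z*rP
    return [[e*inv2Q % q for e in (A11, A12)],
            [e*inv2Q % q for e in (A21, A22)]]

def norm(x, y, z, w):     # the norm form (3)
    return (x*x + (Q+1)//4*y*y + P*(Q+1)//4*z*z + (T*T+P)//Q*w*w
            + x*y + T*y*w + P*z*w)

for v in product(range(-2, 3), repeat=4):
    M = beta(*v)
    assert (M[0][0]*M[1][1] - M[0][1]*M[1][0]) % q == norm(*v) % q
print("det(beta(alpha)) = N(alpha) mod", q, "on the whole box")
```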
**Lemma 4.4**.: _Let \((P,Q)\) be an Ibukiyama pair. Then_
\[\mathcal{G}_{-P,-Q}(\mathbb{Z}/P\mathbb{Z})\cong N\rtimes H\]
_where_
\[N=\left\{1+xi+yij:x,y\in\mathbb{Z}/P\mathbb{Z}\right\},\quad H=\left\{x+yj:x, y\in\mathbb{Z}/P\mathbb{Z},\ (x,y)\neq(0,0)\right\}/(\mathbb{Z}/P\mathbb{Z})^{\times}.\]
Proof.: Using (3), we see that the number of elements \(\alpha=a_{0}+a_{1}\omega_{1}+a_{2}\omega_{2}+a_{3}\omega_{3}\) with \(0\leq a_{i}<P\) and \(N(\alpha)\not\equiv 0\pmod{P}\) is \(P^{2}(P^{2}-1)\). Therefore, \(\#\mathcal{G}(\mathbb{Z}/P\mathbb{Z})=P^{2}(P^{2}-1)/(P-1)=P^{2}(P+1)\).
Because \(P\neq 2,Q\), we can break up the \(\omega_{i}\) to put the elements of \(\mathcal{G}_{-P,-Q}(\mathbb{Z}/P\mathbb{Z})\) in terms of \(i,j,ij\). It is clear that the elements of \(N\) and \(H\) have nonzero norm. Furthermore, \(N\) and \(H\) are both closed under multiplication. Thus, \(N\) and \(H\) can be identified as subgroups of \(\mathcal{G}(\mathbb{Z}/P\mathbb{Z})\). Observe that \(N\cap H=\{1\}\), \(\#N=P^{2}\), \(\#H=(P^{2}-1)/(P-1)=P+1\). We have computed that \(\#\mathcal{G}(\mathbb{Z}/P\mathbb{Z})=P^{2}(P+1)\), which shows that \(\mathcal{G}(\mathbb{Z}/P\mathbb{Z})=N\cdot H\). Lastly, \(N\) is normal in \(\mathcal{G}(\mathbb{Z}/P\mathbb{Z})\) because for \(n\in N\) and \(h=x+yj\in H\), we have \(h^{-1}\sim x-yj\) (projectively, since \(h(x-yj)=x^{2}+Qy^{2}\) is a nonzero scalar) and
\[hnh^{-1}=(x+yj)(1+Xi+Yij)(x-yj)=A+Bi+Cij,\qquad A=x^{2}+Qy^{2}\neq 0,\]
since the \(j\)-components cancel in the expansion; rescaling by the nonzero scalar \(A\) shows that \(hnh^{-1}\in N\).
Therefore, \(\mathcal{G}(\mathbb{Z}/P\mathbb{Z})\cong N\rtimes H\).
**Definition 4.5**.: An **LPS-type graph** is any graph constructed in the following way:
1. Fix a prime \(p\).
2. Let \((P,Q)\) be an Ibukiyama pair such that \(P\in\{2,3,5,7,13\}\), \(P\neq p\). We have a definite class number \(1\) rational maximal order \(\mathcal{O}_{-P,-Q}\) which splits at \(p\).
3. Find all elements in \(\mathcal{O}_{-P,-Q}^{\times}\) by solving the norm equation \(N(\alpha)=1\).
4. Find \(p+1\) elements of \(\mathcal{O}_{-P,-Q}\) of norm \(p\), unique up to units, which form a set \(\mathcal{S}^{p}\).
5. Take a prime \(q\) satisfying \[q\neq p,\quad\left(\frac{-P}{q}\right)=\left(\frac{Q}{q}\right)=1,\quad\left(\frac{p}{q}\right)=1.\tag{4}\]
6. Via the reduction map \(\varphi_{q}\) and the isomorphism \(\beta_{q}\) of Proposition 4.3, realize \(\mathcal{S}^{p}\) as a multi-set of elements of \(\mathrm{PSL}_{2}(\mathbb{F}_{q})\) and write it as \(\mathcal{S}^{p,q}\). Note that each element of \(\mathcal{S}^{p,q}\) has multiplicity of the size of its pre-image in \(\mathcal{S}^{p}\).
7. Construct the Cayley graph \[\mathcal{X}^{p,q}=\mathcal{X}_{-P,-Q}^{p,q}\left(\mathcal{S}^{p,q}\right)=\operatorname{Cay}\left(\mathrm{PSL}_{2}(\mathbb{F}_{q}),\mathcal{S}^{p,q}\right).\] The graph \(\mathcal{X}^{p,q}\) is an LPS-type graph.
**Theorem 4.6**.: _Let \((P,Q)\) be an Ibukiyama pair. Let \(\mathcal{S}^{p}\) be \(p+1\) elements of norm \(p\) in \(\mathcal{O}_{-P,-Q}\). Let \(q\neq p\) be a prime such that \(q\) satisfies (4). Let \(\mathcal{X}_{-P,-Q}^{p,q}\left(\mathcal{S}^{p,q}\right)\) be the corresponding LPS-type graph. If we can find a congruence pair \((m,H)\) of \(\mathcal{O}_{-P,-Q}\) such that \(\mathcal{S}^{p}\subseteq\Lambda_{-P,-Q}^{p}(m,H)\), then \(\mathcal{X}_{-P,-Q}^{p,q}\) is Ramanujan._
Proof.: We have that \(\mathcal{O}_{-P,-Q}\) is a maximal order of class number one in a rational quaternion algebra which splits at \(p\). Because \(\mathcal{S}^{p}\subseteq\Lambda_{-P,-Q}^{p}(m,H)\) for some congruence pair \((m,H)\), we have that \(\mathcal{X}_{-P,-Q}^{p,q}\left(\mathcal{S}_{-P,-Q}^{p,q}\right)\cong\mathcal{X}_{\mathcal{O}_{-P,-Q}}^{p,q}(m,H)\). Theorem 3.10 implies that \(\mathcal{X}_{-P,-Q}^{p,q}(m,H)\) is Ramanujan.
**Corollary 4.7**.: _Theorem 1.2 implies Theorem 1.1._
## 5. The Ramanujan Property for Certain LPS-Type Graphs
Our goal is to show that we can construct Ramanujan LPS-type graphs from definite class number 1 rational quaternion algebras. That is, we want to prove Theorem 1.2, which we restate more explicitly below in terms of congruence pairs. Note that by Proposition 3.5, we have that Theorem 5.1 is equivalent to Theorem 1.2.
**Theorem 5.1**.: _Let \(\mathcal{H}\) be a definite class number 1 quaternion algebra over \(\mathbb{Q}\). Let \(\mathcal{O}\) be the unique maximal order of \(\mathcal{H}\). There exists a positive integer \(m\) and a subgroup \(H\) of \(\mathcal{G}_{\mathcal{O}}(\mathbb{Z}/m\mathbb{Z})\) such that \((m,H)\) is a congruence pair._
To prove Theorem 5.1, we need the following calculation.
**Proposition 5.2**.: _The unit groups of \(\mathcal{O}_{-P,-Q}\) for \(P\in\{2,3,5,7,13\}\) are_
\[\mathcal{G}_{-P,-Q}(\mathbb{Z})=\begin{cases}A_{4}&P=2,\\ S_{3}&P=3,\\ A_{3}&P=5,\\ S_{2}&P=7,\\ A_{2}&P=13.\end{cases}\]
Proof.: For \(P=2\), see Lemma 5.3. For \(P=3\), see Lemma 5.5. For \(P=5\), see Lemma 5.7. For \(P=7\), see Lemma 5.10. The case \(P=13\) is proven in [1, Section 4]. Note that \(A_{2}\) is the trivial group.
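All four unit groups can be double-checked by a finite search over the norm form (3); completing squares in (3) shows that every solution of \(N(\alpha)=1\) for these four orders has coordinates of absolute value at most \(13\), so the following brute-force sketch is exhaustive.

```python
from itertools import product

def norm(P, Q, T, x, y, z, w):   # the norm form (3) for O_{-P,-Q}
    return (x*x + (Q + 1)//4*y*y + P*(Q + 1)//4*z*z + (T*T + P)//Q*w*w
            + x*y + T*y*w + P*z*w)

# (P, Q, T) as chosen below; expected #G(Z) = #{N = 1} / 2.
for P, Q, T, expect in ((2, 11, 3, 12), (3, 19, 4, 6),
                        (5, 3, 1, 3), (7, 11, 2, 2)):
    units = [v for v in product(range(-13, 14), repeat=4)
             if norm(P, Q, T, *v) == 1]
    print(P, len(units) // 2, expect)   # 12 = #A_4, 6 = #S_3, 3 = #A_3, 2 = #S_2
```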
Proof of Theorem 5.1.: For \(P=2\), see Proposition 5.4. For \(P=3\), see Proposition 5.6. For \(P=5\), see Proposition 5.9. For \(P=7\), see Proposition 5.11. For \(P=13\), the group \(\mathcal{G}_{-P,-Q}(\mathbb{Z})\) is trivial, so \((1,\{1\})\) is a congruence pair and \(\Lambda^{p}=\mathcal{G}_{-P,-Q}(\mathbb{Z}[1/p])\) itself acts simply transitively on \(\mathcal{T}_{p+1}\).
Note that \(\mathcal{O}_{-P,-Q}\) is completely determined by the ramified primes, so the choice of \(Q\) does not matter.
### The Quaternion Algebra Ramifies at 2
Set \(P=2\) and \(Q=11\). Observe that \(Q\equiv 3\mod 8\) (we don't need the reciprocity condition because \(P=2\)). Set \(T=3\). Observe that \(T^{2}\equiv-2\mod 11\). \(\mathcal{H}_{-2,-11}\) is a quaternion algebra over \(\mathbb{Q}\) which is of class number 1 and ramifies only at 2. By Proposition 4.1, \(\mathcal{H}_{-2,-11}\) has a maximal order \(\mathcal{O}_{-2,-11}=\mathbb{Z}+\mathbb{Z}\omega_{1}+\mathbb{Z}\omega_{2}+\mathbb{Z}\omega_{3}\), where
\[\omega_{1}=\frac{1+j}{2},\quad\omega_{2}=\frac{i+ij}{2},\quad\omega_{3}=\frac {3j+ij}{11}.\]
**Lemma 5.3**.: \(\mathcal{G}_{-2,-11}(\mathbb{Z})\cong A_{4}\)__
Proof.: For \(\alpha=x+y\omega_{1}+z\omega_{2}+w\omega_{3}\in\mathcal{O}_{-2,-11}\), it follows from (3) that
\[N(\alpha)=x^{2}+3y^{2}+6z^{2}+w^{2}+xy+3yw+2zw.\]
Solving \(N(\alpha)=1\), we get 24 solutions in \(\mathbb{Z}^{4}\), which form a group under multiplication. When we quotient by \(\{\pm 1\}\), we get a group which is isomorphic to \(A_{4}\). Representations of the elements of \(\mathcal{G}_{-2,-11}(\mathbb{Z})\) are enumerated in Figure 1.
**Proposition 5.4**.: _There exists a subgroup \(H\leq\mathcal{G}_{-2,-11}(\mathbb{Z}/3\mathbb{Z})\) such that \(H\cong C_{2}\) and \((3,H)\) is a congruence pair of \(\mathcal{O}_{-2,-11}\)._
Proof.: We see from Figure 1 that \(\mathcal{G}_{-2,-11}(\mathbb{Z})\) embeds in \(\mathcal{G}_{-2,-11}(\mathbb{Z}/3\mathbb{Z})\). We identify \(\mathcal{G}_{-2,-11}(\mathbb{Z})\) as a subgroup of \(\mathcal{G}_{-2,-11}(\mathbb{Z}/3\mathbb{Z})\). Lemma 5.3 tells us that \(\mathcal{G}_{-2,-11}(\mathbb{Z})\cong A_{4}\). Proposition 3.7 implies that \(\mathcal{G}_{-2,-11}(\mathbb{Z}/3\mathbb{Z})\cong\mathrm{PGL}_{2}(\mathbb{Z}/3\mathbb{Z})\cong S_{4}\). There is only one copy of \(A_{4}\) in \(S_{4}\) and
\[S_{4}=A_{4}\rtimes H\]
for \(H\cong C_{2}\). Therefore, \((3,H)\) is a congruence pair of \(\mathcal{O}_{-2,-11}\).
### The Quaternion Algebra Ramifies at 3
Set \(P=3\) and \(Q=19\). Observe that \(Q\equiv 3\mod 8\) and \(\left(\frac{-Q}{3}\right)=-1\). Set \(T=4\). Observe that \(T^{2}\equiv-3\mod 19\). Thus, \(\mathcal{H}_{-3,-19}\) is a quaternion algebra over \(\mathbb{Q}\) which is of class number \(1\) and ramifies only at \(3\). By Proposition 4.1, \(\mathcal{H}_{-3,-19}\) has a maximal order \(\mathcal{O}_{-3,-19}=\mathbb{Z}+\mathbb{Z}\omega_{1}+\mathbb{Z}\omega_{2}+\mathbb{Z}\omega_{3}\), where
\[\omega_{1}=\frac{1+j}{2},\quad\omega_{2}=\frac{i+ij}{2},\quad\omega_{3}=\frac {4j+ij}{19}.\]
**Lemma 5.5**.: \(\mathcal{G}_{-3,-19}(\mathbb{Z})\cong S_{3}\)__
Proof.: For \(\alpha=x+y\omega_{1}+z\omega_{2}+w\omega_{3}\in\mathcal{O}_{-3,-19}\), it follows from (3) that
\[N(\alpha)=x^{2}+5y^{2}+15z^{2}+w^{2}+xy+4yw+3zw.\]
Solving \(N(\alpha)=1\), we get \(12\) solutions in \(\mathbb{Z}^{4}\), which form a group under multiplication. When we quotient by \(\{\pm 1\}\), we get a noncommutative group with six elements. There is only one noncommutative group of order \(6\), \(S_{3}\), so \(\mathcal{G}_{-3,-19}(\mathbb{Z})\cong S_{3}\). Representations of the elements of \(\mathcal{G}_{-3,-19}(\mathbb{Z})\) are enumerated in Figure 2.
**Proposition 5.6**.: _There exist subgroups \(H,H^{\prime},H^{\prime\prime}\leq\mathcal{G}_{-3,-19}(\mathbb{Z}/4\mathbb{Z})\) such that_
\[H\cong C_{2}^{3},\quad H^{\prime}\cong C_{4}\times C_{2},\quad H^{\prime\prime}\cong D_{4}\]
_and \((4,H)\), \((4,H^{\prime})\), and \((4,H^{\prime\prime})\) are congruence pairs of \(\mathcal{O}_{-3,-19}\)._
Figure 1. The elements of \(\mathcal{G}_{-2,-11}(\mathbb{Z})\). Each row is a distinct element with a representation in terms of \(\omega_{i}\) and in terms of \(i\) and \(j\).
Proof.: We see from Figure 2 that \(\mathcal{G}_{-3,-19}(\mathbb{Z})\) embeds in \(\mathcal{G}_{-3,-19}(\mathbb{Z}/4\mathbb{Z})\). We identify \(\mathcal{G}_{-3,-19}(\mathbb{Z})\) as a subgroup of \(\mathcal{G}_{-3,-19}(\mathbb{Z}/4\mathbb{Z})\). Proposition 3.7 implies that \(\mathcal{G}_{-3,-19}(\mathbb{Z}/4\mathbb{Z})\cong\mathrm{PGL}_{2}(\mathbb{Z}/4\mathbb{Z})\). Lemma 5.5 tells us that \(\mathcal{G}_{-3,-19}(\mathbb{Z})\cong S_{3}\). Calculation shows that there are two classes \([K],[K^{\prime}]\in C_{\mathrm{PGL}_{2}(\mathbb{Z}/4\mathbb{Z})}\) such that \(K,K^{\prime}\cong S_{3}\). Both \(K\) and \(K^{\prime}\) have complementary subgroups isomorphic to \(C_{2}^{3}\), \(C_{4}\times C_{2}\), and \(D_{4}\).
### The Quaternion Algebra Ramifies at 5
Set \(P=5\) and \(Q=3\). Observe that \(Q\equiv 3\mod 8\) and \(\left(\frac{-Q}{5}\right)=-1\). Set \(T=1\). Observe that \(T^{2}\equiv-5\mod 3\). \(\mathcal{H}_{-5,-3}\) is a quaternion algebra over \(\mathbb{Q}\) which is of class number \(1\) and ramifies only at \(5\). By Proposition 4.1, \(\mathcal{H}_{-5,-3}\) has a maximal order \(\mathcal{O}_{-5,-3}=\mathbb{Z}+\mathbb{Z}\omega_{1}+\mathbb{Z}\omega_{2}+\mathbb{Z}\omega_{3}\), where
\[\omega_{1}=\frac{1+j}{2},\quad\omega_{2}=\frac{i+ij}{2},\quad\omega_{3}=\frac {j+ij}{3}.\]
**Lemma 5.7**.: \(\mathcal{G}_{-5,-3}(\mathbb{Z})\cong\langle 1,\omega_{1},1-\omega_{1}\rangle\cong A_{3}\)__
Proof.: For \(\alpha=x+y\omega_{1}+z\omega_{2}+w\omega_{3}\in\mathcal{O}_{-5,-3}\), it follows from (3) that
\[N(\alpha)=x^{2}+y^{2}+5z^{2}+2w^{2}+xy+yw+5zw.\]
Solving \(N(\alpha)=1\), we get \(6\) solutions in \(\mathbb{Z}^{4}\), which form a group under multiplication. When we quotient by \(\{\pm 1\}\), we get a group which is isomorphic to \(A_{3}\).
**Lemma 5.8**.: _Let_
\[A=\left\{1+xi+yij:x,y\in\mathbb{Z}/5\mathbb{Z}\right\},\quad B= \left\{j+xi+yij:x,y\in\mathbb{Z}/5\mathbb{Z}\right\},\] \[H=A\cup B.\]
_The set \(H\) is a group isomorphic to \(C_{5}\rtimes D_{5}\)._
Proof.: We first check that \(H\) contains inverses. Suppose that \(\alpha=1+xi+yij\in A\). Since \(i^{2}=-5\equiv 0\pmod{5}\), all cross terms vanish and
\[\alpha(1-xi-yij)=1-x^{2}i^{2}-y^{2}(ij)^{2}=1.\]
Therefore, \(\alpha^{-1}\in A\). Next suppose that \(\alpha=j+xi+yij\in B\), then
\[\alpha^{2} =j^{2}+xji+yjij+xij+x^{2}i^{2}+xyi^{2}j+yij^{2}+xyiji+y^{2}ijij\] \[=-3-xij+3yi+xij-3yi\] \[=-3.\]
In particular, every element of \(B\) has order \(2\). Therefore, \(\alpha^{-1}=\alpha\) for \(\alpha\in B\). We now show that \(H\) is a group. We check the following cases.
Figure 2. Elements of \(\mathcal{G}_{-3,-19}(\mathbb{Z})\).
1. Let \(\alpha,\beta\in A\). Let \(\alpha=1+xi+yij\) and \(\beta=1+x^{\prime}i+y^{\prime}ij\). Then \[\alpha\beta =(1+xi+yij)(1+x^{\prime}i+y^{\prime}ij)\] \[=1+x^{\prime}i+y^{\prime}ij+xi+xx^{\prime}i^{2}+xy^{\prime}i^{2}j+yij+yx^{\prime}iji+yy^{\prime}ijij\] \[=1+x^{\prime}i+y^{\prime}ij+xi-5xx^{\prime}-5xy^{\prime}j+yij+5yx^{\prime}j-15yy^{\prime}\] \[=1+(x+x^{\prime})i+(y+y^{\prime})ij.\] Therefore, \(\alpha\beta\in A\).
2. Let \(\alpha,\beta\in B\). Let \(\alpha=j+xi+yij\) and \(\beta=j+x^{\prime}i+y^{\prime}ij\). Then \[\alpha\beta =(j+xi+yij)(j+x^{\prime}i+y^{\prime}ij)\] \[=j^{2}+x^{\prime}ji+y^{\prime}jij+xij+xx^{\prime}i^{2}+xy^{\prime}i^{2}j+yij^{2}+yx^{\prime}iji+yy^{\prime}ijij\] \[=-3-x^{\prime}ij+3y^{\prime}i+xij-3yi\] \[=-3+3(y^{\prime}-y)i+(x-x^{\prime})ij.\] Therefore, \(\alpha\beta\in A\).
3. Let \(\alpha\in A\) and \(\beta\in B\). Let \(\alpha=1+xi+yij\) and \(\beta=j+x^{\prime}i+y^{\prime}ij\). Then \[\alpha\beta =(1+xi+yij)(j+x^{\prime}i+y^{\prime}ij)\] \[=j+x^{\prime}i+y^{\prime}ij+xij+xx^{\prime}i^{2}+xy^{\prime}i^{2}j +yij^{2}+yx^{\prime}iji+yy^{\prime}ijij\] \[=j+x^{\prime}i+y^{\prime}ij+xij-3yi\] \[=j+(x^{\prime}-3y)i+(x+y^{\prime})ij.\] Therefore, \(\alpha\beta\in B\).
4. Let \(\alpha\in B\) and \(\beta\in A\). Let \(\alpha=j+xi+yij\) and \(\beta=1+x^{\prime}i+y^{\prime}ij\). Then \[\alpha\beta =(j+xi+yij)(1+x^{\prime}i+y^{\prime}ij)\] \[=j+x^{\prime}ji+y^{\prime}jij+xi+xx^{\prime}i^{2}+xy^{\prime}i^{2}j+yij+yx^{\prime}iji+yy^{\prime}ijij\] \[=j-x^{\prime}ij+3y^{\prime}i+xi+yij\] \[=j+(3y^{\prime}+x)i+(y-x^{\prime})ij.\] Therefore, \(\alpha\beta\in B\).
Therefore, \(H\) is in fact a group. \(H\) is noncommutative of order \(50\), and \(H\) contains \(25\) elements of order \(2\). Therefore, \(H\cong C_{5}\rtimes D_{5}\).
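The case analysis above is also easy to confirm by machine, working projectively modulo \(5\). In the sketch below (names ours), the multiplication rule encodes \(i^{2}=-5\equiv 0\) and \(j^{2}=-3\equiv 2\pmod{5}\), and `canon` returns the minimal representative of a projective class.

```python
from itertools import product

p, a, b = 5, 0, 2          # i^2 = a, j^2 = b in O_{-5,-3} (mod 5)

def mult(u, v):            # (x + yi + zj + wij)(x' + y'i + z'j + w'ij)
    x1, y1, z1, w1 = u
    x2, y2, z2, w2 = v
    return ((x1*x2 + a*y1*y2 + b*z1*z2 - a*b*w1*w2) % p,
            (x1*y2 + y1*x2 - b*z1*w2 + b*w1*z2) % p,
            (x1*z2 + z1*x2 + a*y1*w2 - a*w1*y2) % p,
            (x1*w2 + w1*x2 + y1*z2 - z1*y2) % p)

def canon(u):              # minimal representative of the projective class
    return min(tuple(c*t % p for t in u) for c in range(1, p))

one = canon((1, 0, 0, 0))
A = {canon((1, x, 0, y)) for x, y in product(range(p), repeat=2)}
B = {canon((0, x, 1, y)) for x, y in product(range(p), repeat=2)}
H = A | B

assert len(H) == 50
assert all(canon(mult(u, v)) in H for u in H for v in H)    # H is closed
assert all(canon(mult(u, u)) == one for u in B)             # 25 involutions in B
print("Lemma 5.8 confirmed: H has order 50 and B consists of involutions")
```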
**Proposition 5.9**.: _There exists a subgroup \(H\leq\mathcal{G}_{-5,-3}(\mathbb{Z}/5\mathbb{Z})\) such that_
\[H\cong C_{5}\rtimes D_{5}.\]
_and \((5,H)\) is a congruence pair of \(\mathcal{O}_{-5,-3}\)._
Proof.: We see from Lemma 5.7 that \(\mathcal{G}_{-5,-3}(\mathbb{Z})\cong A_{3}\) embeds in \(\mathcal{G}_{-5,-3}(\mathbb{Z}/5\mathbb{Z})\). We have that
\[\varphi_{5}(\mathcal{G}_{-5,-3}(\mathbb{Z}))\cong\langle 1,1+j,1-j\rangle.\]
By Lemma 4.4, \(\#\mathcal{G}_{-5,-3}(\mathbb{Z}/5\mathbb{Z})=150\). Let \(H\subseteq\mathcal{G}_{-5,-3}(\mathbb{Z}/5\mathbb{Z})\) be as in Lemma 5.8, so \(H\) is a subgroup of \(\mathcal{G}_{-5,-3}(\mathbb{Z}/5\mathbb{Z})\) isomorphic to \(C_{5}\rtimes D_{5}\). By construction, \(H\cap\mathcal{G}_{-5,-3}(\mathbb{Z})=\{1\}\) and
\[\#\left(\mathcal{G}_{-5,-3}(\mathbb{Z})\cdot H\right)=3\cdot 50=150=\#\mathcal{G}_{ -5,-3}(\mathbb{Z}/5\mathbb{Z})\]
so \(\mathcal{G}_{-5,-3}(\mathbb{Z}/5\mathbb{Z})=\mathcal{G}_{-5,-3}(\mathbb{Z})\cdot H\). Therefore, \((5,H)\) is a congruence pair of \(\mathcal{O}_{-5,-3}\).
### The Quaternion Algebra Ramifies at 7
Set \(P=7\) and \(Q=11\). Observe that \(Q\equiv 3\mod 8\) and \(\left(\frac{-Q}{7}\right)=-1\). Set \(T=2\). Observe that \(T^{2}\equiv-7\mod 11\). \(\mathcal{H}_{-7,-11}\) is a quaternion algebra over \(\mathbb{Q}\) which is of class number \(1\) and ramifies only at \(7\). By Proposition 4.1, \(\mathcal{H}_{-7,-11}\) has a maximal order \(\mathcal{O}_{-7,-11}=\mathbb{Z}+\mathbb{Z}\omega_{1}+\mathbb{Z}\omega_{2}+\mathbb{Z}\omega_{3}\), where
\[\omega_{1}=\frac{1+j}{2},\quad\omega_{2}=\frac{i+ij}{2},\quad\omega_{3}=\frac {2j+ij}{11}.\]
**Lemma 5.10**.: \(\mathcal{G}_{-7,-11}(\mathbb{Z})\cong\langle 1,\omega_{3}\rangle\cong S_{2}\)__
Proof.: For \(\alpha=x+y\omega_{1}+z\omega_{2}+w\omega_{3}\in\mathcal{O}_{-7,-11}\), it follows from (3) that
\[N(\alpha)=x^{2}+3y^{2}+21z^{2}+w^{2}+xy+2yw+7zw.\]
Solving \(N(\alpha)=1\), we get \(4\) solutions in \(\mathbb{Z}^{4}\) which form a group under multiplication. The solutions to \(N(\alpha)=1\) are \(\pm 1,\pm\omega_{3}\). When we quotient by \(\{\pm 1\}\), we get the group \(S_{2}\).
**Proposition 5.11**.: _There exists a subgroup \(H\leq\mathcal{G}_{-7,-11}(\mathbb{Z}/2\mathbb{Z})\) such that \(H\cong C_{3}\) and \((2,H)\) is a congruence pair of \(\mathcal{O}_{-7,-11}\)._
Proof.: Lemma 5.10 tells us that \(\mathcal{G}_{-7,-11}(\mathbb{Z})\cong S_{2}\) embeds in \(\mathcal{G}_{-7,-11}(\mathbb{Z}/2\mathbb{Z})\) and Proposition 3.7 implies that \(\mathcal{G}_{-7,-11}(\mathbb{Z}/2\mathbb{Z})\cong\mathrm{PGL}_{2}(\mathbb{Z}/2\mathbb{Z})\cong S_{3}\). There is only one class \([K]\in C_{S_{3}}\) such that \(K\cong S_{2}\). Furthermore, there exists \(H\leq S_{3}\) such that \(H\cong C_{3}\) and \(H\) is a complementary subgroup of \(C_{2}\). Therefore, \((2,H)\) is a congruence pair of \(\mathcal{O}_{-7,-11}\).
## 6. Further Directions for Finding Congruence Pairs
Theorem 4.6 tells us that a LPS-Type graph \(\mathcal{X}\) is Ramanujan if we can find a congruence pair whose corresponding congruence subgroup contains the generating set of \(\mathcal{X}\). Therefore, it is of some interest to know what the congruence pairs of a given class number \(1\) maximal order are.
**Question 6.1**.: _What are all the possible congruence pairs for class number one quaternion algebras?_
We take some steps in answering this question. In the cases \(P=3\) and \(P=7\), we show that there is no odd prime \(m\) for which there exists a congruence pair \((m,H)\) of the respective maximal order. For the case \(P=3\), we use Proposition 6.2 to bound the possible \(m\) for congruence pairs \((m,H)\) of \(\mathcal{O}_{-P,-Q}\). For a full treatment of Proposition 6.2, see [3, Chapter 12] or [7, Corollary 2.3]. While we only show the cases \(P=3\) and \(P=7\), Proposition 6.2 can be used for \(P=2\) and \(P=5\) to bound \(m\).
**Proposition 6.2**.: _For \(q>3\) an odd prime, the maximal subgroups of \(\mathrm{PGL}_{2}(\mathbb{Z}/q\mathbb{Z})\) are as follows:_
1. _dihedral groups of order_ \(2(q-1)\) _for_ \(q>5\)_,_
2. _dihedral groups of order_ \(2(q+1)\)_,_
3. _a group of order_ \(q(q-1)\)_,_
4. \(\mathrm{PSL}_{2}(\mathbb{Z}/q\mathbb{Z})\)_,_
5. \(S_{4}\) _when_ \(q\equiv\pm 3\pmod{8}\)_._
**Proposition 6.3**.: _There is no congruence pair \((m,H)\) associated with \(\mathcal{O}_{-3,-19}\) for odd primes \(m\)._
Proof.: We consider the following cases:
1. Let \(m=3\). Let \(N\) and \(H\) be as in Lemma 4.4, i.e., \[N=\{1+xi+yij:x,y\in\mathbb{Z}/3\mathbb{Z}\},\quad H=\{x+yj:x,y\in\mathbb{Z}/3\mathbb{Z},\ (x,y)\neq(0,0)\}/(\mathbb{Z}/3\mathbb{Z})^{\times}.\]
Then, \(\#N=9\), \(\#H=4\), and by Lemma 4.4, \(\mathcal{G}_{-3,-19}(\mathbb{Z}/3\mathbb{Z})=N\rtimes H\). Therefore, \(\#\mathcal{G}_{-3,-19}(\mathbb{Z}/3\mathbb{Z})=36\). Suppose that we can find \(K\leq\mathcal{G}_{-3,-19}(\mathbb{Z}/3\mathbb{Z})\) that is a complement of \(\mathcal{G}_{-3,-19}(\mathbb{Z})\) in \(\mathcal{G}_{-3,-19}(\mathbb{Z}/3\mathbb{Z})\). The group \(\mathcal{G}_{-3,-19}(\mathbb{Z})\) has order \(6\). Thus, \(\#K=6\), so \(K\) has no element of order \(4\). We have that \(2\omega_{1}=1+j\in\mathcal{G}_{-3,-19}(\mathbb{Z}/3\mathbb{Z})\) has order \(4\). Therefore, \(1+j\not\in\mathcal{G}_{-3,-19}(\mathbb{Z})\) and \(1+j\not\in K\). We have that \[\varphi_{3}(\mathcal{G}_{-3,-19}(\mathbb{Z}))=\left\{1,i+2j+2ij,i+j+ij,1+2ij,1+ij,j+ij\right\}.\] One can check that for all \(\alpha\in\mathcal{G}_{-3,-19}(\mathbb{Z})\), the product \(\alpha^{-1}(1+j)\) has order \(4\) (these order computations are confirmed in the sketch following this proof), so \(\alpha^{-1}(1+j)\not\in K\). Therefore, \(1+j\neq\alpha\beta\) for \(\alpha\in\mathcal{G}_{-3,-19}(\mathbb{Z})\) and \(\beta\in K\). This is a contradiction.
2. Let \(m=5\). One can check that every subgroup isomorphic to \(S_{3}\) of \(\mathrm{PGL}_{2}(\mathbb{Z}/5\mathbb{Z})\cong S_{5}\) does not have a complementary subgroup.
3. Let \(m>5\). Using Proposition 6.2, we see that for any prime \(m>5\), there is no index \(6\) subgroup of \(\mathrm{PGL}_{2}(\mathbb{Z}/m\mathbb{Z})\).
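The order-\(4\) computations invoked in case (1) can be confirmed with the same projective arithmetic as in the sketch after Lemma 5.8, now with \(i^{2}=-3\equiv 0\) and \(j^{2}=-19\equiv 2\pmod{3}\); the helper names are ours.

```python
p, a, b = 3, 0, 2          # O_{-3,-19} mod 3: i^2 = 0, j^2 = 2

def mult(u, v):
    x1, y1, z1, w1 = u
    x2, y2, z2, w2 = v
    return ((x1*x2 + a*y1*y2 + b*z1*z2 - a*b*w1*w2) % p,
            (x1*y2 + y1*x2 - b*z1*w2 + b*w1*z2) % p,
            (x1*z2 + z1*x2 + a*y1*w2 - a*w1*y2) % p,
            (x1*w2 + w1*x2 + y1*z2 - z1*y2) % p)

def canon(u):
    return min(tuple(c*t % p for t in u) for c in range(1, p))

one = canon((1, 0, 0, 0))

def order(u):
    k, v = 1, canon(u)
    while v != one:
        v, k = canon(mult(v, u)), k + 1
    return k

def inv(u):                # u^{ord(u)-1}; the loop stops once v * u = 1
    v = u
    while canon(mult(v, u)) != one:
        v = canon(mult(v, u))
    return v

# phi_3(G_{-3,-19}(Z)) as listed in the proof, in coordinates (x, y, z, w).
G = [(1,0,0,0), (0,1,2,2), (0,1,1,1), (1,0,0,2), (1,0,0,1), (0,0,1,1)]
t = (1, 0, 1, 0)           # the element 1 + j
assert order(t) == 4
assert all(order(canon(mult(inv(g), t))) == 4 for g in G)
print("1 + j and all six products alpha^{-1}(1+j) have order 4")
```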
**Proposition 6.4**.: _There is no congruence pair \((m,H)\) associated with \(\mathcal{O}_{-7,-11}\) for odd primes \(m\)._
Proof.: Case 1 (\(m=7\)): Let \(N\) and \(H\) be as in Lemma 4.4, i.e.,
\[N=\left\{1+xi+yij:x,y\in\mathbb{Z}/7\mathbb{Z}\right\},\quad H=\left\{x+yj:x,y\in\mathbb{Z}/7\mathbb{Z},\ (x,y)\neq(0,0)\right\}/(\mathbb{Z}/7\mathbb{Z})^{\times}.\]
Then \(\#N=49\), \(\#H=8\), and by Lemma 4.4, \(\mathcal{G}_{-7,-11}(\mathbb{Z}/7\mathbb{Z})=N\rtimes H\). Therefore, \(\#\mathcal{G}_{-7,-11}(\mathbb{Z}/7\mathbb{Z})=392\). Suppose that we can find \(K\) that is a complement of \(\mathcal{G}_{-7,-11}(\mathbb{Z})\) in \(\mathcal{G}_{-7,-11}(\mathbb{Z}/7\mathbb{Z})\). Then \(\#K=196\), so \(K\) does not have any element of order \(8\). One can check that \(1+j\) has order \(8\); therefore \(1+j\not\in K\), but \(1+j\in\mathcal{G}(\mathbb{Z})\cdot K\). We have that
\[\omega_{3}=\frac{2j+ij}{11}\equiv 4j+2ij\pmod{7}\]
and we quotient by scalars, so \(\varphi_{7}(\omega_{3})=2j+ij\). Therefore,
\[(2j+ij)^{-1}(1+j)=(2j+ij)(1+j)=-22-11i+2j+ij\equiv-1+3i+2j+ij\pmod{7}.\]
One can check that \(-1+3i+2j+ij\) has order \(8\), so it cannot lie in \(K\); this contradicts \(1+j\in\mathcal{G}(\mathbb{Z})\cdot K\). Therefore, it is impossible to find \(K\) that satisfies our desired conditions.
Case 2 (\(m\neq 7\)): Observe that \(1\not\equiv\omega_{3}\pmod{m}\), so \(\mathcal{G}_{-7,-11}(\mathbb{Z})\) embeds into \(\mathcal{G}_{-7,-11}(\mathbb{Z}/m\mathbb{Z})\). By Proposition 3.7, \(\mathcal{G}_{-7,-11}(\mathbb{Z}/m\mathbb{Z})\cong\mathrm{PGL}_{2}(\mathbb{Z}/m\mathbb{Z})\). Note that \(\mathcal{G}_{-7,-11}(\mathbb{Z})\) maps to a cyclic group of order \(2\) in \(\mathrm{PSL}_{2}(\mathbb{Z}/m\mathbb{Z})\). Suppose that we can find \(K\) that is a complement of \(\mathcal{G}_{-7,-11}(\mathbb{Z})\) in \(\mathcal{G}_{-7,-11}(\mathbb{Z}/m\mathbb{Z})\). Let \(\overline{K}=K\cap\mathrm{PSL}_{2}(\mathbb{Z}/m\mathbb{Z})\). Then, \(\mathcal{G}_{-7,-11}(\mathbb{Z})\cdot\overline{K}=\mathrm{PSL}_{2}(\mathbb{Z}/m\mathbb{Z})\) and \(\mathcal{G}_{-7,-11}(\mathbb{Z})\cap\overline{K}=\{1\}\). However, this implies that \(\overline{K}\) has index \(2\) as a subgroup of \(\mathrm{PSL}_{2}(\mathbb{Z}/m\mathbb{Z})\), hence is normal in \(\mathrm{PSL}_{2}(\mathbb{Z}/m\mathbb{Z})\). This is a contradiction: for odd primes \(m\geq 5\) the group \(\mathrm{PSL}_{2}(\mathbb{Z}/m\mathbb{Z})\) is simple, and for \(m=3\) it is isomorphic to \(A_{4}\), which has no subgroup of index \(2\).
**Question 6.5**.: _Let \(X\) and \(Y\) be two Ramanujan graphs constructed from a quaternion algebra. Suppose that \(X\) and \(Y\) have the same degree and same number of vertices. Are \(X\) and \(Y\) isomorphic? In particular, let \(\mathcal{S}^{p},\mathcal{R}^{p}\subseteq\mathcal{O}_{-P,-Q}\) be sets with \(p+1\) elements of norm \(p\) which satisfy the conditions of Theorem 4.6. Then does there exist an isomorphism_
\[\mathcal{X}_{-P,-Q}^{p,q}\left(\mathcal{S}^{p,q}\right)\cong\mathcal{X}_{-P,-Q}^ {p,q}\left(\mathcal{R}^{p,q}\right)?\]
**Question 6.6**.: _In Table 8.2 of [8], Kirschmer and Voight classify all definite class number one Eichler orders, which gives all the class number one maximal orders of quaternion algebras over a number field. For which of these orders can one find a congruence pair?_
## Acknowledgements
We would like to thank Professor Shai Evra and the Einstein Institute of Mathematics REU for their support during our research project. Professor Evra provided invaluable guidance and feedback, and the REU program's funding and resources were essential to our success. We are grateful for the experience and knowledge gained from this opportunity.
|
2306.05206 | The Attractor Flow for AdS$_5$ Black Holes in $\mathcal{N} = 2$ Gauged
Supergravity | We study the flow equations for BPS black holes in $\mathcal{N} = 2$
five-dimensional gauged supergravity coupled to any number of vector multiplets
via FI couplings. We develop the Noether-Wald procedure in this context and
exhibit the conserved charges as explicit integrals of motion, in the sense
that they can be computed at any radius on the rotating spacetime. The boundary
conditions needed to solve the first order differential equations are discussed
in great detail. We extremize the entropy function that controls the near
horizon geometry and give explicit formulae for all geometric variables at
their supersymmetric extrema. We have also considered a complexification of the
near-horizon variables that elucidates some features of the theory from the
near-horizon perspective. | Marina David, Nizar Ezroura, Finn Larsen | 2023-06-08T14:03:25Z | http://arxiv.org/abs/2306.05206v2 | # The Attractor Flow for AdS\({}_{5}\) Black Holes in \(\mathcal{N}=2\) Gauged Supergravity
###### Abstract
We study the flow equations for BPS black holes in \(\mathcal{N}=2\) five-dimensional gauged supergravity coupled to any number of vector multiplets via FI couplings. We develop the Noether-Wald procedure in this context and exhibit the conserved charges as explicit integrals of motion, in the sense that they can be computed at any radius on the rotating spacetime. The boundary conditions needed to solve the first order differential equations are discussed in great detail. We extremize the entropy function that controls the near horizon geometry and give explicit formulae for all geometric variables at their supersymmetric extrema. We have also considered a complexification of the near-horizon variables that elucidates some features of the theory from the near-horizon perspective.
###### Contents

* 1 Introduction
* 2 The Effective 2D Lagrangian
* 2.1 The 5D theory
* 2.2 The effective 2D theory
* 2.3 An effective 1D theory
* 3 Noether-Wald surface charges
* 3.1 The Noether-Wald surface charge: general formulae
* 3.2 Killing vector fields
* 3.3 Incorporating gauge invariance
* 3.4 Chern-Simons Terms
* 3.5 The 2D conserved charges
* 4 The flow equations
* 4.1 Supersymmetry conditions
* 4.2 Dictionary between the \((1+4)\) and the \((2+3)\) splits
* 4.3 The attractor flow equations
* 4.4 Solution of the attractor flow equations
* 4.4.1 Boundary conditions
* 4.4.2 Perturbative solution starting from the horizon
* 4.4.3 Perturbative solution starting from asymptotic AdS
* 4.4.4 Summary and discussion of perturbative solutions
* 5 Entropy Extremization
* 5.1 Near-horizon setup
* 5.2 Extremization of the Entropy Function
* 5.2.1 Solving for the entropy and the charge constraint
* 5.2.2 Near-horizon variables as function of conserved charges
* 5.3 Complexification of the near-horizon variables
* 6 Discussion
* A Conventions and notations
* B Real special geometry
* C Supersymmetry conditions
* C.1 The Kahler condition on the base geometry
* C.2 Kahler potential
* C.3 Supersymmetry conditions
* C.3.1 The gaugino equation
* C.3.2 The gravitino equation
## 1 Introduction
The radial dependence of physical fields in a black hole background relates the field configuration far from the black hole to the region near the black hole horizon. It is important in holography because the radial evolution is identified with the renormalization group flow in the dual quantum field theory, so it determines the low energy QFT observables in terms of UV data. For supersymmetric black holes in asymptotically flat spacetimes, these ideas are realized beautifully by the so-called attractor flow. Unfortunately, the analogous construction for BPS black holes in asymptotically AdS spacetimes, where holography is more precise, is less developed. The goal of this article is to give an explicit and detailed account of the attractor flow for BPS black holes in AdS\({}_{5}\).
All extremal black holes, whether supersymmetric or not, enjoy an attractor _mechanism_, in that the end point of the radial flow is a horizon region with enhanced \(SO(2,1)\) symmetry. The field configuration in this AdS\({}_{2}\) region is determined by the entropy function formalism, an extremization principle that was studied in many contexts [1; 2; 3; 4; 5; 6; 7; 8; 9], including its applications to subleading corrections of the black hole entropy [10; 11; 12]. The attractor _flow_ refers to the entire evolution from the asymptotic space to the event horizon. The long throat characterizing the final approach to the horizon gives a geometric reason to intuit that only a restricted set of endpoints are possible. In this case the flow is _attracted_ to specific points in configuration space.
The study of supersymmetric attractor flows was initiated in \(\mathcal{N}=2\) ungauged supergravity [13; 14]. In this context an attractor mechanism was realized. The horizon values of scalars in \(\mathcal{N}=2\) vector multiplets are independent of their freely chosen asymptotic values. The attractor flow for asymptotically flat black holes was later generalized to different amounts of supersymmetry in various dimensions [15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. In this work we inquire about the analogous radial flow in gauged supergravity, the setting for asymptotically AdS black holes. This research direction is very well motivated by holography, but it is technically more involved and much less developed. Moreover, the reference to an _attractor_ is a misnomer in gauged supergravity: in general no _initial_ data is introduced at the asymptotic AdS boundary, so nothing is lost when the horizon is reached. However, the attractor terminology is so ingrained by now that we keep it, even though it can be misleading. Indeed, versions of attractor flows and attractor mechanisms in gauged supergravity previously appeared in [25; 26; 27; 28; 29; 30].
In the canonical set-up [13; 14], ungauged \(\mathcal{N}=2\) supergravity coupled to vector multiplets, scalar fields are the only variables needed to characterize the attractor flow. Classical black hole solutions involve vector fields as well but, in stationary black hole backgrounds, their radial dependence is entirely determined by the conserved electric charges, augmented by magnetic charges in \(D=4\). The scalars can take any values in the asymptotically flat
space as they are moduli that parametrize the vacuum. However, for given charges, their radial flow is governed by an effective black hole potential that guides them to an attractor value that depends only on the charges.
BPS black holes in AdS are fundamentally different because the scalar fields in gauged supergravity are subject to a potential that depends on the couplings of the theory, so generally the scalars are not moduli that parametrize vacua. When scalars do not depend on free parameters at infinity, there is no scope for an attractor mechanism that imposes horizon values independent of such parameters. There is an exception when FI-couplings are fine-tuned to create flat directions in the scalar potential. In this case the scalar fields do obey an attractor mechanism.
This work focuses on black holes with nonzero electrical charge and rotation in 5D asymptotically AdS spacetimes [31; 32; 33; 34; 35] whose solutions have been well studied in various contexts. Several of the technical challenges we encounter in our setup arise because all known BPS black holes in AdS\({}_{5}\) have non-vanishing angular momentum. In our implementation of the attractor flow, the angular momentum \(J\) is a conserved quantity like any other: it can be computed as a flux integral over any topological sphere surrounding the black hole. The attractor flow from asymptotically AdS to the horizon corresponds to decreasing radial coordinate. Because of the rotation, the spatial sphere at any given radius is squashed and co-rotating.
There has been much previous work on the attractor mechanism in gauged supergravity, including [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49] and references therein. As a guide, we highlight some of the features we focus on:
* We construct the entire attractor flow of gauged supergravity, i.e., the radial solution that interpolates between the AdS\({}_{5}\) vacuum far from the black hole and the horizon vacuum of AdS\({}_{2}\)-type. We do this directly in 5D, without the dimensional reduction to 4D pursued by some researchers [30; 26].
* We construct _conserved charges_ in terms of flux integrals. They are not specifically defined asymptotically far from the black hole, nor in the horizon region. Rather, they can be evaluated on any member of a nested family of surfaces that interpolate between the horizon and a homologous asymptotic surface. This is essential for the attractor flow. Technically, we construct these conserved charges using the covariant Noether-Wald procedure. This is similar to what is done in [50], although their focus is on the near-horizon region. One can also take the approach of a generalized Komar integral [51; 52; 53; 54; 55; 56], where a specific gauge must be chosen to deal with diverging asymptotics in AdS spacetimes.
* Our work incorporates _general special geometry_, including symplectic invariance. As such, we incorporate general cubic prepotentials. We provide a reader-friendly guide to the widely studied STU and minimal SUGRA models that correspond to special cases. For a recent study on the attractor mechanism in the STU model, see [57].
* We carry out the _entropy extremization_ procedure in complete detail, reconstructing all details of the horizon structure. This elucidates the relation between real and complex fields.
It may be useful to also note some aspects of the attractor mechanism for gauged supergravity that are interesting but _not_ developed in this article:
* We focus on electric black holes in _only five_ spacetime dimensions.
* We study _supersymmetric_ attractor flows but other extremal flows are interesting as well.
* We specialize to black holes with _equal angular momenta_, so our ansatz for the geometry preserves an \(S^{2}\) throughout the flow. More general AdS\({}_{5}\) black holes with unequal angular momenta, studied in [33; 34], depend on a complex structure on \(S^{1}\times S^{3}\) that evolves radially and is defined only in Euclidean signature [58; 59].
* We study AdS\({}_{5}\) supergravity with FI-gauging, so only \(U(1)\) gauge groups appear. This class of theories is technically simpler, because it is entirely specified by a linear superpotential, and there is no need for the moment map. Moreover, in this theory all scalar fields are neutral [60].
* We do not connect to standard aspects of the AdS\({}_{5}\)/CFT\({}_{4}\) correspondence, such as holographic renormalization, time-dependent sources, and so on. This direction was recently studied in [57].
We hope to expand on some of these omissions in future work.
This article is organized as follows. In section 2, we introduce the \(\mathcal{N}=2\) gauged supergravity action, the 5D form of the black hole solution, and its dimensional reduction to both 2D and 1D. section 3 is devoted to an analysis of Noether-Wald surface charges, and the subtleties of gauge invariant conserved charges for actions with Chern-Simons terms. section 4 is the longest and most detailed. We derive the first order differential equations imposed by preserved supersymmetry, in the context of our ansatz. We study the boundary conditions needed to solve the equations perturbatively, from both the horizon and the asymptotic AdS point of view. With both these perspectives, we recover the known black hole solutions by establishing truncation of the pertubative expansion. In section 5 we develop the entropy extremization formalism and compute all near horizon aspects of the black hole, including its entropy. We also construct a complexification of the near-horizon variables that elucidates some aspects of the solution. We conclude in section 6 with a discussion of open problems concerning the attractor flow in gauged supergravity and related topics. A series of appendices are devoted to technical details, conventions and notations regarding differential forms in Appendix A, real special geometry in B, and the supersymmetry conditions in Appendix C.
## 2 The Effective 2D Lagrangian
In this section we introduce the action of \(\mathcal{N}=2\) 5D supergravity with coupling to \(n_{V}\) vector multiplets and gauging by Fayet-Iliopoulos couplings, as well as its dimensional reduction to a 2D theory. This also serves to define conventions and notation. For additional details on real special geometry and supersymmetry we refer to Appendices B and C, respectively.
### The 5D theory
We study five dimensional \(\mathcal{N}=2\) gauged supergravity with bosonic action
\[S=\frac{1}{16\pi G_{5}}\int_{\mathcal{M}}\mathcal{L}_{5}+\frac{1}{8\pi G_{5}} \int_{\partial\mathcal{M}}d^{4}x\sqrt{|h|}\operatorname{Tr}K\, \tag{1}\]
where the 5D Lagrangian density is given by
\[\mathcal{L}_{5}=(-\mathcal{R}_{5}-2V)\star_{5}1-G_{IJ}F_{5}^{I} \wedge\star_{5}F_{5}^{J}+G_{IJ}dX^{I}\wedge\star_{5}dX^{J}-\frac{1}{6}c_{IJK} F_{5}^{I}\wedge F_{5}^{J}\wedge A_{5}^{K}. \tag{2}\]
We have included the subscript 5 to emphasize that we are in five dimensions and the five dimensional Hodge dual is given by \(\star_{5}\). The Gibbons-Hawking-York boundary term must be included to have a well-defined variation of the action (1) and is given by the trace of the second fundamental form \(K\) which is integrated over the induced metric \(h\) on the boundary. Other conventions and notations regarding differential forms and the Hodge dual are in Appendix A.
The field content includes the field strengths \(F_{5}^{I}=dA_{5}^{I}\), where \(I=1,\ldots,n\), and the scalars \(X^{I}\), which correspond to \(n-1\) physical scalars constrained via the following relation
\[\frac{1}{6}c_{IJK}X^{I}X^{J}X^{K}=1. \tag{3}\]
The scalar potential is given by
\[V=-c^{IJK}\xi_{I}\xi_{J}X_{K}=-\xi_{I}\xi_{J}\left(X^{I}X^{J}- \frac{1}{2}G^{IJ}\right)\, \tag{4}\]
where \(\xi_{I}\) are the real Fayet-Iliopoulos parameters. The scalars with lowered index
\[X_{I}=2G_{IJ}X^{J}\, \tag{5}\]
obey the analogous constraint
\[\frac{1}{6}c^{IJK}X_{I}X_{J}X_{K}=1\, \tag{6}\]
when closure relation (14) is satisfied. For further details on definitions, conventions and identities, we refer the reader to Appendix B.
Alternatively, the scalar potential can be expressed as
\[V=-\left(\frac{2}{3}W^{2}-\frac{1}{2}G^{IJ}D_{I}WD_{J}W\right)\, \tag{2.7}\]
where the superpotential \(W\) is
\[W=\xi_{I}X^{I}\, \tag{2.8}\]
and the Kahler covariant derivative \(D_{I}\) takes the constraint (2.6) into account. Using this form of the potential \(V\), the condition for a supersymmetric minimum becomes
\[D_{I}W=\xi_{I}-\frac{1}{3}X_{I}(\xi\cdot X)\underset{\text{min}}{=}0. \tag{2.9}\]
According to this equation, the asymptotic values of the scalars \(X_{I,\infty}\) must be parallel to \(\xi_{I}\), in the sense of real special geometry vectors, and the constraint (2.6) determines the proportionality constant between the two:
\[X_{I,\infty}=\left(\frac{1}{6}c^{JKL}\xi_{J}\xi_{K}\xi_{L}\right)^{-1/3}\xi_{I}. \tag{2.10}\]
The value of the potential \(V\) at the minimum must be related to the AdS\({}_{5}\) length scale \(\ell\) and the cosmological constant in the usual manner
\[V_{\infty}=-c^{IJK}\xi_{I}\xi_{J}X_{K,\infty}\equiv-6\ell^{-2}. \tag{2.11}\]
This gives the constraint
\[\frac{1}{6}c^{IJK}\xi_{I}\xi_{J}\xi_{K}=\ell^{-3}\, \tag{2.12}\]
on the FI-parameters \(\xi_{I}\) and the simple relation for the asymptotic values of the scalars
\[X_{I,\infty}=\ell\xi_{I}. \tag{2.13}\]
Thus the \(n_{V}+1\) independent FI-parameters \(\xi_{I}\) determine the asymptotic values \(X_{\infty}^{I}\) of the \(n_{V}\) scalars, as well as the AdS\({}_{5}\) scale \(\ell\).
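As a simple illustration: in the STU model with equal couplings \(\xi_{1}=\xi_{2}=\xi_{3}\equiv\xi\) (and assuming, as is standard for this model, that the raised structure constants are also \(c^{IJK}=|\epsilon^{IJK}|\)), the constraint (2.12) gives \(\xi^{3}=\ell^{-3}\), so \(\xi=\ell^{-1}\), and (2.13) then fixes \(X_{I,\infty}=1\), consistent with the constraint (2.6).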
For contrast, recall that in ungauged supergravity, the scalar fields are moduli as they experience no potential. Then their asymptotic values \(X_{I,\infty}\) far from the black hole are set arbitrarily by boundary conditions, which is related to the fact that the spacetime is asymptotically flat. The fact that the value of the scalars \(X_{I}\) at the _horizon_ is independent of the asymptotic values \(X_{I,\infty}\) is the attractor mechanism for BPS black holes in ungauged supergravity.
As we have seen, the present context is very different in that the asymptotic values of the scalars are set by the theory through the FI-parameters \(\xi_{I}\), rather than by boundary conditions. This is a generic feature of gauged supergravity theories with an asymptotically AdS vacuum. It precludes an attractor mechanism that is analogous to the one in asymptotically flat space. We will discuss this point in more depth in section 4 when we study the linear flow equations derived from supersymmetry.
The equations of motion \(\mathcal{E}_{\Phi}\), where \(\Phi\) is any field in the theory corresponding to the Lagrangian density (2), are the Einstein equation
\[\begin{split}\mathcal{E}_{g}&=R_{AB}-\tfrac{1}{2}g_{AB}R+G_{IJ}\left(F^{I}_{5,AC}F^{J,C}_{5,B}-\tfrac{1}{4}g_{AB}F^{I}_{5,CD}F^{J,CD}_{5}\right)\\ &\quad-G_{IJ}\left(\nabla_{A}X^{I}\nabla_{B}X^{J}-\tfrac{1}{2}g_{AB}\nabla_{C}X^{I}\nabla^{C}X^{J}\right)-g_{AB}V=0\,,\end{split} \tag{2.14}\]
and the matter equations for the Maxwell field \(A_{5}^{I}\) and the constrained scalars \(X^{I}\)
\[\mathcal{E}_{A^{I}} =d\,\big{(}G_{IJ}\star F_{5}^{J}\big{)}+\tfrac{1}{4}c_{IJK}F_{5}^{J}\wedge F_{5}^{K}=0, \tag{2.15}\] \[\mathcal{E}_{X^{I}} =-d\star dX_{I}+\tfrac{1}{3}X_{I}X^{J}d\star dX_{J}+2c^{JKL}\xi_{K}\xi_{L}\,\big{(}\tfrac{2}{3}X_{I}X_{J}-c_{IJM}X^{M}\big{)}\star 1+\big{(}X_{J}X^{L}c_{IKL}\] \[\quad-\tfrac{1}{2}c_{IJK}-\tfrac{2}{3}X_{I}X_{J}X_{K}+\tfrac{1}{6}X_{I}c_{JKN}X^{N}\big{)}\,(F_{5}^{J}\wedge\star F_{5}^{K}-dX^{J}\wedge\star dX^{K})=0. \tag{2.16}\]
### The effective 2D theory
We do not study all solutions to the 5D theory (2.1), just stationary black holes. Then it is sufficient to consider a reduction to 2D -- and eventually to 1D. We impose the metric ansatz1
Footnote 1: In our conventions the metric has a mostly negative signature.
\[ds_{5}^{2}=ds_{2}^{2}-e^{-U_{1}}d\Omega_{2}^{2}-e^{-U_{2}}(\sigma_{3}+a^{0})^{2}\, \tag{2.17}\]
with \(ds_{2}^{2}\) a general 2D metric and the 1-form ansatz for the gauge potential
\[A_{5}^{I}=a^{I}+b^{I}(\sigma_{3}+a^{0})\ . \tag{2.18}\]
In our conventions, the left invariant 1-forms
\[\sigma_{1} =\sin\phi\,d\theta-\cos\phi\sin\theta\,d\psi\, \tag{2.19}\] \[\sigma_{2} =\cos\phi\,d\theta+\sin\phi\sin\theta\,d\psi\,\] \[\sigma_{3} =d\phi+\cos\theta\,d\psi\,\]
parametrize \(SU(2)\) with
\[0\leq\theta\leq\pi\,\qquad 0\leq\phi\leq 4\pi\,\qquad 0\leq\psi\leq 2\pi\,. \tag{2.20}\]
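With these conventions, one can check directly from (2.19) that
\[d\sigma_{1}=-\sigma_{2}\wedge\sigma_{3}\,\qquad d\sigma_{2}=-\sigma_{3}\wedge\sigma_{1}\,\qquad d\sigma_{3}=-\sigma_{1}\wedge\sigma_{2}\,\]
identities that are useful when reducing the field strengths over the squashed \(S^{3}\).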
The ansatz (2.17) suggests the vielbein
\[e^{0} =e^{0}_{\mu}dx^{\mu}\, e^{1} =e^{1}_{\mu}dx^{\mu}\, e^{2} =e^{-\tfrac{1}{2}U_{1}}\sigma_{1}\, \tag{2.21}\] \[e^{3} =e^{-\tfrac{1}{2}U_{1}}\sigma_{2}\, e^{4} =e^{-\tfrac{1}{2}U_{2}}(\sigma_{3}+a^{0})\.\]
We use Greek indices to denote the curved coordinates \(t\) and \(R\) in 2D. For extremal near-horizon geometries, the 2D coordinates describe the AdS\({}_{2}\) throat of the solution. The dimensional reduction via (2.17) and (2.18) of the 5D Lagrangian (2.2) introduces the scalar fields \(U_{1},U_{2}\) and \(b^{I}\), along with the 1-forms \(a^{0},a^{I}\). All these fields depend only on the 2D coordinates.
The effective 2D Lagrangian density that follows from (2.2) is given by
\[\mathcal{L}_{2} =\frac{\pi}{G_{5}}e^{-U_{1}-\tfrac{1}{2}U_{2}}\Big{\{}(-\mathcal{R}_{2}+2e^{U_{1}}-\tfrac{1}{2}e^{2U_{1}-U_{2}})\star 1-\tfrac{1}{2}dU_{1}\wedge\star d\,(U_{1}+2U_{2}) \tag{2.22}\] \[\qquad-\tfrac{1}{2}e^{-U_{2}}da^{0}\wedge\star da^{0}-2V\star 1-G_{IJ}\Big{(}(da^{I}+b^{I}da^{0})\wedge\star(da^{J}+b^{J}da^{0})+e^{2U_{1}}b^{I}b^{J}\star 1\] \[\qquad+e^{U_{2}}db^{I}\wedge\star db^{J}-dX^{I}\wedge\star dX^{J}\Big{)}+\tfrac{1}{3}e^{U_{1}+\tfrac{1}{2}U_{2}}c_{IJK}\,\big{(}\tfrac{3}{2}b^{I}b^{J}da^{K}+b^{I}b^{J}b^{K}da^{0}\big{)}\,\Big{\}}\] \[\qquad+\frac{\pi}{G_{5}}d\,\Big{(}e^{-U_{1}-\tfrac{1}{2}U_{2}}\star d(2U_{1}+U_{2})-\tfrac{1}{6}c_{IJK}b^{I}b^{J}a^{K}\Big{)}\.\]
We denote the Ricci scalar of the reduced 2D metric by \({\cal R}_{2}\), and the Hodge dual is now the 2D one. The overall exponential factor \(e^{-U_{1}-\frac{1}{2}U_{2}}\) comes from the 5D metric on a deformed \(S^{3}\). The first line in (2.22) is due to the reduction of the 5D Ricci scalar, which introduces additional kinetic and potential terms for the scalars \(U_{1}\) and \(U_{2}\), as well as for the 1-form \(a^{0}\) at the beginning of the second line. The terms preceded by \(G_{IJ}\) are the reduction of the Maxwell field, which yields kinetic terms for the 1-forms \(a^{0},a^{I}\) and the scalars \(b^{I}\), and the reduction of the kinetic term of \(X^{I}\). The remainder of the third line of (2.22) is the Chern-Simons term. Finally, in the last line, there is a total derivative that is inconsequential for the equations of motion but is required in order that \({\cal L}_{2}\) (2.22) be the dimensional reduction of the 5D Lagrangian (2.2). The latter does not include the Gibbons-Hawking-York boundary term, i.e. the extrinsic curvature contribution that appears separately in (2.1).
Boundary terms present an important subtlety that we will return to repeatedly in our study. The Chern-Simons term in the 5D Lagrangian (2) is not manifestly gauge invariant, but it transforms to a total derivative under a gauge variation. Gauge invariance could be restored by introducing a total derivative in the action. Such a term does not change the equations of motion but the resulting theory is not covariant in 5D, so there is a tension between important principles. The bulk part of the 2D Lagrangian (2.22) is not only covariant, it is also manifestly gauge invariant: \(a^{I}\) appears only as the field strength \(da^{I}\). Manifest gauge invariance also applies to \(a^{0}\) which encodes 5D rotational invariance. These are benefits of reducing to 2D.
From the dimensionally reduced Lagrangian density (2.22), we can derive the equations of motion for the fields \(U_{1},U_{2},a^{0},a^{I}\) and \(b^{I}\). The solutions to these 2D equations of motion are solutions of the 5D theory. The field equations for the 2D scalar fields are given by
\[\begin{split}{\cal E}_{U_{1}}&=d(e^{-U_{1}-\frac{1} {2}U_{2}}\star(dU_{1}+dU_{2}))+e^{-U_{1}-\frac{1}{2}U_{2}}\{\big{(}{\cal R}_{2 }+2V-\frac{1}{2}e^{2U_{1}-U_{2}}\big{)}\star 1\\ &\quad+\frac{1}{2}dU_{1}\wedge\star(dU_{1}+2dU_{2})+\frac{1}{2}e^ {-U_{2}}da^{0}\wedge\star da^{0}+G_{IJ}\big{(}(b^{I}da^{0}+da^{I})\wedge\star( b^{J}da^{0}+da^{J})\\ &\quad-e^{2U_{1}}b^{I}b^{J}\star 1+e^{U_{2}}db^{I}\wedge\star db ^{J}-dX^{I}\wedge\star dX^{J}\big{)}\}=0\,,\end{split} \tag{2.23}\]
\[\begin{split}{\cal E}_{U_{2}}&=d(e^{-U_{1}-\frac{1} {2}U_{2}}\star dU_{1})+\frac{1}{2}e^{-U_{1}-\frac{1}{2}U_{2}}\{\big{(}{\cal R}_ {2}+2V-2e^{U_{1}}+\frac{3}{2}e^{2U_{1}-U_{2}}\big{)}\star 1\\ &\quad+\frac{1}{2}dU_{1}\wedge\star(dU_{1}+2dU_{2})+\frac{3}{2}e ^{-U_{2}}da^{0}\wedge\star da^{0}+G_{IJ}\big{(}(b^{I}da^{0}+da^{I})\wedge\star( b^{J}da^{0}+da^{J})\\ &\quad+e^{2U_{1}}b^{I}b^{J}\star 1-e^{U_{2}}db^{I}\wedge\star db ^{J}-dX^{I}\wedge\star dX^{J}\big{)}\}=0\,,\end{split} \tag{2.24}\]
\[\begin{split}{\cal E}_{b^{I}}&=2d(e^{-U_{1}+\frac{1} {2}U_{2}}G_{IJ}\star db^{J})-2e^{-U_{1}-\frac{1}{2}U_{2}}G_{IJ}da^{0}\wedge \star(b^{J}da^{0}+da^{J})-2G_{IJ}e^{U_{1}-\frac{1}{2}U_{2}}b^{J}\star 1\\ &\quad+c_{IJK}b^{J}da^{K}+c_{IJK}b^{J}b^{K}da^{0}=0\,,\end{split} \tag{2.25}\]
and the 1-forms satisfy
\[\begin{split}{\cal E}_{a^{0}}&=-d(e^{-U_{1}-\frac{3} {2}U_{2}}\star da^{0})-2d(G_{IJ}b^{I}e^{-U_{1}-\frac{1}{2}U_{2}}\star(b^{J}da^{ 0}+da^{J}))+\frac{1}{3}c_{IJK}d(b^{I}b^{J}b^{K})=0\,\\ {\cal E}_{a^{I}}&=-2d\left(e^{-U_{1}-\frac{1}{2}U_{2} }G_{IJ}\star\left(b^{J}da^{0}+da^{J}\right)\right)+\frac{1}{2}c_{IJK}d\left(b^{ J}b^{K}\right)=0\.\end{split} \tag{2.26}\]
### An effective 1D theory
We conclude the section by reducing the 2D Lagrangian (2.22) to a one-dimensional radial effective theory, in which all functions appearing in (2.22) depend only on the radial coordinate \(R\). In this additional reduction we pick a diagonal gauge for the 2D line element of (2.17):
\[ds_{2}^{2}=e^{2\rho}dt^{2}-e^{2\sigma}dR^{2}. \tag{2.27}\]
The operators \(d\) and \(\star\) acting on the fields in the Lagrangian (2.22) simplify with this ansatz. For example, the 2D Ricci scalar becomes
\[\mathcal{R}_{2}=2e^{-\rho-\sigma}\partial_{R}(e^{-\sigma}\partial_{R}e^{\rho} )\,. \tag{2.28}\]
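As a quick illustration of (2.28), consider a line element of the AdS\({}_{2}\) type that we will encounter in section 4, with \(e^{\rho}=R^{2}\) and \(e^{\sigma}=R^{-1}\). Then
\[\mathcal{R}_{2}=2e^{-\rho-\sigma}\partial_{R}(e^{-\sigma}\partial_{R}e^{\rho})=2R^{-1}\,\partial_{R}\left(2R^{2}\right)=8\,\]
a constant, as appropriate for a maximally symmetric 2D geometry.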
Second derivatives are awkward so it is advantageous to rewrite this term as
\[e^{\rho+\sigma-U_{1}-\frac{1}{2}U_{2}}\mathcal{R}_{2}=2\partial _{R}\left(e^{\rho-\sigma-U_{1}-\frac{1}{2}U_{2}}\partial_{R}\rho\right)+e^{ \rho+\sigma-U_{1}-\frac{1}{2}U_{2}}(e^{-2\sigma}\partial_{R}\rho)\partial_{R} \left(2U_{1}+U_{2}\right)\ . \tag{2.29}\]
The first term is a total derivative, an additional boundary term. To examine the total boundary contribution, we consider a constant radial slice at infinity. As we are now reducing to 1D, the boundary terms must be evaluated at the bounds for the time coordinate. This is trivial since there is no explicit time dependence. After dimensional reduction, the Gibbons-Hawking-York term in (2.1) corresponds to the total derivative
\[\mathcal{L}_{\text{GHY}}=\frac{2\pi}{G_{5}}\partial_{R}\left(e^{\rho-\sigma-U _{1}-\frac{1}{2}U_{2}}\partial_{R}(\rho-U_{1}-\tfrac{1}{2}U_{2})\right)\,. \tag{2.30}\]
The total derivative term in (2.29), the Gibbons-Hawking-York term (2.30), and the boundary terms in the last line of (2.22) after dimensional reduction to 1D, precisely cancel. This leaves only the contribution arising from the Chern-Simons term
\[\mathcal{L}_{\text{bdry}}=-\frac{1}{6}\frac{\pi}{G_{5}}d\left(c_{IJK}b^{I}b^{J }a_{t}^{K}\right)\,. \tag{2.31}\]
This remaining boundary term in (2.31) is crucial as it will affect the conserved charges we seek to compute. We will comment on this in depth in section 3. In summary, the 1D Lagrangian density takes the form
\[\begin{split}\mathcal{L}_{1}&=\frac{\pi}{G_{5}}e^{ \rho+\sigma-U_{1}-\frac{1}{2}U_{2}}\left[-e^{-2\sigma}(\partial_{R}\rho) \partial_{R}\left(2U_{1}+U_{2}\right)-\tfrac{1}{2}e^{-2\sigma}(\partial_{R}U _{1})(\partial_{R}U_{1}+2\partial_{R}U_{2})\right.\\ &\quad\left.-G_{IJ}e^{-2\sigma}\left(\partial_{R}X^{I}\partial_{ R}X^{J}-e^{U_{2}}\partial_{R}b^{I}\partial_{R}b^{J}\right)+\tfrac{1}{2}e^{-U_{2}-2 \rho-2\sigma}(\partial_{R}a_{t}^{0})^{2}\right.\\ &\quad\left.+G_{IJ}e^{-2\rho-2\sigma}(\partial_{R}a_{t}^{I}+b^{I }\partial_{R}a_{t}^{0})(\partial_{R}a_{t}^{J}+b^{J}\partial_{R}a_{t}^{0})-2V+ 2e^{U_{1}}-\tfrac{1}{2}e^{2U_{1}-U_{2}}\\ &\quad\left.-G_{IJ}e^{2U_{1}}b^{I}b^{J}\right]+\frac{\pi}{G_{5}} \frac{1}{3}c_{IJK}\left[-\frac{3}{2}b^{I}b^{J}\partial_{R}a_{t}^{K}-b^{I}b^{J }b^{K}\partial_{R}a_{t}^{0}\right]\,.\end{split} \tag{2.32}\]
Having established the effective Lagrangian in 2D (2.22) and 1D (2.32), we proceed in the next section with the construction of the Noether-Wald surface charges in our theory.
## 3 Noether-Wald surface charges
In this section, we review the Noether-Wald procedure for computing the conserved charge due to a general symmetry [61; 62]. We specifically consider an isometry generated by a Killing vector and a gauge symmetry in the presence of Chern-Simons terms. In each case, we express the conserved charge as a flux integral that is the same for any surface surrounding the black hole.
### The Noether-Wald surface charge: general formulae
We consider a theory in \(D\) dimensions described by a Lagrangian \(\mathcal{L}\) that is presented as a \(D\)-form. The Lagrangian depends on fields \(\Phi_{i}\) that include both the metric \(g_{\mu\nu}\) and matter fields, as well as the derivatives of these fields.
A symmetry \(\zeta\) is such that the variation of \(\mathcal{L}\) with respect to \(\zeta\) is locally exact, i.e. it is \(d\) acting on a \((D-1)\)-form \(\mathcal{J}_{\zeta}\):
\[\begin{array}{ccc}\mathcal{L}&\underset{\zeta}{\rightarrow}&\mathcal{L}+\delta\mathcal{L}=\mathcal{L}+d\mathcal{J}_{\zeta}\.\end{array} \tag{3.1}\]
The variation of the Lagrangian due to _any_ change in the fields is given by2
Footnote 2: In practice, when we solve for the Einstein equations, we will consider a variation of the metric and will not directly use (3.2).
\[\begin{array}{ccc}\delta\mathcal{L}&=&\delta\Phi_{i}\frac{\partial\mathcal{L}}{\partial\Phi_{i}}+(\partial_{\mu}\delta\Phi_{i})\frac{\delta\mathcal{L}}{\delta\partial_{\mu}\Phi_{i}}\\ &=&\delta\Phi_{i}\left[\frac{\partial\mathcal{L}}{\partial\Phi_{i}}-\partial_{\mu}\left(\frac{\delta\mathcal{L}}{\delta\partial_{\mu}\Phi_{i}}\right)\right]+\partial_{\mu}\left(\delta\Phi_{i}\frac{\delta\mathcal{L}}{\delta\partial_{\mu}\Phi_{i}}\right)\.\end{array} \tag{3.2}\]
The usual variational principle determines the equations of motion \(\mathcal{E}_{\Phi}\) as the vanishing of the expression in the square bracket. The remaining term, by definition, is the total derivative of the presymplectic potential
\[\Theta^{\mu}\equiv\delta\Phi_{i}\frac{\delta\mathcal{L}}{\delta\partial_{\mu}\Phi_{i}}. \tag{3.3}\]
In our informal notation, the left hand side of this equation is indistinguishable from a vector. However, the Lagrangian is a \(D\)-form and the \(\delta\)-type "derivative" removes an entire 1-form. Therefore, the presymplectic potential \(\Theta\) is a \((D-1)\)-form, with indices obtained by contracting the volume form with the vector that is normal to the boundary. A more precise version of (3.2) reads
\[\delta\mathcal{L}=\delta\Phi_{i}\left[\frac{\partial\mathcal{L}}{\partial\Phi_{i}}-\partial_{\mu}\left(\frac{\delta\mathcal{L}}{\delta\partial_{\mu}\Phi_{i}}\right)\right]+d\Theta[\Phi_{i},\delta\Phi_{i}]. \tag{3.4}\]
Comparing this formula for a general variation with its analogue (3.1) for a symmetry establishes \(d\mathcal{J}_{\zeta}=d\Theta\) and so the \((D-1)\)-form
\[J_{\zeta}=\mathcal{J}_{\zeta}-\Theta[\Phi_{i},\delta\Phi_{i}] \tag{3.5}\]
is closed when the equations of motion \({\cal E}_{\Phi}\) are imposed. This identifies the familiar conserved Noether current associated to the symmetry \(\zeta\). The corresponding Noether charge is
\[Q_{\zeta,{\rm Noether}}=\int_{\Sigma}J_{\zeta}\, \tag{3.6}\]
where \(\Sigma\) is a Cauchy surface on the background manifold. Conservation amounts to this charge being the same on all Cauchy surfaces. Conceptually, the total charge is the same at all times. That is the point of conservation in a truly dynamical setting, but it is not terribly interesting in a stationary black hole spacetime which is, by definition, independent of time.
For black holes it is important that, given the closed \((D-1)\) form \(J_{\zeta}\), there exists a \((D-2)\)-form \(Q_{\zeta}\) such that
\[J_{\zeta}\cong dQ_{\zeta}. \tag{3.7}\]
The \(Q_{\zeta}\) is the Noether-Wald _surface_ charge. It amounts to a conserved _flux_ in the sense of Gauss' law: integration of the flux over any surface enclosing the source gives the same result.
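A familiar example may help orient the reader: in pure Maxwell theory, \(\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\star 1\), the gauge symmetry \(\delta_{\alpha}A_{\mu}=\partial_{\mu}\alpha\) gives \(\delta_{\alpha}\mathcal{L}=0\) and \(\Theta^{\mu}=-F^{\mu\nu}\partial_{\nu}\alpha\), so the Noether current is
\[J^{\mu}_{\alpha}=-\Theta^{\mu}=F^{\mu\nu}\partial_{\nu}\alpha\cong\partial_{\nu}\left(F^{\mu\nu}\alpha\right)\,\]
using \(\partial_{\nu}F^{\mu\nu}=0\) on-shell. The surface charge is then \(Q_{\alpha}\sim\oint d\Sigma_{\mu\nu}F^{\mu\nu}\alpha\) which, for constant \(\alpha\), is the usual Gauss-law electric charge (up to convention-dependent signs and normalizations).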
The surface charge \(Q_{\zeta}\) is more subtle than the conserved charge integrated over an entire Cauchy surface. The semi-equality \(\cong\) reminds us that, generally, the closed form \(J_{\zeta}\) is only \(d\) of something locally. Therefore the charge \(Q_{\zeta}\) is only defined up to \(d\) of some \((D-3)\)-form, and it does not necessarily satisfy Gauss' law.
One way around this is to evaluate the surface charge at infinity. For example, the presence of a Chern-Simons term can be interpreted physically as a charge density that obstructs flux conservation but this contribution is subleading at infinity and will not contribute to \(Q_{\zeta,{\rm Noether}}\).
Alternatively, following [52; 53; 55; 56; 63], we can modify our definition of the surface charge by adding a \(D-2\) form to \(Q_{\zeta}\). This new surface charge satisfies a Gauss law and can be integrated at any given surface \(\Sigma\).
A third approach [4] is the one taken in this paper: we compute the surface charges in a dimensionally reduced 2D theory.
Moreover, we integrate by parts such that, in the process of dimensional reduction to 2D, we ensure gauge invariance. We will carry this procedure out in subsection 3.4. Therefore, in this case, \(Q_{\zeta}\) will satisfy a Gauss law.
The procedure for computing the conserved charges is extremely general. In the following, we make the abstract procedure explicit for two particular symmetries: isometries generated by a spacetime Killing vector \(\xi\) and gauge symmetries \(\lambda\) in the presence of Chern-Simons terms.
### Killing vector fields
A Killing vector \(\xi\) generates a spacetime isometry. It transforms the Lagrangian as
\[\delta_{\xi}{\cal L}=L_{\xi}{\cal L}. \tag{3.8}\]
Here \(L_{\xi}\) is the Lie derivative along the Killing vector \(\xi\).
The Lie derivative acting on a general form \(\omega\) is given by Cartan's magic formula
\[L_{\xi}\omega=d(i_{\xi}\omega)+i_{\xi}d\omega. \tag{3.9}\]
Since \(\mathcal{L}\) is a \(D\)-form it must be closed, \(d\mathcal{L}=0\), and the Lie derivative becomes
\[\delta_{\xi}\mathcal{L}=L_{\xi}\mathcal{L}=d(i_{\xi}\mathcal{L})+i_{\xi}(d \mathcal{L})=d(\xi\cdot\mathcal{L})\, \tag{3.10}\]
where \(\cdot\) denotes the contraction of \(\xi\) with the first index of \(\mathcal{L}\). Comparing (3.10) with (3.1), we identify
\[\mathcal{J}_{\xi}=\xi\cdot\mathcal{L}\, \tag{3.11}\]
up to a closed form that is unimportant in our application. Thus, for a Killing vector \(\xi\), the Noether current (3.5) becomes
\[J_{\xi}=\xi\cdot\mathcal{L}-\Theta[\Phi,\mathcal{L}_{\xi}\Phi]. \tag{3.12}\]
A short computation shows that this current \((D-1)\)-form is closed on-shell. In other words, it is conserved when the equations of motion are satisfied.
### Incorporating gauge invariance
We now consider a _gauge invariant_ Lagrangian and compute the conserved current defined in (3.5), for the charges of the theory derived from either spacetime isometries or gauge invariance.
The relevant gauge invariant Lagrangian is the one defined in (2.1) _without_ the Chern-Simons term. In other words, we consider the Lagrangian density
\[\begin{split}\mathcal{L}_{5,\text{pot}}&=-\frac{1} {16\pi G_{5}}\sqrt{g_{5}}\left(\mathcal{R}_{5}+2V\right)\,,\\ \mathcal{L}_{5,\text{kin}}&=\frac{1}{16\pi G_{5}} \sqrt{g_{5}}\left(-\frac{1}{2}G_{IJ}F^{I}_{5,AB}F^{J,AB}_{5}+G_{IJ}\nabla^{A} X^{I}\nabla_{A}X^{J}\right)\.\end{split} \tag{3.13}\]
We use early capital Latin indices \(A,B,\ldots\) to denote 5D coordinates. The Lagrangian \(\mathcal{L}_{5,\text{kin}}+\mathcal{L}_{5,\text{pot}}\) is manifestly gauge invariant
\[\delta_{\alpha}(\mathcal{L}_{5,\text{pot}}+\mathcal{L}_{5,\text{kin}})=0\, \tag{3.14}\]
As detailed in the previous subsection, there is a conserved charge for any Killing vector that generates a spacetime isometry. According to (3.10), the Lagrangian \(\mathcal{L}_{5,\text{kin}}+\mathcal{L}_{5,\text{pot}}\) transforms as
\[\delta_{\xi}(\mathcal{L}_{5,\text{pot}}+\mathcal{L}_{5,\text{kin}})=\nabla_{ A}\left(\xi^{A}(\mathcal{L}_{5,\text{pot}}+\mathcal{L}_{5,\text{kin}})\right). \tag{3.15}\]
The presymplectic potential (3.3) for \(\mathcal{L}_{5,\text{pot}}\) is
\[\begin{split}\Theta^{A,5}_{\xi,\text{pot}}&=\frac{1} {16\pi G_{5}}\sqrt{g_{5}}\left(\nabla_{B}\nabla^{A}\xi^{B}+\nabla_{B}\nabla^{B} \xi^{A}-2\nabla^{A}\nabla_{B}\xi^{B}\right)\\ &=\frac{1}{16\pi G_{5}}\sqrt{g_{5}}\left(\nabla_{B}\left(\nabla^{ B}\xi^{A}-\nabla^{A}\xi^{B}\right)+2R^{AB}\xi_{B}\right)\,,\end{split} \tag{3.16}\]
where in the second line, we have used the commutator relation for two covariant derivatives. In addition, the presymplectic potential for the kinetic terms \(\mathcal{L}_{5,\text{kin}}\) is
\[\Theta^{A,5}_{\alpha,\xi,\text{kin}}=\frac{1}{16\pi G_{5}}\sqrt{g_{5}}G_{IJ} \left(2F_{5}^{I,AB}\left(\xi^{C}F_{5,CB}^{J}+\nabla_{B}\left(\xi^{C}A_{5,C}^{J }+\alpha_{5}^{J}\right)\right)\right)\,, \tag{3.17}\]
where we have used the variation
\[\delta A_{A,5}^{I}=\delta_{\xi}A_{5,A}^{I}+\delta_{\alpha}A_{5,A}^{I}=\xi^{B} F_{5,BA}^{I}+\nabla_{A}\left(\xi^{B}A_{5,B}^{I}\right)+\nabla_{A}\alpha^{I}\,. \tag{3.18}\]
Inserting the variations (3.14) and (3.15) and the presymplectic potentials given in (3.16) and (3.17) into the current density (3.5), we find
\[\begin{split} J^{A}_{\alpha,\xi}=\frac{1}{16\pi G_{5}}\sqrt{g_{5 }}\Big{[}-\nabla_{B}\left(\nabla^{B}\xi^{A}-\nabla^{A}\xi^{B}\right)-2\nabla_ {B}\left(G_{IJ}F^{I,AB}(\xi^{C}A_{C}^{J}+\alpha^{J})\right)\\ -2\xi_{B}\mathcal{E}_{g}^{B}-2\mathcal{E}_{J,A_{5}}^{A}(\xi^{C} A_{C}^{J}+\alpha^{J})\;\Big{]}\;,\end{split} \tag{3.19}\]
where the second line is proportional to the equations of motion \(\mathcal{E}_{g}^{B}\) and \(\mathcal{E}_{J,A_{5}}^{A}\) and vanishes on-shell, giving
\[J^{A}_{\alpha,\xi}=-\frac{1}{16\pi G_{5}}\sqrt{g_{5}}\nabla_{B}\Big{[}\left( \nabla^{B}\xi^{A}-\nabla^{A}\xi^{B}\right)+2\left(G_{IJ}F^{I,AB}(\xi^{C}A_{C} ^{J}+\alpha^{J})\right)\Big{]}. \tag{3.20}\]
The Noether-Wald surface charges of the theory can now be read off from the current (3.20). To find the conserved charges, we integrate over a surface \(\Sigma\) enclosing the source:
\[\begin{split} Q_{\alpha}&=-\frac{1}{8\pi G_{5}} \int_{\Sigma}d\Sigma_{AB}\sqrt{g_{5}}G_{IJ}F^{I,AB}\alpha^{J}\,,\\ Q_{\xi}&=-\frac{1}{16\pi G_{5}}\int_{\Sigma}d\Sigma _{AB}\sqrt{g_{5}}\left[\left(\nabla^{B}\xi^{A}-\nabla^{A}\xi^{B}\right)+2G_{ IJ}F^{I,AB}\xi^{C}A_{C}^{J}\right]\,.\end{split} \tag{3.21}\]
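For example, for a constant gauge parameter \(\alpha^{J}=\delta^{J}{}_{K}\), the charge \(Q_{\alpha}\) reduces to \(-\frac{1}{8\pi G_{5}}\int_{\Sigma}d\Sigma_{AB}\sqrt{g_{5}}\,G_{IK}F^{I,AB}\), the familiar Gauss-law electric charge associated with the gauge field \(A^{K}_{5}\).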
### Chern-Simons Terms
The charge \(Q_{\xi}\) that corresponds to angular momentum depends explicitly on the gauge field \(A^{J}\), whereas the electric charges \(Q_{\alpha}\) depend only on the field strength. When Chern-Simons terms are taken into account, \(Q_{\alpha}\) also depends on the gauge field \(A^{J}\). This gauge dependence renders the value of the charges ambiguous.
To address the situation, we dimensionally reduce the theory (2.1) to 2D, as was done in subsection 2.2 and express the resulting action as a covariant theory in 2D [4]. As part of the process, we must ensure that the field strength does not have a nonzero flux through the squashed sphere. This can be achieved by adding total derivatives before the dimensional reduction to remove the derivatives acting on the gauge potentials and gauge fields that act nontrivially through the squashed sphere.
We now show how this can be done. Let us consider the Lagrangian (3.13) along with the five-dimensional Chern-Simons term of the form
\[\mathcal{L}_{5,\text{CS}}=-\frac{1}{16\pi G_{5}}\frac{1}{6}c_{IJK}F_{5}^{I} \wedge F_{5}^{J}\wedge A_{5}^{K}\,\,. \tag{3.22}\]
We are interested in transforming (3.22) by the inclusion of total derivatives such that the potential term associated to the electric charge is manifestly gauge invariant. Note this procedure is not covariant in 5D and therefore we explicitly break covariance along the way. However, because of the dimensional reduction, the 2D Lagrangian still remains covariant.
We consider the ansatz in (2.17) such that the potential and gauge fields are of the form
\[\begin{split} A^{I}_{5}&=A^{I}_{5,A}dx^{A}=A^{I}_{5,\mu}dx^{\mu}+A^{I}_{5,a}dx^{a}\,\\ F^{I}_{5}&=\frac{1}{2}F^{I}_{5,AB}\,dx^{A}\wedge dx^{B}=\frac{1}{2}F^{I}_{5,\mu\nu}dx^{\mu}\wedge dx^{\nu}+F^{I}_{5,\mu a}dx^{\mu}\wedge dx^{a}+\frac{1}{2}F^{I}_{5,ab}dx^{a}\wedge dx^{b}\,\end{split} \tag{3.23}\]
where lowercase Latin indices denote the indices on the compact space and, as before, the Greek indices correspond to the 2D space. Expanding out the Chern-Simons term in component form using (3.23), there are two types of terms, with the following index structure: \(F^{I}_{\mu\nu}F^{J}_{bc}A^{K}_{a}\) and \(F^{I}_{\mu a}F^{J}_{bc}A^{K}_{\nu}\). Only for the second type must we transfer the derivative such that, in the process of dimensional reduction, we find it to be gauge invariant in the 2D theory. This means the integration by parts of this term takes the form
\[c_{IJK}\epsilon^{\mu abc\nu}F^{I}_{\mu a}F^{J}_{bc}A^{K}_{\nu}=2c_{IJK}\epsilon^{\mu abc\nu}\left(\partial_{\mu}(A^{K}_{\nu}A^{I}_{a}F^{J}_{bc})-(\partial_{\mu}A^{K}_{\nu})(A^{I}_{a}F^{J}_{bc})\right)\, \tag{3.24}\]
and the presymplectic potential is found to be
\[\begin{split}\Theta^{A,5}_{\alpha,\xi,\text{CS}}&=\frac{1}{16\pi G_{5}}\sqrt{g_{5}}\left[\frac{1}{6}c_{IJK}\,\epsilon^{ABCDE}F^{I}_{BC}A^{J}_{D}\left(\xi^{F}F^{K}_{FE}+\nabla_{E}(\xi^{F}A^{K}_{F})+\nabla_{E}\alpha^{K}\right)\right]\\ &\quad-\frac{1}{8\pi G_{5}}\sqrt{g_{5}}\left[c_{IJK}\epsilon^{Aabc\nu}A^{I}_{a}F^{J}_{bc}\nabla_{\nu}\alpha^{K}\right]\,,\end{split} \tag{3.25}\]
where the last term is the contribution of (3.24), coming from adding a total derivative. To investigate the current and the Noether-Wald surface charges, we proceed to dimensionally reduce over the squashed \(S^{3}\), maintaining covariance over the 2D spacetime. The 5D rotational isometries in \(\phi\) and \(\psi\) take on a different role in the 2D perspective: they become 2D gauge transformations of \(a^{0}\) and \(a^{I}\), the fields coming from the dimensionally reduced potential \(A^{I}\) (2.18).
### The 2D conserved charges
The 2D Lagrangian (2.22) inherits some symmetries from the 5D theory (2.2), including gauge symmetry associated with the 5D gauge potential \(A^{I}\) and rotational isometries associated to the Killing vectors \(\partial_{\phi}\) and \(\partial_{\psi}\). In the 2D theory, all symmetries become gauge symmetries and have associated charges. We denote by \(J\) the 2D charge originally coming from the 5D rotational isometries and by \(Q_{I}\) the 2D charges originally coming from the 5D gauge transformations. These 2D gauge transformations are associated to \(a^{0}\) and \(a^{I}\), as they come from the dimensionally reduced potential \(A^{I}\) (2.18). Therefore, we consider the following symmetries
\[\delta_{\lambda}a^{0}=d\lambda,\qquad\delta_{\chi}a^{I}=d\chi^{I}\, \tag{3.26}\]
with total corresponding conserved current
\[J_{\lambda,\chi}=J_{\lambda}+J_{\chi}=\sum_{i=\lambda,\chi}\left(\mathcal{J}_{i}- \Theta_{i}\right), \tag{3.27}\]
where \(J_{\lambda}\) and \(J_{\chi}\) are the currents corresponding to \(\lambda\) and \(\chi\), respectively, and the second equality is given by (3.5). The effective 2D Lagrangian (2.22) is manifestly gauge invariant and the variations with respect to each symmetry (3.26) yield
\[\delta_{\lambda}\mathcal{L}_{2}=d\mathcal{J}_{\lambda}=0\,\quad\delta_{\chi} \mathcal{L}_{2}=d\mathcal{J}_{\chi}=-\frac{\pi}{G_{5}}\frac{1}{6}c_{IJK}d \left(b^{I}b^{J}d\chi^{K}\right). \tag{3.28}\]
The presymplectic potentials given in (3.4) become
\[\Theta_{\lambda}=-\frac{\pi}{G_{5}}e^{-U_{1}-\frac{1}{2}U_{2}} \left[e^{-U_{2}}\star da^{0}+2G_{IJ}b^{I}\star\left(b^{J}da^{0}+da^{J}\right) \right]d\lambda+\frac{\pi}{G_{5}}\frac{1}{3}c_{IJK}b^{I}b^{J}b^{K}d\lambda\, \tag{3.29}\] \[\Theta_{\chi}=-\frac{\pi}{G_{5}}e^{-U_{1}-\frac{1}{2}U_{2}}\left[ 2G_{IJ}d\chi^{I}\wedge\star\left(b^{J}da^{0}+da^{J}\right)\right]+\frac{\pi}{G _{5}}\frac{1}{3}c_{IJK}b^{I}b^{J}d\chi^{K}. \tag{3.30}\]
We used the symmetries (3.26) and included the additional total derivative term (3.24). Using the equations of motion (2.26), the on-shell current (3.27) can be recast in the form of (3.7):
\[\begin{split} J_{\lambda,\chi}&\cong\frac{\pi}{G_{ 5}}d\left[\lambda\left(e^{-U_{1}-\frac{1}{2}U_{2}}\left[e^{-U_{2}}\star da^{0 }+2G_{IJ}b^{I}\star\left(b^{J}da^{0}+da^{J}\right)\right]-\frac{1}{3}c_{IJK}b^ {I}b^{J}b^{K}\right)\right.\\ &\qquad\qquad\left.+\chi^{I}\left(2e^{-U_{1}-\frac{1}{2}U_{2}}G_ {IJ}\star\left(b^{J}da^{0}+da^{J}\right)-\frac{1}{2}c_{IJK}b^{J}b^{K}\right) \right].\end{split} \tag{3.31}\]
The conserved charges \(J\) and \(Q_{I}\) can be directly read off from (3.31) and we find
\[\begin{split} J&=\frac{\pi}{G_{5}}\left[e^{-U_{1}- \frac{1}{2}U_{2}}\left[e^{-U_{2}}\star da^{0}+2G_{IJ}b^{I}\star\left(b^{J}da^{0 }+da^{J}\right)\right]-\frac{1}{3}c_{IJK}b^{I}b^{J}b^{K}\right]\,\\ Q_{I}&=\frac{\pi}{G_{5}}\left[2e^{-U_{1}-\frac{1}{2}U _{2}}G_{IJ}\star\left(b^{J}da^{0}+da^{J}\right)-\frac{1}{2}c_{IJK}b^{J}b^{K} \right]\.\end{split} \tag{3.32}\]
From now on, we use the rescaled charges
\[\widetilde{J}\equiv\frac{4G_{5}}{\pi}J\,\qquad\widetilde{Q}_{I}\equiv\frac{4G_{5}}{\pi}Q_{I}. \tag{3.33}\]
The charges and the current are indeed conserved including the charge associated to \(a^{I}\) since we demanded gauge invariance at the level of the Lagrangian in (2.22). This added a total derivative that shifted the charge but did not affect the equations of motion. Moreover, the charges computed in 2D are proportional to those computed in 5D. In the 1D reduction (2.32), the charges take the following form
\[\begin{split}&\widetilde{J}=4\left(e^{-U_{1}-\frac{3}{2}U_{2}- \rho-\sigma}\left(\partial_{R}a^{0}_{t}+2G_{IJ}e^{U_{2}}b^{I}\left(\partial_{R }a^{J}_{t}+b^{J}\partial_{R}a^{0}_{t}\right)\right)-\frac{1}{3}c_{IJK}b^{I}b^{ J}b^{K}\right)\,\\ &\widetilde{Q}_{I}=4\left(2e^{-U_{1}-\frac{1}{2}U_{2}-\rho-\sigma }G_{IJ}\left(\partial_{R}a^{J}_{t}+b^{J}\partial_{R}a^{0}_{t}\right)-\frac{1}{ 2}c_{IJK}b^{J}b^{K}\right)\.\end{split} \tag{3.34}\]
These formulae are essential for the radial flow in the black hole background. A _very_ rough reading is that each of the conserved charges \(\widetilde{J}\) and \(\widetilde{Q}_{I}\) is a radial derivative of its conjugate potential \(a_{t}^{0}\), \(a_{t}^{I}\), as in elementary electrodynamics. With this naive starting point, the overall factors depending on \(U_{1},U_{2},\rho\) and \(\sigma\) serve to take the non-flat spacetime into account, and \(G_{IJ}\) incorporates special geometry as required by symmetry. All remaining terms depend on \(b^{I}\) and take rotation into account in a manner that combines kinematics (rotation "looks" like a force) and electrodynamics (electric and magnetic fields mix in a moving frame). These effects defy simple physical interpretations.
From our point of view, the formulae (3.34) for the charges \(\widetilde{J}\) and \(\widetilde{Q}_{I}\) are complicated functions of various fields, each of which are themselves nontrivial functions of the radial coordinate. Our construction shows that symmetry guarantees that these _combinations_ must be independent of the radial position, within the framework of our _ansatz_.
In the following section we study the conditions that _supersymmetric_ AdS\({}_{5}\) black holes must satisfy. The vanishing of the supersymmetry variations of the theory for a subset of the supersymmetries always imposes first-order radial differential equations on the joint geometry/matter configuration. We refer to these first order equations as _flow equations_. They are very constraining but, as usual for first order equations imposed by supersymmetry, they are not sufficient to determine the solution. The raison d'etre of this entire section is that the additional data needed, sometimes referred to as the integrability conditions, is furnished by the conserved charges.
## 4 The flow equations
In this section we derive the first order flow equations for AdS\({}_{5}\) black holes. They follow from preservation of supersymmetry, complemented by conservation of the charges. We study the flow equations using two perturbative expansions: one starting at the near-horizon and one starting at the asymptotic boundary. Enforcing the conservation of charges at both the horizon and at the asymptotic boundary allows us to make contact between the two expansions.
### Supersymmetry conditions
We study bosonic backgrounds that preserve some supersymmetry [64; 32; 65]. Thus there exists a supersymmetric spinor \(\epsilon^{\alpha}\) for which the gravitino and the dilatino variations vanish. This condition amounts to
\[0 =\left[G_{IJ}\left(\frac{1}{2}\gamma^{AB}F^{J}_{AB}-\gamma^{A} \nabla_{A}X^{J}\right)\epsilon^{\alpha}-\xi_{I}\epsilon^{\alpha\beta}\epsilon^ {\beta}\right]\partial_{i}X^{I}\, \tag{4.1}\] \[0 =\left[(\partial_{A}-\frac{1}{4}\omega_{A}^{BC}\gamma_{BC})+ \frac{1}{24}(\gamma_{A}^{\ BC}-4\delta_{A}^{\ B}\gamma^{C})X_{I}F^{I}_{BC}\right] \epsilon^{\alpha}+\frac{1}{6}\xi_{I}(3A^{I}_{A}-X^{I}\gamma_{A})\epsilon^{ \alpha\beta}\epsilon^{\beta}\, \tag{4.2}\]
where \(\epsilon^{\alpha}\) (\(\alpha=1,2\)) are symplectic Majorana spinors. In the following, we recast these variations as radial flow equations.
For the analysis of supersymmetry, it is convenient to split the 5D spacetime geometry into \((1+4)\) dimensions as
\[ds_{5}^{2} =f^{2}(dt+w\sigma_{3})^{2}-f^{-1}ds_{4}^{2}\, \tag{4.3}\] \[ds_{4}^{2} =g_{m}^{-1}dR^{2}+\frac{1}{4}R^{2}(\sigma_{1}^{2}+\sigma_{2}^{2}+g _{m}\sigma_{3}^{2})\,. \tag{4.4}\]
This form highlights the 4D base space \(ds_{4}^{2}\) which is automatically Kahler. This can be shown by picking the flat vielbein
\[e^{1}=g_{m}^{-1/2}dR\,\quad e^{2}=\frac{1}{2}R\sigma_{1}\,\quad e^{3}=\frac{1}{ 2}R\sigma_{2}\,\quad e^{4}=\frac{1}{2}Rg_{m}^{1/2}\sigma_{3}\, \tag{4.5}\]
which gives the manifestly closed Kahler 2-form
\[J^{(1)}=\epsilon(e^{1}\wedge e^{4}-e^{2}\wedge e^{3}). \tag{4.6}\]
The symbol \(\epsilon=\pm 1\) denotes the orientation of the base manifold. It should not be confused with the supersymmetry parameter \(\epsilon^{\alpha}\).
The \((1+4)\) split of the 5D gauge potential \(A^{I}\) defined in (2.18) can be expressed as
\[A^{I}=fY^{I}(dt+w\sigma_{3})+u^{I}\sigma_{3}. \tag{4.7}\]
In the rest of the paper, as well as in Appendix C, we use lowercase Latin letters for the four spatial indices.
### Dictionary between the \((1+4)\) and the \((2+3)\) splits
We can relate the \((1+4)\) split introduced in the previous subsection, which simplifies the supersymmetry variations (4.1) and (4.2), to the \((2+3)\) split (2.17) and (2.18) that was used earlier to perform the reduction from 5D to 2D. The 5D geometry (2.17) with the diagonal gauge (2.27) for the 2D line element is
\[ds_{5}^{2}=e^{2\rho}dt^{2}-e^{2\sigma}dR^{2}-e^{-U_{1}}(\sigma_{1}^{2}+\sigma_ {2}^{2})-e^{-U_{2}}(\sigma_{3}+a^{0})^{2}. \tag{4.8}\]
By identifying the metric components of (4.3) and (4.8), we find the dictionary of variables in the \((2+3)\) split of the 5D line element \(ds_{5}^{2}\), expressed in terms of the variables in the \((1+4)\) split
\[e^{-U_{1}}=\frac{1}{4}R^{2}f^{-1}\,\qquad e^{-U_{2}}=\frac{1}{4}R^{2}g_{m}f^{-1}-f^{2}w^{2}\,\qquad b^{I}=fY^{I}w+u^{I}\, \tag{4.9}\] \[e^{2\sigma}=f^{-1}g_{m}^{-1}\,\qquad\quad a_{t}^{0}=\frac{-f^{2}w}{\frac{1}{4}R^{2}g_{m}f^{-1}-f^{2}w^{2}}\,\qquad e^{2\rho}=f^{2}+\frac{f^{4}w^{2}}{\frac{1}{4}R^{2}g_{m}f^{-1}-f^{2}w^{2}}\.\]
In this section we primarily use the \((1+4)\) variables \(fX^{I}\), \(u^{I}\), \(w\) and \(g_{m}\), along with the conserved charges \(Q_{I}\) and \(J\).
As noted in the previous subsection, the 4D base of the \((1+4)\) split (4.3) is automatically Kahler. In the variables of the \((2+3)\) split in (4.9), the Kahler condition amounts to the relation
\[e^{\sigma+\rho-U_{2}/2}=\frac{1}{2}R\, \tag{4.10}\]
between \(\rho\), \(\sigma\), and \(U_{2}\). This is explained further in Appendix C.1.
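As a consistency check on the dictionary (4.9), one finds
\[e^{2\rho}e^{-U_{2}}=f^{2}e^{-U_{2}}+f^{4}w^{2}=\frac{1}{4}R^{2}g_{m}f\,\qquad e^{2(\rho+\sigma)-U_{2}}=\left(f^{-1}g_{m}^{-1}\right)\cdot\frac{1}{4}R^{2}g_{m}f=\frac{1}{4}R^{2}\,\]
which is precisely the square of the Kahler condition (4.10).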
### The attractor flow equations
The preserved supersymmetries are defined by the projections on the spinors \(\epsilon^{\alpha}\)
\[\gamma^{0}\epsilon^{\alpha} =\epsilon^{\alpha}\, \tag{4.11}\] \[\frac{1}{4}J^{(1)}_{mn}\gamma^{mn}\epsilon^{\alpha} =-\epsilon^{\alpha\beta}\epsilon^{\beta}. \tag{4.12}\]
The \(J^{(1)}_{mn}\) are components of the Kahler form \(J^{(1)}\) (4.6), and the spatial gamma matrices \(\gamma^{m}\) satisfy the usual Clifford algebra. The details of the simplification of the equations (4.1) and (4.2) are presented in Appendix C. The result is the following set of differential conditions on the variables \(fX^{I}\), \(u^{I}\), \(w\) and \(g_{m}\)
\[0 =G_{IJ}\left(\partial_{R}(fY^{I})-\partial_{R}(fX^{I})\right) \partial_{i}X^{J}\, \tag{4.13}\] \[0 =\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}-\frac{1}{2} \epsilon f^{-1}c^{IJK}X_{J}\xi_{K}\,\] (4.14) \[0 =\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)w+\frac{1}{2}f^{-1} X_{I}\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)u^{I}\,\] (4.15) \[0 =-\epsilon R^{2}(\partial_{R^{2}}g_{m})+2\epsilon(1-g_{m})+2\xi_ {I}u^{I}. \tag{4.16}\]
The variation (4.13) allows for the electric potential \(fY^{I}\) in (4.7) to be identified with the scalar field \(fX^{I}\), and thus
\[A^{I}=fX^{I}(dt+w\sigma_{3})+u^{I}\sigma_{3}. \tag{4.17}\]
This sets the variables \(a^{I}\), \(a^{0}\) and \(b^{I}\) in the decomposition (2.18) to
\[a^{I}+b^{I}a^{0}=fX^{I}dt\,\ b^{I}=fX^{I}w+u^{I}\, \tag{4.18}\]
with \(a^{0}=a^{0}_{t}dt\) given in (4.9).
In the limit of ungauged supergravity, \(fX^{I}\) is a harmonic function. Then, the identification of \(X^{I}=Y^{I}\) means that the corresponding electric potential is also a harmonic function. We may expect the same functional dependence in the case of gauged supergravity.3
Footnote 3: There are \(n_{V}+1\) potentials \(Y^{I}\) and \(n_{V}\) scalars \(X^{I}\) so there is freedom to adjust a single integration constant that we do not exploit. It is unclear to us if this freedom is physically significant.
In the context of a black hole, solutions to the supersymmetry conditions (4.13-4.16) are specified in part by the conserved charges of the theory. The charges \(\widetilde{J}\) and \(\widetilde{Q}_{I}\) in (3.34) are expressed in terms of the \((2+3)\) variables \(U_{1},U_{2},b^{I},a^{0}_{t},a^{I}_{t}\) but we can recast them in terms of the \((1+4)\) variables \(f,u^{I},w,g_{m}\) using the dictionary for the geometry (4.9) and the potential (4.18). We can also remove most of the derivatives in the equations (3.34) for the charges \(\widetilde{J}\) and \(\widetilde{Q}_{I}\) using the radial equations for the variables \(u^{I}\), \(w\) and \(g_{m}\) (4.14-4.16). Our final expressions for the charges, which we use for the remainder of the section, are given by
\[\widetilde{J} =\widetilde{Q}_{I}u^{I}+\frac{2}{3}c_{IJK}u^{I}u^{J}u^{K}-R^{2}g_{m} \left(f^{-1}X\cdot u+2w-\frac{1}{2}\epsilon R^{2}f^{-3}\xi\cdot fX\right) \tag{4.19}\] \[\quad+2R^{2}w(1+\epsilon\xi\cdot u)\,\] \[\widetilde{Q}_{I} =-2c_{IJK}u^{J}u^{K}-2\epsilon wR^{2}\xi_{I}-g_{m}R^{4}\partial_{ R^{2}}(f^{-1}X_{I})\.\]
The second of these equations is a first order differential equation for \(f^{-1}X_{I}\). Together with the three radial differential equations (4.14-4.16) for the variables \(u^{I}\), \(g_{m}\) and \(w\), we find the four equations
\[\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I} =\frac{1}{2}\epsilon c^{IJK}(f^{-1}X_{J})\xi_{K}\, \tag{4.20a}\] \[\left(\partial_{R^{2}}+\frac{2}{R^{2}}\right)g_{m} =\frac{2}{R^{2}}+\frac{2}{R^{2}}\epsilon\xi_{I}u^{I}\,\] (4.20b) \[\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)w =-\frac{1}{2}f^{-1}X_{I}\left(\partial_{R^{2}}-\frac{1}{R^{2}} \right)u^{I}\,\] (4.20c) \[R^{4}\partial_{R^{2}}(f^{-1}X_{I}) =-\frac{1}{g_{m}}\left(\widetilde{Q}_{I}+2c_{IJK}u^{J}u^{K}+2 \epsilon wR^{2}\xi_{I}\right). \tag{4.20d}\]
We refer to the set of first order differential equations (4.20a-4.20d) as the attractor flow equations for black hole solutions to the theory (2.1).
### Solution of the attractor flow equations
The attractor flow equations (4.20a-4.20d) are first order differential equations. In this subsection we discuss the boundary conditions needed to specify their solutions completely. This turns out to be surprisingly subtle. We then solve the equations using perturbative expansions.
#### 4.4.1 Boundary conditions
The attractor flow equations (4.20a-4.20d) are first order differential equations with \(\xi_{I}\) and \(\tilde{Q}_{I}\) as given parameters. As such the superficial expectation is that the specification of all the unknown functions \(u^{I},g_{m},w,f^{-1}X_{I}\) at any coordinate \(R^{2}\) yields the corresponding derivatives at that position. Further iterations should then be sufficient to reconstruct the entire radial dependence, at least in principle. We seek to implement this strategy starting from either asymptotically AdS\({}_{5}\), or from the horizon. We consider each in turn.
For a solution to be asymptotically AdS\({}_{5}\), the metric ansatz (4.3) requires the leading order behavior \(f\to R^{0}\), \(g_{m}\to R^{2}\), and \(w\to R^{2}\) as \(R\to\infty\). With these boundary conditions for \(f\), \(g_{m}\), and \(w\), (4.20a) and (4.20d) yield \(u^{I}\to c^{IJK}\xi_{J}\xi_{K}R^{2}\) and \(X_{I}\to\xi_{I}\cdot R^{0}\) for the matter fields as \(R\to\infty\).
Alternatively, we can impose boundary conditions at the horizon of the black hole. There, the near-horizon geometry has a manifest AdS\({}_{2}\) factor of the form
\[ds_{2}^{2}=R^{4}dt^{2}-\frac{dR^{2}}{R^{2}}\, \tag{4.21}\]
and so \(g_{m}\to R^{0}\), \(f\to R^{2}\), and \(w\to R^{-2}\). With these leading asymptotics, the attractor flow equations (4.20a) and (4.20d) determine \(u^{I}\to R^{0}\) and \(X_{I}\to R^{0}\) near the horizon.
Staying with our superficial expectation, we would start from either asymptotically AdS\({}_{5}\) or from the AdS\({}_{2}\) horizon. Mathematically, one might worry that, because the differential equations are coupled and non-linear, the expansions could fail to converge. However, such a failure would most likely be incompatible with a black hole solution that is regular throughout the entire flow from asymptotically AdS\({}_{5}\) to the AdS\({}_{2}\) horizon, or _vice versa_. Nonlinearity does not appear to pose a conceptual challenge.
In order to study the unknown functions \(u^{I},g_{m},w\) and \(f^{-1}X_{I}\) around a regular point, we multiply by an appropriate factor of \(R^{2}\). Near the horizon we consider \(R^{2}u^{I},R^{2}g_{m},R^{2}w\), and \(R^{2}f^{-1}X_{I}\). At infinity, we expand the functions \(R^{-2}u^{I},R^{-2}g_{m},R^{-2}w\) and \(R^{-2}f^{-1}X_{I}\). After such rescalings the left hand sides of each of the attractor flow equations (4.20a-4.20d) will take the form
\[\left(\partial_{R^{2}}+\frac{\alpha+\beta}{R^{2}}\right)P=R^{-2 \beta}\left(\partial_{R^{2}}+\frac{\alpha}{R^{2}}\right)(R^{2\beta}P), \tag{4.22}\]
for some field \(P\) and some integers \(\alpha\) and \(\beta\) that can either be positive or negative. The challenge we will encounter repeatedly is that, when \(P\sim R^{-2(\alpha+\beta)}\), this expression vanishes. We refer to this situation as a _zero-mode_ of the perturbative expansion. What it means is that an attractor flow equation does not reveal a derivative, contrary to expectation. Instead, it yields a constraint between the unknown functions on the right hand side of the equation in question. This constraint will be nonlinear and, in general, difficult to implement. In other words, the initial value problem, at both the horizon and at asymptotic infinity, turns out to be unexpectedly complicated.
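To see the zero-mode explicitly, insert a power series \(P=\sum_{n}P_{(n)}R^{2n}\) into the left hand side of (4.22):
\[\left(\partial_{R^{2}}+\frac{\alpha+\beta}{R^{2}}\right)P=\sum_{n}\left(n+\alpha+\beta\right)P_{(n)}R^{2n-2}\,\]
so the single coefficient with \(n=-(\alpha+\beta)\) drops out of the left hand side. For instance, the operator \(\left(\partial_{R^{2}}-2/R^{2}\right)\) acting on \(R^{2}w=\sum_{n}w_{(n)}R^{2n}\) produces a factor \((n-2)\) and therefore annihilates the \(w_{(2)}\) term, as we will see explicitly in the horizon expansion below.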
In the following subsections we develop this general theme explicitly, first starting from the horizon, and then from asymptotic infinity. We subsequently merge the two perturbative expansions to seek a global understanding.
\begin{table}
\begin{tabular}{l|l|l}
 & \(R\to\infty\) & \(R\to 0\) \\ \hline
\(g_{m}\) & \(R^{2}\) & \(R^{0}\) \\ \hline
\(f\) & \(R^{0}\) & \(R^{2}\) \\ \hline
\(w\) & \(R^{2}\) & \(R^{-2}\) \\ \hline
\(u^{I}\) & \(R^{2}\) & \(R^{0}\) \\ \hline
\(X_{I}\) & \(R^{0}\) & \(R^{0}\) \\
\end{tabular}
Table 1: Leading asymptotics of the metric and matter functions at the AdS\({}_{5}\) boundary (\(R\to\infty\)) and at the horizon (\(R\to 0\)).
\end{table}
#### 4.4.2 Perturbative solution starting from the horizon
To satisfy the regularity conditions at the horizon, we take \(\beta=1\) in (4.22) and rewrite the attractor flow equations (4.20a-4.20d) as
\[\partial_{R^{2}}(R^{2}u^{I}) =\frac{1}{2}\epsilon c^{IJK}(R^{2}f^{-1}X_{J})\xi_{K}\, \tag{4.23a}\] \[\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)(R^{2}g_{m}) =2+\frac{2}{R^{2}}\epsilon\xi_{I}(R^{2}u^{I})\,\] (4.23b) \[\left(\partial_{R^{2}}-\frac{2}{R^{2}}\right)(R^{2}w) =-\frac{1}{2}R^{-2}(R^{2}f^{-1}X_{I})\left(\partial_{R^{2}}-\frac {2}{R^{2}}\right)(R^{2}u^{I})\,\] (4.23c) \[\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)(R^{2}f^{-1}X_{I}) =-R^{-2}g_{m}^{-1}\left(\widetilde{Q}_{I}+2R^{-4}c_{IJK}(R^{2}u^ {J})(R^{2}u^{K})+2\epsilon(R^{2}w)\xi_{I}\right). \tag{4.23d}\]
We then expand the unknown functions near the horizon. Since the radial dependence is of the form \(R^{2n}\) where \(n\) is some integer, the expansion can be written as
\[R^{2}u^{I} =\sum_{n=1}^{\infty}u^{I}_{(n)}R^{2n}\, \tag{4.24a}\] \[R^{2}g_{m} =\sum_{n=1}^{\infty}g_{m,(n)}R^{2n}\,\] (4.24b) \[R^{2}w =\sum_{n=0}^{\infty}w_{(n)}R^{2n}\,\] (4.24c) \[R^{2}f^{-1}X_{I} =\sum_{n=0}^{\infty}x_{I,(n)}R^{2n}. \tag{4.24d}\]
With the asymptotic structure of Table 1, the expansions for \(R^{2}u^{I}\) and \(R^{2}g_{m}\) do not start with a constant term. Moreover, with the horizon expansions above, the differential operators on the left hand sides of (4.23c) and (4.23d) are such that the coefficients \(w_{(2)}\) and \(x_{(1)}\) drop out. These coefficients are the zero-modes that make the initial value problem more complicated. There are no analogous zero-modes for \(u^{I}\) and \(g_{m}\).
To study the structure of the attractor equations (4.23a-4.23d), we temporarily treat the unknown scalar field \(f^{-1}X_{I}\) as a given function of the radial coordinate \(R\). Then the linear flow equation (4.23a), which is sourced by \(f^{-1}X_{I}\), yields all the coefficients \(u^{I}_{(n)}\) in terms of \(x_{I,(n)}\). At this point we know both of the functions \(f^{-1}X_{I}\) and \(u^{I}\) and then the attractor flow equation (4.23b) similarly yields the series coefficients \(g_{m,(n)}\) in terms of \(x_{I,(n)}\). Given all of \(f^{-1}X_{I}\), \(u^{I}\), and \(g_{m}\), it would seem straightforward to exploit (4.23c) and find all the coefficients \(w_{(n)}\) in terms of \(x_{I,(n)}\). This mostly works, but the zero-mode \(w_{(2)}\) can _not_ be determined this way. That is the obstacle where, as advertised, the derivatives are such that an expansion coefficient simply drops out.
The final flow equation (4.23d), due to the conserved charge \(Q_{I}\) (4.19), is crucial for the complete story. Assuming for a moment that the zero mode \(w_{(2)}\) is given as an initial condition, along with the entire function \(f^{-1}X_{I}\), this equation determines the expansion
parameters \(x_{I,(n)}\) in terms of the \(x_{I,(n)}\) themselves, and so the entire system would appear to be solved. However, this last equation also has a zero mode, \(x_{I,(1)}\), which cannot be determined by the iterative procedure. In short, a more careful analysis is required and, accordingly, this is what we proceed to do now: we solve for the expansion coefficients order by order, following the procedure we have outlined.
First, inserting the expansions (4.24a) and (4.24d) into the flow equations (4.23a), we find
\[\sum_{n=1}^{\infty}nu^{I}_{(n)}R^{2n-2}=\frac{\epsilon}{2}\sum_{n=0}^{\infty} c^{IJK}\xi_{K}x_{J,(n)}R^{2n}. \tag{4.25}\]
Comparing each order in \(R^{2}\) leads to a relation between the coefficients \(u^{I}_{(n)}\) and \(x_{I,(n)}\):
\[u^{I}_{(n)}=\frac{\epsilon}{2n}c^{IJK}\xi_{K}x_{J,(n-1)}\,\quad n \geq 1. \tag{4.26}\]
We insert this result in the flow equation (4.23b) for the expansion of \(g_{m}\) (4.24b) and find
\[\sum_{n=1}^{\infty}(n+1)g_{m,(n)}R^{2n-2}=2+\sum_{n=1}^{\infty} \frac{1}{n}c^{IJK}\xi_{I}\xi_{J}x_{K,(n-1)}R^{2n-2}. \tag{4.27}\]
Thus all expansion coefficients of \(g_{m}\) can be expressed in terms of the \(x_{I,(n)}\):
\[g_{m,(n)}=\frac{1}{n+1}\left(2\delta_{n,1}+\frac{1}{n}c^{IJK}\xi_{I}\xi_{J}x_{ K,(n-1)}\right),\quad n\geq 1\,. \tag{4.28}\]
The steps we have taken so far allow us to express the functions \(u^{I}\) and \(g_{m}\) solely in terms of \(f^{-1}X_{I}\). This was expected from the general discussion.
We next turn to (4.23c) and use the expansions (4.24a-4.24d) to obtain
\[\sum_{n=0}^{\infty}(n-2)w_{(n)}R^{2n-2}=-\frac{1}{2}\sum_{n=0}^{ \infty}\left(\sum_{k=0}^{n}(n-1-k)x_{I,(k)}u^{I}_{(n+1-k)}\right)R^{2n-2}. \tag{4.29}\]
Comparing powers of \(R^{2}\) we find
\[(n-2)w_{(n)}=-\frac{\epsilon}{4}\sum_{k=0}^{n}\frac{n-1-k}{n+1-k}c^{IJK}\xi_{ I}x_{J,(k)}x_{K,(n-k)}. \tag{4.30}\]
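For example, at \(n=0\) the relation (4.30) gives
\[-2w_{(0)}=\frac{\epsilon}{4}c^{IJK}\xi_{I}x_{J,(0)}x_{K,(0)}\qquad\Longrightarrow\qquad w_{(0)}=-\frac{\epsilon}{8}c^{IJK}\xi_{I}x_{J,(0)}x_{K,(0)}\,\]
so the leading rotation coefficient is fixed entirely by the horizon values of the scalars.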
We see explicitly that the differential equation (4.23c) fails to express the zero mode \(w_{(2)}\) in the expansion (4.24c) in terms of other data. However, the right hand side of (4.30) still reveals important information at \(n=2\) since it imposes a constraint on the \(x_{I,(n)}\) expansion coefficients
\[0=\frac{\epsilon}{3}c^{IJK}x_{I,(0)}x_{J,(2)}\xi_{K}. \tag{4.31}\]
Thus the determination of \(w_{(2)}\) is replaced by a constraint on the functions \(f^{-1}X_{I}\), which we have so far treated as given.
To make further progress it remains to study the constants of motion due to the conservation (4.19) of electric charge and angular momentum. The electric charge (4.23d) yields
\[-\sum_{n=1}^{\infty}\sum_{k=1}^{n}(n-1-k)g_{m,(k)}x_{I,(n-k)}R^{2n-2} \tag{4.32}\] \[\qquad\qquad=\widetilde{Q}_{I}+2c_{IJK}\sum_{n=1}^{\infty}\sum_{k =1}^{n}u^{J}_{(k)}u^{K}_{(n+1-k)}R^{2n-2}+2\epsilon\xi_{I}\sum_{n=0}^{\infty} w_{(n)}R^{2n}\.\]
Comparing each power of \(R^{2}\) we find the sequence of relations
\[-\sum_{k=1}^{n+1}(n-k)g_{m,(k)}x_{I,(n-k+1)}=\widetilde{Q}_{I}\delta_{n,0}+2c_ {IJK}\sum_{k=1}^{n+1}u^{J}_{(k)}u^{K}_{(n+2-k)}+2\epsilon\xi_{I}w_{(n)}\, \tag{4.33}\]
where it is understood in this relation that the coefficients \(g_{m,(k)}\), \(u^{I}_{(k)}\), and \(w_{(k\neq 2)}\) depend on the \(x_{I,(k)}\) through (4.26), (4.28), and (4.30). Thus the relations (4.33) constrain the given functions \(f^{-1}X_{I}\) significantly. Unfortunately, these constraints are nonlinear and difficult to solve.
For the constant order in the \(R^{2}\) expansion, we take \(n=0\) in (4.33) and find the electric charge
\[\widetilde{Q}_{I} =x_{I,(0)}\left(1+\tfrac{1}{2}c^{JKL}x_{J,(0)}\xi_{K}\xi_{L} \right)-\tfrac{1}{2}c_{IJK}c^{JML}c^{KNP}\xi_{M}\xi_{N}x_{L,(0)}x_{P,(0)} \tag{4.34}\] \[\quad+\tfrac{1}{4}\xi_{I}c^{LMN}x_{L,(0)}x_{M,(0)}\xi_{N}\.\]
The charges \(\widetilde{Q}_{I}\) depend only on the scalar fields at the horizon \(x_{I,(0)}\) and the FI-parameters \(\xi_{I}\). In fact, if positivity conditions are imposed on the \(x_{I,(0)}\), the relation (4.34) can be inverted to express the \(x_{I,(0)}\) in terms of the \(\widetilde{Q}_{I}\), allowing us to replace the pair of inputs \((\xi_{I},x_{I,(0)})\) by the pair \((\xi_{I},\widetilde{Q}_{I})\). This fits nicely with the understanding of the physical inputs and charges that go into defining the radial flow at every radial hypersurface.
To rewrite (4.34) in a more canonical form, we simplify the second term involving a triple product of \(c_{IJK}\) by contracting (B.3) with \(\xi_{M}\xi_{Q}x_{L,(0)}x_{P,(0)}\). This gives
\[\widetilde{Q}_{I}=x_{I,(0)}-\frac{1}{2}\xi_{I}\left(\frac{1}{2}c^{JKL}x_{J,(0 )}x_{K,(0)}\xi_{L}\right)+\frac{1}{2}c_{IJK}\left(\frac{1}{2}c^{JNO}\xi_{N} \xi_{O}\right)c^{KLM}x_{L,(0)}x_{M,(0)}. \tag{4.35}\]
This expression makes contact with the form of the charge given in [32].4
Footnote 4: The equation agrees with \(Q_{I}\) given in (3.53) of [32] with the following map between notations:
\[q_{I}=\frac{1}{3}x_{I,(0)},\ \bar{X}_{I}=\frac{1}{3}\ell\xi_{I}\,\ \bar{X}^{I}=\frac{1}{2}\ell^{2}c^{IJK}\xi_{J}\xi_{K}\,\ Q_{\rm there}=\frac{\pi}{4G}\widetilde{Q}_{\rm here}. \tag{4.36}\]
The improvement in our work is that we introduce the charge independently of the radial coordinate so it can be computed at any hypersurface we choose, which -- in this case -- is the black hole horizon.
Before analyzing the consequences of electric charge conservation (4.33) for \(n\geq 1\), we consider the analogous equations due to conservation of the black hole angular momentum
\(J\) (4.19). As the first line of (4.19) involves the scalar field \(X^{I}\), we must recast it in terms of the scalar field with a lowered index, as our expansion (4.24d) dictates. Utilizing (B.6), we have
\[f^{-3}fX^{I}=\frac{1}{2}c^{IJK}(f^{-1}X_{J})(f^{-1}X_{K}). \tag{4.37}\]
Introducing the near-horizon expansions (4.24a-4.24d) we find
\[\begin{split}\widetilde{J}\delta_{n,0}&=\widetilde {Q}_{I}u^{I}_{(n+1)}+\frac{2}{3}c_{IJK}\sum_{k=1}^{n+1}\sum_{\ell=1}^{k}u^{I}_{ (\ell)}u^{J}_{(k+1-\ell)}u^{K}_{(n+2-k)}-\sum_{k=0}^{n}\sum_{\ell=0}^{n-k}g_{m, (k+1)}x_{I,(\ell)}u^{I}_{(n+1-\ell-k)}\\ &\quad-2\sum_{k=0}^{n}g_{m,(k+1)}w_{(n-k)}+\frac{\epsilon}{4}c^{ IJK}\xi_{I}\sum_{k=0}^{n}\sum_{\ell=0}^{n-k}g_{m,(k+1)}x_{J,(\ell)}x_{K,(n-k- \ell)}\\ &\quad+2w_{(n)}+2\epsilon\xi_{I}\sum_{k=0}^{n}w_{(k)}u^{I}_{(n- k+1)}\.\end{split} \tag{4.38}\]
As before, it is understood that \(g_{m,(k)}\), \(u^{I}_{(k)}\), and \(w_{(k)}\) depend on the \(x_{I,(k)}\) according to (4.26), (4.28) and (4.30), and here we also need the explicit form of \(\widetilde{Q}_{I}\) (4.35). Thus angular momentum conservation gives another infinite set of relations between the \(x_{I,(k)}\). Unfortunately, they are even more nonlinear than their analogues for conservation of electric charge.
For \(n=0\), (4.38) gives the angular momentum expressed in terms of \(x_{I,(0)}\) and \(\xi_{I}\)
\[\begin{split}\widetilde{J}&=\frac{\epsilon}{4}c^{IJK}x_{I,(0)}x_{J,(0)}\xi_{K}-\frac{\epsilon}{4}c^{IJK}c^{LMN}\xi_{I}\xi_{N}\xi_{K}x_{J,(0)}x_{L,(0)}x_{M,(0)}\\ &\quad+\epsilon c_{IJK}c^{ILM}c^{JNO}c^{KPQ}\left(\tfrac{1}{8}x_{L,(0)}x_{P,(0)}x_{Q,(0)}\xi_{N}\xi_{O}\xi_{M}+\tfrac{1}{12}x_{L,(0)}x_{N,(0)}x_{P,(0)}\xi_{M}\xi_{O}\xi_{Q}\right)\,\end{split} \tag{4.39}\]
where we have used the value of \(\widetilde{Q}_{I}\) given in (4.35). To make contact with the form of the angular momentum in [32], we rewrite the formula as5
Footnote 5: This agrees with the angular momentum reported in (3.50) of [32] with the following map between conventions:
\[q_{I}=\frac{1}{3}x_{I,(0)}\,\ \bar{X}_{I}=\frac{1}{3}\ell\xi_{I}\,\ J_{\rm there}=\frac{\pi}{4G}\widetilde{J}_{\rm here}. \tag{4.40}\]
\[\widetilde{J}=\frac{\epsilon}{4}c^{IJK}x_{I,(0)}x_{J,(0)}\xi_{K}+\frac{1}{36} \left(c^{IJK}\xi_{I}\xi_{J}\xi_{K}\right)\left(c^{LMN}x_{L,(0)}x_{M,(0)}x_{N, (0)}\right). \tag{4.41}\]
Again, we are able to express the final result for the conserved charge entirely in terms of near horizon data. Moreover, since \(\widetilde{Q}_{I}\) and \(\widetilde{J}\) depend on the same integration constants \(x_{I,(0)}\), the charges are indeed not independent of each other.
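As a concrete illustration, specialize to the STU model with equal horizon values \(x_{I,(0)}=x\) and equal FI-couplings \(\xi_{I}=\ell^{-1}\) (assuming, as is standard for this model, \(c_{IJK}=c^{IJK}=|\epsilon_{IJK}|\)). Then (4.35) and (4.41) collapse to
\[\widetilde{Q}_{I}=x+\frac{x^{2}}{2\ell^{2}}\,\qquad\widetilde{J}=\frac{3\epsilon}{2}\frac{x^{2}}{\ell}+\frac{x^{3}}{\ell^{3}}\,\]
making explicit that the angular momentum is controlled by the same single parameter \(x\) that fixes the electric charges.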
We now turn to the \(n=1\) component of (4.33), i.e., electric charge conservation at order \(R^{2}\) away from the horizon. It amounts to
\[g_{m,(2)}x_{I,(0)}=4c_{IJK}u^{J}_{(1)}u^{K}_{(2)}+2\epsilon\xi_{I}w_{(1)}. \tag{4.42}\]
The absence of \(x_{I,(1)}\) in this equation is due to the zero-mode in (4.24d). However, a constraint on the \(x_{I,(1)}\) will follow, in analogy with the zero-mode \(w_{(2)}\) giving the condition (4.31). The values of \(g_{m,(2)}\), \(u^{I}_{(1)}\), \(u^{I}_{(2)}\), and \(w_{(1)}\) from (4.26), (4.28) and (4.30) give the vector relation
\[\left[\frac{1}{6}c^{JKL}\xi_{K}\xi_{L}x_{I,(0)}-\frac{1}{2}c_{IKM}c^{JLM}c^{KNP }\xi_{L}\xi_{N}x_{P,(0)}+\frac{1}{2}c^{JKL}\xi_{K}x_{L,(0)}\xi_{I}\right]x_{J,( 1)}=0. \tag{4.43}\]
We see that \(x_{I,(1)}\) is constrained even though it is a zero-mode of the differential operator. Using the cubic condition on the \(c_{IJK}\) (B.4), we can show that the matrix in square brackets has the null vector
\[x_{I,(1)}=\ell\xi_{I}. \tag{4.44}\]
It is unique, at least for generic structure constants \(c_{IJK}\) and generic charges, which are parametrized by \(x_{I,(0)}\). The constraint (4.43) does not determine the overall normalization. However, the scale of the radial coordinate \(R^{2}\) is arbitrary from the near horizon point of view, so the choice (4.44) involves no loss of generality.
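The null-vector property can also be verified by brute-force computer algebra. The following sketch is a minimal check, assuming the STU-model values \(c_{IJK}=|\epsilon_{IJK}|\) (used again below) together with \(c^{IJK}=|\epsilon^{IJK}|\) for the raised constants; it confirms symbolically that \(\xi_{J}\) annihilates the matrix in square brackets in (4.43) for arbitrary \(\xi_{I}\) and \(x_{I,(0)}\):

```python
import sympy as sp

# STU-model structure constants: c_{IJK} = |eps_{IJK}|; we assume the same
# values for the raised constants c^{IJK} (a property of this model).
def c(i, j, k):
    return 1 if sorted((i, j, k)) == [0, 1, 2] else 0

xi = sp.symbols('xi1:4')   # FI parameters xi_I
x0 = sp.symbols('x1:4')    # horizon scalars x_{I,(0)}
idx = range(3)

# The matrix in square brackets in (4.43).
def M(I, J):
    A = sp.Rational(1, 6) * sum(c(J, K, L) * xi[K] * xi[L]
                                for K in idx for L in idx) * x0[I]
    B = -sp.Rational(1, 2) * sum(c(I, K, Mm) * c(J, L, Mm) * c(K, N, P)
                                 * xi[L] * xi[N] * x0[P]
                                 for K in idx for Mm in idx for L in idx
                                 for N in idx for P in idx)
    C = sp.Rational(1, 2) * sum(c(J, K, L) * xi[K] * x0[L]
                                for K in idx for L in idx) * xi[I]
    return A + B + C

# Check that xi_J is a null vector: sum_J M_I^J xi_J should vanish for all I.
for I in idx:
    assert sp.expand(sum(M(I, J) * xi[J] for J in idx)) == 0
print("xi_J is a null vector of the matrix in (4.43) for the STU model")
```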
The \(n=2\) component of (4.33) gives another vector-valued relation
\[T_{I}^{J}x_{J,(2)}=2\epsilon\left(w_{(2)}+\frac{\epsilon}{2\ell}\right)\xi_{I }\, \tag{4.45}\]
where we have simplified using (4.44) and
\[T_{I}^{J}=-\left(1+\frac{1}{2}c^{KLM}\xi_{K}\xi_{L}x_{M,(0)}\right)\delta_{I}^ {J}+\frac{1}{12}c^{JKL}\xi_{K}\xi_{L}x_{I,(0)}-\frac{1}{3}c_{IKL}c^{KMJ}c^{LNP }\xi_{M}\xi_{N}x_{P,(0)}. \tag{4.46}\]
The matrix \(T_{I}^{J}\) is invertible so, given the inputs \(\xi_{I},x_{I,(0)},w_{(2)}\), the coefficient \(x_{J,(2)}\) is completely determined by (4.45). However, the value of \(x_{J,(2)}\) computed this way fails to satisfy the previously established constraint (4.31). This apparent contradiction can be avoided only if
\[x_{I,(2)}=0. \tag{4.47}\]
Because the left hand side of (4.45) vanishes, the right side requires
\[w_{(2)}=-\frac{\epsilon}{2\ell}. \tag{4.48}\]
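The invertibility of \(T_{I}^{J}\) claimed above can be spot-checked numerically. The sketch below (an illustration, not part of the original argument) evaluates the determinant of (4.46) for the STU model at a randomly chosen, hence generic, sample of \(\xi_{I}\) and \(x_{I,(0)}\).

```python
import random
import sympy as sp

random.seed(1)
xi = [sp.Rational(random.randint(1, 9), random.randint(1, 9)) for _ in range(3)]
x0 = [sp.Rational(random.randint(1, 9), random.randint(1, 9)) for _ in range(3)]

c = [[[1 if len({i, j, k}) == 3 else 0 for k in range(3)]
      for j in range(3)] for i in range(3)]  # STU model

cxx = sum(c[K][L][M] * xi[K] * xi[L] * x0[M]
          for K in range(3) for L in range(3) for M in range(3))

def T(I, J):
    """Matrix T_I^J of (4.46)."""
    t = -(1 + sp.Rational(1, 2) * cxx) * (1 if I == J else 0)
    t += sp.Rational(1, 12) * sum(c[J][K][L] * xi[K] * xi[L]
                                  for K in range(3) for L in range(3)) * x0[I]
    t -= sp.Rational(1, 3) * sum(
        c[I][K][L] * c[K][M][J] * c[L][N][P] * xi[M] * xi[N] * x0[P]
        for K in range(3) for L in range(3) for M in range(3)
        for N in range(3) for P in range(3))
    return t

print(sp.Matrix(3, 3, T).det())  # nonzero for this generic sample
```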
At this point, we can finally consider generic components of the electric charge conservation (4.33), i.e. the infinite set of equations \(n\geq 3\). The coefficients \(w_{(n\geq 3)}\) can be eliminated using (4.30) for \(w_{(n)}\). The \(u^{I}_{(n)}\) and \(g_{m,(n)}\) are similarly traded for \(x_{I,(n)}\), this time using (4.26) and (4.28). For all \(n\geq 3\) this gives
\[\begin{split}&-\sum_{k=0}^{n}\frac{n-1-k}{k+2}\left(2\delta_{0,k}+ \frac{1}{k+1}c^{JKL}\xi_{J}\xi_{K}x_{L,(k)}\right)x_{I,(n-k)}\\ &=\frac{1}{2}c_{IJK}c^{JLM}c^{KNP}\xi_{L}\xi_{N}\sum_{k=0}^{n} \frac{1}{(k+1)(n-k+1)}x_{M,(k)}x_{P,(n-k)}\\ &\quad-\frac{1}{2(n-2)}\xi_{I}\sum_{k=0}^{n}\frac{n-1-k}{n+1-k}c ^{JKL}\xi_{J}x_{K,(k)}x_{L,(n-k)}\.\end{split} \tag{4.49}\]
This messy expression can be reorganized as a recurrence relation giving \(x_{I,(n)}\) in terms of the preceding \(x_{I,(0\leq k\leq n-1)}\):
\[\begin{split}&\left[-\frac{n-1}{2}\left(2+c^{KLM}\xi_{K}\xi_{L}x_{M,(0)}\right)\delta_{I}^{J}+\frac{1}{(n+1)(n+2)}c^{JKL}\xi_{K}\xi_{L}x_{I,(0)}\right.\\ &\left.-\frac{1}{n+1}c_{IKL}c^{KMJ}c^{LPQ}\xi_{M}\xi_{P}x_{Q,(0)}-\frac{1}{(n-2)(n+1)}c^{JKL}\xi_{K}x_{L,(0)}\xi_{I}\right]x_{J,(n)}\\ &=\sum_{k=1}^{n-1}\left[\frac{(n-1-k)}{(k+1)(k+2)}c^{JKL}\xi_{J}\xi_{K}x_{L,(k)}x_{I,(n-k)}+\frac{1}{2}c_{IJK}c^{JLM}c^{KNP}\xi_{L}\xi_{N}x_{M,(k)}x_{P,(n-k)}\right.\\ &\left.-\frac{1}{2(n-2)}\xi_{I}\frac{n-1-k}{n+1-k}c^{JKL}\xi_{J}x_{K,(k)}x_{L,(n-k)}\right]\.\end{split} \tag{4.50}\]
The left hand side can be inverted, at least for some specific models of \(c_{IJK}\), such as the STU model (\(c_{IJK}=|\epsilon_{IJK}|\) for \(I,J,K\) running from 1 to 3). In such cases the recurrence relation (4.50) determines all higher-order \(x_{I,(n\geq 3)}\) in terms of the coefficients \(x_{I,(0)}\), \(x_{I,(1)}\) and \(x_{I,(2)}\) as well as the FI-parameters \(\xi_{I}\). In fact, the constraints (4.44) and (4.47) from low \(n\) will be sufficient to show that the series _truncates_ at \(n=2\). This is discussed in subsection 4.4.4.
At this point we have exhausted the information that comes from the conservation of the electric charge \(\widetilde{Q}_{I}\) in (4.33). We did not yet study the \(\widetilde{J}\) conservation relations (4.38). As noted already, the constant order \(n=0\) determines the angular momentum from a near horizon perspective. We have worked out the first few orders \(n\geq 1\) and found either redundant relations, involving already known coefficients such as \(x_{I,(0)}\), \(\xi_{I}\) and \(x_{I,(1)}\), or relations that tie together higher order \(x_{I,(k\geq 2)}\) with lower-order ones. We do not foresee any further constraints due to the \(\widetilde{J}\) relations.
In summary, starting from the near horizon region, we have exploited supersymmetry and found the entire black hole solution. The fields \(u^{I}\), \(g_{m}\) and \(w\) are given by (4.28), (4.26) and (4.30), with \(f^{-1}X_{I}\) determined by its expansion coefficients \(x_{I,(n)}\). Additionally, we computed the electric charges \(\widetilde{Q}_{I}\) and the angular momentum in terms of the horizon values of the scalars \(x_{I,(0)}\), and the subleading coefficients \(x_{I,(1)}\) which, according to (4.44), coincide with the FI-parameters \(\xi_{I}\) up to the factor \(\ell\).
#### 4.4.3 Perturbative solution starting from asymptotic AdS
We now adapt the approach from the previous subsection and expand the unknown functions \(u^{I}\), \(g_{m}\), \(w\) and \(f^{-1}X_{I}\) at large \(R\), near the asymptotic AdS\({}_{5}\) boundary.
Given the asymptotic behaviors listed in Table 1, regularity requires taking \(\beta=-1\) in
(4.22). We then recast the flow equations (4.20a-4.20d) as
\[\left(\partial_{R^{2}}+\frac{2}{R^{2}}\right)(R^{-2}u^{I}) =\frac{1}{2}\epsilon c^{IJK}(R^{-2}f^{-1}X_{J})\xi_{K}\, \tag{4.51a}\] \[\left(\partial_{R^{2}}+\frac{3}{R^{2}}\right)(R^{-2}g_{m}) =\frac{2}{R^{4}}+\frac{2}{R^{2}}\epsilon\xi_{I}(R^{-2}u^{I})\.\] (4.51b) \[\partial_{R^{2}}(R^{-2}w) =-\frac{1}{2}R^{2}(R^{-2}f^{-1}X_{I})\partial_{R^{2}}(R^{-2}u^{I} )\,\] (4.51c) \[\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)(R^{-2}f^{-1}X_{I}) =-\frac{R^{-8}}{(R^{-2}g_{m})}\left(\widetilde{Q}_{I}+2c_{IJK}R^{ 4}(R^{-2}u^{J})(R^{-2}u^{K})\right.\] (4.51d) \[\left.+2\epsilon R^{4}(R^{-2}w)\xi_{I}\right)\.\]
We define the perturbative expansions at infinity as
\[R^{-2}u^{I} =\sum_{n=0}^{\infty}\bar{u}^{I}_{(n)}R^{-2n}\, \tag{4.52a}\] \[R^{-2}g_{m} =\sum_{n=0}^{\infty}\bar{g}_{m,(n)}R^{-2n}\,\] (4.52b) \[R^{-2}w =\sum_{n=0}^{\infty}\bar{w}_{(n)}R^{-2n}\,\] (4.52c) \[R^{-2}f^{-1}X_{I} =\sum_{n=1}^{\infty}\bar{x}_{I,(n)}R^{-2n}. \tag{4.52d}\]
The bar distinguishes the expansion coefficients at the asymptotically AdS\({}_{5}\) boundary from their analogues at the horizon.
As before, we initially specify the entire series \(\bar{x}_{I,(n)}\). Additionally, examination of (4.51a-4.51d) shows that \(\bar{u}^{I}_{(2)}\), \(\bar{g}_{m,(3)}\), \(\bar{w}_{(0)}\), and \(\bar{x}_{I,(1)}\) do not appear on the left hand sides of the equations. These are the zero modes that we also regard as inputs, at least provisionally. Among the zero-modes, we can determine \(\bar{x}_{I,(1)}\) from the outset because they give the asymptotic values of the scalars
\[\bar{x}_{I,(1)}=\ell\xi_{I}\,, \tag{4.53}\]
as we found in (2.13), by extremizing the potential of gauged supergravity.
We now proceed to solve for the expansion coefficients of each variable, order by order. Starting with the \(u^{I}\) flow equation (4.20a), and using the expansions (4.52a) and (4.52d), we find
\[\sum_{n=0}^{\infty}(2-n)\bar{u}^{I}_{(n)}R^{-2n-2}=\frac{1}{2} \epsilon c^{IJK}\xi_{J}\sum_{n=1}^{\infty}\bar{x}_{K,(n+1)}R^{-2n-2}. \tag{4.54}\]
Comparing inverse powers of \(R^{2}\), we find \(\bar{u}^{I}_{(n)}\) for \(n\neq 2\):
\[(2-n)\bar{u}^{I}_{(n)}=\frac{1}{2}\epsilon c^{IJK}\xi_{J}\bar{x}_ {K,(n+1)}\,\quad n\geq 0\,. \tag{4.55}\]
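The order-by-order matching used here (and repeatedly below) can be illustrated in a one-modulus toy version of (4.54), where the constant \(k\) stands in for \(\frac{\epsilon}{2}c^{IJK}\xi_{J}\). The sympy sketch below (illustrative only) reproduces the pattern of (4.55), including the drop-out of the \(n=2\) zero mode discussed next.

```python
import sympy as sp

# In the variable y = 1/R^2 the operator (d/dR^2 + 2/R^2) acts on a
# series as -y^2 d/dy + 2y; k stands in for (epsilon/2) c^{IJK} xi_J.
y, k = sp.symbols('y k')
N = 6
u = sp.symbols(f'u0:{N}')         # coefficients ubar_{(0..5)}
xb = sp.symbols(f'xb1:{N + 2}')   # coefficients xbar_{(1..7)}

U = sum(u[n] * y**n for n in range(N))
X = sum(xb[n - 1] * y**n for n in range(1, N + 2))

residual = sp.expand((-y**2 * sp.diff(U, y) + 2 * y * U) - k * X)
for n in range(N):
    print(f'order y^{n + 1}:', sp.Eq(residual.coeff(y, n + 1), 0))
# (2-n)*u_n = k*xb_{n+1}; at n = 2 the u_2 term drops out and the
# equation degenerates into the constraint k*xb_3 = 0, cf. (4.56).
```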
The zero mode \(\bar{u}^{I}_{(2)}\) drops out of the equation. Instead, we find a vectorial constraint on \(\bar{x}_{I,(3)}\)
\[c^{IJK}\xi_{J}\bar{x}_{K,(3)}=0\,. \tag{4.56}\]
It has the obvious solution
\[\bar{x}_{I,(3)}=0\, \tag{4.57}\]
for all values of \(I\). This solution is unique if the matrix \(c^{IJK}\xi_{J}\) is non-singular; it is indeed invertible, with inverse
\[(c^{IJK}\xi_{J})^{-1}=\frac{1}{2}\ell^{3}\left(c_{IJK}c^{JLM}\xi_{L}\xi_{M}-\xi_{I}\xi_{K}\right)\,. \tag{4.58}\]
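For the STU model, the inverse (4.58) can be verified symbolically; the following sketch is illustrative and not part of the original derivation.

```python
import sympy as sp

# Verify (4.58) for the STU model, where c_{IJK} = c^{IJK} = |eps_{IJK}|
# and ell^{-3} = (1/6) c^{IJK} xi_I xi_J xi_K = xi1*xi2*xi3.
xi = sp.symbols('xi1:4', positive=True)
ell3 = 1 / (xi[0] * xi[1] * xi[2])

c = [[[1 if len({i, j, k}) == 3 else 0 for k in range(3)]
      for j in range(3)] for i in range(3)]

A = sp.Matrix(3, 3, lambda I, K: sum(c[I][J][K] * xi[J] for J in range(3)))

def Binv(I, K):
    s = sum(c[I][J][K] * c[J][L][M] * xi[L] * xi[M]
            for J in range(3) for L in range(3) for M in range(3))
    return sp.Rational(1, 2) * ell3 * (s - xi[I] * xi[K])

B = sp.Matrix(3, 3, Binv)
print(sp.simplify(A * B))  # -> identity matrix, confirming (4.58)
```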
Next, we consider the \(g_{m}\) flow equation (4.51b). The expansions (4.52a) and (4.52b) give
\[\sum_{n=0}^{\infty}(3-n)\bar{g}_{m,(n)}R^{-2n-2}=2R^{-4}+2\epsilon\xi_{I}\sum_{n=0}^{\infty}\bar{u}^{I}_{(n)}R^{-2n-2}\,. \tag{4.59}\]
The expansion coefficients \(\bar{g}_{m,(n)}\) -- with the exception of the zero mode \(\bar{g}_{m,(3)}\) -- can be expressed in terms of \(\bar{x}_{I,(n)}\) and the zero mode \(\bar{u}^{I}_{(2)}\) as
\[(3-n)\bar{g}_{m,(n)}=\left\{\begin{array}{cc}2\delta_{1,n}+\frac{1}{2-n}c^{IJK}\xi_{I}\xi_{J}\bar{x}_{K,(n+1)}&n\neq 2,\\ 2\epsilon\xi_{I}\bar{u}^{I}_{(2)}&n=2\,.\end{array}\right. \tag{4.60}\]
In compensation for not determining \(\bar{g}_{m,(3)}\), we find the constraint \(\xi_{I}\bar{u}^{I}_{(3)}=0\). Rewriting this constraint using (4.55) gives a projection on \(\bar{x}_{I,(4)}\)
\[c^{IJK}\xi_{I}\xi_{J}\bar{x}_{K,(4)}=0. \tag{4.61}\]
This constraint is a real special geometry scalar, unlike the vector-valued condition (4.56). It will nevertheless prove useful when simplifying results at large \(R\).
We now turn to the nonlinear flow equation for \(w\) (4.51c). After using the expansions (4.52a), (4.52c) and (4.52d), we find
\[\sum_{n=1}^{\infty}(n-1)\bar{w}_{(n-1)}R^{-2n}=-\frac{1}{2}\sum_{n=0}^{\infty}\sum_{k=1}^{n}(n-k)\bar{x}_{I,(k)}\bar{u}^{I}_{(n-k)}R^{-2n}\, \tag{4.62}\]
and so
\[(n-1)\bar{w}_{(n-1)}=-\frac{1}{2}\sum_{k=1}^{n}(n-k)\bar{x}_{I,(k)}\bar{u}^{I}_{(n-k)}\,\qquad n\geq 1\,. \tag{4.63}\]
For \(n=1\), the left hand side vanishes, so the zero-mode \(\bar{w}_{(0)}\) is undetermined. The right hand side also vanishes for \(n=1\), so in this case the equation with a zero-mode offers no additional information. We omit the \(n=1\) case and rewrite (4.63) as
\[\bar{w}_{(n)}=-\frac{1}{2n}\sum_{k=1}^{n}(n+1-k)\bar{x}_{I,(k)}\bar{u}^{I}_{(n+1-k)}\,\qquad n\geq 1. \tag{4.64}\]
We have refrained from eliminating \(\bar{u}^{I}_{(k)}\) in favor of \(\bar{x}_{I,(n)}\) via (4.55) because, generally, the equation involves the zero mode \(\bar{u}^{I}_{(2)}\), which cannot be removed this way.
The final flow equation (4.51d) was derived by combining supersymmetry with conservation of electric charge. Using the expansions (4.52a-4.52d), we find
\[\begin{split}& R^{-4}\sum_{n=0}^{\infty}\sum_{k=0}^{n}\bar{g}_{m,(k)}(n-k)\bar{x}_{I,(n+1-k)}R^{-2n}\\ &=R^{-4}\widetilde{Q}_{I}\delta_{n,2}+2c_{IJK}R^{-4}\sum_{n=0}^{\infty}\sum_{k=0}^{n}\bar{u}^{J}_{(k)}\bar{u}^{K}_{(n-k)}R^{-2n}+2\epsilon\xi_{I}R^{-4}\sum_{n=0}^{\infty}\bar{w}_{(n)}R^{-2n}\.\end{split} \tag{4.65}\]
For all \(n\geq 0\), this gives
\[\sum_{k=0}^{n}\bar{g}_{m,(k)}(n-k)\bar{x}_{I,(n+1-k)}=\widetilde{Q}_{I}\delta_{n,2}+2c_{IJK}\sum_{k=0}^{n}\bar{u}^{J}_{(k)}\bar{u}^{K}_{(n-k)}+2\epsilon\xi_{I}\bar{w}_{(n)}\,. \tag{4.66}\]
Again, we have chosen to maintain (4.66) as implicit functions of \(\bar{x}_{I,(n)}\) due to the presence of the zero modes.
It is worth examining the first few orders of (4.66) in detail. The \(n=0\) component of (4.66) gives
\[\xi_{I}\bar{w}_{(0)}=-\epsilon c_{IJK}\bar{u}^{J}_{(0)}\bar{u}^{K}_{(0)}=-\frac{\epsilon}{16}c_{IJK}c^{JLM}c^{KNP}\bar{x}_{L,(1)}\xi_{M}\bar{x}_{N,(1)}\xi_{P}=-\frac{\epsilon}{2\ell}\xi_{I}\, \tag{4.67}\]
where we have used (4.55) and (4.53). Thus, it provides the value of the zero mode \(\bar{w}_{(0)}\)
\[\bar{w}_{(0)}=-\frac{\epsilon}{2\ell}\,. \tag{4.68}\]
At the next order, the \(n=1\) component of (4.66) is redundant as it just confirms the value for \(\bar{w}_{(1)}\) already obtained from (4.64).
The \(n=2\) component of (4.66) is particularly important, because it relates the electric charge to the expansion parameters at infinity
\[\begin{split}\widetilde{Q}_{I}&=\bar{x}_{I,(2)}-\frac{1}{2}\xi_{I}\left(\frac{1}{2}c^{JKL}\bar{x}_{J,(2)}\bar{x}_{K,(2)}\xi_{L}\right)+\frac{1}{2}c_{IJK}\left(\frac{1}{2}c^{JNO}\xi_{N}\xi_{O}\right)c^{KLM}\bar{x}_{L,(2)}\bar{x}_{M,(2)}\\ &\quad+\epsilon\ell\left(\xi_{I}\xi_{J}-c_{IJK}c^{KLM}\xi_{L}\xi_{M}\right)\bar{u}^{J}_{(2)}\,,\end{split} \tag{4.69}\]
where we imposed (4.53) and (4.68) and recast the charge in a form similar to (4.35). Since the electric charge is conserved, the expression (4.69) for \(\widetilde{Q}_{I}\), written in terms of the expansion parameters at infinity, must be equal to its analogue (4.35) obtained from expansion near the horizon.
We have established that \(\widetilde{Q}_{I}\) at infinity has been determined with the only inputs necessary being the coefficients \(\bar{x}_{I,(2)}\), \(\xi_{I}\) and \(\bar{u}^{I}_{(2)}\). We now move on to the components of (4.66) for \(n\geq 3\), to establish the recursion relation for the coefficients at infinity.
For \(n=3\), (4.66) simplifies, after eliminating the \(g_{m}\), \(u^{I}\) and \(w\) coefficients using (4.60), (4.55) and (4.64), to
\[4\ell^{-2}\bar{x}_{I,(4)}+2\epsilon\left(\bar{x}_{I,(2)}\xi_{J}+\frac{1}{3}\xi_{I}\bar{x}_{J,(2)}-c_{IJK}c^{KLM}\xi_{L}\bar{x}_{M,(2)}\right)\bar{u}^{J}_{(2)}=0. \tag{4.70}\]
This relation shows that \(\bar{x}_{I,(4)}\) is determined entirely by \(\bar{x}_{I,(2)}\), \(\xi_{I}\) and \(\bar{u}_{(2)}^{I}\).
Furthermore, we note that for \(n\geq 4\), the simplification of (4.66) yields a generalization of (4.70)
\[\begin{split}&\frac{(n-1)^{2}}{n-2}\ell^{-2}\bar{x}_{I,(n+1)}\\ &=-(n-1)\left(1+\tfrac{1}{2}(c\cdot\xi\xi\bar{x}_{(2)})\right) \bar{x}_{I,(n)}-2\epsilon(n-2)\xi_{J}\bar{u}_{(2)}^{J}\bar{x}_{I,(n-1)}-(n-3) \bar{g}_{m,(3)}\bar{x}_{I,(n-2)}\\ &\quad-\sum_{k=4}^{n}\frac{n-k}{(k-2)(k-3)}(c\cdot\xi\xi\bar{x}_{ (k+1)})\bar{x}_{I,(n+1-k)}+4c_{IJK}\bar{u}_{(2)}^{J}\bar{u}_{(n-2)}^{K}\\ &\quad+2c_{IJK}c^{JLM}c^{KNP}\xi_{L}\xi_{N}{\sum_{k=1}^{n-1}}^{ \prime}\,\frac{\bar{x}_{M,(k+1)}\bar{x}_{P,(n-k+1)}}{4(k-2)(n-k-2)}+\xi_{I}{ \sum_{k=2}^{n}}^{\prime}\,\frac{n+1-k}{n-1-k}\frac{(c\cdot\xi\bar{x}_{(k)}\bar {x}_{(n-k+2)})}{2n}\,,\end{split} \tag{4.71}\]
where the primes on the summation symbols indicate that we exclude the terms in the sum with vanishing denominators. We have imposed the value of \(\bar{x}_{I,(1)}\) as given in (4.53), and products of the form \(c\cdot xyz\) indicate special geometry contractions under \(c^{JKL}\) of the form \(c^{JKL}x_{J}y_{K}z_{L}\). The expression (4.71) has been expanded to distinguish contributions coming from \(\bar{x}_{I,(n+1)}\), given by the left hand side of (4.71), and \(\bar{x}_{I,(2\leq k\leq n)}\), given by the right hand side of the equality in (4.71). It becomes clear that a given \(\bar{x}_{I,(n+1)}\) depends only on the expansion parameters \(\bar{x}_{I,(1)},\bar{x}_{I,(2)},\ldots,\bar{x}_{I,(n)}\). By recursion, i.e. applying (4.71) repeatedly, we can now determine all \(\bar{x}_{I,(4\leq k\leq n)}\) in terms of \(\bar{x}_{I,(2)}\), \(\xi_{I}\), \(\bar{u}_{(2)}^{I}\) and \(\bar{g}_{m,(3)}\).
Lastly, we analyse the conservation of angular momentum \(\widetilde{J}\) by expanding the first equation in (4.19) at infinity, making sure to rescale the functions \(u^{I}\), \(g_{m}\), \(w\) and \(f^{-1}X_{I}\) appropriately
\[\begin{split}\widetilde{J}&=R^{2}\widetilde{Q}_{I}(R^{-2}u^{I})+\tfrac{2}{3}R^{6}c_{IJK}(R^{-2}u^{I})(R^{-2}u^{J})(R^{-2}u^{K})-R^{8}(R^{-2}g_{m})\left(2R^{-2}(R^{-2}w)\right.\\ &\quad\left.+(R^{-2}f^{-1}X)\cdot(R^{-2}u)-\tfrac{1}{2}\epsilon R^{-2}f^{-3}\xi\cdot fX\right)+2R^{4}(R^{-2}w)(1+\epsilon R^{2}\xi\cdot(R^{-2}u))\.\end{split} \tag{4.72}\]
This can be expanded like (4.38) in terms of the expansions at infinity (4.52a-4.52d), leading to a relation at each order in \(R^{-2n}\). At constant order (\(n=0\)), we obtain
\[\begin{split}\widetilde{J}&=\frac{\epsilon}{4}c^{ IJK}\bar{x}_{I,(2)}\bar{x}_{J,(2)}\xi_{K}+\frac{1}{36}\left(c^{IJK}\xi_{I}\xi_{J} \xi_{K}\right)\left(c^{LMN}\bar{x}_{L,(2)}\bar{x}_{M,(2)}\bar{x}_{N,(2)}\right) +\frac{\epsilon}{\ell}\bar{g}_{m,(3)}\\ &\quad+\ell\left(\frac{1}{2}\xi_{I}c^{JKL}\xi_{J}\xi_{K}\bar{x}_ {L,(2)}+\xi_{I}+\frac{5}{3}\ell^{-3}\bar{x}_{I,(2)}\right)\bar{u}_{(2)}^{I}\,\end{split} \tag{4.73}\]
where we have imposed (4.53) and (4.57). This expression at most depends on the inputs \(\bar{x}_{I,(2)}\), \(\xi_{I}\), \(\bar{g}_{m,(3)}\) and \(\bar{u}_{(2)}^{I}\). All higher order powers of the \(\widetilde{J}\) relation at infinity in (4.72) are redundant because they yield relations between the coefficients that have already been established.
In summary, we have studied the first order equations (4.51a-4.51d) due to supersymmetry and conservation of electric charge, by expanding perturbatively near infinity. Given the asymptotic values of the scalar fields (4.53), as well as the conserved charges
\(\widetilde{Q}_{I}\), \(\widetilde{J}\) defined by fall-off conditions at infinity, the simplest outcome would have been for supersymmetry to determine the entire black hole geometry. Our finding is much more complicated: all physical fields can be expressed as a perturbative series with expansion parameters that depend not only on \(\xi_{I}\) and \(\bar{x}_{I,(2)}\) but also the zero-modes \(\bar{u}^{I}_{(2)}\) and \(\bar{g}_{m,(3)}\).
#### 4.4.4 Summary and discussion of perturbative solutions
The study in this subsection has so far focused on technical details. This was needed because the interplay between supersymmetry, boundary conditions, and conserved charges proved to be rather intricate. We now conclude the subsection with a summary of the final results and a discussion of their interpretation.
The black hole solution is parametrized primarily by the matter fields: the scalar fields \(f^{-1}X_{I}\), with the prefactor \(f\) such that the combination \(f^{-1}X_{I}\) is unconstrained by real special geometry, and the magnetic potentials \(u^{I}\). Because of supersymmetry, the electric potentials \(fY^{I}\) can be identified with the scalar fields \(fX^{I}\). Given the matter fields \(f^{-1}X_{I}\) and \(u^{I}\), as well as supersymmetry, the geometry is specified by a Kahler base that depends on the function \(g_{m}\), and a fibre encoding rotation through the potential \(w\). All unknown functions \(f^{-1}X_{I}\), \(u^{I}\), \(g_{m}\) and \(w\) can depend only on a single radial coordinate \(R^{2}\), and they must satisfy specified first order differential equations (4.20a-4.20d).
Supersymmetry is never sufficient to specify an entire solution, because it is first order, and there is always an integrability condition that is of second order. Taking into account the Noether-Wald procedure, we find a second order constraint satisfying a Gauss law, which was discussed in detail in section 3. With this augmentation, the first order differential equations form a complete system. Angular momentum, with its conservation law also discussed in section 3, yields nothing new, except for a formula giving the angular momentum in terms of the same parameters that define the electric charge in the near-horizon expansion. In the case of the expansion at asymptotic infinity, the electric charge depends on one of the zero modes \(\bar{u}^{I}_{(2)}\) in addition to \(\bar{x}_{I,(2)}\) and \(\xi_{I}\), whereas the angular momentum depends on both the zero modes \(\bar{g}_{m,(3)}\) and \(\bar{u}^{I}_{(2)}\) as well as \(\bar{x}_{I,(2)}\) and \(\xi_{I}\).
Consistent boundary conditions for the differential equations can be specified at any radius, in principle. They must depend, at the very least, on the FI-coupling constants \(\xi_{I}\) and the electric charges \(Q_{I}\). We find that, when _starting from the black hole horizon_, this data is sufficient. Because of supersymmetry, these parameters specify the entire near horizon geometry, including the squashing of the horizon due to angular momentum. This explains why the electric charge and the angular momentum are described using only the leading \(x_{I,(0)}\) term in \(f^{-1}X_{I}\); any further subleading information about the fields \(u^{I}\), \(g_{m}\) or \(w\) requires subleading contributions away from the horizon, with _derivative_ information of the \(f^{-1}X_{I}\) expansion (4.24d) supplied by the \(x_{I,(1)}\) coefficients.
The linchpin for establishing this claim about the near horizon expansion is the scalar field. In the series expansion for \(R^{2}f^{-1}X_{I}\), we have the constant at the horizon \(x_{I,(0)}\) and then, at \(\mathcal{O}(R^{2})\), we have \(x_{I,(1)}\). For the third expansion coefficient, we find \(x_{I,(2)}=0\) (4.47). With this starting point, the recursion relation (4.50) shows that all \(x_{I,(k\geq 2)}\) actually _vanish_. The fact that the scalar field \(f^{-1}X_{I}\) truncates after the first two terms is the near
horizon version of the fact that \(f^{-1}X_{I}\) is a harmonic function, as is familiar from ungauged supergravity.
When analyzing the supersymmetry conditions we provisionally considered \(f^{-1}X_{I}\) an input that all other variables were expressed in terms of. The truncation \(x_{I,(k\geq 2)}=0\) has the immediate effect of truncating \(u^{I}_{(n\geq 3)}=g_{m,(n\geq 3)}=w_{(n\geq 3)}=0\). The expansions at the horizon (4.24a-4.24d) simplify and we find
\[R^{2}u^{I} =\frac{\epsilon}{2}c^{IJK}\xi_{J}x_{K,(0)}R^{2}+\frac{\epsilon \ell}{4}c^{IJK}\xi_{J}\xi_{K}R^{4}\,, \tag{4.74a}\] \[R^{2}g_{m} =\left(1+\frac{1}{2}c^{IJK}\xi_{I}\xi_{J}x_{K,(0)}\right)R^{2}+ \frac{1}{\ell^{2}}R^{4}\,,\] (4.74b) \[R^{2}w =-\frac{\epsilon}{8}c^{IJK}\xi_{I}x_{J,(0)}x_{K,(0)}-\frac{ \epsilon\ell}{4}c^{IJK}\xi_{I}\xi_{J}x_{K,(0)}R^{2}-\frac{\epsilon}{2\ell}R^{ 4}\,,\] (4.74c) \[R^{2}f^{-1}X_{I} =x_{I,(0)}+\ell\xi_{I}R^{2}. \tag{4.74d}\]
These expressions exactly match the well-known Gutowski-Reall solution [32], with the appropriate identifications of notation.6
Footnote 6: \(u^{I}\), \(g_{m}\), \(w\) and \(f^{-1}X_{I}\) match \(U^{I}\), \(g\), \(w\) and \(f^{-1}X_{I}\) respectively in [32] via:
\[q_{I}=\frac{1}{3}\bar{x}_{I,(2)}\,\ \bar{X}_{I}=\frac{1}{3}\ell\xi_{I}. \tag{4.75}\]
The electric charge \(\widetilde{Q}_{I}\) (4.35) and the angular momentum \(\widetilde{J}\) (4.39) computed in the near-horizon expansion similarly agree with the familiar results.
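As an independent sanity check, one can verify directly that the truncated solution (4.74a), (4.74b) and (4.74d) satisfies the recast flow equations (4.51a) and (4.51b). The sympy sketch below does this for the STU model, with \(\xi_{3}\) chosen so that \(\frac{1}{6}c^{IJK}\xi_{I}\xi_{J}\xi_{K}=\ell^{-3}\) holds identically; it is a sketch under these assumptions, not part of the original derivation.

```python
import sympy as sp

R2, ell = sp.symbols('R2 ell', positive=True)
eps = sp.symbols('epsilon')                # epsilon = +-1, so eps**2 = 1
xi1, xi2 = sp.symbols('xi1 xi2', positive=True)
xi = [xi1, xi2, 1 / (ell**3 * xi1 * xi2)]  # enforces (1/6) c xi xi xi = ell^{-3}
x0 = sp.symbols('x1:4')                    # horizon scalars x_{I,(0)}

c = [[[1 if len({i, j, k}) == 3 else 0 for k in range(3)]
      for j in range(3)] for i in range(3)]  # STU model

def csum(v, w):
    return [sum(c[I][J][K] * v[J] * w[K] for J in range(3) for K in range(3))
            for I in range(3)]

# Truncated solution (4.74a), (4.74b), (4.74d) as u^I, g_m, R^{-2}f^{-1}X_I:
u = [(eps / 2) * csum(xi, x0)[I] + (eps * ell / 4) * csum(xi, xi)[I] * R2
     for I in range(3)]
gm = 1 + sp.Rational(1, 2) * sum(xi[I] * csum(xi, x0)[I] for I in range(3)) \
     + R2 / ell**2
fX = [(x0[I] + ell * xi[I] * R2) / R2**2 for I in range(3)]

# Residuals of the recast flow equations (4.51a) and (4.51b):
res_a = [sp.diff(u[I] / R2, R2) + (2 / R2) * (u[I] / R2)
         - (eps / 2) * csum(fX, xi)[I] for I in range(3)]
res_b = sp.diff(gm / R2, R2) + (3 / R2) * (gm / R2) - 2 / R2**2 \
        - (2 / R2) * eps * sum(xi[I] * (u[I] / R2) for I in range(3))

print([sp.simplify(sp.expand(r).subs(eps**2, 1)) for r in res_a + [res_b]])
# -> [0, 0, 0, 0]
```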
The analogous analysis starting from asymptotic AdS\({}_{5}\) turned out to be less straightforward. Recalling that coefficients starting from infinity are denoted by barred expansion coefficients, we find that, when shooting in (going from infinity towards the horizon), we must not only specify \(\xi_{I}\) and \(\bar{x}_{I,(2)}\), but also \(\bar{u}^{I}_{(2)}\) and \(\bar{g}_{m,(3)}\). The harmonic function we established at the horizon reproduces the Gutowski-Reall solution and has features in common with its very familiar analogues in ungauged supergravity. At infinity, it corresponds to
\[x_{I,(0)}=\bar{x}_{I,(2)}\,\ \ \bar{u}^{I}_{(2)}=\bar{g}_{m,(3)}=0. \tag{4.76}\]
With these special values, the recursion relation (4.71) simplifies greatly
\[\frac{(n-1)^{2}}{n-2}\ell^{-2}\bar{x}_{I,(n+1)}\] \[=-(n-1)\left(1+\tfrac{1}{2}(c\cdot\xi\xi\bar{x}_{(2)})\right) \bar{x}_{I,(n)}-\sum_{k=4}^{n}\frac{n-k}{(k-2)(k-3)}(c\cdot\xi\xi\bar{x}_{(k+1 )})\bar{x}_{I,(n+1-k)}\] \[\quad+2c_{IJK}c^{JLM}c^{KNP}\xi_{L}\xi_{N}{\sum_{k=1}^{n-1}}^{ \prime}\,\frac{\bar{x}_{M,(k+1)}\bar{x}_{P,(n-k+1)}}{4(k-2)(n-k-2)}+\xi_{I}{ \sum_{k=2}^{n}}^{\prime}\,\frac{n+1-k}{n-1-k}\frac{(c\cdot\xi\bar{x}_{(k)} \bar{x}_{(n-k+2)})}{2n}. \tag{4.77}\]
Since we already know \(\bar{x}_{I,(3)}=0\) from (4.57), and vanishing \(\bar{u}^{I}_{(2)}\) leads to vanishing \(\bar{x}_{I,(4)}\) as well via (4.70), it is not difficult to show that the expansion coefficients \(\bar{x}_{I,(k\geq 3)}\) all
vanish. Thus the perturbative series for \(\bar{x}_{I,(n)}\) truncates after two terms, as expected for a harmonic function. The identification (4.76) relates the subleading coefficient in the harmonic function at infinity to the leading one at the horizon, and _vice versa_.
While (4.76) are the default, it is interesting that asymptotic boundary conditions with nonzero \(\bar{u}^{I}_{(2)}\), \(\bar{g}_{m,(3)}\) are consistent with supersymmetry. It has been argued that there may be missing solutions in certain supergravity theories that may not satisfy the canonical nonlinear charge constraint, see for example [60; 66; 67; 68]. Since the values of the conserved charges do not take the canonical form, one may wonder if those parameters are somehow related to these missing solutions.
From this point of view, the possibility of \(\bar{u}^{I}_{(2)}\), \(\bar{g}_{m,(3)}\) perturbing asymptotic AdS\({}_{5}\) might be desirable. In the following, we discuss this possibility.
First, recall that the electric charge \(\widetilde{Q}_{I}\) and the angular momentum \(\widetilde{J}\) are conserved charges, which means that they are the same whether evaluated at infinity or at the horizon. Identifying (4.35) with (4.69) we find
\[\begin{split}& x_{I,(0)}-\frac{1}{2}\xi_{I}\left(\frac{1}{2}c^{JKL}x_{J,(0)}x_{K,(0)}\xi_{L}\right)+\frac{1}{2}c_{IJK}\left(\frac{1}{2}c^{JNO}\xi_{N}\xi_{O}\right)c^{KLM}x_{L,(0)}x_{M,(0)}\\ &=\bar{x}_{I,(2)}-\frac{1}{2}\xi_{I}\left(\frac{1}{2}c^{JKL}\bar{x}_{J,(2)}\bar{x}_{K,(2)}\xi_{L}\right)+\frac{1}{2}c_{IJK}\left(\frac{1}{2}c^{JNO}\xi_{N}\xi_{O}\right)c^{KLM}\bar{x}_{L,(2)}\bar{x}_{M,(2)}\\ &\quad+\epsilon\ell\left(\xi_{I}\xi_{J}-c_{IJK}c^{KLM}\xi_{L}\xi_{M}\right)\bar{u}_{(2)}^{J}\,\end{split} \tag{4.78}\]
from matching \(\widetilde{Q}_{I}\), and similarly identifying (4.39) with (4.73) gives
\[\begin{split}&\frac{\epsilon}{4}c^{IJK}x_{I,(0)}x_{J,(0)}\xi_{K}+\frac{1}{36}\left(c^{IJK}\xi_{I}\xi_{J}\xi_{K}\right)\left(c^{LMN}x_{L,(0)}x_{M,(0)}x_{N,(0)}\right)\\ &=\frac{\epsilon}{4}c^{IJK}\bar{x}_{I,(2)}\bar{x}_{J,(2)}\xi_{K}+\frac{1}{36}\left(c^{IJK}\xi_{I}\xi_{J}\xi_{K}\right)\left(c^{LMN}\bar{x}_{L,(2)}\bar{x}_{M,(2)}\bar{x}_{N,(2)}\right)\\ &\quad+\frac{\epsilon}{\ell}\bar{g}_{m,(3)}+\ell\left(\frac{1}{2}\xi_{I}c^{JKL}\xi_{J}\xi_{K}\bar{x}_{L,(2)}+\xi_{I}+\frac{5}{3}\ell^{-3}\bar{x}_{I,(2)}\right)\bar{u}_{(2)}^{I}\,\end{split} \tag{4.79}\]
from matching \(\widetilde{J}\). These conservation laws are consistent with a UV solution specified by \(\bar{x}_{I,(2)}\) (and the FI-couplings \(\xi_{I}\)) that flows to an IR configuration with \(x_{I,(0)}\) that may not even remotely agree with (4.76). This consideration suggests that supersymmetry and charge conservation do little to constrain the IR limit of the flow.
However, there is a different source of intuition. If the perturbative series of \(f^{-1}X_{I}\) from infinity did _not_ truncate after exactly two terms, the third term would diverge at the horizon \(R^{2}\to 0\), rather than approaching a constant. Other fields excited at the same order would similarly suggest a singularity. It could happen that, taking into account successive powers \(R^{-2k}\) to all orders, there would be a finite limit \(R^{2}\to 0\) after all, but determining by explicit computation whether this possibility is realized for any \(\bar{u}^{I}_{(2)}\), \(\bar{g}_{m,(3)}\) is technically challenging.
From a different perspective, since the conserved charges from the near-horizon expansion do satisfy the typical charge constraint, the possibly new black hole solutions that do not seem to satisfy the typical charge constraint at infinity would not flow to the expected
near-horizon extremal AdS\({}_{2}\) geometry, which implies that these solutions may not be black holes after all.
Moreover, a change in the electric potential \(fY^{I}\to fY^{I}+\beta^{I}\) with \(\beta^{I}\) constant is trivial as it does not change the electromagnetic field strength. However, with the vielbein we have picked, such a shift must be accompanied by \(u^{I}\to u^{I}-w\beta^{I}\). Because \(w\) includes a term \(w\sim R^{-2}\) at large \(R\), such a gauge transformation has the ability to remove \(\bar{u}^{I}_{(2)}\). This mechanism shows that the \(\bar{u}^{I}_{(2)}\) are allowed, in principle, but also that they are not physical deformations. Indeed, these coefficients diverge at the horizon, so they correspond to a singular gauge, which is ill-advised.
## 5 Entropy Extremization
In this section, we consider the near-horizon limit of the Legendre transform of the radial Lagrangian (2.32), leading to a near-horizon entropy function. Extremizing this entropy function with respect to the near-horizon variables leads to an expression for the entropy in terms of the aforementioned charges.
### Near-horizon setup
First, we consider the near-horizon limit of the line element \(ds_{2}^{2}\) (2.27), where we recall that \(e^{2\rho}\) and \(e^{2\sigma}\) can be expressed in terms of the variables \(f\), \(g_{m}\), and \(w\) as in (4.9). At the horizon \(R\to 0\), these variables have known near-horizon behaviors according to Table 1. Thus, the near-horizon limit of (2.27) becomes
\[ds_{2,\text{nh}}^{2}=v\left(\frac{R^{4}}{\ell_{2}^{2}}dt^{2}-\frac{dR^{2}}{R^{2}}\right)\,, \tag{5.1}\]
with \(v\) and \(\ell_{2}\) defined based on the \(R\to 0\) behavior of \(e^{2\rho}\) and \(e^{2\sigma}\):
\[e^{2\sigma}\big{|}_{\text{nh}}\equiv\frac{v}{R^{2}}\,\qquad e^{2\rho}\big{|}_{\text{nh}}\equiv\frac{v}{\ell_{2}^{2}}R^{4}. \tag{5.2}\]
Furthermore, \(v^{\frac{1}{2}}\) and \(\ell_{2}^{\frac{1}{3}}\) are near-horizon length scales defining the 2D \((t,R)\) part of the line element (5.3)
\[ds_{5,\text{nh}}^{2}=v\left(\frac{R^{4}}{\ell_{2}^{2}}dt^{2}-\frac{dR^{2}}{R^{2}}\right)-e^{-U_{1}}(\sigma_{1}^{2}+\sigma_{2}^{2})-e^{-U_{2}}(\sigma_{3}+a^{0})^{2}. \tag{5.3}\]
The role of the variable \(\ell_{2}\) is elucidated by noting the near-horizon limit of the Kahler condition:
\[\ell_{2}=2ve^{-\frac{1}{2}U_{2}}. \tag{5.4}\]
This relation will be used to eliminate \(\ell_{2}\) in the rest of the near-horizon analysis.
Having reviewed the near-horizon 2D line element, and in anticipation of applying the entropy function formalism [1; 2; 3; 4; 5; 6; 7; 8; 9] to the Lagrangian density in (2.32), we use the following coordinate transformation
\[dt \to\frac{1}{2}\ell_{2}dt\, \tag{5.5}\] \[dR \to\frac{1}{2R}dR\,,\]
to bring the coordinates \((t,R)\) in (5.1) to the canonical AdS\({}_{2}\) form
\[ds^{2}_{2,\rm{nh}}=\frac{v}{4}\left(R^{2}dt^{2}-\frac{dR^{2}}{R^{2}}\right)\, \tag{5.6}\]
where now it is clear that \(v^{\frac{1}{2}}\) is related to the AdS\({}_{2}\) length scale.
The coordinate transformation (5.5) will have the effect of rescaling the Lagrangian 2-form \(\mathcal{L}_{2}=\mathcal{L}_{1}dt\wedge dR\) (2.22) by a factor
\[\mathcal{L}_{2}\rightarrow\frac{\ell_{2}}{4R}\mathcal{L}_{2}. \tag{5.7}\]
With the prescription of defining the entropy function through omitting the \(dt\wedge dR\) volume form from the dimensionally-reduced action, we anticipate dividing the density \(\mathcal{L}_{1}\) by a factor of \(\frac{4R}{\ell_{2}}\).
Combining the near-horizon behaviors of \(e^{2\rho}\) and \(e^{2\sigma}\) studied above with the dictionary definitions (4.9), the quantities \(e^{-U_{1}}\), \(e^{-U_{2}}\) and \(b^{I}\) can be shown to be constant at leading order in the near-horizon limit, given the leading-order behaviors of \(f\), \(g_{m}\) and \(w\) consistent with the small \(R\) asymptotics in Table 1.
Concerning the matter fields, the electric fields \(a^{I}\) and \(a^{0}\) in (2.18) become in the near-horizon limit
\[a^{I}\big{|}_{\rm{nh}}\equiv\frac{e^{I}R^{2}}{2v}e^{\frac{1}{2}U_{2}}dt,\quad a ^{0}\big{|}_{\rm{nh}}\equiv-\frac{e^{0}R^{2}}{2v}e^{\frac{1}{2}U_{2}}dt. \tag{5.8}\]
The total 1D Lagrangian density (2.32) then becomes
\[\begin{split}\mathcal{L}_{1,\rm{nh}}&=\frac{\pi}{ 2G_{5}}e^{-U_{1}-\frac{1}{2}U_{2}}\frac{4R}{\ell_{2}}v\left[\frac{1}{v^{2}}e^{ -U_{2}}(e^{0})^{2}-\frac{4}{v}+e^{U_{1}}-\frac{1}{4}e^{2U_{1}-U_{2}}-\frac{1} {2}G_{IJ}e^{2U_{1}}b^{I}b^{J}\right.\\ &\quad+\left.\frac{2}{v^{2}}G_{IJ}(e^{I}-b^{I}e^{0})(e^{J}-b^{J}e ^{0})-V\right]-\frac{\pi}{2G_{5}}\frac{4R}{\ell_{2}}\frac{1}{2}c_{IJK}b^{I}b^{ J}(e^{K}-\frac{2}{3}e^{0}b^{K})\,.\end{split} \tag{5.9}\]
Apart from an overall prefactor in the integration measure, every other appearance of \(\ell_{2}\) has been re-expressed in terms of \(v\) and \(U_{2}\) by using (5.4). The \(\frac{4R}{\ell_{2}}\) factor has been factored out of the volume element, and we follow the prescription made earlier to exactly cancel it out with the \(\frac{\ell_{2}}{4R}\) factor from (5.7) in order to obtain the Lagrangian density suitable for the entropy function. We also note the presence of the Chern-Simons boundary terms in (5.9) that are crucial for calculating the near-horizon charges.
We have now obtained a near-horizon Lagrangian (5.9) that is a function of the variables \(v\), \(U_{1}\), \(U_{2}\), \(e^{I}\), \(e^{0}\), and \(b^{I}\). We will next derive the charges \(Q_{I}\) and \(J\) from \(\mathcal{L}_{1,\rm{nh}}\), with the goal of Legendre transforming the Lagrangian into an entropy function \(\mathcal{S}\) that can ultimately be extremized to a function purely of the charges \(\mathcal{S}=\mathcal{S}(Q_{I},J)\).
We note from the earlier Noether procedure (3.32) that \(Q_{I}\) and \(J\) were obtained in terms of radially dependent variables. In terms of the near-horizon limit of the electric fields (5.8), this becomes
\[Q_{I} =\frac{\pi}{G_{5}}\left[e^{-U_{1}-\frac{1}{2}U_{2}}\frac{4}{v}G_{ IJ}(e^{J}-b^{J}e^{0})-\frac{1}{2}c_{IJK}b^{J}b^{K}\right]\, \tag{5.10}\] \[J =\frac{\pi}{G_{5}}\left[-\frac{2}{v}e^{-U_{1}-\frac{3}{2}U_{2}}e^ {0}+\frac{4}{v}e^{-U_{1}-\frac{1}{2}U_{2}}G_{IJ}b^{I}(e^{J}-b^{J}e^{0})-\frac{1 }{3}c_{IJK}b^{I}b^{J}b^{K}\right]. \tag{5.11}\]
This introduces the charges \(Q_{I}\) and \(J\) as conjugates to the electric fields \(e^{I}\) and \(e^{0}\), allowing for the inversion
\[e^{I}-b^{I}e^{0} =\frac{v}{16}e^{U_{1}+\frac{1}{2}U_{2}}G^{IJ}\left(\widetilde{Q}_{J }+2c_{JKL}b^{K}b^{L}\right)\, \tag{5.12}\] \[e^{0} =-\frac{v}{8}e^{U_{1}+\frac{3}{2}U_{2}}\left(\widetilde{J}- \widetilde{Q}_{I}b^{I}-\frac{2}{3}c_{IJK}b^{I}b^{J}b^{K}\right)\, \tag{5.13}\]
where now the rescaled \(\widetilde{Q}_{I}\) and \(\widetilde{J}\) (3.33) have been used. The near-horizon entropy function can now be defined as a Legendre transform of the Lagrangian density (5.9) with fixed charges
\[\mathcal{S}=2\pi\left(e^{I}\frac{\partial\mathcal{L}_{1,\text{nh}}}{\partial e ^{I}}+e^{0}\frac{\partial\mathcal{L}_{1,\text{nh}}}{\partial e^{0}}-\mathcal{ L}_{1,\text{nh}}\right)\, \tag{5.14}\]
which, after eliminating the electric fields through (5.12) and (5.13), yields
\[\mathcal{S} =\frac{4\pi^{2}}{G_{5}}e^{-U_{1}-\frac{1}{2}U_{2}}\left[1+\frac{ v}{4}\left(\frac{1}{4}e^{2U_{1}-U_{2}}-e^{U_{1}}+\frac{1}{64}e^{2U_{1}+2U_{2}} \left(\widetilde{J}-\widetilde{Q}_{I}b^{I}-\frac{2}{3}c_{IJK}b^{I}b^{J}b^{K} \right)^{2}\right.\right.\] \[\left.\left.+V+\frac{1}{128}e^{2U_{1}+U_{2}}G^{IJ}\left( \widetilde{Q}_{I}+2c_{IKL}b^{K}b^{L}\right)\left(\widetilde{Q}_{J}+2c_{JMN}b^ {M}b^{N}\right)+\frac{1}{2}e^{2U_{1}}G_{IJ}b^{I}b^{J}\right)\right]. \tag{5.15}\]
This entropy function depends on the physical variables \(v\), \(U_{1}\), \(U_{2}\), \(b^{I}\), \(X^{I}\) describing the near horizon geometry and matter fields, with the conserved charges \(\widetilde{Q}_{I}\), \(\widetilde{J}\) appearing as fixed parameters. At its extremum, it yields the physical variables and the black hole entropy as a function of the charges.
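Before extremizing, it is easy to confirm numerically that the inversion (5.12) and (5.13) is consistent with the charge definitions (5.10) and (5.11). The sketch below does so for the STU model with random sample data, assuming the rescaling \(\widetilde{Q}_{I}=\frac{4G_{5}}{\pi}Q_{I}\) and \(\widetilde{J}=\frac{4G_{5}}{\pi}J\) implied by (3.33); all numerical values are arbitrary and the script is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numeric sample: the check is independent of the details.
G5, v, U1, U2 = 1.3, 0.7, 0.2, -0.4
b = rng.normal(size=3)
Qt = rng.normal(size=3)          # \tilde{Q}_I
Jt = rng.normal()                # \tilde{J}
A = rng.normal(size=(3, 3))
G = A @ A.T + 3 * np.eye(3)      # random invertible symmetric G_{IJ}
Ginv = np.linalg.inv(G)

c = np.zeros((3, 3, 3))          # STU: c_{IJK} = |eps_{IJK}|
for p in [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]:
    c[p] = 1.0

cbb = np.einsum('ijk,j,k->i', c, b, b)
cbbb = np.einsum('ijk,i,j,k->', c, b, b, b)

# Inversion (5.13) and (5.12):
e0 = -(v / 8) * np.exp(U1 + 1.5 * U2) * (Jt - Qt @ b - (2 / 3) * cbbb)
eI = b * e0 + (v / 16) * np.exp(U1 + 0.5 * U2) * Ginv @ (Qt + 2 * cbb)

# Charges (5.10) and (5.11):
Q = (np.pi / G5) * (np.exp(-U1 - 0.5 * U2) * (4 / v) * G @ (eI - b * e0)
                    - 0.5 * cbb)
J = (np.pi / G5) * (-(2 / v) * np.exp(-U1 - 1.5 * U2) * e0
                    + (4 / v) * np.exp(-U1 - 0.5 * U2) * b @ (G @ (eI - b * e0))
                    - cbbb / 3)

print(np.allclose(Q, (np.pi / (4 * G5)) * Qt),
      np.isclose(J, (np.pi / (4 * G5)) * Jt))   # -> True True
```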
### Extremization of the entropy function
It is exceedingly simple to extremize with respect to \(v\), which appears only as a Lagrange multiplier in front of the large round bracket that comprises nearly all of (5.15). This leaves the extremized value of \(\mathcal{S}\):
\[\mathcal{S}=\frac{4\pi^{2}}{G_{5}}e^{-U_{1}-\frac{1}{2}U_{2}}. \tag{5.16}\]
This is exactly the black hole entropy computed via the area law for a horizon defined by the volume 3-form \(e^{-U_{1}-\frac{1}{2}U_{2}}\sigma_{1}\wedge\sigma_{2}\wedge\sigma_{3}\) with the angular ranges specified in (2.20). However, the explicit dependence of \(U_{1}\) and \(U_{2}\) on the charges remains to be determined. For this we must extremize with respect to the remaining variables
\[\partial_{U_{1}}\mathcal{S}=\partial_{U_{2}}\mathcal{S}=\partial_{b^{I}} \mathcal{S}=D_{I}\mathcal{S}=0. \tag{5.17}\]
Here \(D_{I}\) is the Kahler-covariantized derivative with respect to the scalars \(X^{I}\). It is defined such that \(X^{I}D_{I}=0\), which is the correct way to vary the scalars while also implementing
the constraint of real special geometry. The conditions (5.17) give
\[0= V-e^{U_{1}}+\frac{1}{4}e^{2U_{1}-U_{2}}+\frac{1}{64}e^{2U_{1}+2U_{2}}\mathcal{M}^{2}+G^{IJ}\frac{1}{128}e^{2U_{1}+U_{2}}\mathcal{K}_{I}\mathcal{K}_{J}+\frac{1}{2}G_{IJ}e^{2U_{1}}b^{I}b^{J}\, \tag{5.18}\] \[0= -4-vV+\frac{v}{4}e^{2U_{1}-U_{2}}+\frac{v}{64}e^{2U_{1}+2U_{2}}\mathcal{M}^{2}+G^{IJ}\frac{v}{128}e^{2U_{1}+U_{2}}\mathcal{K}_{I}\mathcal{K}_{J}+\frac{v}{2}G_{IJ}e^{2U_{1}}b^{I}b^{J}\,\] (5.19) \[0= -2-\frac{v}{2}V+\frac{v}{2}e^{U_{1}}-\frac{3v}{8}e^{2U_{1}-U_{2}}+\frac{3v}{128}e^{2U_{1}+2U_{2}}\mathcal{M}^{2}+G^{IJ}\frac{v}{256}e^{2U_{1}+U_{2}}\mathcal{K}_{I}\mathcal{K}_{J}\] \[\qquad-\frac{v}{4}G_{IJ}e^{2U_{1}}b^{I}b^{J}\,\] (5.20) \[0= vD_{I}V+\frac{v}{128}e^{2U_{1}+U_{2}}(D_{I}G^{JK})\mathcal{K}_{J}\mathcal{K}_{K}+\frac{v}{2}e^{2U_{1}}(D_{I}G_{JK})b^{J}b^{K}\,\] (5.21) \[0= \frac{1}{32}e^{U_{2}}\left(-e^{U_{2}}\mathcal{K}_{I}\mathcal{M}+2G^{JK}c_{IJN}b^{N}\mathcal{K}_{K}\right)+G_{IJ}b^{J}\, \tag{5.22}\]
where \(\mathcal{M}\) and \(\mathcal{K}_{I}\) are shorthand for
\[\mathcal{M}\equiv\widetilde{J}-\widetilde{Q}_{I}b^{I}-\frac{2}{3}c_{IJK}b^{I}b^{J}b^{K}\,\ \mathcal{K}_{I}\equiv\widetilde{Q}_{I}+2c_{IJK}b^{J}b^{K}. \tag{5.23}\]
Additionally, \(D_{I}\) acts on the scalars \(X^{J}\) following7
Footnote 7: This can be generalized to other quantities via the product rule on \(D_{I}\), for instance,
\[D_{I}X_{J}=\frac{1}{2}c_{JKL}D_{I}(X^{K}X^{L})=c_{IJK}X^{K}-\frac{2}{3}X_{I}X_{J}. \tag{5.24}\]
\[D_{I}X^{J}=\delta^{J}_{I}-\frac{1}{3}X_{I}X^{J}. \tag{5.25}\]
Ideally we seek the most general extremum that solves (5.18-5.22) and is consistent with our ansatz. However, the extremization equations are highly nonlinear in the variables of interest \((v,U_{1},U_{2},X^{I},b^{I})\). Therefore, in the following we specialize and find all supersymmetric solutions.
#### Near-horizon supersymmetric conditions
It is straightforward to take the near-horizon limit of the supersymmetry conditions of section 4, along with the identification \(X^{I}=Y^{I}\). After inverting \(e^{I}\) and \(e^{0}\) in terms of \(\widetilde{Q}_{I}\) and \(\widetilde{J}\), following (5.12) and (5.13), we obtain the following near-horizon supersymmetric relations
\[0 =\widetilde{Q}_{I}+2c_{IJK}b^{J}b^{K}-4e^{-U_{2}}X_{I}\, \tag{5.26}\] \[0 =\widetilde{J}-\widetilde{Q}_{I}b^{I}-\frac{2}{3}c_{IJK}b^{I}b^{J}b^{K}-4e^{-U_{1}-U_{2}}(\xi\cdot X)\,\] (5.27) \[0 =b^{I}-e^{-U_{1}}\left(X^{I}(\xi\cdot X)-G^{IJ}\xi_{J}\right)\,\] (5.28) \[0 =e^{U_{1}}-\frac{4}{v}-2V. \tag{5.29}\]
We also need the near-horizon version of the Kahler condition. We can trade the variables \(\rho\) and \(\sigma\) describing the 2D geometry for \(f\) and \(g_{m}\) following the dictionary (4.9), and eliminate the resulting \(a_{t}^{0}\) in favor of the charges through (5.13). These steps lead to
\[\frac{4}{v}-(\xi\cdot X)^{2}=e^{2U_{1}-U_{2}}. \tag{5.30}\]
We have verified that when the five supersymmetric relations (5.26-5.30) are satisfied, then the five \(\mathcal{S}\) extremization equations (5.18-5.22) are satisfied as well. The details of this computation are not instructive so we omit them. The reverse logic would be that _all_ extremal solutions within the scope of our ansatz are supersymmetric. This we have not shown, and it is indeed not true, i.e. there are no known nonextremal supersymmetric Lorentzian black holes. Thus the specialization to supersymmetric solutions addresses a genuine subset of the extremal black holes.
In the remainder of this subsection we solve the supersymmetry relations (5.26-5.30) explicitly and find all variables as functions of the charges \(Q_{I}\) and \(J\).
#### 5.2.1 Solving for the entropy and the charge constraint
The supersymmetry conditions (5.26-5.30) are all algebraic, but they are far from trivial. Straightforward contractions, followed by taking simple linear combinations, give scalar identities
\[X\cdot b =e^{-U_{1}}\xi\cdot X\,\] \[\widetilde{Q}\cdot b =\frac{3}{2}\widetilde{J}-8e^{-U_{1}-U_{2}}\xi\cdot X\,\] \[\xi\cdot b =e^{U_{1}-U_{2}}-1\, \tag{5.31}\]
which will prove useful later. Our strategy will be to exploit identities like these to find simple combinations of variables that can be expressed entirely in terms of the charges \(Q_{I}\), \(J\) and the couplings \(\xi_{I}\). Combinations of those will in turn give explicit formulae for physical variables.
In this spirit, we expand \(\widetilde{Q}_{J}\widetilde{Q}_{K}\) using the square of (5.26), and then simplify the terms that are products of \(b\)'s using (5.28). We obtain
\[\frac{1}{2}c^{IJK}\widetilde{Q}_{J}\widetilde{Q}_{K}=32e^{-2U_{1}-U_{2}}c^{IJK}\xi_{J}\xi_{K}+16e^{-U_{1}-U_{2}}X^{I}-2\widetilde{J}b^{I}. \tag{5.32}\]
Contracting (5.32) with \(\xi_{I}\), the first term on the right becomes proportional to \(e^{-2U_{1}-U_{2}}\), which is related to the black hole entropy through \(\mathcal{S}\) (5.16). There will also be a term \((\xi\cdot X)\) that we can eliminate with the help of
\[\widetilde{J}=8e^{-2U_{1}}(\xi\cdot X)+\frac{32}{3}e^{-3U_{1}}c^{IJK}\xi_{I}\xi_{J}\xi_{K}\, \tag{5.33}\]
which is a simplification of (5.27) with \(\widetilde{Q}_{I}\) and \(b^{I}\) eliminated using (5.26) and (5.28), respectively. These steps give
\[\frac{1}{6}c^{IJK}\xi_{I}\xi_{J}\xi_{K}\left(\frac{\mathcal{S}}{2\pi}\right)^{2}=\frac{1}{2}c^{IJK}\xi_{I}Q_{J}Q_{K}-\frac{\pi}{2G_{5}}J\, \tag{5.34}\]
which amounts to an explicit formula for the black hole entropy as function of the conserved charges
\[\mathcal{S}=2\pi\sqrt{\frac{1}{2}c^{IJK}\ell^{3}\xi_{I}Q_{J}Q_{K}-N^{2}J}. \tag{5.35}\]
This is in full agreement with the entropy of supersymmetric extremal AdS\({}_{5}\) black holes [69; 70; 71]. We have expressed the entropy using the untilded charges \(Q_{I}\) and \(J\) (3.33), traded \(G_{5}\) for \(N^{2}\) using \(\frac{\pi\ell^{3}}{4G_{5}}=\frac{1}{2}N^{2}\), and applied \(\frac{1}{6}c^{IJK}\xi_{I}\xi_{J}\xi_{K}=\ell^{-3}\) from (2.12) to explicitly show all the dimensionful quantities.
Continuing with the strategy of evaluating natural combinations of the conserved charges, we evaluate the cubic invariant of the charges \(c^{IJK}\widetilde{Q}_{I}\widetilde{Q}_{J}\widetilde{Q}_{K}\) by taking the cube of \(\widetilde{Q}_{I}\) from (5.26), with the resulting contractions of \(X_{I}\) and \(b^{I}\), such as \(c_{IJK}X^{I}b^{J}b^{K}\) and \(X_{I}b^{I}\), simplified using the \(b^{I}\) relation (5.28) as well as (5.29) and (5.30) for the terms quadratic in \(\xi\). This yields
\[c^{IJK}\widetilde{Q}_{I}\widetilde{Q}_{J}\widetilde{Q}_{K}=6e^{-U_{1}-2U_{2}}+\frac{1}{8}\widetilde{J}c_{IJK}b^{I}b^{J}b^{K}. \tag{5.36}\]
Alternatively, we can arrive at the cubic product of electric charges by contracting (5.32) with \(\widetilde{Q}_{I}\) from (5.26), giving
\[\begin{split}&\tfrac{1}{64}c^{IJK}\widetilde{Q}_{I}\widetilde{Q}_{J}\widetilde{Q}_{K}\\ &=4e^{-U_{1}-2U_{2}}+2e^{-2U_{1}-U_{2}}+e^{-2U_{1}-U_{2}}c^{IJK}\widetilde{Q}_{I}\xi_{J}\xi_{K}-\tfrac{1}{4}e^{-U_{1}-U_{2}}\widetilde{J}(\xi\cdot X)+\tfrac{1}{8}\widetilde{J}c_{IJK}b^{I}b^{J}b^{K}\.\end{split} \tag{5.37}\]
Comparing (5.36) and (5.37), we find the identity
\[\frac{1}{2}c^{IJK}\widetilde{Q}_{I}\xi_{J}\xi_{K}+1=e^{U_{1}}\left(e^{-U_{2}}+\frac{1}{8}\widetilde{J}\,\xi\cdot X\right). \tag{5.38}\]
It gives access to a useful combination of \(U_{1}\), \(U_{2}\), and \(\xi\cdot X\). Indeed, we can simplify the cube of the electric charge in the form (5.36) using the \(b\) identity (5.28), and then (5.33) to eliminate \((\xi\cdot X)\), to give
\[\frac{1}{6}c^{IJK}\widetilde{Q}_{I}\widetilde{Q}_{J}\widetilde{Q}_{K}+\widetilde{J}^{2}=64e^{-U_{1}-U_{2}}\left(e^{-U_{2}}+\frac{1}{8}\widetilde{J}\,\xi\cdot X\right). \tag{5.39}\]
The right-hand side of this equation differs from that of (5.38) only by a factor proportional to \(e^{-2U_{1}-U_{2}}\), which is the square of the geometric measure on the black hole horizon. As such, it is related to the black hole entropy \(\mathcal{S}\) both through the area law (5.16) and as a function of charges (5.35). Collecting these relations, and reintroducing \(Q_{I}\) and \(J\) (3.33) in order to align with the conventional units for this result, we find
\[\left(\frac{1}{6}c^{IJK}Q_{I}Q_{J}Q_{K}+\frac{\pi}{4G_{5}}J^{2}\right)=\ell^{3}\left(\frac{1}{2}c^{IJK}\xi_{I}\xi_{J}Q_{K}+\frac{\pi}{4G_{5}}\right)\left(\frac{1}{2}c^{IJK}\xi_{I}Q_{J}Q_{K}-\frac{\pi}{2G_{5}}J\right). \tag{5.40}\]
This is the prototypical 5D nonlinear charge constraint [69; 70; 71]. However, the charge constraint (5.40) does not make progress towards solving the supersymmetry equations nor determining the near horizon solution in terms of conserved charges. Rather, it is a relation between the conserved charges that, if taken at face value, all supersymmetric black holes dual to \(\mathcal{N}=4\) SYM must satisfy. This is extremely important, and the continuing questions regarding this constraint are among the motivations for the detailed study reported in this article.
Even at this point of our discussion, where we are deep into solving certain nonlinear equations, it is worth noting that the black hole entropy (5.35) and the charge constraint (5.40) can be combined into one complex-valued equation
\[\frac{1}{6}c^{IJK}\left(Q_{I}+i\frac{\mathcal{S}}{2\pi}\xi_{I}\right)\left(Q_{J}+i\frac{\mathcal{S}}{2\pi}\xi_{J}\right)\left(Q_{K}+i\frac{\mathcal{S}}{2\pi}\xi_{K}\right)+\frac{\pi}{4G_{5}}\left(-J+i\frac{\mathcal{S}}{2\pi}\right)^{2}=0\,. \tag{5.41}\]
The real part gives the constraint (5.40) and the imaginary part gives the formula for the entropy (5.35). Complexified equations are natural in problems involving supersymmetry. Also, (5.41) appears as the condition for a complex saddle point that provides an accounting in \(\mathcal{N}=4\) SYM for the entropy of black holes preserving \(1/16\) of the maximal supersymmetry [69; 70; 71; 72].
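For the STU model, the decomposition of (5.41) into the entropy formula (5.35) and the constraint (5.40) can be verified symbolically, as in the following illustrative sketch (not part of the original derivation).

```python
import sympy as sp

Q = sp.symbols('Q1:4', real=True)
xi = sp.symbols('xi1:4', positive=True)
J, G5, lam = sp.symbols('J G_5 lambda', positive=True)  # lam = S/(2 pi)

# STU: (1/6) c^{IJK} A_I A_J A_K = A1*A2*A3, and ell^{-3} = xi1*xi2*xi3.
Ahat = [Q[I] + sp.I * lam * xi[I] for I in range(3)]
F = Ahat[0] * Ahat[1] * Ahat[2] + sp.pi / (4 * G5) * (-J + sp.I * lam)**2

re, im = sp.expand(F).as_real_imag()
ell3 = 1 / (xi[0] * xi[1] * xi[2])

# Imaginary part is linear in lam^2 after dividing by lam -> entropy (5.35):
lam2 = sp.solve(sp.expand(im / lam), lam**2)[0]
print(sp.simplify(lam2 - ell3 * (xi[0]*Q[1]*Q[2] + xi[1]*Q[0]*Q[2]
                                 + xi[2]*Q[0]*Q[1] - sp.pi * J / (2 * G5))))
# -> 0, i.e. (S/2pi)^2 matches (5.35) with N^2 = pi*ell^3/(2*G5)

# Real part, on the entropy shell, reproduces the constraint (5.40):
constraint = (Q[0]*Q[1]*Q[2] + sp.pi/(4*G5) * J**2
              - ell3 * (xi[0]*xi[1]*Q[2] + xi[0]*xi[2]*Q[1]
                        + xi[1]*xi[2]*Q[0] + sp.pi/(4*G5))
                     * (xi[0]*Q[1]*Q[2] + xi[1]*Q[0]*Q[2]
                        + xi[2]*Q[0]*Q[1] - sp.pi * J / (2 * G5)))
print(sp.simplify(re.subs(lam**2, lam2) - constraint))  # -> 0
```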
#### 5.2.2 Near-horizon variables as function of conserved charges
Having discussed the black hole entropy and the constraint on charges, we move on to expressing all other aspects of the near-horizon geometry and the matter content in terms of the fixed charges \(Q_{I}\) and \(J\).
For the following computations, we make use of the expressions (5.27) and (5.26) for \(J\) and \(Q_{I}\), simplified using the relation for \(b^{I}\) in (5.28). We find
\[Q_{I} =\frac{\pi}{G_{5}}e^{-2U_{1}}\left[X_{I}e^{2U_{1}-U_{2}}+X_{I}(\xi\cdot X)^{2}-2\xi_{I}(\xi\cdot X)-4G_{IJ}c^{JKL}\xi_{K}\xi_{L}\right]\, \tag{5.42}\] \[J =\frac{16\pi}{G_{5}}e^{-3U_{1}}\left[\frac{1}{8}e^{U_{1}}\xi\cdot X+\ell^{-3}\right]. \tag{5.43}\]
Multiplying both sides of (5.38) by \(J\), and using the relation (5.43), we can solve for \(e^{U_{1}}\,\xi\cdot X\) in terms of the charges
\[\frac{1}{8}\xi\cdot X\,e^{U_{1}}=\frac{J\left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}\right)-\left(\frac{\mathcal{S}}{2\pi}\right)^{2}\ell^{-3}}{J^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}. \tag{5.44}\]
This result ties a geometrical quantity -- as it appears in the near-horizon Kahler relation (5.30) -- to the charges. It also has additional immediate value, as (5.43) relates it to \(e^{-3U_{1}}\), giving
\[e^{-3U_{1}}=\frac{G_{5}}{16\pi}\frac{J^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}{\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}+J\ell^{-3}}. \tag{5.45}\]
This expression sets the scale of the non-deformed \(S^{3}\) which has line element \(e^{-U_{1}}(\sigma_{1}^{2}+\sigma_{2}^{2}+\sigma_{3}^{2})\).
Due to the rotation of the black hole, the horizon geometry (4.8) is deformed away from \(S^{3}\). We can quantify the deformation by computing \(e^{U_{1}-U_{2}}\) via \(e^{3U_{1}}\) from (5.45) and \(e^{2U_{1}+U_{2}}\) from (5.16):
\[e^{U_{1}-U_{2}}=\frac{\frac{4G_{5}}{\pi}\left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_ {K}+\frac{\pi}{4G_{5}}+J\ell^{-3}\right)\left(\frac{\mathcal{S}}{2\pi}\right)^ {2}}{J^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}. \tag{5.46}\]
The only scalar near-horizon parameter that was not yet computed is the AdS\({}_{2}\)-volume \(v\). Due to the alternate near-horizon Kahler condition (5.30), the combination \(ve^{U_{1}}\) can be expressed in terms of \(\mathcal{S}\), \((\xi\cdot X)e^{U_{1}}\) and \(e^{-3U_{1}}\). These three quantities were given as functions of the conserved charges in (5.16), (5.44) and (5.45). After simplifications, we find
\[\frac{v}{4}e^{U_{1}}=\frac{\pi\ell^{3}}{4G_{5}}\frac{\ell^{3} \left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}\right)+J}{\ell ^{6}\left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}\right)^{2}+ \left(\frac{\mathcal{S}}{2\pi}\right)^{2}}. \tag{5.47}\]
This completes the explicit extremization of the entropy function for the scalar variables which at this point have all been expressed in terms of conserved charges \(Q_{I},J\) and FI-couplings \(\xi_{I}\).
We must similarly determine the vectors \(b^{I}\) and \(X^{I}\) at the extremum; in principle they are determined by the input vectors \(\xi_{I}\) and \(\widetilde{Q}_{I}\). However, the positions of the vector indices \(I\) do not match, so the full real special geometry enters. We exploit only the vectorial symmetry, and then \(X^{I}\) and \(b^{I}\) must be linear combinations of _three_ vectors: \(c^{IJK}\widetilde{Q}_{J}\widetilde{Q}_{K}\), \(c^{IJK}\widetilde{Q}_{J}\xi_{K}\), and \(c^{IJK}\xi_{J}\xi_{K}\). One linear relation of this kind was given in (5.32). To find another, we contract (5.32) with \(\widetilde{Q}_{I}\) and simplify using (5.37). This gives
\[\widetilde{Q}\cdot X=8e^{-U_{2}}+4e^{-U_{1}}. \tag{5.48}\]
Combining this with (5.31) and (5.44), we have all four inner products of \(b^{I}\), \(X^{I}\), \(Q_{I}\) and \(\xi_{I}\). We already determined the scalar combinations \(c^{IJK}\xi_{I}\xi_{J}\widetilde{Q}_{K}\) and \(c^{IJK}\xi_{I}\widetilde{Q}_{J}\widetilde{Q}_{K}\) from (5.34) and (5.38), so we can establish the vectorial equation
\[\frac{1}{2}c^{IJK}\widetilde{Q}_{J}\xi_{K}=b^{I}+\frac{1}{8} \widetilde{J}e^{U_{1}}X^{I}. \tag{5.49}\]
Inversion of (5.32) and (5.49) gives
\[b^{I}=\frac{4G_{5}}{\pi}\cdot\frac{1}{2}\frac{c^{IJK}\xi_{J}Q_{K }\left(\frac{\mathcal{S}}{2\pi}\right)^{2}-\left(\frac{1}{2}c^{IJK}Q_{J}Q_{K}- \frac{1}{2}c^{IJK}\xi_{J}\xi_{K}\left(\frac{\mathcal{S}}{2\pi}\right)^{2} \right)J}{J^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}\, \tag{5.50}\]
and
\[X^{I}=4e^{-U_{1}}\frac{Jc^{IJK}\xi_{J}Q_{K}+\left(\frac{1}{2}c^{ IJK}Q_{J}Q_{K}-\frac{1}{2}c^{IJK}\xi_{J}\xi_{K}\left(\frac{\mathcal{S}}{2\pi} \right)^{2}\right)}{J^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}\, \tag{5.51}\]
where, once we impose the value of \(e^{-U_{1}}\) given in (5.45), \(X^{I}\) is a function of the entropy (5.35) and the charges of the black hole.
In summary, we have found that the near-horizon limit of the supersymmetric equations implies that the near-horizon fields and variables of the geometry/matter ansatz are given by the charges \(Q_{I}\) and \(J\), through the relations (5.44-5.51), which themselves parametrize a special extremum of the near-horizon entropy function (5.15), for general FI couplings \(\xi_{I}\) and \(c_{IJK}\).
### Complexification of the near-horizon variables
Each of the main results derived in the previous subsection is a complicated formula. However, they resemble one another and, in particular, it stands out that several expressions, such as (5.50) and (5.51), share a common denominator. Indeed, there is an elegant way to pair them into complexified near-horizon variables
\[Z^{I}=b^{I}-ie^{-\frac{1}{2}U_{2}}X^{I}=\frac{G_{5}}{\pi}\frac{c^{IJK}(Q_{J}+i\frac{\mathcal{S}}{2\pi}\xi_{J})(Q_{K}+i\frac{\mathcal{S}}{2\pi}\xi_{K})}{-J+i\frac{\mathcal{S}}{2\pi}}. \tag{5.52}\]
To the extent that \(X^{I}\) can be interpreted as an electric field, it is indeed natural that its partner is a magnetic field \(b^{I}\). In addition to the real part \(b^{I}\) being given by (5.50), we recognize in the imaginary part the combination of the factor \(e^{-U_{1}-\frac{1}{2}U_{2}}\) in the entropy \(\mathcal{S}\) and \(X^{I}e^{U_{1}}\), given respectively by (5.16) and (5.51).
Some discussions of the AdS\({}_{5}\) black hole geometry invoke from the outset principles that are inherently complex, such as the Euclidean path integral or special geometry in four dimensions. This can give conceptual challenges, so in our discussion of entropy extremization, complex variables such as (5.52) are introduced [57; 26] only for their apparent convenience. To make precise connections with the literature, we now reintroduce the electric fields \(e^{I}\) and \(e^{0}\) conjugate to the conserved charges \(\widetilde{Q}_{I}\) and \(\widetilde{J}\). For \(e^{0}\) defined in (5.13), simplification using (5.27) gives an expression for \(e^{0}\) that depends on \(ve^{U_{1}}\) in (5.47), \(e^{-3U_{1}}\) in (5.45), \(\mathcal{S}\) in (5.35), and \((\xi\cdot X)e^{U_{1}}\) in (5.44). Collecting formulae, we then find
\[e^{0}=-\frac{4\pi}{\mathcal{S}}\frac{\pi\ell^{3}}{4G_{5}}\frac{J\ell^{3}\left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}\right)-\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}{\ell^{6}\left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}\right)^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}. \tag{5.53}\]
This expression for \(e^{0}\) combines nicely with (5.47) and gives the complex potential
\[\frac{1}{2}e^{0}+i\frac{v}{4}e^{U_{1}}=\frac{\pi\ell^{3}}{4G_{5}}\left(\frac{2\pi}{\mathcal{S}}\right)\frac{-J+i\frac{\mathcal{S}}{2\pi}}{\ell^{3}\left(\frac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\frac{\pi}{4G_{5}}\right)+i\left(\frac{\mathcal{S}}{2\pi}\right)}. \tag{5.54}\]
Given \(e^{0}\) in (5.53), as well as (5.50) and (5.51), the electric potentials dual to the vectorial charges become
\[e^{I}=\frac{2\pi}{\mathcal{S}}\frac{\ell^{6}\left(\frac{1}{2}c^{IJK}Q_{J}Q_{K}-\frac{1}{2}c^{IJK}\xi_{J}\xi_{K}\left(\frac{\mathcal{S}}{2\pi}\right)^{2}\right)\left(\frac{1}{2}c^{LMN}Q_{L}\xi_{M}\xi_{N}+\frac{\pi}{4G_{5}}\right)+\ell^{3}c^{IJK}Q_{J}\xi_{K}\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}{\ell^{6}\left(\frac{1}{2}c^{LMN}Q_{L}\xi_{M}\xi_{N}+\frac{\pi}{4G_{5}}\right)^{2}+\left(\frac{\mathcal{S}}{2\pi}\right)^{2}}. \tag{5.55}\]
As preparation for the complexified version, we combine (5.50) and (5.51) as
\[\frac{v}{2}\left(b^{I}e^{U_{1}}+X^{I}\xi\cdot X\right)\] \[=\frac{\ell^{3}c^{IJK}Q_{J}\xi_{K}\left(\tfrac{1}{2}c^{LMN}Q_{L} \xi_{M}\xi_{N}+\tfrac{\pi}{4G_{5}}\right)-\ell^{3}\left(\tfrac{1}{2}c^{IJK}Q_{ J}Q_{K}-\tfrac{1}{2}c^{IJK}\xi_{J}\xi_{K}\left(\tfrac{\mathcal{S}}{2\pi}\right)^{2} \right)}{\ell^{6}\left(\tfrac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\tfrac{\pi}{4G_{5 }}\right)^{2}+\left(\tfrac{\mathcal{S}}{2\pi}\right)^{2}}\, \tag{5.56}\]
where we have imposed \(\tfrac{v}{4}e^{U_{1}}\) in (5.47), \(e^{-U_{1}}\) in (5.45), and \((\xi\cdot X)e^{U_{1}}\) in (5.44). We then find the complex special geometry vector
\[e^{I}+i\frac{v}{2}\left(b^{I}e^{U_{1}}+X^{I}(\xi\cdot X)\right)=\frac{2\pi}{ \mathcal{S}}\frac{\tfrac{1}{2}c^{IJK}(Q_{J}+i\tfrac{\mathcal{S}}{2\pi}\xi_{J} )(Q_{K}+i\tfrac{\mathcal{S}}{2\pi}\xi_{K})}{\ell^{3}(\tfrac{1}{2}c^{LMN}Q_{L} \xi_{M}\xi_{N}+\tfrac{\pi}{4G_{5}})+i\tfrac{\mathcal{S}}{2\pi}}. \tag{5.57}\]
The complex potentials (5.54-5.57) appear commonly in the literature, albeit with the normalization
\[\frac{\omega}{\pi}=\frac{\pi\ell^{3}}{4G_{5}}\left(\frac{2\pi}{\mathcal{S}} \right)\frac{-J+i\tfrac{\mathcal{S}}{2\pi}}{\ell^{3}\left(\tfrac{1}{2}c^{IJK} Q_{I}\xi_{J}\xi_{K}+\tfrac{\pi}{4G_{5}}\right)+i\left(\tfrac{\mathcal{S}}{2\pi} \right)}=\frac{1}{2}e^{0}+i\frac{v}{4}e^{U_{1}}\, \tag{5.58}\]
and
\[\frac{\Delta^{I}}{\pi}=\frac{2\pi}{\mathcal{S}}\frac{\tfrac{1}{2}c^{IJK}\ell ^{3}(Q_{J}+i\tfrac{\mathcal{S}}{2\pi}\xi_{J})(Q_{K}+i\tfrac{\mathcal{S}}{2\pi }\xi_{K})}{\ell^{3}\left(\tfrac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\tfrac{\pi}{4 G_{5}}\right)+i\left(\tfrac{\mathcal{S}}{2\pi}\right)}=e^{I}+i\frac{v}{2}(b^{I}e^{U_{1} }+X^{I}\xi\cdot X). \tag{5.59}\]
The real and imaginary parts of the complexified potentials \(\omega\) and \(\Delta^{I}\) are related to one another through
\[\xi_{I}e^{I}+e^{0}=0. \tag{5.60}\]
From the identities (5.28) and (5.30), we find
\[2\omega+\xi_{I}\Delta^{I}=2\pi i. \tag{5.61}\]
This is our version of the well-known complex constraint that is imposed on the chemical potentials conjugate to \(J\) and \(Q_{I}\) in analyses involving complex saddle points from the outset. An important example is the Hosseini-Hristov-Zaffaroni (HHZ) extremization principle for 5D rotating BPS black holes [69; 70; 71; 26]. The complexified potentials \(\omega\) and \(\Delta^{I}\) can be exploited to simplify the Lagrangian density (5.9). The linchpin is the identity
\[\frac{\tfrac{1}{6}c_{IJK}\Delta^{I}\Delta^{J}\Delta^{K}}{\omega^{2}}=\left( \frac{2\pi^{2}}{\mathcal{S}}\right)\frac{(-J+\tfrac{i\mathcal{S}}{2\pi})^{2}} {\ell^{3}\left(\tfrac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\tfrac{\pi}{4G_{5}} \right)+i\left(\tfrac{\mathcal{S}}{2\pi}\right)}. \tag{5.62}\]
It is established using the cube of (5.59), along with (B.4) to simplify the products of \(c_{IJK}\), as well as the square of (5.58) and the complexified charge relation (5.41). The same combination of terms appears when evaluating instead
\[\Delta^{I}Q_{I}-2\omega J=-\left(\frac{N^{2}}{2}\right)\left(\frac{2\pi^{2}}{ \mathcal{S}}\right)\frac{(-J+\tfrac{i\mathcal{S}}{2\pi})^{2}}{\ell^{3}\left( \tfrac{1}{2}c^{IJK}Q_{I}\xi_{J}\xi_{K}+\tfrac{\pi}{4G_{5}}\right)+i\left( \tfrac{\mathcal{S}}{2\pi}\right)}+\mathcal{S}\, \tag{5.63}\]
with the use of the definitions of \(\omega\) and \(\Delta^{I}\) in (5.58) and (5.59) respectively, as well as the complex relations (5.41) and (5.61), and exchanging \(G_{5}\) for \(N\) via \(\frac{\pi\ell^{3}}{4G_{5}}=\frac{N^{2}}{2}\). This allows us to rewrite the black hole entropy \(\mathcal{S}\) as
\[\mathcal{S}=\Delta^{I}Q_{I}-2\omega J+\frac{N^{2}}{2}\frac{\frac{1}{6}c_{IJK} \Delta^{I}\Delta^{J}\Delta^{K}}{\omega^{2}}. \tag{5.64}\]
Referring back to the real-valued entropy functional \(\mathcal{S}\) as the Legendre transform of the on-shell Lagrangian \(\mathcal{L}_{1}\), and noting that \((e^{I},e^{0})\) constitute the real parts of \((\Delta^{I},\omega)\), we obtain the greatly simplified expression
\[2\pi\mathcal{L}_{1,\text{nh}}=-\frac{N^{2}}{2}\text{Re}\left(\frac{\frac{1}{6} c_{IJK}\Delta^{I}\Delta^{J}\Delta^{K}}{\omega^{2}}\right). \tag{5.65}\]
We have thus been able to reproduce the standard HHZ entropy function result [26], while remaining entirely in 5D (no reduction to 4D), with the help of the entropy function formalism. The derivation of (5.64) also makes the Legendre transformation between the entropy \(\mathcal{S}\) and the complexified entropy function manifest.
## 6 Discussion
We have analysed the first order attractor flow equations derived from the vanishing of the supersymmetric variations in \(D=5\)\(\mathcal{N}=2\) gauged supergravity with FI-couplings to \(\mathcal{N}=2\) vector multiplets. We focus on solutions with electric charges \(Q_{I}\) and one independent angular momentum \(J\). In order to analyze the flow equations and find the conserved charges, we first assume a perturbative expansion at either the near-horizon geometry or the asymptotic boundary. As usual, the supersymmetry conditions are not sufficient to guarantee a solution to the equation of motion, but we find that the conserved Noether-Wald surface charges fill this gap. This leads to a self-contained set of first order differential equations.
To integrate these differential equations we need boundary conditions, or more generally integration constants. In the present setting, this turns out to be somewhat complicated. Generically, first order differential equations, even coupled ones, just need values at one point to compute the derivative and then, by iteration, the complete solution follows.8 We find that, whether starting from the black hole horizon or the asymptotic AdS\({}_{5}\), solving the first order equations is subtle. Supersymmetry conditions exhibit zero-modes which fail to provide a derivative, as a first order differential equation is expected to do. On the positive side, in these situations supersymmetry gives relations between the first few coefficients near a boundary.
Footnote 8: Locality is among the major caveats. In principle, first order differential equations give derivatives, and then the derivatives of the derivatives, and so on for the whole series. Generally, it is not easy to prove convergence for a series obtained this way, but this obstacle, and other mathematical fine points, do not appear significant at our level of analysis.
After exploiting conserved charges extensively, the initial value problem simplifies. Indeed, at the horizon, all fields must satisfy the entropy extremization conditions, discussed in detail in section 5. The relative simplicity of _shooting out_ from the horizon can be construed as black hole attractor behavior. The situation starting from asymptotic AdS is much more involved, as detailed in subsection 4.4.
We are far from the first to investigate the attractor flow for rotating AdS\({}_{5}\) black holes. Some notable works are [26; 57]. In our procedure, we have remained in five dimensions, without dimensionally reducing to four dimensions, where the metric no longer contains a fibration. Our approach is complementary, in that the role of rotation is highlighted. Additionally, we have allowed for backgrounds that go beyond the omnipresent STU model. Finally, we have also considered a complexification of the near-horizon variables that elucidates some features of the theory from the near-horizon perspective. This includes the well-known complex constraint on the chemical potentials.
Many open problems persist after our analysis of AdS\({}_{5}\) rotating black holes. For example, we derived the first order attractor flow equations from the supersymmetric variations of the \(\mathcal{N}=2\) gauged theory, but it would be instructive to also derive them from the Lagrangian. After a suitable Legendre transform, the dimensionally reduced Lagrangian can be written as a sum of squares, up to a total derivative. In minimizing the Lagrangian, each square gives a condition that is equivalent to the vanishing of the supersymmetric variations. It would be interesting to extract the flow equations from this method as it can also be more directly related to the entropy extremization once the near-horizon limit is taken. We also expect this now-radial entropy function to greatly simplify once the fields and variables in it are suitably complexified, such as was done at the near-horizon level. This would allow for an understanding of the underlying complex structure of the rotating AdS\({}_{5}\) black hole spacetime without the customary reduction to 4D.
Higher derivative corrections in the context of AdS\({}_{5}\) black holes have been studied by [73; 74; 75; 63] and references therein, and it would be interesting to understand the role of higher derivative corrections in the attractor flow. This is also interesting from the entropy extremization point of view and allows us to probe higher derivative corrections to the entropy from the near-horizon, which can be checked via holography. Finally, a similar analysis can then be completed in other dimensions, including the rotating AdS black holes in six and seven dimensions [76; 77]. The product of the scalar fields with one of the parameters of the metric yields a harmonic function and we would expect that one can solve the flow equations using a similar approach via a perturbative expansion. We hope to comment on these ideas in the near future.
## Acknowledgements
We thank Nikolay Bobev, Pablo Cano, Mirjam Cvetic, Alan Fukelman, Luca Illiesiu, Sameer Murthy and Enrico Turetta for valuable discussions. FL thanks the Simons Foundation for support through a sabbatical fellowship. He also thanks Stanford Institute for Theoretical Physics for hospitality and support in the course of the sabbatical. MD thanks CERN for hospitality in the final stages of this work and is especially thankful to Alejandro Cabo-Bizet for letting her borrow an adaptor for her laptop charger to complete the finishing touches of the paper. MD is supported in part by the NSF Graduate Research Fellowship Program under NSF Grant Number: DGE 1256260 and by KU Leuven C1 grant ZKD1118 C16/16/005, and by the Research Programme of The Research Foundation - Flanders (FWO) grant G0F9516N. NE is supported in part by the Leinweber Graduate Fellowship. This work was supported in part by the U.S. Department of Energy under grant DE-SC0007859.
## Appendix A Conventions and notations
In this Appendix, we summarize the conventions and notations used in the various expressions involving differential geometry as well as real special geometry.
We introduce components as
\[\xi=\xi^{\mu}\frac{\partial}{\partial x^{\mu}}\,\quad\omega=\frac{1}{r!} \omega_{\mu_{1}\ldots\mu_{r}}\ \mathrm{d}x^{\mu_{1}}\wedge\ldots\wedge\mathrm{d}x^{\mu_{r}}\.\] (A.1)
In this notation the interior product \(i_{\xi}\) of \(\omega\) with respect to \(\xi\) is
\[\begin{split} i_{\xi}\omega&=\frac{1}{(r-1)!}\xi^{ \nu}\omega_{\nu\mu_{2}\ldots\mu_{r}}\ \mathrm{d}x^{\mu_{2}}\wedge\ldots\wedge\mathrm{d}x^{\mu_{r}}\\ &=\frac{1}{r!}\sum_{s=1}^{r}\xi^{\mu_{s}}\omega_{\mu_{1}\ldots \mu_{s}\ldots\mu_{r}}(-1)^{s-1}\ \mathrm{d}x^{\mu_{1}}\wedge\ldots\wedge\widehat{\mathrm{d}x^{\mu_{s}}} \wedge\ldots\wedge\mathrm{d}x^{\mu_{r}}\.\end{split}\] (A.2)
The wide hat indicates that \(dx^{\mu_{s}}\) is removed.
The Hodge dual is defined by
\[\star_{m}(\ \mathrm{d}x^{\mu_{1}}\wedge\mathrm{d}x^{\mu_{2}}\wedge\ldots \wedge\mathrm{d}x^{\mu_{r}})=\frac{\sqrt{|g|}}{(m-r)!}\varepsilon^{\mu_{1} \mu_{2}\ldots\mu_{r}}{}_{\nu_{r+1}\ldots\nu_{m}}\ \mathrm{d}x^{\nu_{r+1}}\wedge\ldots \wedge\mathrm{d}x^{\nu_{m}}\,\] (A.3)
where the subscript \(m\) denotes the dimension of the spacetime, \(r\) is the rank of the form, and the totally antisymmetric tensor is
\[\varepsilon_{\mu_{1}\mu_{2}\ldots\mu_{m}}=\begin{cases}+1&\text{ if }\left(\mu_{1}\mu_{2}\ldots\mu_{m}\right)\text{ is an even permutation of }(12\ldots m)\\ -1&\text{ if }\left(\mu_{1}\mu_{2}\ldots\mu_{m}\right)\text{ is an odd permutation of }(12\ldots m)\ \.\\ 0&\text{ otherwise.}\end{cases}\] (A.4)
The indices on the totally antisymmetric symbol \(\varepsilon_{\mu_{1}\mu_{2}\ldots\mu_{m}}\) can be raised by the metric through
\[\varepsilon^{\mu_{1}\mu_{2}\ldots\mu_{m}}=g^{\mu_{1}\nu_{1}}g^{\mu_{2}\nu_{2}} \ldots g^{\mu_{m}\nu_{m}}\varepsilon_{\nu_{1}\nu_{2}\ldots\nu_{m}}=g^{-1}\varepsilon _{\mu_{1}\mu_{2}\ldots\mu_{m}}.\] (A.5)
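As a quick numerical sanity check of the determinant factor in (A.5) (a minimal sketch of ours, not part of the paper; the positive-definite "metric" is randomly generated and we work in \(m=3\)):

```python
import numpy as np
from itertools import permutations

m = 3
eps = np.zeros((m,) * m)
for perm in permutations(range(m)):
    # sign of the permutation via parity of inversions
    inv = sum(perm[i] > perm[j] for i in range(m) for j in range(i + 1, m))
    eps[perm] = (-1) ** inv

rng = np.random.default_rng(0)
A = rng.normal(size=(m, m))
g = A @ A.T + m * np.eye(m)          # random positive-definite metric (assumption)
ginv = np.linalg.inv(g)

# raise all indices with the inverse metric, as in (A.5)
eps_up = np.einsum('ia,jb,kc,abc->ijk', ginv, ginv, ginv, eps)
assert np.allclose(eps_up, eps / np.linalg.det(g))   # matches the g^{-1} factor
```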
The Hodge dual of the identity 1 gives the invariant volume element
\[\star_{m}1=\frac{\sqrt{|g|}}{m!}\varepsilon_{\mu_{1}\mu_{2}\ldots\mu_{m}}\ \mathrm{d}x^{\mu_{1}}\wedge\ldots\wedge\mathrm{d}x^{\mu_{m}}=\sqrt{|g|} \mathrm{d}x^{1}\wedge\ldots\wedge\mathrm{d}x^{m}\.\] (A.6)
We define the \(r\)-forms \(U\) and \(V\) as
\[U=\frac{1}{r!}U_{\mu_{1}\ldots\mu_{r}}\ \mathrm{d}x^{\mu_{1}}\wedge\ldots \wedge\mathrm{d}x^{\mu_{r}},\qquad V=\frac{1}{r!}V_{\mu_{1}\ldots\mu_{r}}\ \mathrm{d}x^{\mu_{1}}\wedge\ldots\wedge\mathrm{d}x^{\mu_{r}}\,\] (A.7)
such that
\[U\wedge\star_{m}V=V\wedge\star_{m}U=\frac{1}{r!}U_{\mu_{1}\ldots\mu_{r}}V^{\mu _{1}\ldots\mu_{r}}\sqrt{|g|}\mathrm{d}x^{1}\wedge\ldots\wedge\mathrm{d}x^{m}.\] (A.8)
## Appendix B Real special geometry
In this appendix we summarize the conventions and formulae needed for manipulations in real special geometry. We study \(\mathcal{N}=2\) theories with \(n_{V}\) vector multiplets and \(n_{H}=0\) hyper-multiplets. The starting point is a collection of real 5D scalar fields \(X^{I}\) with \(I=0,1,\ldots,n_{V}\). They are subject to the constraint
\[\frac{1}{6}c_{IJK}X^{I}X^{J}X^{K}=1\, \tag{B.1}\]
where the structure constants \(c_{IJK}\) are real numbers, completely symmetric in \(I\), \(J\), and \(K\), that satisfy the closure relation
\[c_{IJK}c_{J^{\prime}(LM}c_{PQ)K^{\prime}}\delta^{JJ^{\prime}}\delta^{KK^{ \prime}}=\frac{4}{3}\delta_{I(L}c_{MPQ)}. \tag{B.2}\]
The index \(I\) takes \(n_{V}+1\) distinct values but, because of the constraint (B.1), there are \(n_{V}\) independent scalar fields, one for each \(\mathcal{N}=2\) vector multiplet in 5D. Round brackets \((\cdots)\) indicate symmetrization of indices with weight one so, for example, \(c_{IJK}=c_{(IJK)}\).
Using the Euclidean metric to define \(c^{IJK}\) with upper indices, meaning
\(c^{IJK}=\delta^{II^{\prime}}\delta^{JJ^{\prime}}\delta^{KK^{\prime}}c_{I^{ \prime}J^{\prime}K^{\prime}}\), the closure relation (B.2) can be rewritten as
\[c_{IJK}c^{J(LM}c^{PQ)K}=\frac{4}{3}\delta_{I}^{(L}c^{MPQ)}. \tag{B.3}\]
We also note the following identities involving symmetrizations
\[c_{IJK}c^{J(LM}c^{PQ)K} =\frac{1}{3}c_{IJK}\left(c^{JLM}c^{PQK}+c^{JLP}c^{MQK}+c^{JPM}c^{ LQK}\right)\, \tag{B.4}\] \[\delta_{I}^{(L}c^{MPQ}) =\frac{1}{4}\left(\delta_{I}^{L}c^{MPQ}+\delta_{I}^{M}c^{LPQ}+ \delta_{I}^{P}c^{LMQ}+\delta_{I}^{Q}c^{LMP}\right). \tag{B.5}\]
Given the scalars \(X^{I}\) and \(c_{IJK}\) as inputs, we define the scalar \(X_{I}\) (with lower index) and the metric on field space \(G_{IJ}\) as
\[X_{I} =\frac{1}{2}c_{IJK}X^{J}X^{K}\,\] \[G_{IJ} =\frac{1}{2}\left(X_{I}X_{J}-c_{IJK}X^{K}\right). \tag{B.6}\]
In manipulations we often use the formulae
\[G_{IJ}X^{J} =\frac{1}{2}X_{I}\,\] \[X_{I}X^{I} =3. \tag{B.7}\]
The closure relation (B.3) then requires that the inverse matrix \(G^{IJ}\) satisfies
\[c^{IJK}X_{K}=X^{I}X^{J}-\frac{1}{2}G^{IJ}. \tag{B.8}\]
It follows that, just as \(G_{IJ}\) lowers the index on \(X^{J}\) (up to a factor of \(\frac{1}{2}\)), the inverse \(G^{IJ}\) raises the index on \(X_{J}\)
\[G^{IJ}X_{J}=2(X^{I}X^{J}-c^{IJK}X_{K})X_{J}=2X^{I}. \tag{B.9}\]
We also note the identity
\[(c^{IJK}X_{K})(c_{ILM}X^{M})=\left(X^{I}X^{J}-\tfrac{1}{2}G^{IJ}\right)(X_{I}X_{L} -2G_{IL})=\delta_{L}^{J}+X^{J}X_{L}\,. \tag{B.10}\]
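The formulae (B.6)-(B.10) are easy to check numerically; the sketch below (ours, not part of the paper) uses the structure constants of the STU model, introduced later in this appendix, with a random point on the constraint surface (B.1):

```python
import numpy as np

# STU structure constants: c_{123} = 1 and permutations
# (indices run over 0,1,2 here; the text uses 1,2,3)
c = np.zeros((3, 3, 3))
for p in [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]:
    c[p] = 1.0

rng = np.random.default_rng(1)
Xup = rng.uniform(0.5, 2.0, size=3)
Xup[2] = 1.0 / (Xup[0] * Xup[1])            # enforce the constraint (B.1)

Xlo = 0.5 * np.einsum('ijk,j,k->i', c, Xup, Xup)        # (B.6)
G = 0.5 * (np.outer(Xlo, Xlo) - np.einsum('ijk,k->ij', c, Xup))
Ginv = np.linalg.inv(G)

assert np.allclose(G @ Xup, 0.5 * Xlo)                   # (B.7)
assert np.isclose(Xlo @ Xup, 3.0)                        # (B.7)
# c^{IJK} has the same components as c_{IJK} (Euclidean metric)
assert np.allclose(np.einsum('ijk,k->ij', c, Xlo),
                   np.outer(Xup, Xup) - 0.5 * Ginv)      # (B.8)
assert np.allclose(Ginv @ Xlo, 2.0 * Xup)                # (B.9)
lhs = np.einsum('ijk,k,ilm,m->jl', c, Xlo, c, Xup)
assert np.allclose(lhs, np.eye(3) + np.outer(Xup, Xlo))  # (B.10)
```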
In the literature, it is common to summarize real special geometry through the cubic polynomial
\[\mathcal{V}=\frac{1}{6}c_{IJK}X^{I}X^{J}X^{K}. \tag{B.11}\]
The constraint (B.1) is simply \(\mathcal{V}=1\). Differentiating _first_ and _then_ imposing the constraint \(\mathcal{V}=1\), we find
\[\mathcal{V}_{I} \equiv\frac{\partial\mathcal{V}}{\partial X^{I}}=\frac{1}{2}c_{ IJK}X^{J}X^{K}=X_{I}\, \tag{B.12}\] \[\mathcal{V}_{IJ} \equiv\frac{\partial^{2}\mathcal{V}}{\partial X^{I}\partial X^{J }}=c_{IJK}X^{K}\, \tag{B.13}\] \[G_{IJ} =-\frac{1}{2}\frac{\partial^{2}\ln\mathcal{V}}{\partial X^{I} \partial X^{J}}=\frac{1}{2}\left(\mathcal{V}_{I}\mathcal{V}_{J}-\mathcal{V}_{ IJ}\right)=\frac{1}{2}\left(X_{I}X_{J}-c_{IJK}X^{K}\right). \tag{B.14}\]
The inverse \(\mathcal{V}^{IJ}\) of \(\mathcal{V}_{IJ}\) (meaning it satisfies \(\mathcal{V}^{IJ}\mathcal{V}_{JK}=\delta_{K}^{I}\)) is given by
\[\mathcal{V}^{IJ}=\frac{1}{2}(X^{I}X^{J}-G^{IJ}). \tag{B.15}\]
The STU-model is an important example. In this special case \(n_{V}=2\) and we shift the labels so \(I=1,2,3\) (rather than \(I=0,1,2\)). The only nonvanishing \(c_{IJK}\) are \(c_{123}=1\) and all its permutations. In our normalizations, the STU model has
\[X^{1}X^{2}X^{3}=1\,\ \ X_{I}=(X^{I})^{-1}\,\ \ G_{IJ}=\frac{1}{2}X_{I}^{2} \delta_{IJ}\.\]
In these formulae there is no sum over \(I=1,2,3\). We add a special note about the formalism of real special geometry, this time adapted to the \(\xi_{I}\), which are subject to the constraint
\[\frac{1}{6}c^{IJK}\xi_{I}\xi_{J}\xi_{K}=\ell^{-3}. \tag{B.16}\]
Following similar steps, imposing consistency with the raised \(c^{IJK}\) through the closure relation (B.3), we can define the following
\[\begin{split}\xi^{I}&=\frac{1}{2}c^{IJK}\xi_{J}\xi _{K}\,\\ \xi_{I}&=\frac{1}{2}\ell^{3}c_{IJK}\xi^{J}\xi^{K}. \end{split} \tag{B.17}\]
We then define the analogues of \(G_{IJ}\) and \(G^{IJ}\) for the \(\xi_{I}\)
\[\begin{split}\tilde{G}^{IJ}&=2\left(\ell^{3}\xi^{I }\xi^{J}-c^{IJK}\xi_{K}\right)\,\\ \tilde{G}_{IJ}&=\frac{1}{2}\ell^{3}\left(\xi_{I}\xi_ {J}-c_{IJK}\xi^{K}\right)\,\end{split} \tag{B.18}\]
which leads to the crucial inversion identity on the \(\xi_{I}\):
\[\frac{1}{2}\ell^{3}(c_{IKM}c^{MNP}\xi_{N}\xi_{P}-\xi_{I}\xi_{K})\left(c^{IJL} \xi_{L}\right)=\delta_{K}^{J}. \tag{B.19}\]
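A numerical spot-check of (B.19) in the STU model (our sketch, not from the paper; we set \(\ell=1\) and draw random \(\xi_{I}\) obeying the constraint (B.16)):

```python
import numpy as np

c = np.zeros((3, 3, 3))
for p in [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]:
    c[p] = 1.0                                        # STU structure constants

rng = np.random.default_rng(4)
xi = rng.uniform(0.5, 2.0, size=3)
xi[2] = 1.0 / (xi[0] * xi[1])                         # constraint (B.16), ell = 1

xi_up = 0.5 * np.einsum('ijk,j,k->i', c, xi, xi)      # (B.17)
# first factor of (B.19), using c^{MNP} xi_N xi_P = 2 xi^M
first  = 0.5 * (2.0 * np.einsum('ikm,m->ik', c, xi_up) - np.outer(xi, xi))
second = np.einsum('ijl,l->ij', c, xi)                # second factor c^{IJL} xi_L
assert np.allclose(np.einsum('ik,ij->jk', first, second), np.eye(3))
```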
A final comment: in this article, we take 5D supergravity as the starting point. For an introduction to the geometric interpretation of the 5D fields and the formulae they satisfy in terms of Calabi-Yau compactification of 11D supergravity, we refer to [17].
## Appendix C Supersymmetry conditions
In this appendix we establish the conditions that our _ansatz_ (4.3) must satisfy in order to preserve supersymmetry.
### The Kahler condition on the base geometry
We want to establish the conditions on the variables in the 4D base geometry in (4.3) that ensure that it is Kahler. For a given vielbein basis \(e^{a}\) on the base \(ds_{4}^{2}=\eta_{ab}e^{a}e^{b}\), such as (4.5), the Kahler condition is
\[d(e^{1}\wedge e^{4}-e^{2}\wedge e^{3})=0\.\] (C.1)
In the \((1+4)\) split (4.3), the base space (4.4) is automatically Kahler as
\[\left(g_{m}^{-1/2}\right)\left(\tfrac{1}{2}Rg_{m}^{1/2}dR\wedge\sigma_{3} \right)-\tfrac{1}{4}R^{2}\sigma_{1}\wedge\sigma_{2}=d\left(\tfrac{1}{4}R^{2} \sigma_{3}\right)\,,\] (C.2)
which is manifestly closed. We look instead to the \((2+3)\) split in (4.8) to obtain a nontrivial Kahler condition. For that, we rewrite (4.8) in the form \(ds_{5}^{2}=f^{2}(dt+\omega)^{2}-f^{-1}ds_{4}^{2}\), and find the warp factor
\[f=(e^{2\rho}-e^{-U_{2}}(a_{t}^{0})^{2})^{1/2}\,\] (C.3)
the 1-form
\[\omega=-f^{-2}e^{-U_{2}}a_{t}^{0}\ \sigma_{3}\,\] (C.4)
and the 4D base geometry
\[ds_{4}^{2}=fe^{2\sigma}dR^{2}+\frac{1}{4}R^{2}(\sigma_{1}^{2}+\sigma_{2}^{2})+ f^{-1}e^{2\rho-U_{2}}\sigma_{3}^{2}\.\] (C.5)
To find the condition for which (C.5) is Kahler, we introduce the basis 1-forms
\[e^{1} =f^{1/2}e^{\sigma}dR\,\] (C.6) \[e^{2} =\frac{1}{2}R\sigma_{1}\,\] (C.7) \[e^{3} =\frac{1}{2}R\sigma_{2}\,\] (C.8) \[e^{4} =f^{-1/2}e^{\rho-U_{2}/2}\sigma_{3}\.\] (C.9)
The Kahler 2-form \(J=e^{1}\wedge e^{4}-e^{2}\wedge e^{3}\) becomes
\[J=e^{\sigma+\rho-U_{2}/2}dR\wedge\sigma_{3}-\frac{1}{4}R^{2}\sigma_{1}\wedge \sigma_{2}\.\] (C.10)
The Kahler condition demands that \(J\) is closed
\[dJ=e^{\sigma+\rho-U_{2}/2}dR\wedge\sigma_{1}\wedge\sigma_{2}-\frac{1}{2}RdR \wedge\sigma_{1}\wedge\sigma_{2}=0\.\] (C.11)
We therefore find
\[e^{\sigma+\rho-U_{2}/2}=\frac{1}{2}R\.\] (C.12)
This condition must be satisfied so that the general ansatz (4.8) can support supersymmetry. The Kahler condition allows us to rewrite the base geometry (C.5) as
\[ds_{4}^{2}=fe^{2\sigma}dR^{2}+\frac{1}{4}R^{2}(\sigma_{1}^{2}+\sigma_{2}^{2}+f^{- 1}e^{-2\sigma}\sigma_{3}^{2})\.\] (C.13)
This form of the base geometry depends on a single function \(fe^{2\sigma}\).
### Kahler potential
The Kahler condition (C.12) relates the 1-forms \(e^{1}\) and \(e^{4}\) in (4.5). If we define a radial coordinate \(r\) such that
\[\partial_{r}R=f^{-\frac{1}{2}}e^{-\sigma}\,\] (C.14)
the tetrad simplifies so
\[e^{1} =dr\,\] (C.15) \[e^{4} =\partial_{r}\left(\tfrac{1}{4}R^{2}\right)\sigma_{3}\.\] (C.16)
with \(e^{2}\) and \(e^{3}\) unchanged. In these coordinates, the unique spin connections solving Cartan's equations \(de^{a}+\omega^{a}_{\ b}e^{b}=0\) are
\[{}^{4}\omega^{2}_{\ 1} =^{4}\omega^{4}_{\ 3} =\frac{\partial_{r}R}{R}e^{2}\,\] (C.17) \[{}^{4}\omega^{3}_{\ 1} =^{4}\omega^{2}_{\ 4} =\frac{\partial_{r}R}{R}e^{3}\,\] (C.18) \[{}^{4}\omega^{4}_{\ 1} =\left(\frac{\partial_{r}R}{R}+\frac{\partial_{r}^{2}R}{\partial _{r}R}\right)e^{4}\,\] (C.19) \[{}^{4}\omega^{2}_{\ 3} =\left(\frac{\partial_{r}R}{R}-\frac{2}{R\partial_{r}R}\right)e^{ 4}\,\] (C.20)
where the \({}^{4}\) superscript distinguishes these 4D spin connections from the 5D spin connections that will appear in later computations. The resulting curvature 2-forms \(R^{a}_{\ b}=d\omega^{a}_{\ b}+\omega^{a}_{\ c}\omega^{c}_{\ b}\) on the 4D base become
\[R^{2}_{\ 1} =R^{4}_{\ 3}=\frac{\partial_{r}^{2}R}{R}(e^{1}e^{2}+e^{3}e^{4})\,\] (C.21) \[R^{3}_{\ 1} =R^{2}_{\ 4}=\frac{\partial_{r}^{2}R}{R}(e^{1}e^{3}-e^{2}e^{4})\,\] (C.22) \[R^{4}_{\ 1} =\left(\frac{\partial_{r}^{3}R}{\partial_{r}R}+3\frac{\partial_{ r}^{2}R}{R}\right)e^{1}e^{4}+2\frac{\partial_{r}^{2}R}{R}e^{3}e^{2}\,\] (C.23) \[R^{2}_{\ 3} =2\frac{\partial_{r}^{2}R}{R}e^{1}e^{4}+\frac{4}{R^{2}}((\partial _{r}R)^{2}-1)e^{3}e^{2}\.\] (C.24)
The components of the Riemann curvature are read off from \(R^{a}_{\ b}=\frac{1}{2}\text{Riem}^{a}_{\ bcd}e^{c}e^{d}\). For a complex manifold they are collected succinctly in the Kahler curvature 2-form with components \(\mathcal{R}_{ab}=\frac{1}{2}\text{Riem}_{abcd}J^{cd}\). In the context of our _ansatz_ (4.4), we have
\[\mathcal{R}_{14} =\epsilon(\text{Riem}_{1423}-\text{Riem}_{1414})=\epsilon\left( \frac{\partial_{r}^{3}R}{\partial_{r}R}+5\frac{\partial_{r}^{2}R}{R}\right)\,\] \[\mathcal{R}_{23} =\epsilon(\text{Riem}_{2323}-\text{Riem}_{1423})=-\epsilon\left(2 \frac{\partial_{r}^{2}R}{R}+\frac{4}{R^{2}}((\partial_{r}R)^{2}-1)\right)\,\] (C.25)
and so the Kahler curvature 2-form becomes
\[\begin{split}{\cal R}&=\frac{1}{2}\epsilon\left(R\partial_{r}^{3}R+5\partial_ {r}R\partial_{r}^{2}R\right)dr\sigma_{3}-\frac{1}{2}\epsilon\left(R\partial_{r }^{2}R+2((\partial_{r}R)^{2}-1)\right)\sigma_{1}\sigma_{2}\\ &=\epsilon\,d\left(\left(\tfrac{1}{2}R\partial_{r}^{2}R+(\partial_{r}R)^ {2}-1\right)\sigma_{3}\right). \end{split} \tag{C.26}\]
It is manifestly of the form \({\cal R}=dP\) where \(P=p\sigma_{3}\) with
\[p=\epsilon\left(\frac{1}{2}R\partial_{r}^{2}R+(\partial_{r}R)^{2}-1\right)= \epsilon\left(\frac{1}{4}R\partial_{R}(\frac{1}{f}e^{-2\sigma})+\frac{1}{f}e ^{-2\sigma}-1\right). \tag{C.27}\]
The second equation follows by repeated use of (C.14). Since \({\cal R}\) is the exterior derivative of something, it is clearly closed. Thus the base manifold is Kahler.
The final expression (C.27) depends on the single scalar function \(fe^{2\sigma}\) that determines the base geometry (C.13). It encapsulates everything about the curvature of the 4D base.
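Up to the overall sign \(\epsilon\), the equality of the two forms of \(p\) in (C.27) follows from (C.14) alone; a short symbolic check (our sketch, writing \(h(R)\equiv f^{-1}e^{-2\sigma}\) as an arbitrary function of \(R\)):

```python
import sympy as sp

R = sp.symbols('R', positive=True)
h = sp.Function('h')(R)                  # h = (1/f) e^{-2 sigma}, cf. (C.14)

drR = sp.sqrt(h)                         # (C.14): dR/dr = f^{-1/2} e^{-sigma}
d2rR = sp.diff(drR, R) * drR             # chain rule: d^2R/dr^2 = (dR/dr) d/dR (dR/dr)

first  = sp.Rational(1, 2) * R * d2rR + drR**2 - 1       # first form in (C.27)
second = sp.Rational(1, 4) * R * sp.diff(h, R) + h - 1   # second form in (C.27)
assert sp.simplify(first - second) == 0
```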
### Supersymmetry conditions
The \({\cal N}=2\) supergravity theory we consider is, in particular, invariant under the fermionic transformations of the gaugino and the gravitino
\[\delta\lambda =\left[G_{IJ}\left(\frac{1}{2}\gamma^{\mu\nu}F^{J}_{\ \mu\nu}-\gamma^{\mu}\nabla_{\mu}X^{J}\right)\epsilon^{\alpha}-\xi_{I} \epsilon^{\alpha\beta}\epsilon^{\beta}\right]\partial_{i}X^{I}\, \tag{C.28}\] \[\delta\psi^{\alpha}_{\mu} =\left[(\partial_{\mu}-\frac{1}{4}\omega^{\nu\rho}_{\mu}\gamma_{ \nu\rho})+\frac{1}{24}(\gamma_{\mu}^{\ \nu\rho}-4\delta_{\mu}^{\ \nu}\gamma^{\rho})X_{I}F^{I}_{\nu\rho}\right] \epsilon^{\alpha}+\frac{1}{6}\xi_{I}(3A^{I}_{\mu}-X^{I}\gamma_{\mu}) \epsilon^{\alpha\beta}\epsilon^{\beta}\, \tag{C.29}\]
where \(\epsilon^{\alpha}\) (\(\alpha=1,2\)) are symplectic Majorana spinors. For bosonic solutions to the theory that respect at least some supersymmetry these variations vanish for the spinors \(\epsilon^{\alpha}\) that generate the preserved supersymmetry. Supersymmetric black holes in AdS\({}_{5}\) with finite horizon area preserve the supersymmetry generated by the spinors \(\epsilon^{\alpha}\) that satisfy the projections
\[\gamma^{0}\epsilon^{\alpha} =\epsilon^{\alpha}\, \tag{C.30}\] \[\frac{1}{4}J^{(1)}_{mn}\gamma^{mn}\epsilon^{\alpha} =-\epsilon^{\alpha\beta}\epsilon^{\beta}. \tag{C.31}\]
Each of these equations imposes two projections on the spinor \(\epsilon^{\alpha}\). All these projections commute, so the resulting black holes preserve \(2^{-4}=1/16\) of the maximal supersymmetry.
We seek to work out the conditions that set the supersymmetric variations (C.28) and (C.29) to zero, satisfying the projections (C.30) and (C.31) imposed on purely bosonic solutions. We use the matter ansatz and geometry in (4.7) and (4.8), respectively.
The gamma matrices are defined with respect to a flat 5D space and satisfy the Clifford algebra
\[\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}\, \tag{C.32}\]
with the flat 5D space defined in (4.8) via the following vielbein
\[E^{0} =f(dt+w\sigma_{3})\, \tag{C.33}\] \[E^{i} =f^{-\frac{1}{2}}e^{i}\, \tag{C.34}\]
where \(e^{i}\) with spatial indices refers to the 4D vielbein introduced in (4.5). Furthermore, the gamma matrices \(\gamma^{\mu}\) following the projection (4.12) satisfy
\[-\frac{\epsilon}{2}(\gamma^{23}-\gamma^{14})\epsilon^{\alpha}=\epsilon \epsilon^{\alpha\beta}\epsilon^{\beta}\,\] (C.35)
where \(\gamma^{\mu\nu}\) is the antisymmetrized product for \(\mu\neq\nu\), which means that after squaring (C.35) we obtain
\[\gamma^{1234}\epsilon^{\alpha}=\epsilon^{\alpha}\,\] (C.36)
and thus
\[\gamma^{14}\epsilon^{\alpha}=-\gamma^{23}\epsilon^{\alpha}\.\] (C.37)
This becomes relevant for evaluating inner products of components of 2-forms and \(\gamma^{ab}\) as well as their decomposition into self-dual and anti-self-dual terms.
#### C.3.1 The gaugino equation
Recall that the gaugino equation is given by (C.28), where the 5D 2-form \(F^{I}=dA^{I}\) can be computed from (4.7)
\[F^{I}=\partial_{R}(fY^{I})e^{-\sigma}f^{-1}E^{1}\wedge E^{0}+4f\left(fY^{I} \partial_{R^{2}}w+\partial_{R^{2}}u^{I}\right)E^{1}\wedge E^{4}-\frac{4f}{R^{ 2}}\left(fY^{I}w+u^{I}\right)E^{2}\wedge E^{3}\.\] (C.38)
The spatial \(F^{I}_{mn}\) components can be rearranged into self-dual and anti-self-dual terms
\[\begin{split} F^{I}&=\partial_{R}(fY^{I})e^{- \sigma}f^{-1}E^{1}\wedge E^{0}\\ &\quad+2f\left(fY^{I}\left(\partial_{R^{2}}-\frac{1}{R^{2}} \right)w+\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)u^{I}\right)(E^{1}\wedge E ^{4}+E^{2}\wedge E^{3})\\ &\quad+2f\left(fY^{I}\left(\partial_{R^{2}}+\frac{1}{R^{2}} \right)w+\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}\right)(E^{1}\wedge E ^{4}-E^{2}\wedge E^{3})\.\end{split}\] (C.39)
Since \((\gamma^{14}+\gamma^{23})\epsilon^{\alpha}=0\) per (C.37), only the anti-self-dual components of \(F^{J}_{\mu\nu}\) contribute to the gaugino variation via \(F^{J}_{\mu\nu}\gamma^{\mu\nu}\). We thus simplify \(G_{IJ}\frac{1}{2}\gamma^{\mu\nu}F^{J}_{\mu\nu}\) to find
\[\begin{split}& G_{IJ}\frac{1}{2}\gamma^{\mu\nu}F^{J}_{\mu\nu} \epsilon^{\alpha}\\ &=G_{IJ}\left[\partial_{R}(fY^{J})e^{-\sigma}f^{-1}\gamma^{10}+2 f\left(fY^{J}\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)w+\left(\partial_{R^{2}}+ \frac{1}{R^{2}}\right)u^{J}\right)(\gamma^{14}-\gamma^{23})\right]\epsilon^{ \alpha}\.\end{split}\] (C.40)
We then move on to the second term of (C.28), noting that \(X^{I}\) is only a function of \(R\)
\[G_{IJ}(-\gamma^{\mu}\nabla_{\mu}X^{J})\epsilon^{\alpha}=G_{IJ}\left(-\gamma^{ 1}e^{-\sigma}\partial_{R}X^{J}\right)\epsilon^{\alpha}\.\] (C.41)
Lastly, the third term of (C.28) becomes
\[-\xi_{I}\epsilon^{\alpha\beta}\epsilon^{\beta}=+\epsilon\xi_{I}\gamma^{23} \epsilon^{\alpha}\.\] (C.42)
Combining all three contributions, we obtain the following equations
\[G_{IJ}\left[\partial_{R}(fY^{J})-f\partial_{R}X^{J}\right]\partial_ {i}X^{I} =0\, \tag{C.43}\] \[\left[4G_{IJ}f\left(\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u ^{J}+fY^{J}\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)w\right)+\epsilon\xi_{I }\right]\partial_{i}X^{I} =0. \tag{C.44}\]
Since \(X_{I}\partial_{i}X^{I}=\frac{1}{3}\partial_{i}(X_{I}X^{I})=0\), the \(f\partial_{R}X^{J}\) term can be rewritten as \(\partial_{R}(fX^{J})\) and thus we obtain
\[\boxed{G_{IJ}\left[\partial_{R}(fY^{J})-\partial_{R}(fX^{J})\right]\partial_ {i}X^{I}=0\,} \tag{C.45}\]
This can be reexpressed by defining a vector \(\delta^{I}=fY^{I}-fX^{I}\), to imply that \(\partial_{R}\delta^{I}\) is orthogonal to \(\partial_{i}X^{I}\), and thus proportional to \(X^{I}\):
\[\partial_{R}\delta^{I}=kX^{I}\,, \tag{C.46}\]
for some constant \(k\). We will focus on the special solution where \(\delta^{I}\) vanishes, meaning
\[X^{I}=Y^{I}. \tag{C.47}\]
Using this relation, we now move on to (C.44), the second gaugino variation result. It is a projection of the vector quantity in the square brackets along the direction of \(\partial_{i}X^{I}\). The immediate consequence is that this quantity is proportional to \(X_{I}\). Rearranging terms, we obtain the ambiguous result
\[\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}=\frac{1}{2}\epsilon f^{-1} c^{IJK}X_{J}\xi_{K}+\frac{1}{2}f^{-1}\lambda X^{I}\, \tag{C.48}\]
with \(\lambda\) a scalar coefficient that arises from the ambiguity in defining the quantity in square brackets in (C.44) as orthogonal to \(\partial_{i}X^{I}\). Determining this coefficient requires resorting to further supersymmetry relations, which leads us to the vanishing of the gravitino variation (C.29).
#### C.3.2 The gravitino equation
In order to simplify the vanishing of the gravitino equation (C.29), we need to establish the components of the 5D spin connection that appear in the term \(-\frac{1}{4}\ \omega^{\nu\rho}_{\mu}\gamma_{\nu\rho}\epsilon^{\alpha}\). Based on the vielbein (C.33) and (C.34), we have
\[\begin{array}{ll}\omega^{0}_{\ 1}=f^{-1}e^{-\sigma}\partial_{R}fE^{0}+2f^{2 }\partial_{R^{2}}wE^{4}\,&\omega^{0}_{\ 2}=-\frac{2f^{2}w}{R^{2}}E^{3}\,\\ \omega^{0}_{\ 3}=\frac{2f^{2}w}{R^{2}}E^{2}\,&\omega^{0}_{\ 4}=-2f^{2} \partial_{R^{2}}wE^{1}\,\\ \omega^{2}_{\ 1}=\ ^{4}\omega^{2}_{\ 1}-\frac{1}{2}f^{-1}e^{-\sigma} \partial_{R}fE^{2}\,&\omega^{3}_{\ 1}=\ ^{4}\omega^{3}_{\ 1}-\frac{1}{2}f^{-1}e^{-\sigma} \partial_{R}fE^{3}\,\\ \omega^{4}_{\ 1}=\ ^{4}\omega^{4}_{\ 1}-\frac{1}{2}f^{-1}e^{-\sigma} \partial_{R}fE^{4}-2f^{2}\partial_{R^{2}}wE^{0}\,&\omega^{2}_{\ 3}=\ ^{4}\omega^{2}_{\ 3}-\frac{2f^{2}w}{R^{2}}E^{0}\,\\ \omega^{3}_{\ 4}=\ ^{4}\omega^{3}_{\ 4}\,&\omega^{4}_{\ 2}=\ ^{4}\omega^{4}_{\ 2}\,\end{array} \tag{C.49}\]
where \({}^{4}\omega^{m}_{\ n}\) represents the 4D spin connections (C.17-C.20), and \(e^{m}\) are the 4D tetrad 1-forms, which are related to the \(E^{\mu}\) (C.33) and (C.34) via \(e^{m}=f^{1/2}E^{m}\). We now proceed to evaluate the components of the gravitino variation (C.29), starting with \(\mu=0\):
\[\left(\partial_{0}-\frac{1}{4}\omega^{\nu\rho}_{0}\gamma_{\nu\rho }\right)\epsilon^{\alpha} =\left(\partial_{0}+\gamma^{23}f^{2}\left(\partial_{R^{2}}+\frac {1}{R^{2}}\right)w-\frac{1}{2}f^{-1}e^{-\sigma}\partial_{R}f\gamma^{1}\right) \epsilon^{\alpha}\,\] (C.50) \[\frac{1}{24}(\gamma^{\nu\rho}_{0}-4\delta^{\nu}_{0}\gamma^{\rho}) X_{I}F^{I}_{\nu\rho}\epsilon^{\alpha} =\left(-\gamma^{23}f^{2}\left[\left(\partial_{R^{2}}+\frac{1}{R^{2}} \right)w+\frac{1}{3}f^{-1}X_{I}\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u ^{I}\right]\right.\] \[\left.\qquad+\frac{1}{2}f^{-1}e^{-\sigma}\partial_{R}f\gamma^{1} \right)\epsilon^{\alpha}\,\] (C.51)
\[\frac{1}{6}\xi_{I}(3A^{I}_{0}-X^{I}\gamma_{0})\epsilon^{\alpha \beta}\epsilon^{\beta} =\epsilon\frac{1}{3}\xi_{I}X^{I}\gamma^{23}\epsilon^{\alpha}\,\] (C.52)
where \(A^{I}_{0}\) stands for the component of the \(A^{I}\) 1-form along the \(E^{0}\) flat vielbein, which amounts to \(A^{I}_{0}=f^{-1}A^{I}_{t}=f^{-1}(fX^{I})=X^{I}\). Thus, adding up the three contributions in (C.50), (C.51) and (C.52), we note that the terms proportional to \(\gamma^{1}\) cancel out identically. What is left are terms proportional to the identity and to \(\gamma^{23}\), which, when made to vanish separately, lead to two results
\[\partial_{0}\epsilon=0\,\] (C.53) \[fX_{I}\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}- \epsilon\xi_{I}X^{I}=0\.\] (C.54)
This equation is another expression involving a projection of the quantity \(\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}\). Rather than a redundant relation, it can in fact be used to further constrain the ambiguity in \(u^{I}\) that arose from the projection in the gaugino variation (C.44). In fact, (C.48) and (C.54) imply that
\[fX_{I}\left(\frac{1}{2}\epsilon f^{-1}c^{IJK}X_{J}\xi_{K}+\frac{1}{2}f^{-1} \lambda X^{I}\right)-\epsilon\xi_{I}X^{I}=0\,\] (C.55)
which immediately means that \(\lambda=0\). The final result is given by
\[\boxed{\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}=\frac{1}{2}\epsilon f ^{-1}c^{IJK}X_{J}\xi_{K}\.}\] (C.56)
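The cancellation that forces \(\lambda=0\) rests on the contractions \(c^{IJK}X_{I}X_{J}=2X^{K}\) (contract (B.8) with \(X_{J}\)) and \(X_{I}X^{I}=3\); a compact numerical check in the STU model (our sketch, not part of the paper):

```python
import numpy as np

c = np.zeros((3, 3, 3))
for p in [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]:
    c[p] = 1.0                                     # STU structure constants

rng = np.random.default_rng(2)
Xup = rng.uniform(0.5, 2.0, size=3)
Xup[2] = 1.0 / (Xup[0] * Xup[1])                   # constraint (B.1)
Xlo = 0.5 * np.einsum('ijk,j,k->i', c, Xup, Xup)   # (B.6)
xi = rng.uniform(0.5, 2.0, size=3)                 # arbitrary FI couplings (assumption)

# X_I (1/2 c^{IJK} X_J xi_K) equals xi_I X^I, so the xi terms in (C.55) cancel
lhs = 0.5 * np.einsum('ijk,i,j,k->', c, Xlo, Xlo, xi)
assert np.isclose(lhs, xi @ Xup)                   # hence lambda = 0
```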
We now move on to the spatial components of (C.29). For \(\mu=1\):
\[\left(\partial_{1}-\frac{1}{4}\omega^{\nu\rho}_{1}\gamma_{\nu \rho}\right)\epsilon^{\alpha} =\left(\partial_{1}+\gamma^{4}f^{2}\partial_{R^{2}}w\right) \epsilon^{\alpha}\,\] (C.57) \[\frac{1}{24}(\gamma^{\nu\rho}_{1}-4\delta^{\nu}_{1}\gamma^{\rho}) X_{I}F^{I}_{\nu\rho}\epsilon^{\alpha} =\left(-\gamma^{4}f^{2}\left[\left(2\partial_{R^{2}}-\frac{1}{R^ {2}}\right)w+\frac{1}{3}f^{-1}X_{I}\left(2\partial_{R^{2}}-\frac{1}{R^{2}} \right)u^{I}\right]\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-\frac{1}{2}f^{ -1}e^{-\sigma}\partial_{R}f\gamma^{1}\right)\epsilon^{\alpha}\] (C.58) \[\frac{1}{6}\xi_{I}(3A^{I}_{1}-X^{I}\gamma_{1})\epsilon^{\alpha \beta}\epsilon^{\beta} =\frac{1}{6}\xi_{I}X^{I}\gamma^{4}\epsilon^{\alpha}=\frac{1}{6}fX_{I} \left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}\gamma^{4}\epsilon^{\alpha}\.\] (C.59)
Again, adding the contributions (C.57), (C.58) and (C.59), and separating out the terms proportional to the identity, \(\gamma^{1}\) and \(\gamma^{4}\), we obtain
\[\boxed{\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)w+\frac{1}{2}f^{-1}X_{I} \left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)u^{I}=0\,}\] (C.60)
as well as the spatial dependence of the spinor \(\epsilon\): \(\partial_{R}\epsilon=\frac{1}{2}f^{-1}(\partial_{R}f)\epsilon\), which leads to \(\epsilon=\epsilon_{0}f^{1/2}\) for some constant \(\epsilon_{0}\). The \(\mu=2\) and \(\mu=3\) components of (C.29) yield the same condition (C.60), which leaves us with \(\mu=4\) that introduces an additional term due to the appearance of the 4D spin connection terms:
\[\left(\partial_{4}-\frac{1}{4}\omega_{4}^{\nu\rho}\gamma_{\nu\rho}\right) \epsilon^{\alpha}=\left(\partial_{4}-\gamma^{1}f^{2}\partial_{R^{2}}w-\frac{ 1}{4}f^{-1}e^{-\sigma}\partial_{R}f\gamma^{14}+\frac{1}{4}\ ^{4}\omega_{mn}\gamma^{mn}\right) \epsilon^{\alpha}\,\] (C.61)
\[\frac{1}{24}(\gamma_{4}^{\nu\rho}-4\delta_{4}^{\nu}\gamma^{\rho}) X_{I}F_{\nu\rho}^{I}\epsilon^{\alpha} =\left(\gamma^{1}f^{2}\left[(2\partial_{R^{2}}-\frac{1}{R^{2}})w+ \frac{1}{3}f^{-1}X_{I}(2\partial_{R^{2}}-\frac{1}{R^{2}})u^{I}\right]\right.\] (C.62) \[\left.+\frac{1}{4}f^{-1}e^{-\sigma}\partial_{R}f\gamma^{14} \right)\epsilon^{\alpha}\,\] \[\frac{1}{6}\xi_{I}(3A_{4}^{I}-X^{I}\gamma_{4})\epsilon^{\alpha \beta}\epsilon^{\beta} =\left(\frac{1}{2}\epsilon\xi_{I}u^{I}\gamma^{23}-\frac{1}{6}fX_{I }(\partial_{R^{2}}+\frac{1}{R^{2}})u^{I}\gamma^{1}\right)\epsilon^{\alpha}\.\] (C.63)
Combining these terms leads to the condition (C.60) as well as the 4D relation
\[\left(\frac{1}{4}\ ^{4}\omega_{mn}\gamma^{mn}+\frac{1}{2}\xi_{I}u^{I}\gamma^{ 23}\right)\epsilon^{\alpha}=0\.\] (C.64)
Using the 4D spin connections (C.17-C.20), we find that \({}^{4}\omega_{mn}\gamma^{mn}\epsilon^{\alpha}=-2p\gamma^{23}\epsilon^{\alpha}\) with \(p\) from (C.27). We relate \(fe^{2\sigma}\) to \(g_{m}\) based on (4.9), and find that
\[\boxed{p=\epsilon\left(\frac{1}{2}R^{2}(\partial_{R^{2}}g_{m})+g_{m}-1\right) =\xi_{I}u^{I}\.}\] (C.65)
We can now gather the four main supersymmetry relations that were derived:
\[0 =G_{IJ}\left(\partial_{R}(fY^{I})-\partial_{R}(fX^{I})\right) \partial_{i}X^{J}\,\] (C.66) \[0 =\left(\partial_{R^{2}}+\frac{1}{R^{2}}\right)u^{I}-\frac{1}{2} \epsilon f^{-1}c^{IJK}X_{J}\xi_{K}\,\] (C.67) \[0 =\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)w+\frac{1}{2}f^{-1} X_{I}\left(\partial_{R^{2}}-\frac{1}{R^{2}}\right)u^{I}\,\] (C.68) \[0 =-\epsilon R^{2}(\partial_{R^{2}}g_{m})+2\epsilon(1-g_{m})+2\xi_{ I}u^{I}\.\] (C.69)
|
2305.13134 | The Minimizer of the Sum of Two Strongly Convex Functions | The optimization problem concerning the determination of the minimizer for
the sum of convex functions holds significant importance in the realm of
distributed and decentralized optimization. In scenarios where full knowledge
of the functions is not available, limiting information to individual
minimizers and convexity parameters -- either due to privacy concerns or the
nature of solution analysis -- necessitates an exploration of the region
encompassing potential minimizers based solely on these known quantities. The
characterization of this region becomes notably intricate when dealing with
multivariate strongly convex functions compared to the univariate case. This
paper contributes outer and inner approximations for the region harboring the
minimizer of the sum of two strongly convex functions, given a constraint on
the norm of the gradient at the minimizer of the sum. Notably, we explicitly
delineate the boundaries and interiors of both the outer and inner
approximations. Intriguingly, the boundaries as well as the interiors turn out
to be identical. Furthermore, we establish that the boundary of the region
containing potential minimizers aligns with that of the outer and inner
approximations. | Kananart Kuwaranancharoen, Shreyas Sundaram | 2023-05-22T15:31:48Z | http://arxiv.org/abs/2305.13134v2 | # The Minimizer of the Sum of Two Strongly Convex Functions
###### Abstract
The problem of finding the minimizer of a sum of convex functions is central to the field of optimization. In cases where the functions themselves are not fully known (other than their individual minimizers and convexity parameters), it is of interest to understand the region containing the potential minimizers of the sum based only on those known quantities. Characterizing this region in the case of multivariate strongly convex functions is far more complicated than the univariate case. In this paper, we provide both outer and inner approximations for the region containing the minimizer of the sum of two strongly convex functions, subject to a constraint on the norm of the gradient at the minimizer of the sum. In particular, we explicitly characterize the boundary and interior of both outer and inner approximations. Interestingly, the boundaries as well as the interiors turn out to be identical and we show that the boundary of the region containing the potential minimizers is also identical to that of the outer and inner approximations.
Convex Optimization · Distributed Optimization · Minimizer Analysis · Quadratic Functions · Strongly Convex Functions
## 1 Introduction
The problem of optimizing a sum of functions arises in a variety of applications, including machine learning (Shalev-Shwartz, 2011; Boyd et al., 2011; Sayed, 2014), control of large-scale systems (Molzahn et al., 2017; Li et al., 2011), and cooperative robotic systems (Zavlanos et al., 2012; Tron et al., 2016). In these settings, each node in a network is assumed to possess a convex function. There are many proposed algorithms to find the minimizer of the sum of these functions (Nedic et al., 2010; Johansson et al., 2009; Zhu and Martinez, 2012; Gharesifard and Cortes, 2014; Nedic and Olshevsky, 2015), under some common assumptions such as the functions being strongly convex and the gradients being bounded.
In some cases, the exact functions themselves may not be fully known, and only certain characteristics (such as minimizer and convexity parameters) may be known. In this case, it is of interest to understand the potential set of minimizers of the sum of functions, despite this limited knowledge of the individual functions. For example, in resilient distributed optimization settings (Sundaram and Gharesifard, 2018; Kuwaranancharoen et al., 2020), the network contains malicious nodes that do not follow the distributed optimization algorithm and one cannot guarantee that all nodes calculate the true minimizer. Instead, one must settle for algorithms that allow the non-malicious nodes to converge to a certain region (Sundaram and Gharesifard, 2016; Su and Vaidya, 2015). In such situations, knowing the region where the minimizer can lie would allow us to evaluate the efficacy of such resilient distributed optimization algorithms. Another example involves the scenario where two similar machine learning models trained on similar data are combined to achieve a common goal. However, due to privacy concerns (Shokri and Shmatikov, 2015), the datasets used to train these models may not be available directly, which means we know the minimizer of each function for inference but not the function itself. In this case, it is of interest to obtain a potential new minimizer, i.e., a combined machine learning model, which could offer enhanced performance for the original inference task. More specifically, consider a federated learning setup (Konečný et al., 2016; Kairouz et al., 2021) with two clients, each with access to local data. Each client, denoted by \(i\), obtains a local optimal parameter \(\mathbf{x}_{i}^{\star}\) that minimizes its own loss function \(f_{i}\).
Now, if each client only reports \(\mathbf{x}_{i}^{*}\) to a server, can the server determine the set of potential optimal parameters for the combined loss function \(f_{1}+f_{2}\), based solely on the information it has (i.e., \(\mathbf{x}_{1}^{*}\), \(\mathbf{x}_{2}^{*}\), and knowledge of the convexity properties of \(f_{1}\) and \(f_{2}\))?
When the local functions \(f_{i}\) at each node \(v_{i}\) are univariate (i.e., \(f_{i}:\mathbb{R}\to\mathbb{R}\)), and strongly convex, it is easy to argue that the minimizer of the sum must lie in the interval bracketed by the smallest and largest minimizers of the functions (Sundaram and Gharesifard, 2018). This is because the gradients of all the functions will have the same sign outside that region, and thus cannot sum to zero. However, a similar characterization of the region containing the minimizer of multivariate functions is lacking in the literature, and is significantly more challenging to obtain. For example, the conjecture that the minimizer of a sum of convex functions is in the convex hull of their local minimizers can be easily disproved via simple examples; consider \(f_{1}(x,y)=x^{2}-xy+\frac{1}{2}y^{2}\) and \(f_{2}(x,y)=x^{2}+xy+\frac{1}{2}y^{2}-4x-2y\) with minimizers \((0,0)\) and \((2,0)\) respectively, whose sum has minimizer \((1,1)\). In our recent work (Kuwaranancharoen and Sundaram, 2018, 2020), we studied this problem and provided an _outer approximation_ on the region containing the minimizer of two strongly convex functions in a specific case; this region is determined by the minimizers of the individual functions, their strong convexity parameters, and the specified bound on the norms of the gradients of the functions at the location of the minimizer.
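The counterexample above is easy to verify; the following sympy sketch (ours, not from the paper) checks the stationary points of \(f_{1}\), \(f_{2}\), and \(f_{1}+f_{2}\):

```python
import sympy as sp

x, y = sp.symbols('x y')
f1 = x**2 - x*y + sp.Rational(1, 2)*y**2
f2 = x**2 + x*y + sp.Rational(1, 2)*y**2 - 4*x - 2*y

# each function is strongly convex, so the unique stationary point is the minimizer
for f, expected in [(f1, (0, 0)), (f2, (2, 0)), (f1 + f2, (1, 1))]:
    sol = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
    assert sol == {x: expected[0], y: expected[1]}
```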
In this paper, **our goal is to characterize an outer approximation (i.e., a region containing all valid minimizers) as well as an inner approximation (i.e., a region where every point is a valid minimizer) for the sum of two unknown strongly convex functions.** More specifically, we provide an outer approximation that is more general than the one given in (Kuwaranancharoen and Sundaram, 2018). As we will see, the inner approximation essentially coincides with the outer approximation. More precisely, the boundaries of the outer and inner approximations are identical under the assumption that the gradients of the two original functions are bounded by a finite number at the potential minimizer of the sum. Thus, our analysis in this paper complements and completes the analysis in (Kuwaranancharoen and Sundaram, 2018) by fully characterizing the region containing the minimizer of the sum of two strongly convex functions. While the analysis is complicated even for this scenario involving two functions, our analysis provides insights that could be leveraged in future work to tackle the sum of multiple functions.
The paper is organized as follows. The notations used throughout the paper and preliminaries are provided in Section 2. The problem formulation is in Section 3. Our main results regarding the outer approximation are in Section 4, our analysis of the inner approximation and potential solution region is in Section 5, and the conclusions follow in Section 7.
## 2 Preliminaries
### Sets
Let \(\mathbb{R}\) denote the set of real numbers. We denote by \(\mathbb{R}^{n}\) the \(n\)-dimensional Euclidean space. For a subset \(\mathcal{E}\) of a topological space, we denote the complement, closure and interior of \(\mathcal{E}\) by \(\mathcal{E}^{c}\), \(\overline{\mathcal{E}}\) and \(\mathcal{E}^{\circ}\), respectively. The boundary of \(\mathcal{E}\) is defined as \(\partial\mathcal{E}=\overline{\mathcal{E}}\setminus\mathcal{E}^{\circ}\). We also use \(\textbf{dom}(f)\) to denote the domain of function \(f\). In addition, we use \(\sqcup\) to denote the disjoint union operation. We will use this simple lemma later in the paper.
**Lemma 2.1**.: _Let \(\mathcal{G}\) and \(\mathcal{H}\) be subsets of a topological space \(X\) such that \(\mathcal{G}\subseteq\mathcal{H}\). Let \(\mathfrak{P}\) be a partition of \(\mathcal{H}\). Then,_
\[\mathcal{G}^{\circ}=\bigsqcup_{\mathcal{Z}\in\mathfrak{P}}\big{(}(\mathcal{G }\cap\mathcal{Z})\setminus(\partial\mathcal{G}\cap\mathcal{Z})\big{)}.\]
Proof.: For \(\mathcal{Z}\in\mathfrak{P}\), since \(\mathcal{G}\cap\mathcal{Z}\cap\mathcal{Z}^{c}=\emptyset\), we have
\[\mathcal{G}^{\circ}\cap\mathcal{Z}=(\mathcal{G}\cap(\partial\mathcal{G})^{c} \cap\mathcal{Z})\cup(\mathcal{G}\cap\mathcal{Z}\cap\mathcal{Z}^{c})=( \mathcal{G}\cap\mathcal{Z})\cap(\partial\mathcal{G}\cap\mathcal{Z})^{c}=( \mathcal{G}\cap\mathcal{Z})\setminus(\partial\mathcal{G}\cap\mathcal{Z}).\]
Using the above equation, we can write
\[\mathcal{G}^{\circ}=\bigsqcup_{\mathcal{Z}\in\mathfrak{P}}(\mathcal{G}^{\circ }\cap\mathcal{Z})=\bigsqcup_{\mathcal{Z}\in\mathfrak{P}}\big{(}(\mathcal{G} \cap\mathcal{Z})\setminus(\partial\mathcal{G}\cap\mathcal{Z})\big{)}.\]
### Linear Algebra
For simplicity, we often use \((x_{1},\ldots,x_{n})\) to represent the column vector \(\mathbf{x}=[x_{1}\quad x_{2}\quad\cdots\quad x_{n}]^{\intercal}\). We use \(\mathbf{0}\) to denote the all-zero vector with appropriate dimension and \(\mathbf{e}_{i}\) to denote the \(i\)-th basis vector (the vector of all zeros
except for a one in the \(i\)-th position). We denote by \(\langle\mathbf{u},\mathbf{v}\rangle\) the Euclidean inner product of vectors \(\mathbf{u}\) and \(\mathbf{v}\), i.e., \(\langle\mathbf{u},\mathbf{v}\rangle\triangleq\mathbf{u}^{\intercal}\mathbf{v}\), by \(\|\mathbf{u}\|\) the Euclidean norm of \(\mathbf{u}\), i.e., \(\|\mathbf{u}\|\triangleq\sqrt{\langle\mathbf{u},\mathbf{u}\rangle}=(\sum_{i}u_{i}^{2})^{1/2}\). We define the functions \(\angle:(\mathbb{R}^{n}\setminus\{\mathbf{0}\})\times(\mathbb{R}^{n}\setminus\{ \mathbf{0}\})\rightarrow[0,\pi]\) and \(\measuredangle:(\mathbb{R}^{2}\setminus\{\mathbf{0}\})\times(\mathbb{R}^{2}\setminus\{ \mathbf{0}\})\rightarrow\big{[}-\frac{\pi}{2},\frac{\pi}{2}\big{]}\) as
\[\angle(\mathbf{u},\mathbf{v})\triangleq\arccos\bigg{(}\frac{\langle\mathbf{u},\mathbf{v} \rangle}{\|\mathbf{u}\|\|\mathbf{v}\|}\bigg{)}\quad\text{and}\quad\measuredangle(\mathbf{u},\mathbf{v })\triangleq\arcsin\bigg{(}\frac{u_{2}v_{1}-u_{1}v_{2}}{\|\mathbf{u}\|\|\mathbf{v}\|} \bigg{)}, \tag{1}\]
respectively. Note that \(\angle(\mathbf{u},\mathbf{v})=\angle(\mathbf{v},\mathbf{u})\) but \(\measuredangle(\mathbf{u},\mathbf{v})=-\measuredangle(\mathbf{v},\mathbf{u})\). We use
\[\mathcal{B}(\mathbf{x}_{0},r_{0})\triangleq\{\mathbf{x}\in\mathbb{R}^{n}:\|\mathbf{x}- \mathbf{x}_{0}\|<r_{0}\} \tag{2}\]
and \(\overline{\mathcal{B}}(\mathbf{x}_{0},r_{0})\) to denote the open and closed balls, respectively, in \(\mathbb{R}^{n}\) centered at \(\mathbf{x}_{0}\in\mathbb{R}^{n}\) and with radius \(r_{0}\in\mathbb{R}_{>0}\). We use \(\mathbf{I}\) to denote the identity matrix of appropriate dimension. For square matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\), we use \(\lambda(\mathbf{A})\), \(\lambda_{\text{min}}(\mathbf{A})\) and \(\operatorname{Tr}(\mathbf{A})\) to denote an eigenvalue, the minimum eigenvalue and the trace of matrix \(\mathbf{A}\), respectively. For \(\mathbf{A}\in\mathbb{R}^{m\times n}\), we use \(\mathcal{R}(\mathbf{A})\) and \(\mathcal{N}(\mathbf{A})\) to denote the column space and null space of matrix \(\mathbf{A}\), respectively.
### Convex Sets and Functions
A set \(\mathcal{C}\subseteq\mathbb{R}^{n}\) is said to be convex if, for all \(\mathbf{x}\) and \(\mathbf{y}\) in \(\mathcal{C}\) and all \(t\) in the interval \((0,1)\), the point \((1-t)\mathbf{x}+t\mathbf{y}\) also belongs to \(\mathcal{C}\). A differentiable function \(f\) is called strongly convex with parameter \(\sigma\in\mathbb{R}_{>0}\) (or \(\sigma\)-strongly convex) if
\[\langle\nabla f(\mathbf{x})-\nabla f(\mathbf{y}),\ \mathbf{x}-\mathbf{y}\rangle\geq\sigma\|\mathbf{x}- \mathbf{y}\|^{2} \tag{3}\]
holds for all points \(\mathbf{x},\mathbf{y}\) in its domain. We use \(\mathcal{S}(\mathbf{x}^{*},\sigma)\) to denote the set of all differentiable and \(\sigma\)-strongly convex functions that have their minimizer at \(\mathbf{x}^{*}\in\mathbb{R}^{n}\). Define \(\mathbb{S}^{n}\) to be the set of symmetric matrices in \(\mathbb{R}^{n\times n}\), and \(\mathcal{Q}^{n}\) to be the set of all quadratic functions that map \(\mathbb{R}^{n}\) to \(\mathbb{R}\). A quadratic function \(f\in\mathcal{Q}^{n}\) parameterized by \(\mathbf{Q}\in\mathbb{S}^{n}\), \(\mathbf{b}\in\mathbb{R}^{n}\), and \(c\in\mathbb{R}\) is given by
\[f(x;\mathbf{Q},\mathbf{b},c)=\frac{1}{2}\mathbf{x}^{\intercal}\mathbf{Q}\mathbf{x}+\mathbf{b}^{ \intercal}\mathbf{x}+c.\]
For \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and \(\sigma\in\mathbb{R}_{>0}\), define
\[\mathcal{Q}^{(n)}(\mathbf{x}^{*},\sigma)\triangleq\big{\{}f(x;\mathbf{Q},\mathbf{b},c)\in \mathcal{Q}^{n}:\lambda_{\text{min}}(\mathbf{Q})=\sigma,\ \ \mathbf{Q}\mathbf{x}^{*}=-\mathbf{b}\big{\}}. \tag{4}\]
We will omit the superscript \((n)\) of \(\mathcal{Q}^{(n)}\) when it is clear from contexts. Note that every function in \(\mathcal{Q}(\mathbf{x}^{*},\sigma)\) is \(\sigma\)-strongly convex quadratic and has the minimizer at \(\mathbf{x}^{*}\), and
\[\mathcal{Q}(\mathbf{x}^{*},\sigma)\subset\bigcup_{\tilde{\sigma}\geq\sigma} \mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\subset\mathcal{S}(\mathbf{x}^{*},\sigma). \tag{5}\]
The following lemma shows that the strong convexity of functions is invariant under some particular affine transformations. This property will help us to simplify the analysis throughout the paper.
**Lemma 2.2**.: _Let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) be an orthogonal matrix and \(\mathbf{b}\in\mathbb{R}^{n}\). Suppose \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a differentiable function and define \(h(\mathbf{x})=f(\mathbf{A}\mathbf{x}+\mathbf{b})\). Then, \(f\) is \(\sigma\)-strongly convex if and only if \(h\) is \(\sigma\)-strongly convex._
Proof.: By the definition of strongly convex functions in (3), we have that
\[\langle\nabla f(\mathbf{x})-\nabla f(\mathbf{y}),\ \mathbf{x}-\mathbf{y}\rangle\geq\sigma\|\mathbf{x}- \mathbf{y}\|^{2}\quad\text{for all}\quad\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}.\]
Since \(\mathbf{A}\) is invertible, we can replace \(\mathbf{x}\) and \(\mathbf{y}\) by \(\mathbf{A}\mathbf{x}+\mathbf{b}\) and \(\mathbf{A}\mathbf{y}+\mathbf{b}\), respectively, and the above inequality is equivalent to
\[\big{\langle}\nabla f(\mathbf{A}\mathbf{x}+\mathbf{b})-\nabla f(\mathbf{A}\mathbf{y}+\mathbf{b}),\ \mathbf{A}(\mathbf{x}-\mathbf{y})\big{\rangle}\geq\sigma\|\mathbf{A}(\mathbf{x}-\mathbf{y})\|^{2}\ \text{ for all}\ \ \mathbf{x},\mathbf{y}\in\mathbb{R}^{n}. \tag{6}\]
Since \(\nabla h(\mathbf{x})=\mathbf{A}^{\intercal}\nabla f(\mathbf{A}\mathbf{x}+\mathbf{b})\), we can rewrite the LHS of (6) as \(\big{\langle}\mathbf{A}^{\intercal}\big{(}\nabla f(\mathbf{A}\mathbf{x}+\mathbf{b})-\nabla f( \mathbf{A}\mathbf{y}+\mathbf{b})\big{)},\ \mathbf{x}-\mathbf{y}\big{\rangle}=\langle\nabla h(\mathbf{x})-\nabla h(\mathbf{y}),\ \mathbf{x}-\mathbf{y}\rangle\). On the other hand, since \(\mathbf{A}\) is an orthogonal matrix, the RHS of (6) becomes \(\sigma\|\mathbf{x}-\mathbf{y}\|^{2}\).
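Lemma 2.2 can be illustrated numerically for quadratic functions, whose strong convexity parameter is the smallest Hessian eigenvalue; the sketch below (ours, not from the paper) checks that an orthogonal change of variables preserves the spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.normal(size=(n, n))
Q = M @ M.T + np.eye(n)                          # Hessian of a strongly convex quadratic f
A, _ = np.linalg.qr(rng.normal(size=(n, n)))     # random orthogonal matrix

# h(x) = f(Ax + b) is again quadratic, with Hessian A^T Q A
H = A.T @ Q @ A
assert np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(Q))
# in particular sigma = lambda_min is preserved, as Lemma 2.2 asserts
```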
## 3 Problem Formulation
Consider two (unknown) functions \(f_{1}\) and \(f_{2}\). In order to investigate the minimizer of the sum of two unknown functions \(f_{1}+f_{2}\), we will impose the following assumptions on the structure of both functions.
1. Given \(\sigma_{1},\sigma_{2}\in\mathbb{R}_{>0}\), the functions \(f_{1}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(f_{2}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are differentiable and strongly convex with parameters \(\sigma_{1}\) and \(\sigma_{2}\), respectively.
2. Given \(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\in\mathbb{R}^{n}\), the minimizers of \(f_{1}\) and \(f_{2}\) are at \(\mathbf{x}_{1}^{*}\) and \(\mathbf{x}_{2}^{*}\), respectively.
3. Suppose \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) is the minimizer of \(f_{1}+f_{2}\). There is a finite (given) number \(L\in\mathbb{R}_{>0}\) such that the norm of gradient of \(f_{1}\) and \(f_{2}\) evaluated at \(\mathbf{x}^{*}\) is less than \(L\).
Assumptions 1 and 2 will be captured using the notations introduced earlier: \(f_{1}\in\mathcal{S}(\mathbf{x}_{1}^{*},\sigma_{1})\) and \(f_{2}\in\mathcal{S}(\mathbf{x}_{2}^{*},\sigma_{2})\). For Assumption 3, since \(\mathbf{x}^{*}\) is the minimizer of \(f_{1}+f_{2}\), we have that \(\nabla f_{1}(\mathbf{x}^{*})=-\nabla f_{2}(\mathbf{x}^{*})\). In addition, we can rewrite the bounded-gradient condition at \(\mathbf{x}^{*}\) as \(\|\nabla f_{1}(\mathbf{x}^{*})\|=\|\nabla f_{2}(\mathbf{x}^{*})\|\leq L\). Essentially, our goal is to estimate the region \(\mathcal{M}\) containing all possible values \(\mathbf{x}^{*}\) satisfying the above conditions. More specifically, given \(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\in\mathbb{R}^{n}\), \(\sigma_{1},\sigma_{2}\in\mathbb{R}_{>0}\), and \(L\in\mathbb{R}_{>0}\), we wish to estimate the potential solution region
\[\mathcal{M}(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*},\sigma_{1},\sigma_{2}, L)\triangleq\big{\{}\mathbf{x}\in\mathbb{R}^{n}:\exists f_{1}\in\mathcal{S}(\mathbf{x}_{1}^ {*},\sigma_{1}),\quad\exists f_{2}\in\mathcal{S}(\mathbf{x}_{2}^{*},\sigma_{2}),\\ \nabla f_{1}(\mathbf{x})=-\nabla f_{2}(\mathbf{x}),\quad\|\nabla f_{1}( \mathbf{x})\|=\|\nabla f_{2}(\mathbf{x})\|\leq L\big{\}}. \tag{7}\]
For simplicity of notation, we will omit the argument of the set \(\mathcal{M}(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*},\sigma_{1},\sigma_{2},L)\) and write it as \(\mathcal{M}\).
### Discussion of Assumptions
Functions that satisfy both the differentiability and strong convexity conditions (Assumption 1) are common in many applications. In machine learning applications, for example, linear regression and logistic regression models with \(L_{2}\)-regularization are commonly used when only a small amount of training data is available (Hastie et al., 2009).
Assumption 2 can be generalized by assuming that for \(i\in\{1,2\}\), the minimizer \(\mathbf{x}_{i}^{*}\) of the function \(f_{i}\) is not available but instead \(\mathbf{x}_{i}^{*}\) is located in a known compact set \(\mathcal{A}_{i}\subset\mathbb{R}^{n}\) as in (Kuwaranancharoen and Sundaram, 2020). However, the analysis will be more involved, so we defer this generalization to future work.
Assumption 3 is a technical assumption. Given \(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\in\mathbb{R}^{n}\) such that \(\mathbf{x}_{1}^{*}\neq\mathbf{x}_{2}^{*}\), let
\[\mathcal{L}=\big{\{}\mathbf{x}\in\mathbb{R}^{n}:\text{there exists }k\in\mathbb{R} \setminus(-1,1)\ \text{ such that }\ \mathbf{x}-\mathbf{x}_{1}^{*}=k(\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*})\big{\}}.\]
Without Assumption 3, i.e., allowing the norm of the gradient of each function at the minimizer of the sum to be arbitrarily large, one can use the result from Proposition 5.1 to show that \(\mathcal{M}=\mathbb{R}^{n}\setminus\mathcal{L}\). We can see that for \(n\in\mathbb{N}\setminus\{1\}\), the set \(\mathcal{L}\) has measure zero and hence, \(\mathcal{M}\) covers almost the entire space. In other words, almost all points can be minimizers. One can think of imposing the bound on the gradients as one of the ways to implicitly limit the functions that we can choose from \(\mathcal{S}(\mathbf{x}_{1}^{*},\sigma_{1})\) and \(\mathcal{S}(\mathbf{x}_{2}^{*},\sigma_{2})\). However, there might be other ways to restrict the class of functions that we can select, for example, considering functions that have Lipschitz continuous gradients in addition to Assumptions 1 and 2. For now, we restrict ourselves to the simpler Assumption 3 and leave such alternatives for future work.
### A Preview of the Solution
Recall the definition of the potential solution region \(\mathcal{M}\) from (7). One way to characterize the set \(\mathcal{M}\) is to provide an explicit formula for the boundary \(\partial\mathcal{M}\) in terms of \(\mathbf{x}_{1}^{*}\), \(\mathbf{x}_{2}^{*}\), \(\sigma_{1}\), \(\sigma_{2}\) and \(L\). In Fig. 1, we provide a preview of the boundary \(\partial\mathcal{M}\) in \(\mathbb{R}^{2}\) given fixed parameters \(\sigma_{1}=1.5\), \(\sigma_{2}=1\), and \(L=10\), and a variable parameter \(r\in\mathbb{R}_{>0}\). Suppose \(\mathbf{x}_{1}^{*}=(-r,0)\) and \(\mathbf{x}_{2}^{*}=(r,0)\). We illustrate \(\partial\mathcal{M}\) for the case where \(r=2,4\), and \(6\) in Fig. 1(a), Fig. 1(b) and Fig. 1(c), respectively. The different colors in the figures indicate different equations that combine together to yield the boundary (as we will explicitly characterize in the rest of the paper).
### Solution Approach
Since the analysis of the case \(\mathbf{x}_{1}^{*}=\mathbf{x}_{2}^{*}\) is trivial (i.e., the potential solution region is \(\mathcal{M}=\{\mathbf{x}_{1}^{*}\}\)), without loss of generality, we assume that \(\mathbf{x}_{1}^{*}=(-r,0,\ldots,0)\in\mathbb{R}^{n}\) and \(\mathbf{x}_{2}^{*}=(r,0,\ldots,0)\in\mathbb{R}^{n}\) with \(r=\frac{1}{2}\|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|>0\).
To show this, given general \(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\in\mathbb{R}^{n}\) with \(\mathbf{x}_{1}^{*}\neq\mathbf{x}_{2}^{*}\), let the set of new bases \(\mathcal{J}=\{\mathbf{e}_{1}^{\prime},\mathbf{e}_{2}^{\prime},\ldots,\mathbf{e}_{n}^{\prime}\}\) be such that \(\mathbf{e}_{1}^{\prime}=\frac{\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}}{\|\mathbf{x}_{2}^{*}-\mathbf{x}_ {1}^{*}\|}\) and \(\{\mathbf{e}_{2}^{\prime},\mathbf{e}_{3}^{\prime},\ldots,\mathbf{e}_{n}^{\prime}\}\) is obtained by Gram-Schmidt orthogonalization. Let
\[\mathbf{E}=[\mathbf{e}_{1}^{\prime}\quad\mathbf{e}_{2}^{\prime}\quad\cdots\quad\mathbf{e}_{n}^{ \prime}]\quad\text{and}\quad\mathbf{b}=\frac{1}{2}(\mathbf{x}_{1}^{*}+\mathbf{x}_{2}^{*}).\]
We let \(\mathbf{x}_{\mathcal{J}}=\mathbf{E}^{\intercal}(\mathbf{x}-\mathbf{b})\) be the coordinate transformation. One can verify that if \(\mathbf{x}=\mathbf{x}_{1}^{*}\) then \(\mathbf{x}_{\mathcal{J}}=\big{(}-\frac{1}{2}\|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|,\ \mathbf{0}\big{)}=(-r,\mathbf{0})\) and if \(\mathbf{x}=\mathbf{x}_{2}^{*}\) then \(\mathbf{x}_{\mathcal{J}}=\big{(}\frac{1}{2}\|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|,\ \mathbf{0}\big{)}=(r,\mathbf{0})\). For \(i\in\{1,2\}\), let \(\tilde{f}_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be the
function such that \(\tilde{f}_{i}(\mathbf{x}_{\mathcal{J}})=f_{i}(\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{R}^{n}\), i.e., \(\tilde{f}_{i}\)'s value at the coordinate of point \(\mathbf{x}\) on the new bases \(\mathcal{J}\) is the same as \(f_{i}\) at point \(\mathbf{x}\). We can write \(\tilde{f}_{i}(\mathbf{x})=f_{i}(\mathbf{E}\mathbf{x}+\mathbf{b})\) for \(i\in\{1,2\}\). Applying Lemma 2.2, we have that \(\tilde{f}_{i}\) is \(\sigma_{i}\)-strongly convex for \(i\in\{1,2\}\). Once we attain the potential solution region \(\mathcal{M}\) in terms of \(\mathbf{x}_{\mathcal{J}}\), we can always use the transformation to obtain the region in terms of \(\mathbf{x}\), i.e., the original coordinate system.
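To make the change of coordinates concrete, the following minimal Python sketch (illustrative only; the function and variable names are ours, not from the analysis above) constructs \(\mathbf{E}\) and \(\mathbf{b}\) numerically and verifies that the two minimizers map to \((-r,\mathbf{0})\) and \((r,\mathbf{0})\); a QR factorization plays the role of the Gram-Schmidt step.

```python
# A minimal numerical sketch of the coordinate change x_J = E^T (x - b):
# the first column of E is the unit vector from x1* to x2*, the remaining
# columns complete an orthonormal basis, and b is the midpoint.
import numpy as np

def change_of_coordinates(x1_star, x2_star):
    n = x1_star.size
    e1 = (x2_star - x1_star) / np.linalg.norm(x2_star - x1_star)
    # QR on [e1, I] performs the Gram-Schmidt step on these columns.
    Q, _ = np.linalg.qr(np.column_stack([e1, np.eye(n)]))
    if Q[:, 0] @ e1 < 0:      # Householder QR may flip the sign of a column
        Q[:, 0] = -Q[:, 0]
    b = 0.5 * (x1_star + x2_star)
    return Q, b               # Q plays the role of E

x1_star = np.array([1.0, 2.0, -1.0])
x2_star = np.array([4.0, -2.0, 3.0])
E, b = change_of_coordinates(x1_star, x2_star)
r = 0.5 * np.linalg.norm(x2_star - x1_star)
print(E.T @ (x1_star - b))    # approximately (-r, 0, 0)
print(E.T @ (x2_star - b))    # approximately ( r, 0, 0)
```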
For convenience, we introduce the shorthand notation of sets that will be encountered throughout the paper. Recall the definition of \(\mathcal{B}\) from (2). For \(i\in\{1,2\}\), define
\[\mathcal{B}_{i}\triangleq\mathcal{B}\Big{(}\mathbf{x}_{i}^{*},\frac{L}{\sigma_{i }}\Big{)}. \tag{8}\]
Now, we introduce the functions that will be used to define the outer and inner approximations of \(\mathcal{M}\). For \(i\in\{1,2\}\), define the functions \(\tilde{\phi}_{i}:\ \overline{\mathcal{B}}_{i}\rightarrow\big{[}0,\frac{\pi}{2}\big{]}\) to be such that
\[\tilde{\phi}_{i}(\mathbf{x})\triangleq\arccos\Big{(}\frac{\sigma_{i}}{L}\|\mathbf{x}- \mathbf{x}_{i}^{*}\|\Big{)}, \tag{9}\]
and the functions \(\alpha_{i}:\ \mathbb{R}^{n}\setminus\{\mathbf{x}_{i}^{*}\}\rightarrow[0,\pi]\) to be such that
\[\alpha_{i}(\mathbf{x})\triangleq\angle(\mathbf{x}-\mathbf{x}_{i}^{*},\ \mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}), \tag{10}\]
i.e., the angle between vectors \(\mathbf{x}-\mathbf{x}_{i}^{*}\) and \(\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\). Note that \(\alpha_{2}(\mathbf{x})\geq\alpha_{1}(\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) due to the assumption that \(\mathbf{x}_{1}^{*}=(-r,\mathbf{0})\) and \(\mathbf{x}_{2}^{*}=(r,\mathbf{0})\). We define \(\psi:\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\rightarrow[0,\pi]\) to be the function such that
\[\psi(\mathbf{x})\triangleq\pi-\big{(}\alpha_{2}(\mathbf{x})-\alpha_{1}(\mathbf{x})\big{)}. \tag{11}\]
The interpretation of the angles \(\tilde{\phi}_{i}(\mathbf{x})\) and \(\psi(\mathbf{x})\) will be clarified later (in Fig. 2). In addition, given \(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\in\mathbb{R}^{n}\), \(\sigma_{1},\sigma_{2}\in\mathbb{R}_{>0}\), and \(L\in\mathbb{R}_{>0}\), we define
\[\mathcal{X}\triangleq\begin{cases}\Big{\{}\mathbf{x}\in\mathbb{R}^{n}:\|\mathbf{x}- \mathbf{x}_{i}^{*}\|=\frac{L}{\sigma_{i}}\ \ \text{for all $i\in\{1,2\}$}\Big{\}}&\text{if}\ \|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|=L\big{(}\frac{1}{\sigma_{1}}+\frac{1}{ \sigma_{2}}\big{)},\\ \emptyset&\text{otherwise}.\end{cases} \tag{12}\]
Due to the assumption that \(\mathbf{x}_{1}^{*}=(-r,\mathbf{0})\) and \(\mathbf{x}_{2}^{*}=(r,\mathbf{0})\), for \(\|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|=L\big{(}\frac{1}{\sigma_{1}}+\frac{1}{ \sigma_{2}}\big{)}\), we have \(\mathcal{X}=\big{\{}\big{(}-r+\frac{L}{\sigma_{1}},\ \mathbf{0}\big{)}\big{\}}\). With these definitions in place, given \(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\in\mathbb{R}^{n}\), \(\sigma_{1},\sigma_{2}\in\mathbb{R}_{>0}\), and \(L\in\mathbb{R}_{>0}\), we define the outer and inner approximations of \(\mathcal{M}\) as
\[\mathcal{M}^{\uparrow}(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*},\sigma_{1},\sigma_{2},L)\triangleq\big{\{}\mathbf{x}\in\mathbb{R}^{n}:\ \tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\geq\psi(\mathbf{x})\big{\}} \tag{13}\]
and
\[\mathcal{M}_{\downarrow}(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*},\sigma_{1},\sigma_{2},L) \triangleq\big{\{}\mathbf{x}\in\mathbb{R}^{n}:\ \tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})>\psi(\mathbf{x})\big{\}}\cup \mathcal{X}, \tag{14}\]
respectively. As before, we will omit the argument of the sets \(\mathcal{M}^{\uparrow}(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*},\sigma_{1},\sigma_{2},L)\) and \(\mathcal{M}_{\downarrow}(\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*},\sigma_{1},\sigma_{2},L)\), and write them as \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\), respectively.
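As a quick numerical companion to these definitions, the sketch below (with hypothetical helper names) evaluates \(\tilde{\phi}_{1}\), \(\tilde{\phi}_{2}\) from (9) and \(\psi\) from (11) and tests membership in \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\); the knife-edge set \(\mathcal{X}\) from (12) is omitted since its defining equality holds only for one exact value of \(\|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|\).

```python
# Membership tests for the outer approximation (13) and inner approximation
# (14), assuming x1* = (-r, 0) and x2* = (r, 0). The set X from (12) is
# omitted: it is nonempty only when ||x2* - x1*|| = L(1/s1 + 1/s2) exactly.
import numpy as np

def tilde_phi_psi(x, x1s, x2s, s1, s2, L):
    d1, d2 = np.linalg.norm(x - x1s), np.linalg.norm(x - x2s)
    if d1 == 0 or d2 == 0 or s1 * d1 > L or s2 * d2 > L:
        return None                                     # outside the common domain
    v = (x2s - x1s) / np.linalg.norm(x2s - x1s)
    phi1 = np.arccos(np.clip(s1 * d1 / L, -1.0, 1.0))   # (9)
    phi2 = np.arccos(np.clip(s2 * d2 / L, -1.0, 1.0))
    a1 = np.arccos(np.clip((x - x1s) @ v / d1, -1.0, 1.0))  # (10)
    a2 = np.arccos(np.clip((x - x2s) @ v / d2, -1.0, 1.0))
    return phi1, phi2, np.pi - (a2 - a1)                # (11)

def in_outer(x, *args):                  # is x in the outer approximation?
    t = tilde_phi_psi(x, *args)
    return t is not None and t[0] + t[1] >= t[2]

def in_inner(x, *args):                  # is x in the inner approximation (up to X)?
    t = tilde_phi_psi(x, *args)
    return t is not None and t[0] + t[1] > t[2]

s1, s2, L, r = 1.5, 1.0, 10.0, 2.0       # parameters of Fig. 1a
x1s, x2s = np.array([-r, 0.0]), np.array([r, 0.0])
print(in_outer(np.zeros(2), x1s, x2s, s1, s2, L))           # True: midpoint
print(in_inner(np.zeros(2), x1s, x2s, s1, s2, L))           # True
print(in_outer(np.array([0.0, 9.0]), x1s, x2s, s1, s2, L))  # False: outside domain
```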
_Remark 1_.: Recall the definition of \(\tilde{\phi}_{i}\) for \(i\in\{1,2\}\) and \(\psi\) from (9) and (11), respectively. Since \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\) are defined using \(\tilde{\phi}_{1}\), \(\tilde{\phi}_{2}\) and \(\psi\), implicitly, they must be subsets of \(\textbf{dom}(\tilde{\phi}_{1})\cap\textbf{dom}(\tilde{\phi}_{2})\cap\textbf{dom}(\psi)\). In other words, \(\mathcal{M}^{\uparrow}\subseteq(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) and \(\mathcal{M}_{\downarrow}\subseteq(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), where \(\mathcal{B}_{i}\) for \(i\in\{1,2\}\) are defined in (8).
In order to characterize the potential solution region \(\mathcal{M}\), we proceed as follows. First, in Proposition 4.1, we show that \(\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\) by considering a property of strongly convex functions. Then, we characterize the boundary and interior of the outer approximation (\(\partial\mathcal{M}^{\uparrow}\) and \((\mathcal{M}^{\uparrow})^{\circ}\)) for each value of \(r\) in Theorem 4.10. Next, we consider quadratic functions in Proposition 5.1 and use them to show that \(\mathcal{M}_{\downarrow}\subseteq\mathcal{M}\) in Proposition 5.3. We use a similar approach as in Theorem 4.10 to characterize the boundary and interior of the inner approximation (\(\partial\mathcal{M}_{\downarrow}\) and \((\mathcal{M}_{\downarrow})^{\circ}\)) for each value of \(r\), which is presented in Theorem 5.4. Finally, by observing that \(\partial\mathcal{M}^{\uparrow}=\partial\mathcal{M}_{\downarrow}\) and \((\mathcal{M}^{\uparrow})^{\circ}=(\mathcal{M}_{\downarrow})^{\circ}\) from Theorem 4.10 and Theorem 5.4, we conclude the paper by showing that, in fact, the boundaries of the potential solution region, outer approximation, and inner approximation are identical, i.e., \(\partial\mathcal{M}=\partial\mathcal{M}^{\uparrow}=\partial\mathcal{M}_{\downarrow}\), in Theorem 6.2.
## 4 Outer Approximation
In this section, we derive necessary conditions for a point to be in the potential solution region \(\mathcal{M}\) and show that \(\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\) in Proposition 4.1. Then, we explicitly characterize an important part of \(\partial\mathcal{M}^{\uparrow}\) (and also \(\partial\mathcal{M}_{\downarrow}\)) in Proposition 4.2. In Theorem 4.10, which is the main result of this section, we identify \(\partial\mathcal{M}^{\uparrow}\) and \((\mathcal{M}^{\uparrow})^{\circ}\), and also provide a property of \(\mathcal{M}^{\uparrow}\). Other lemmas in this section are presented as tools that will be utilized in the proof of Theorem 4.10 (and also Theorem 5.4).
We will be using the following functions throughout our analysis. For \(i\in\{1,2\}\), define \(\mathbf{u}_{i}:\ \mathbb{R}^{n}\setminus\{\mathbf{x}_{i}^{*}\}\to\mathbb{R}^{n}\) to be the function such that
\[\mathbf{u}_{i}(\mathbf{x})\triangleq\frac{\mathbf{x}-\mathbf{x}_{i}^{*}}{\|\mathbf{x}-\mathbf{x}_{i}^ {*}\|}, \tag{15}\]
i.e., the unit vector in the direction of \(\mathbf{x}-\mathbf{x}_{i}^{*}\). Recall the definition of \(\angle(\cdot,\cdot)\) from (1). For \(i\in\{1,2\}\), we define \(\phi_{i}:\mathbb{R}^{n}\setminus\{\mathbf{x}_{i}^{*}\}\to\big{[}0,\frac{\pi}{2} \big{]}\) to be the function such that
\[\phi_{i}(\mathbf{x})\triangleq\angle\big{(}\nabla f_{i}(\mathbf{x}),\ \mathbf{u}_{i}(\mathbf{x}) \big{)}, \tag{16}\]
and \(\underline{L}_{i}:\mathbb{R}^{n}\to\mathbb{R}\) to be the function such that
\[\underline{L}_{i}(\mathbf{x})\triangleq\sigma_{i}\|\mathbf{x}-\mathbf{x}_{i}^{*}\|. \tag{17}\]
Note that for \(i\in\{1,2\}\), the quantity \(\underline{L}_{i}(\mathbf{x})\) is a lower bound on the norm of the gradient of \(f_{i}\) at \(\mathbf{x}\in\mathbb{R}^{n}\) if \(f_{i}\in\mathcal{S}(\mathbf{x}_{i}^{*},\sigma_{i})\).
In Fig. 2, we illustrate the definition of \(\mathbf{u}_{i}\), \(\phi_{i}\), \(\tilde{\phi}_{i}\), \(\alpha_{i}\) for \(i\in\{1,2\}\), and \(\psi\). Moreover, we illustrate the inequality \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\geq\psi(\mathbf{x})\) which is used to describe the outer approximation \(\mathcal{M}^{\uparrow}\) in (13).
In the following proposition, we show the crucial result that the set \(\mathcal{M}^{\uparrow}\) covers the set that we want to characterize, \(\mathcal{M}\). In other words, \(\mathcal{M}^{\uparrow}\) consists exactly of the points satisfying certain necessary conditions for being a minimizer of the sum \(f_{1}+f_{2}\).
**Proposition 4.1**.: _Suppose the sets \(\mathcal{M}\) and \(\mathcal{M}^{\uparrow}\) are defined as in (7) and (13), respectively. Then, \(\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\)._
Proof.: Recall the definition of the sets \(\mathcal{B}_{i}\) for \(i\in\{1,2\}\), the angles \(\tilde{\phi}_{i}\) for \(i\in\{1,2\}\), and the angle \(\psi\) from (8), (9), and (11), respectively. First, we want to show that the necessary conditions for a point \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) to be in \(\mathcal{M}\) are
* \(\mathbf{x}\in\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}\), and
* \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\geq\psi(\mathbf{x})\).
From the definition of strongly convex functions in (3), we have
\[\big{\langle}\nabla f_{i}(\mathbf{x})-\nabla f_{i}(\mathbf{y}),\ \mathbf{x}-\mathbf{y}\big{\rangle}\geq\sigma_{i}\|\mathbf{x}-\mathbf{y}\|^{2}\]
for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\) and for \(i\in\{1,2\}\). For \(i\in\{1,2\}\), recall the definition of \(\mathbf{u}_{i}(\mathbf{x})\) and \(\phi_{i}(\mathbf{x})\) from (15) and (16), respectively. Since \(\mathbf{x}_{1}^{*}\) and \(\mathbf{x}_{2}^{*}\) are the minimizers of \(f_{1}\) and \(f_{2}\), respectively, for \(\mathbf{x}\notin\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), we get
\[\big{\langle}\nabla f_{i}(\mathbf{x})-\nabla f_{i}(\mathbf{x}_{i}^{*}), \ \mathbf{x}-\mathbf{x}_{i}^{*}\big{\rangle}\geq\sigma_{i}\|\mathbf{x}-\mathbf{x}_{i}^{*}\|^{2},\] \[\Leftrightarrow\quad\|\nabla f_{i}(\mathbf{x})\|\cos(\phi_{i}(\mathbf{x}) )=\langle\nabla f_{i}(\mathbf{x}),\ \mathbf{u}_{i}(\mathbf{x})\rangle\geq\sigma_{i}\|\mathbf{x}-\mathbf{x}_{i}^{*}\|>0. \tag{18}\]
Suppose \(\mathbf{x}\) is a candidate minimizer. Then, we have that \(\|\nabla f_{i}(\mathbf{x})\|\leq L\) for \(i\in\{1,2\}\) by our assumption. Recall the definition of \(\underline{L}_{i}\) for \(i\in\{1,2\}\) from (17). Inequality (18) becomes
\[\cos(\phi_{i}(\mathbf{x}))\geq\frac{\sigma_{i}}{L}\|\mathbf{x}-\mathbf{x}_{i}^{*}\|=\frac{ \underline{L}_{i}(\mathbf{x})}{L}. \tag{19}\]
If \(\underline{L}_{1}(\mathbf{x})>L\) or \(\underline{L}_{2}(\mathbf{x})>L\), we have that \(\mathbf{x}\) cannot be the minimizer of the function \(f_{1}+f_{2}\) since there is no \(\phi_{i}(\mathbf{x})\) that can satisfy inequality (19). Thus, a necessary condition for \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) to be a minimizer of \(f_{1}+f_{2}\) is that \(\underline{L}_{i}(\mathbf{x})\leq L\) for \(i\in\{1,2\}\) or equivalently, \(\mathbf{x}\in\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}\), yielding part (i) of the claim. We now prove part (ii).
From the definition of \(\psi(\mathbf{x})\) in (11) and that \(\angle(-\mathbf{u}_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))\), \(\alpha_{1}(\mathbf{x})\) and \(\pi-\alpha_{2}(\mathbf{x})\) are the angles of the triangle formed by the points \(\mathbf{x}\), \(\mathbf{x}_{1}^{*}\) and \(\mathbf{x}_{2}^{*}\), we can write that for all \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\),
\[\psi(\mathbf{x})=(\pi-\alpha_{2}(\mathbf{x}))+\alpha_{1}(\mathbf{x})=\pi-\angle(-\mathbf{u}_{1 }(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))=\angle(\mathbf{u}_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x})). \tag{20}\]
Suppose that \(\mathbf{x}\in\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}\). Recall the definition of \(\tilde{\phi}_{i}\) for \(i\in\{1,2\}\) from (9). From inequality (19), we have \(\phi_{i}(\mathbf{x})\leq\tilde{\phi}_{i}(\mathbf{x})\) for \(i\in\{1,2\}\). If \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})<\psi(\mathbf{x})\), then using (16) and (20), we have
\[\angle(\nabla f_{1}(\mathbf{x}),\mathbf{u}_{1}(\mathbf{x}))+\angle(-\nabla f_{2}(\mathbf{x}), -\mathbf{u}_{2}(\mathbf{x}))<\angle(\mathbf{u}_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x})).\]
However, using [1, Corollary 12], we can write \(\angle(\mathbf{u}_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))\leq\angle(\nabla f_{1}(\mathbf{x}),\mathbf{u}_{1}(\mathbf{x}))+\angle(\nabla f_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))\). If \(\nabla f_{1}(\mathbf{x})=-\nabla f_{2}(\mathbf{x})\), then \(\angle(\nabla f_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))=\angle(-\nabla f_{2}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))\), and the displayed strict inequality above would contradict this bound. Therefore, if \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})<\psi(\mathbf{x})\), we have that \(\nabla f_{1}(\mathbf{x})\neq-\nabla f_{2}(\mathbf{x})\), which implies that \(\mathbf{x}\) is not the minimizer of \(f_{1}+f_{2}\). This means that one of the necessary conditions is that \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\geq\psi(\mathbf{x})\), which completes the proof of the claim.
In the above analysis, we considered the case when \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\). We are left with the case when \(\mathbf{x}\in\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\). From the definition of strongly convex functions, for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\),
\[\big{\langle}\nabla f_{2}(\mathbf{x})-\nabla f_{2}(\mathbf{y}),\ \mathbf{x}-\mathbf{y} \big{\rangle}\geq\sigma_{2}\|\mathbf{x}-\mathbf{y}\|^{2}.\]
Since \(\mathbf{x}_{2}^{*}\) is the minimizer of \(f_{2}\) and \(\mathbf{x}_{1}^{*}\neq\mathbf{x}_{2}^{*}\), we get
\[\big{\langle}\nabla f_{2}(\mathbf{x}_{1}^{*}),\ \mathbf{x}_{1}^{*}-\mathbf{x}_{2}^{*} \big{\rangle}=\big{\langle}\nabla f_{2}(\mathbf{x}_{1}^{*})-\nabla f_{2}(\mathbf{x}_{2 }^{*}),\ \mathbf{x}_{1}^{*}-\mathbf{x}_{2}^{*}\big{\rangle}\geq\sigma_{2}\|\mathbf{x}_{1}^{*}-\mathbf{x}_ {2}^{*}\|^{2}>0,\]
and thus, \(\nabla f_{2}(\mathbf{x}_{1}^{*})\neq\mathbf{0}\). Since \(\nabla f_{1}(\mathbf{x}_{1}^{*})=\mathbf{0}\), this implies that \(\nabla f_{2}(\mathbf{x}_{1}^{*})+\nabla f_{1}(\mathbf{x}_{1}^{*})\neq\mathbf{0}\) and \(\mathbf{x}_{1}^{*}\) is not the minimizer of \(f_{1}+f_{2}\). By using a similar approach, we can also conclude that \(\mathbf{x}_{2}^{*}\) is not the minimizer of \(f_{1}+f_{2}\).
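The proposition can also be probed numerically. The sketch below (an illustration under our own sampling choices, not part of the proof) draws random quadratic instances \(f_{i}(\mathbf{x})=\frac{1}{2}(\mathbf{x}-\mathbf{x}_{i}^{*})^{\intercal}\mathbf{A}_{i}(\mathbf{x}-\mathbf{x}_{i}^{*})\) with \(\mathbf{A}_{i}\succeq\sigma_{i}\mathbf{I}\), keeps the trials in which both gradient norms at the minimizer of the sum are at most \(L\), and confirms that the minimizer always satisfies the defining inequality of \(\mathcal{M}^{\uparrow}\).

```python
# Monte Carlo sanity check of Proposition 4.1 in R^2 (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
s1, s2, L, r = 1.5, 1.0, 10.0, 2.0
x1s, x2s = np.array([-r, 0.0]), np.array([r, 0.0])

def rand_spd(sigma):
    # Random symmetric matrix with eigenvalues in [sigma, sigma + 3).
    Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
    return Q @ np.diag(sigma + 3.0 * rng.random(2)) @ Q.T

checked = 0
for _ in range(2000):
    A1, A2 = rand_spd(s1), rand_spd(s2)
    # Minimizer of f1 + f2 solves A1 (x - x1s) + A2 (x - x2s) = 0.
    xhat = np.linalg.solve(A1 + A2, A1 @ x1s + A2 @ x2s)
    g1, g2 = A1 @ (xhat - x1s), A2 @ (xhat - x2s)
    if max(np.linalg.norm(g1), np.linalg.norm(g2)) > L:
        continue  # Assumption 3 fails for this trial; discard it
    d1, d2 = np.linalg.norm(xhat - x1s), np.linalg.norm(xhat - x2s)
    phi1 = np.arccos(np.clip(s1 * d1 / L, -1.0, 1.0))         # (9)
    phi2 = np.arccos(np.clip(s2 * d2 / L, -1.0, 1.0))
    a1 = np.arccos(np.clip((xhat - x1s)[0] / d1, -1.0, 1.0))  # alpha_1
    a2 = np.arccos(np.clip((xhat - x2s)[0] / d2, -1.0, 1.0))  # alpha_2
    assert phi1 + phi2 >= np.pi - (a2 - a1) - 1e-9            # x-hat is in M^
    checked += 1
print(checked, "sampled minimizers all lie in the outer approximation")
```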
_Remark 2_.: The angle functions \(\tilde{\phi}_{i}\) and \(\alpha_{i}\) for \(i\in\{1,2\}\) defined in (9) and (10), respectively, can be expressed as functions of the distances \(\|\mathbf{x}_{1}^{*}-\mathbf{x}_{2}^{*}\|\), \(\|\mathbf{x}-\mathbf{x}_{1}^{*}\|\), and \(\|\mathbf{x}-\mathbf{x}_{2}^{*}\|\). This means that the inequality \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\geq\psi(\mathbf{x})\) depends only on the distances among the three points \(\mathbf{x}\), \(\mathbf{x}_{1}^{*}\) and \(\mathbf{x}_{2}^{*}\). Since \(\mathbf{x}_{1}^{*}=(-r,\mathbf{0})\) and \(\mathbf{x}_{2}^{*}=(r,\mathbf{0})\), we conclude that the shape of \(\mathcal{M}^{\uparrow}\) (and \(\mathcal{M}_{\downarrow}\)) is symmetric around the \(x_{1}\)-axis.
From this point, we will denote \(\mathbf{x}=(x_{1},\tilde{\mathbf{x}})\in\mathbb{R}^{n}\) where \(x_{1}\in\mathbb{R}\) and \(\tilde{\mathbf{x}}=(x_{2},x_{3},\ldots,x_{n})\in\mathbb{R}^{n-1}\). Next, we will provide an algebraic expression for a certain portion of \(\partial\mathcal{M}^{\uparrow}\) (and \(\partial\mathcal{M}_{\downarrow}\)) based on the geometric equation \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})=\psi(\mathbf{x})\), where \(\tilde{\phi}_{i}\) for \(i\in\{1,2\}\) and \(\psi\) are defined in (9) and (11), respectively. For convenience, we define
\[d_{1}(\mathbf{x}) \triangleq\|\mathbf{x}-\mathbf{x}_{1}^{*}\|=\sqrt{(x_{1}+r)^{2}+\|\tilde{ \mathbf{x}}\|^{2}}\quad\text{and}\] \[d_{2}(\mathbf{x}) \triangleq\|\mathbf{x}-\mathbf{x}_{2}^{*}\|=\sqrt{(x_{1}-r)^{2}+\|\tilde{ \mathbf{x}}\|^{2}}. \tag{21}\]
Define the set of points
\[\mathcal{T}\triangleq\bigg{\{}\mathbf{x}\in\mathbb{R}^{n}:\ \frac{\|\mathbf{x}\|^{2}-r^{2}}{d_{1} ^{2}(\mathbf{x})\cdot d_{2}^{2}(\mathbf{x})}+\frac{\sigma_{1}\sigma_{2}}{L^{2}}=\sqrt {\frac{1}{d_{1}^{2}(\mathbf{x})}-\frac{\sigma_{1}^{2}}{L^{2}}}\cdot\sqrt{\frac{1}{ d_{2}^{2}(\mathbf{x})}-\frac{\sigma_{2}^{2}}{L^{2}}}\bigg{\}}. \tag{22}\]
**Proposition 4.2**.: _The set \(\mathcal{T}\) defined in (22) can equivalently be written as \(\mathcal{T}=\big{\{}\mathbf{x}\in\mathbb{R}^{n}:\tilde{\phi}_{1}(\mathbf{x})+\tilde{ \phi}_{2}(\mathbf{x})=\psi(\mathbf{x})\big{\}}.\)_
Proof.: Based on the definition of \(\alpha_{i}(\mathbf{x})\) for \(i\in\{1,2\}\) in (10), for any point \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), we have
\[x_{1}=d_{1}(\mathbf{x})\cos(\alpha_{1}(\mathbf{x}))-r=d_{2}(\mathbf{x})\cos( \alpha_{2}(\mathbf{x}))+r,\] \[\Leftrightarrow \cos(\alpha_{1}(\mathbf{x}))=\frac{x_{1}+r}{d_{1}(\mathbf{x})}\quad \text{and}\quad\cos(\alpha_{2}(\mathbf{x}))=\frac{x_{1}-r}{d_{2}(\mathbf{x})}. \tag{23}\]
Similarly,
\[\|\tilde{\mathbf{x}}\|=d_{1}(\mathbf{x})\sin(\alpha_{1}(\mathbf{x}))=d_{2 }(\mathbf{x})\sin(\alpha_{2}(\mathbf{x})),\] \[\Leftrightarrow \sin(\alpha_{1}(\mathbf{x}))=\frac{\|\tilde{\mathbf{x}}\|}{d_{1}(\bm {x})}\quad\text{and}\quad\sin(\alpha_{2}(\mathbf{x}))=\frac{\|\tilde{\mathbf{x}} \|}{d_{2}(\mathbf{x})}. \tag{24}\]
Since \(\tilde{\phi}_{i}(\mathbf{x})\in\left[0,\frac{\pi}{2}\right]\) for \(i\in\{1,2\}\), we get \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\in[0,\pi]\). Recall from (11) that \(\psi(\mathbf{x})\in[0,\pi]\). Since the cosine function is one-to-one for this range of angles, equation \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})=\psi(\mathbf{x})\) is equivalent to
\[\cos\big{(}\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\big{)}=\cos\big{(} \pi-(\alpha_{2}(\mathbf{x})-\alpha_{1}(\mathbf{x}))\big{)}=-\cos\big{(}\alpha_{2}(\bm {x})-\alpha_{1}(\mathbf{x})\big{)}.\]
Expanding this equation and substituting (23), (24), and \(\cos(\tilde{\phi}_{i}(\mathbf{x}))=\frac{\sigma_{i}}{L}d_{i}(\mathbf{x})\) for \(i\in\{1,2\}\), we get
\[\frac{\sigma_{1}}{L}d_{1}(\mathbf{x})\cdot\frac{\sigma_{2}}{L}d_{2}(\mathbf{x})-\sqrt{ 1-\Big{(}\frac{\sigma_{1}}{L}d_{1}(\mathbf{x})\Big{)}^{2}}\cdot\sqrt{1-\Big{(}\frac {\sigma_{2}}{L}d_{2}(\mathbf{x})\Big{)}^{2}}=-\frac{x_{1}-r}{d_{2}(\mathbf{x})}\cdot \frac{x_{1}+r}{d_{1}(\mathbf{x})}-\frac{\|\tilde{\mathbf{x}}\|}{d_{2}(\mathbf{x})} \cdot\frac{\|\tilde{\mathbf{x}}\|}{d_{1}(\mathbf{x})}.\]
Dividing the above equation by \(d_{1}(\mathbf{x})\cdot d_{2}(\mathbf{x})\) and rearranging it yields the result.
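A numerical illustration of Proposition 4.2 (a sketch under the running parameters \(\sigma_{1}=1.5\), \(\sigma_{2}=1\), \(L=10\), \(r=2\)): along the vertical line \(x_{1}=0\) we locate, by bisection, the height at which \(\tilde{\phi}_{1}+\tilde{\phi}_{2}=\psi\), and then check that the algebraic equation (22) defining \(\mathcal{T}\) holds at that point.

```python
import numpy as np

s1, s2, L, r = 1.5, 1.0, 10.0, 2.0

def gap(y):
    # tilde-phi_1(x) + tilde-phi_2(x) - psi(x) at x = (0, y): this point is
    # equidistant from x1* = (-r, 0) and x2* = (r, 0), so d1 = d2.
    d = np.hypot(r, y)
    phi1 = np.arccos(np.clip(s1 * d / L, -1.0, 1.0))
    phi2 = np.arccos(np.clip(s2 * d / L, -1.0, 1.0))
    a1 = np.arccos(np.clip(r / d, -1.0, 1.0))   # alpha_1; alpha_2 = pi - a1
    return phi1 + phi2 - 2.0 * a1               # psi = pi - (a2 - a1) = 2 a1

lo, hi = 0.0, L / s1 - r          # gap(lo) > 0 and gap(hi) < 0 for these values
for _ in range(80):               # plain bisection for the root of gap
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)

y = lo
d1 = d2 = np.hypot(r, y)
lhs = (y**2 - r**2) / (d1**2 * d2**2) + s1 * s2 / L**2   # left side of (22)
rhs = np.sqrt(1/d1**2 - s1**2/L**2) * np.sqrt(1/d2**2 - s2**2/L**2)
print(abs(lhs - rhs))             # approximately zero: the point lies on T
```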
The subsequent lemmas (Lemmas 4.3-4.9) are useful ingredients for proving the characterization of the outer approximation \(\mathcal{M}^{\uparrow}\) (defined in (13)) given in Theorem 4.10, and their proofs are provided in Appendix A.
The following lemma provides a sufficient condition for the minimizers \(\mathbf{x}_{1}^{*}\) and \(\mathbf{x}_{2}^{*}\) to be on the boundary of the outer approximation \(\mathcal{M}^{\uparrow}\) and the inner approximation \(\mathcal{M}_{\downarrow}\).
**Lemma 4.3**.: _Let \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\) be as defined in (13) and (14), respectively._
* _If_ \(r\in\left(0,\ \frac{L}{2\sigma_{2}}\right]\) _then_ \(\mathbf{x}_{1}^{*}\in\partial\mathcal{M}^{\uparrow}\) _and_ \(\mathbf{x}_{1}^{*}\in\partial\mathcal{M}_{\downarrow}\)_._
* _If_ \(r\in\left(0,\ \frac{L}{2\sigma_{1}}\right]\) _then_ \(\mathbf{x}_{2}^{*}\in\partial\mathcal{M}^{\uparrow}\) _and_ \(\mathbf{x}_{2}^{*}\in\partial\mathcal{M}_{\downarrow}\)_._
In the next lemma, we provide a property of points in a particular set which will be used to characterize the sets \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\) defined in (13) and (14), respectively. Roughly speaking, if \(\mathbf{x}\in\mathcal{M}^{\uparrow}\) and \(x_{1}\in[-r,r]\), then each point that has the same first component and is closer to the \(x_{1}\)-axis is also in \(\mathcal{M}^{\uparrow}\).
**Lemma 4.4**.: _Consider two points \(\mathbf{x}=(x_{1},\tilde{\mathbf{x}})\) and \(\mathbf{y}=(y_{1},\tilde{\mathbf{y}})\). Suppose \(-r\leq x_{1}=y_{1}\leq r\) and \(\|\tilde{\mathbf{x}}\|>\|\tilde{\mathbf{y}}\|\). If \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\geq\psi(\mathbf{x})\) then either \(\tilde{\phi}_{1}(\mathbf{y})+\tilde{\phi}_{2}(\mathbf{y})>\psi(\mathbf{y})\) or \(\mathbf{y}\in\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\)._
Recall the definition of \(\mathcal{B}_{i}\) for \(i\in\{1,2\}\) from (8). Since \(\mathbf{x}_{1}^{*}=(-r,\mathbf{0})\) and \(\mathbf{x}_{2}^{*}=(r,\mathbf{0})\) by our assumption, we can explicitly write \(\partial\mathcal{B}_{i}\) for \(i\in\{1,2\}\) as follows:
\[\partial\mathcal{B}_{1} =\partial\mathcal{B}\Big{(}\mathbf{x}_{1}^{*},\frac{L}{\sigma_{1}} \Big{)}=\bigg{\{}\mathbf{x}\in\mathbb{R}^{n}:(x_{1}+r)^{2}+\|\tilde{\mathbf{x}}\| ^{2}=\frac{L^{2}}{\sigma_{1}^{2}}\bigg{\}}, \tag{25}\] \[\partial\mathcal{B}_{2} =\partial\mathcal{B}\Big{(}\mathbf{x}_{2}^{*},\frac{L}{\sigma_{2}} \Big{)}=\bigg{\{}\mathbf{x}\in\mathbb{R}^{n}:(x_{1}-r)^{2}+\|\tilde{\mathbf{x}}\| ^{2}=\frac{L^{2}}{\sigma_{2}^{2}}\bigg{\}}.\]
For convenience, we define
\[\gamma_{i}\triangleq\frac{L^{2}}{\sigma_{i}^{2}}\ \ \text{for}\ \ i\in\{1,2\}\ \ \ \text{and}\ \ \ \beta\triangleq\frac{\sigma_{2}}{\sigma_{1}}. \tag{26}\]
By using the definitions above, we define
\[\lambda_{1}\triangleq\Big{(}\frac{1+\beta}{1+2\beta}\Big{)}\frac{\gamma_{1}}{ 2r}-\frac{r}{1+2\beta}\ \ \ \text{and}\ \ \ \lambda_{2}\triangleq-\Big{(}\frac{1+\beta}{2+\beta}\Big{)}\frac{\gamma_{2}}{ 2r}+\frac{\beta r}{2+\beta}. \tag{27}\]
In the following lemma, we will show that if \(\mathbf{x}\in\partial\mathcal{B}_{1}\cup\partial\mathcal{B}_{2}\), the value of the first component \(x_{1}\) is necessary and sufficient to determine whether \(\mathbf{x}\) is in \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\), which are defined in (13) and (14), respectively. In other words, the trichotomy \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\lessgtr\psi(\mathbf{x})\) can be simplified if we consider a point in \(\partial\mathcal{B}_{1}\) or \(\partial\mathcal{B}_{2}\).
**Lemma 4.5**.: _Let \(\lambda_{1}\) and \(\lambda_{2}\) be as defined in (27). Consider \(\mathbf{x}=(x_{1},\tilde{\mathbf{x}})\in(\overline{\mathcal{B}}_{1}\cap\overline{ \mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\)._
1. _If_ \(\mathbf{x}\in\partial\mathcal{B}_{1}\) _then_ \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\lessgtr\psi(\mathbf{x})\) _if and only if_ \(x_{1}\lessgtr\lambda_{1}\)_._
2. _If_ \(\mathbf{x}\in\partial\mathcal{B}_{2}\) _then_ \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})\lessgtr\psi(\mathbf{x})\) _if and only if_ \(x_{1}\gtrless\lambda_{2}\)_._
In the following lemma, we will show that the points in the set of intersection between \(\mathcal{T}\) and \(\partial\mathcal{B}_{1}\) (resp. \(\mathcal{T}\) and \(\partial\mathcal{B}_{2}\)) have the same first component, if the intersection is non-empty. Moreover, the first component of these points is \(\lambda_{1}\) (resp. \(\lambda_{2}\)) where \(\lambda_{i}\) for \(i\in\{1,2\}\) are defined in (27). By using the definition of \(\gamma_{1}\), \(\gamma_{2}\) and \(\beta\) in (26), define
\[\nu_{1} \triangleq\frac{r}{2(1+2\beta)}\sqrt{-\Big{(}\frac{\gamma_{1}}{r^ {2}}-4\Big{)}\Big{(}(1+\beta)^{2}\frac{\gamma_{1}}{r^{2}}-4\beta^{2}\Big{)}} \ \ \ \text{and} \tag{28}\] \[\nu_{2} \triangleq\frac{r}{2(2+\beta)}\sqrt{-\Big{(}\frac{\gamma_{2}}{r^ {2}}-4\Big{)}\Big{(}(1+\beta)^{2}\frac{\gamma_{2}}{r^{2}}-4\Big{)}}.\]
**Lemma 4.6**.: _Consider the sets of points \(\mathcal{T}\) and \(\partial\mathcal{B}_{i}\) for \(i\in\{1,2\}\) defined in (22) and (25), respectively. Let \(\lambda_{i}\) and \(\nu_{i}\) for \(i\in\{1,2\}\) be as defined in (27) and (28), respectively._
1. _For_ \(i\in\{1,2\}\)_, if_ \(r\in\big{(}0,\frac{L}{2\sigma_{i}}\big{]}\)_, then_ \(\mathcal{T}\cap\partial\mathcal{B}_{i}=\emptyset\)_._
2. _For_ \(i\in\{1,2\}\)_, if_ \(r\in\big{(}\frac{L}{2\sigma_{i}},\frac{L}{2}(\frac{1}{\sigma_{1}}+\frac{1}{ \sigma_{2}})\big{]}\)_, then_ \(\mathcal{T}\cap\partial\mathcal{B}_{i}=\big{\{}\mathbf{x}\in\mathbb{R}^{n}:x_{1} =\lambda_{i},\ \ \|\tilde{\mathbf{x}}\|=\nu_{i}\big{\}}\)_._
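Part (ii) of the lemma can be verified directly for the running example. The sketch below (illustrative names) computes \((\lambda_{1},\nu_{1})\) from (27)-(28) with \(r=4\), which lies in \(\big{(}\frac{L}{2\sigma_{1}},\frac{L}{2}(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}})\big{]}\), and checks that this point lies on \(\partial\mathcal{B}_{1}\) and satisfies the angle equation defining \(\mathcal{T}\).

```python
import numpy as np

s1, s2, L, r = 1.5, 1.0, 10.0, 4.0
gam1, gam2, beta = L**2 / s1**2, L**2 / s2**2, s2 / s1       # (26)

lam1 = (1 + beta) / (1 + 2*beta) * gam1 / (2*r) - r / (1 + 2*beta)     # (27)
nu1 = r / (2 * (1 + 2*beta)) * np.sqrt(
    -(gam1 / r**2 - 4) * ((1 + beta)**2 * gam1 / r**2 - 4 * beta**2))  # (28)

x = np.array([lam1, nu1])
d1, d2 = np.hypot(x[0] + r, x[1]), np.hypot(x[0] - r, x[1])
print(np.isclose(d1, L / s1))                        # True: x lies on dB_1
phi1 = np.arccos(np.clip(s1 * d1 / L, -1.0, 1.0))    # tilde-phi_1
phi2 = np.arccos(np.clip(s2 * d2 / L, -1.0, 1.0))    # tilde-phi_2
a1 = np.arccos(np.clip((x[0] + r) / d1, -1.0, 1.0))  # alpha_1
a2 = np.arccos(np.clip((x[0] - r) / d2, -1.0, 1.0))  # alpha_2
print(np.isclose(phi1 + phi2, np.pi - (a2 - a1)))    # True: x satisfies T's equation
```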
Recall that \(\lambda_{1}\) and \(\lambda_{2}\) are defined in (27). In the following lemma, for \(i\in\{1,2\}\), we consider a relationship between \(\frac{L}{\sigma_{i}r}\) and \(\frac{\lambda_{i}}{r}\). In particular, for \(r\in\big{(}0,\frac{L}{2}(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}})\big{]}\), recall from Lemma 4.6 that if \(\mathcal{T}\cap\partial\mathcal{B}_{1}\neq\emptyset\) (resp. \(\mathcal{T}\cap\partial\mathcal{B}_{2}\neq\emptyset\)), then every point in the intersection has first component equal to \(\lambda_{1}\) (resp. \(\lambda_{2}\)). The next lemma compares \(\lambda_{1}\) to the maximum value of the first component over all points of \(\partial\mathcal{B}_{1}\) (which is \(-r+\frac{L}{\sigma_{1}}\)), and compares \(\lambda_{2}\) to the minimum value of the first component over all points of \(\partial\mathcal{B}_{2}\) (which is \(r-\frac{L}{\sigma_{2}}\)), respectively.
**Lemma 4.7**.: _Let \(\lambda_{i}\) for \(i\in\{1,2\}\) be as defined in (27)._
1. _If_ \(r\in\big{(}0,\ \frac{L}{2\sigma_{1}}\big{]}\) _then_ \(\lambda_{1}\geq\frac{L}{\sigma_{1}}-r\)_, with equality only if_ \(r=\frac{L}{2\sigma_{1}}\)_._
2. \(r\in\big{(}\frac{L}{2\sigma_{1}},\ \frac{L}{2}(\frac{1}{\sigma_{1}}+\frac{1}{ \sigma_{2}})\big{)}\) _if and only if_ \(\lambda_{1}<\frac{L}{\sigma_{1}}-r\)_._
3. _If_ \(r\in\big{(}0,\ \frac{L}{2\sigma_{2}}\big{]}\) _then_ \(\lambda_{2}\leq r-\frac{L}{\sigma_{2}}\)_, with equality only if_ \(r=\frac{L}{2\sigma_{2}}\)_._
4. \(r\in\big{(}\frac{L}{2\sigma_{2}},\ \frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\big{)}\) _if and only if_ \(\lambda_{2}>r-\frac{L}{\sigma_{2}}\)_._
In the next lemma, we will consider a relationship between \(\mathcal{T}\) and \(\partial\mathcal{M}^{\uparrow}\) (the boundary of the outer approximation), and \(\mathcal{T}\) and \(\partial\mathcal{M}_{\downarrow}\) (the boundary of the inner approximation). In particular, we will show that \(\mathcal{T}\subseteq\partial\mathcal{M}^{\uparrow}\) and \(\mathcal{T}\subseteq\partial\mathcal{M}_{\downarrow}\) for a particular range of \(r\).
**Lemma 4.8**.: _Let \(\mathcal{M}^{\uparrow},\,\mathcal{M}_{\downarrow}\) and \(\mathcal{T}\) be defined as in (13), (14) and (22), respectively. If \(r\in\left(0,\ \frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\right)\), then \(\mathcal{T}\subseteq\partial\mathcal{M}^{\uparrow}\) and \(\mathcal{T}\subseteq\partial\mathcal{M}_{\downarrow}\)._
For \(i\in\{1,2\}\), define half-planes
\[\mathcal{H}_{i}^{+}\triangleq\{\mathbf{x}\in\mathbb{R}^{n}:x_{1}\geq\lambda_{i}\} \quad\text{and}\quad\mathcal{H}_{i}^{-}\triangleq\{\mathbf{x}\in\mathbb{R}^{n}:x _{1}\leq\lambda_{i}\}, \tag{29}\]
where \(\lambda_{i}\) for \(i\in\{1,2\}\) are defined in (27). In the lemma below, we examine properties of points \(\mathbf{x}\in\partial\mathcal{B}_{1}\cap\mathcal{H}_{1}^{+}\) (resp. \(\mathbf{x}\in\partial\mathcal{B}_{2}\cap\mathcal{H}_{2}^{-}\)).
**Lemma 4.9**.: _Let the sets \(\mathcal{B}_{i}\) for \(i\in\{1,2\}\), and \(\mathcal{H}_{1}^{+}\) and \(\mathcal{H}_{2}^{-}\) be defined as in (8) and (29), respectively._
1. _If_ \(r\in\left(\frac{L}{2\sigma_{1}},\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1 }{\sigma_{2}}\big{)}\right)\) _then_ \(\big{[}\lambda_{1},\ -r+\frac{L}{\sigma_{1}}\big{]}\subseteq(-r,r)\) _and_ \(\partial\mathcal{B}_{1}\cap\mathcal{H}_{1}^{+}\subseteq(\overline{\mathcal{B }}_{1}\cap\overline{\mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\)_._
2. _If_ \(r\in\left(\frac{L}{2\sigma_{2}},\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1 }{\sigma_{2}}\big{)}\right)\) _then_ \(\big{[}r-\frac{L}{\sigma_{2}},\ \lambda_{2}\big{]}\subseteq(-r,r)\) _and_ \(\partial\mathcal{B}_{2}\cap\mathcal{H}_{2}^{-}\subseteq(\overline{\mathcal{B }}_{1}\cap\overline{\mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\)_._
In the theorem below, we give the characterization of the boundary \(\partial\mathcal{M}^{\uparrow}\) and interior \((\mathcal{M}^{\uparrow})^{\circ}\), and also a property of the set \(\mathcal{M}^{\uparrow}\) for each range of \(r\). Define the set
\[\widetilde{\mathcal{T}}\triangleq\{\mathbf{x}\in\mathbb{R}^{n}:\tilde{\phi}_{1}( \mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})>\psi(\mathbf{x})\}, \tag{30}\]
which will be used especially in Theorem 4.10 and Theorem 5.4.
**Theorem 4.10**.: _Assume \(\sigma_{1}\geq\sigma_{2}\). Let the sets \(\mathcal{M}^{\uparrow},\,\mathcal{T},\,\widetilde{\mathcal{T}}\), and \(\mathcal{B}_{i}\) for \(i\in\{1,2\}\) be defined as in (13), (22), (30), and (8), respectively. Also, let the sets \(\mathcal{H}_{i}^{+}\) and \(\mathcal{H}_{i}^{-}\) for \(i\in\{1,2\}\) be defined as in (29)._
1. _If_ \(r\in\left(0,\ \frac{L}{2\sigma_{1}}\right]\) _then_ \(\mathcal{M}^{\uparrow}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) _is closed,_ \[\partial\mathcal{M}^{\uparrow}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*} \}\quad\text{and}\quad(\mathcal{M}^{\uparrow})^{\circ}=\widetilde{\mathcal{T}}.\]
2. _If_ \(r\in\left(\frac{L}{2\sigma_{1}},\ \frac{L}{2\sigma_{2}}\right]\) _then_ \(\mathcal{M}^{\uparrow}\sqcup\{\mathbf{x}_{1}^{*}\}\) _is closed,_ \[\partial\mathcal{M}^{\uparrow} =\left[\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c} \right]\sqcup\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*}\}\quad\text{and}\] \[(\mathcal{M}^{\uparrow})^{\circ} =\left[\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\right]\sqcup \left[\widetilde{\mathcal{T}}\cap\mathcal{H}_{1}^{-}\right].\]
3. _If_ \(r\in\left(\frac{L}{2\sigma_{2}},\ \frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\right)\) _then_ \(\mathcal{M}^{\uparrow}\) _is closed,_ \[\partial\mathcal{M}^{\uparrow}=\left[\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\right]\sqcup\left[\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\right]\sqcup\mathcal{T}\quad\text{and}\] \[(\mathcal{M}^{\uparrow})^{\circ}=\left[\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\right]\sqcup\left[\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\right]\sqcup\left[\widetilde{\mathcal{T}}\cap(\mathcal{H}_{1}^{-}\cap\mathcal{H}_{2}^{+})\right].\]
4. _If_ \(r=\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\) _then_ \(\mathcal{M}^{\uparrow}=\Big{\{}\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{2}}\big{)},\ \mathbf{0}\Big{)}\Big{\}}\)_._
5. _If_ \(r\in\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)},\ \infty\Big{)}\) _then_ \(\mathcal{M}^{\uparrow}=\emptyset\)_._

Proof.: For convenience, define the function \(\varphi:(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\rightarrow\mathbb{R}\) to be such that

\[\varphi(\mathbf{x})\triangleq\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})-\psi(\mathbf{x}). \tag{31}\]

**Part (i):** \(r\in\left(0,\ \frac{L}{2\sigma_{1}}\right]\). First, we show that

\[\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\subseteq\{\mathbf{z}\in\mathbb{R}^{n}:\varphi(\mathbf{z})<0\}\cup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}. \tag{32}\]
Suppose \(\mathbf{x}\in\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\) and \(\mathbf{x}\in\partial\mathcal{B}_{1}\). Since \(\overline{\mathcal{B}}_{1}\) is closed and \(x_{1}\in\big{[}-r-\frac{L}{\sigma_{1}},-r+\frac{L}{\sigma_{1}}\big{]}\), from Lemma 4.7 part (i), we get \(x_{1}\leq\frac{L}{\sigma_{1}}-r\leq\lambda_{1}\). If \(x_{1}<\lambda_{1}\), from Lemma 4.5 part (i), we obtain \(\varphi(\mathbf{x})<0\). On the other hand, if \(x_{1}=\lambda_{1}\) (i.e., \(\frac{L}{\sigma_{1}}-r=\lambda_{1}\)), from Lemma 4.7 part (i), we get \(r=\frac{L}{2\sigma_{1}}\). Substituting into \(x_{1}=\frac{L}{\sigma_{1}}-r\), we obtain that \(x_{1}=\frac{L}{2\sigma_{1}}=r\). Since \(\mathbf{x}\in\partial\mathcal{B}(\mathbf{x}_{1}^{*},2r)\) and \(x_{1}=r\), we conclude that \(\mathbf{x}=\mathbf{x}_{2}^{*}=(r,\mathbf{0})\).
From the assumption \(\sigma_{1}\geq\sigma_{2}\) and the inequality \(r\leq\frac{L}{2\sigma_{1}}\), we get \(r\leq\frac{L}{2\sigma_{2}}\). We can similarly show that if \(\mathbf{x}\in\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\) and \(\mathbf{x}\in\partial\mathcal{B}_{2}\) then either \(\varphi(\mathbf{x})<0\) or \(\mathbf{x}=\mathbf{x}_{1}^{*}=(-r,\mathbf{0})\) by using Lemma 4.7 part (iii) and Lemma 4.5 part (ii). Since \(\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\subseteq\partial\mathcal{B}_{1 }\cup\partial\mathcal{B}_{2}\), we have proved our claim.
Since \(\mathcal{M}^{\uparrow}\subseteq\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}\) from the definition of \(\mathcal{M}^{\uparrow}\) in (13) and \(\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\subset(\mathcal{M}^{\uparrow})^{c}\) from (32), we have \(\mathcal{M}^{\uparrow}\subseteq\mathcal{B}_{1}\cap\mathcal{B}_{2}\). Recall the definition of \(\varphi\) in (31). Let \(\mathcal{R}=(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subseteq\textbf{dom}(\varphi)\). We then partition the set \(\mathcal{R}\) into 3 parts as follows:
\[\mathcal{R}_{1}=\big{\{}\mathbf{z}\in\mathcal{R}:\varphi(\mathbf{z})>0\big{\}},\ \ \mathcal{R}_{2}=\big{\{}\mathbf{z}\in\mathcal{R}:\varphi(\mathbf{z})<0\big{\}},\ \ \ \text{and}\ \ \ \mathcal{R}_{3}=\big{\{}\mathbf{z}\in\mathcal{R}:\varphi(\mathbf{z})=0\big{\}}= \mathcal{T},\]
where the last equality comes from Proposition 4.2. We will show that
\[\begin{cases}\mathcal{R}_{1}\subset(\partial\mathcal{M}^{\uparrow})^{c},\\ \mathcal{R}_{2}\subset(\partial\mathcal{M}^{\uparrow})^{c},\\ \mathcal{R}_{3}\subseteq\partial\mathcal{M}^{\uparrow},\\ \partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_ {2}^{*}\}\subset(\partial\mathcal{M}^{\uparrow})^{c},\\ (\textbf{dom}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subset( \partial\mathcal{M}^{\uparrow})^{c}.\end{cases} \tag{33}\]
Suppose \(\mathbf{x}\in\mathcal{R}_{1}\). Since \(\varphi\) is continuous, there exists \(\epsilon>0\) such that for all \(\mathbf{x}_{0}\in\mathcal{B}(\mathbf{x},\epsilon)\), we have \(\mathbf{x}_{0}\in\mathcal{R}_{1}\) and \(\varphi(\mathbf{x}_{0})>0\). Since \(\mathcal{R}_{1}\subseteq\mathcal{M}^{\uparrow}\) and is open, we have \(\mathcal{R}_{1}\subseteq(\mathcal{M}^{\uparrow})^{\circ}\). Similarly, we have \(\mathcal{R}_{2}\subseteq(\mathcal{R}\setminus\mathcal{M}^{\uparrow})^{\circ}\). Suppose \(\mathbf{x}\in\mathcal{R}_{3}=\mathcal{T}\). Using Lemma 4.8, we have that \(\mathcal{R}_{3}\subseteq\partial\mathcal{M}^{\uparrow}\). Since \((\mathcal{M}^{\uparrow})^{\circ}\), \((\mathcal{R}\setminus\mathcal{M}^{\uparrow})^{\circ}\), and \(\partial\mathcal{M}^{\uparrow}\) are disjoint, we conclude that \(\mathcal{R}_{1}\subset(\partial\mathcal{M}^{\uparrow})^{c}\) and \(\mathcal{R}_{2}\subset(\partial\mathcal{M}^{\uparrow})^{c}\).
Consider \(\mathbf{x}\in\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\). From (32), we have \(\mathbf{x}\in\{\mathbf{z}\in\mathbb{R}^{n}:\varphi(\mathbf{z})<0\}\). Since \(\textbf{dom}(\varphi)=(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) and \(\varphi\) is continuous, there exists \(\epsilon>0\) such that for all \(\mathbf{x}_{0}\in\mathcal{B}(\mathbf{x},\epsilon)\cap\textbf{dom}(\varphi)\), we have \(\varphi(\mathbf{x}_{0})<0\). Thus, \(\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subset((\mathcal{M}^{\uparrow})^{c})^{\circ}\) which implies that \(\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subseteq(\partial\mathcal{M}^{\uparrow})^{c}\). In addition, we have \((\textbf{dom}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subseteq((\mathcal{M}^{\uparrow})^{c})^{\circ}\) (since \((\textbf{dom}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subseteq(\mathcal{M}^{\uparrow})^{c}\) and is open), which implies \((\textbf{dom}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subset(\partial\mathcal{M}^{\uparrow})^{c}\). Therefore, we have proved the claim (33).
Since we can partition \(\mathbb{R}^{n}\) into \(\mathcal{R}\), \(\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_ {2}^{*}\}\), \((\textbf{dom}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) and \(\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), using (33), we obtain that \(\partial\mathcal{M}^{\uparrow}\subseteq\mathcal{R}_{3}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_ {2}^{*}\}\). However, we know that \(\mathcal{R}_{3}\subseteq\partial\mathcal{M}^{\uparrow}\) from the above analysis and \(\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subseteq\partial\mathcal{M}^{\uparrow}\) from Lemma 4.3. Thus, we have \(\partial\mathcal{M}^{\uparrow}=\mathcal{R}_{3}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_ {2}^{*}\}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) by Proposition 4.2.
From Proposition 4.2, we have \(\mathcal{T}=\{\mathbf{z}\in\mathbb{R}^{n}:\varphi(\mathbf{z})=0\}\). Using the definition of \(\mathcal{M}^{\uparrow}\) in (13) and \(\partial\mathcal{M}^{\uparrow}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), we can write \((\mathcal{M}^{\uparrow})^{\circ}=\mathcal{M}^{\uparrow}\setminus\partial\mathcal{M}^{\uparrow}=\widetilde{\mathcal{T}}\) where \(\widetilde{\mathcal{T}}\) is defined in (30). Since \(\mathcal{T}\subseteq\mathcal{M}^{\uparrow}\), this implies that \(\partial\mathcal{M}^{\uparrow}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\subseteq\mathcal{M}^{\uparrow}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\). Thus, the set \(\mathcal{M}^{\uparrow}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) is closed.
**Part (ii):** \(r\in\left(\frac{L}{2\sigma_{1}},\ \frac{L}{2\sigma_{2}}\right]\). We partition \(\mathbb{R}^{n}\) into the three regions \((\mathcal{H}_{1}^{-})^{c}\), \(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1}^{-}\) and \((\mathcal{H}_{1}^{+})^{c}\), and characterize \(\partial\mathcal{M}^{\uparrow}\) in each of them. First, consider the region \((\mathcal{H}_{1}^{-})^{c}\). We claim that

\[(\mathcal{M}^{\uparrow})^{\circ}\cap(\mathcal{H}_{1}^{-})^{c}=\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\quad\text{and}\quad\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{-})^{c}=\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}. \tag{34}\]

In this case, one can first show that

\[\mathcal{T}\cap(\mathcal{H}_{1}^{-})^{c}=\emptyset, \tag{36}\]

and then that

\[\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{-})^{c}=\overline{\mathcal{B}}_{1}\cap(\mathcal{H}_{1}^{-})^{c}. \tag{37}\]

Since \((\mathcal{H}_{1}^{-})^{c}\) is open, (37) implies that \((\mathcal{M}^{\uparrow})^{\circ}\cap(\mathcal{H}_{1}^{-})^{c}=\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\). Then, subtracting this equation from (37), we obtain that \(\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{-})^{c}=\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\), which completes the second part of claim (34).
Next, consider the second region \(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1}^{-}=\{\mathbf{z}\in\mathbb{R}^{n}:z_{1}= \lambda_{1}\}\). Recall the definition of \(\nu_{1}\) in (28). Consider the following three cases.
* Suppose \(\mathbf{x}\in\{\mathbf{z}\in\mathbb{R}^{n}:z_{1}=\lambda_{1},\ \|\tilde{\mathbf{z}}\|>\nu_{1}\}\). Then, \(\mathbf{x}\in(\mathbf{\mathrm{dom}}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^ {*}\}\) which implies that \(\mathbf{x}\notin\mathcal{T}\). Since \((\mathbf{\mathrm{dom}}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\} \subseteq(\mathcal{M}^{\uparrow})^{c}\) and is open, we also have \(\mathbf{x}\notin\partial\mathcal{M}^{\uparrow}\).
* Suppose \(\mathbf{x}\in\{\mathbf{z}\in\mathbb{R}^{n}:z_{1}=\lambda_{1},\ \|\tilde{\mathbf{z}}\|=\nu_{1}\}\). From Lemma 4.6 part (ii), we have \(\mathbf{x}\in\mathcal{T}\). Using Lemma 4.8, we obtain that \(\mathbf{x}\in\partial\mathcal{M}^{\uparrow}\).
* Suppose \(\mathbf{x}\in\{\mathbf{z}\in\mathbb{R}^{n}:z_{1}=\lambda_{1},\ \|\tilde{\mathbf{z}}\|<\nu_{1}\}\). Since \(\{\mathbf{z}\in\mathbb{R}^{n}:z_{1}=\lambda_{1},\ \|\tilde{\mathbf{z}}\|=\nu_{1}\} \subseteq\mathcal{T}\), from Lemma 4.4, we have \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})>\psi(\mathbf{x})\) which implies that \(\mathbf{x}\notin\mathcal{T}\). Since \(\mathbf{x}\in(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_ {2}^{*}\}\) and \(\varphi\) is continuous, there exists \(\epsilon\in\mathbb{R}_{>0}\) such that for all \(\mathbf{x}_{0}\in\mathcal{B}(\mathbf{x},\epsilon)\), we have \(\mathbf{x}_{0}\in\mathcal{M}^{\uparrow}\) by the definition of \(\mathcal{M}^{\uparrow}\) in (13). This means that \(\mathbf{x}\in(\mathcal{M}^{\uparrow})^{\circ}\) and thus, \(\mathbf{x}\notin\partial\mathcal{M}^{\uparrow}\).
Combining the analysis of these three cases, we have that
\[\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1}^{-}) =\{\mathbf{z}\in\mathbb{R}^{n}:z_{1}=\lambda_{1},\ \|\tilde{\mathbf{z}}\|=\nu_{1}\}=\mathcal{T}\cap(\mathcal{H}_{1}^{+}\cap \mathcal{H}_{1}^{-}). \tag{38}\]
Next, consider the third region \((\mathcal{H}_{1}^{+})^{c}=\mathbb{R}^{n}\setminus\mathcal{H}_{1}^{+}\). First, we want to show that
\[\text{if}\quad\mathbf{x}\in\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\cap( \mathcal{H}_{1}^{+})^{c}\quad\text{then}\quad\mathbf{x}\in\{\mathbf{z}\in\mathbb{R}^{n }:\varphi(\mathbf{z})<0\}\cup\{\mathbf{x}_{1}^{*}\}. \tag{39}\]
Suppose \(\mathbf{x}\in\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\) and \(\mathbf{x}\in\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{+})^{c}\). Since \(x_{1}<\lambda_{1}\), from Lemma 4.5 part (i), we obtain \(\varphi(\mathbf{x})<0\). By using the result from the proof of part (i), we have that if \(\mathbf{x}\in\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\) and \(\mathbf{x}\in\partial\mathcal{B}_{2}\) then either \(\varphi(\mathbf{x})<0\) or \(\mathbf{x}=\mathbf{x}_{1}^{*}\). Combining the two results, we have proved the claim.
Since \(\mathcal{M}^{\uparrow}\subseteq\overline{\mathcal{B}}_{1}\cap\overline{ \mathcal{B}}_{2}\) from the definition of \(\mathcal{M}^{\uparrow}\) in (13) and \(\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\cap(\mathcal{H}_{1}^{+})^{c} \subset(\mathcal{M}^{\uparrow})^{c}\) from (39), we have \(\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+})^{c}\subseteq(\mathcal{B}_{1} \cap\mathcal{B}_{2})\cap(\mathcal{H}_{1}^{+})^{c}\). Let \(\mathcal{R}^{\prime}=\mathcal{R}\cap(\mathcal{H}_{1}^{+})^{c}\). We then partition the set \(\mathcal{R}^{\prime}\) into \(\mathcal{R}_{1}^{\prime}\), \(\mathcal{R}_{2}^{\prime}\), and \(\mathcal{R}_{3}^{\prime}\) where \(\mathcal{R}_{i}^{\prime}=\mathcal{R}_{i}\cap(\mathcal{H}_{1}^{+})^{c}\) for \(i\in\{1,2,3\}\). We can use a similar argument as in the proof of (33) to show that
\[\begin{cases}\mathcal{R}_{1}^{\prime}\subset(\partial\mathcal{M}^{\uparrow})^{c}\cap(\mathcal{H}_{1}^{+})^{c},\\ \mathcal{R}_{2}^{\prime}\subset(\partial\mathcal{M}^{\uparrow})^{c}\cap(\mathcal{H}_{1}^{+})^{c},\\ \mathcal{R}_{3}^{\prime}\subseteq\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+})^{c},\\ \big{(}\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*}\}\big{)}\cap(\mathcal{H}_{1}^{+})^{c}\subset(\partial\mathcal{M}^{\uparrow})^{c}\cap(\mathcal{H}_{1}^{+})^{c},\\ \big{(}(\textbf{dom}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*}\}\big{)}\cap(\mathcal{H}_{1}^{+})^{c}\subset(\partial\mathcal{M}^{\uparrow})^{c}\cap(\mathcal{H}_{1}^{+})^{c}.\end{cases} \tag{40}\]
Since we can partition \((\mathcal{H}_{1}^{+})^{c}\) into \(\mathcal{R}^{\prime}\), \((\partial(\mathcal{B}_{1}\cap\mathcal{B}_{2})\setminus\{\mathbf{x}_{1}^{*}\})\cap( \mathcal{H}_{1}^{+})^{c}\), \(\big{(}(\mathbf{\mathrm{dom}}(\varphi))^{c}\setminus\{\mathbf{x}_{1}^{*}\}\big{)}\cap( \mathcal{H}_{1}^{+})^{c}\) and \(\{\mathbf{x}_{1}^{*}\}\), using (40), we obtain that \(\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+})^{c}\subseteq\mathcal{R}_{ 3}^{\prime}\sqcup\{\mathbf{x}_{1}^{*}\}\). However, we know that \(\mathcal{R}_{3}^{\prime}\subseteq\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1 }^{+})^{c}\) from (40) and \(\{\mathbf{x}_{1}^{*}\}\subseteq\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+})^{c}\) from Lemma 4.3. Thus, using \(\mathcal{R}_{3}^{\prime}=\mathcal{R}_{3}\cap(\mathcal{H}_{1}^{+})^{c}\) and Proposition 4.2 we have
\[\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+})^{c}=\mathcal{R}_{3}^{ \prime}\sqcup\{\mathbf{x}_{1}^{*}\}=\big{[}\mathcal{T}\cap(\mathcal{H}_{1}^{+})^{c} \big{]}\sqcup\{\mathbf{x}_{1}^{*}\}. \tag{41}\]
Since \(\mathbb{R}^{n}=(\mathcal{H}_{1}^{-})^{c}\sqcup(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1 }^{-})\sqcup(\mathcal{H}_{1}^{+})^{c}\), using (34), (38) and (41), we obtain that
\[\partial\mathcal{M}^{\uparrow}=\big{[}\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]}\sqcup\big{[}\mathcal{T}\cap(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1}^{-})\big{]}\sqcup\big{[}\mathcal{T}\cap(\mathcal{H}_{1}^{+})^{c}\big{]}\sqcup\{\mathbf{x}_{1}^{*}\}=\big{[}\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]}\sqcup\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*}\},\]

where the last equality follows from (36). The characterization of \((\mathcal{M}^{\uparrow})^{\circ}\) follows by combining the interiors over the same three regions, and since \(\partial\mathcal{M}^{\uparrow}\subseteq\mathcal{M}^{\uparrow}\sqcup\{\mathbf{x}_{1}^{*}\}\), the set \(\mathcal{M}^{\uparrow}\sqcup\{\mathbf{x}_{1}^{*}\}\) is closed.
**Part (iii):**\(r\in\left(\frac{L}{2\sigma_{2}},\,\frac{L}{2}\left(\frac{1}{\sigma_{1}}+\frac{1}{ \sigma_{2}}\right)\right)\). We can use a similar argument as in the proof of part (ii) to show that
\[\begin{cases}\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{-})^{c}=\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\subseteq\mathcal{M}^{\uparrow},&\text{(similar to proving (34))}\\ \partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{2}^{+})^{c}=\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\subseteq\mathcal{M}^{\uparrow},&\text{(similar to proving (34))}\\ \partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1}^{-})=\mathcal{T}\cap(\mathcal{H}_{1}^{+}\cap\mathcal{H}_{1}^{-})\subseteq\mathcal{M}^{\uparrow},&\text{(similar to proving (38))}\\ \partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{2}^{+}\cap\mathcal{H}_{2}^{-})=\mathcal{T}\cap(\mathcal{H}_{2}^{+}\cap\mathcal{H}_{2}^{-})\subseteq\mathcal{M}^{\uparrow},&\text{(similar to proving (38))}\\ \partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{+}\cup\mathcal{H}_{2}^{-})^{c}=\mathcal{T}\cap(\mathcal{H}_{1}^{+}\cup\mathcal{H}_{2}^{-})^{c}\subseteq\mathcal{M}^{\uparrow}.&\text{(similar to proving (41))}\end{cases}\]
Similar to (36), in this case, we have that \(\mathcal{T}\cap(\mathcal{H}_{1}^{-})^{c}=\emptyset\) and \(\mathcal{T}\cap(\mathcal{H}_{2}^{+})^{c}=\emptyset\). This means that the last three equations regarding \(\partial\mathcal{M}^{\uparrow}\) above can be combined into \(\partial\mathcal{M}^{\uparrow}\cap(\mathcal{H}_{1}^{-}\cap\mathcal{H}_{2}^{+})=\mathcal{T}\). Combining this equation with the first two equations regarding \(\partial\mathcal{M}^{\uparrow}\) above, we obtain the characterization of \(\partial\mathcal{M}^{\uparrow}\). For the characterization of \((\mathcal{M}^{\uparrow})^{\circ}\), we can use the same technique as shown in the analysis of part (ii) to obtain the result. From the five inclusions regarding \(\partial\mathcal{M}^{\uparrow}\) above, we can write \(\partial\mathcal{M}^{\uparrow}\subseteq\mathcal{M}^{\uparrow}\) and we conclude that \(\mathcal{M}^{\uparrow}\) is closed.
**Part (iv):** \(r=\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\). In this case, we have \(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}=\Big{\{}\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{2}}\big{)},\ \mathbf{0}\Big{)}\Big{\}}\). Suppose \(\mathbf{x}=\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{2}}\big{)},\ \mathbf{0}\Big{)}\). Since \(\mathcal{M}^{\uparrow}\subseteq\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}\), we only need to check the point \(\mathbf{x}\). At this point, we get \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})=\psi(\mathbf{x})=0\) (since \(d_{1}(\mathbf{x})=\frac{L}{\sigma_{1}}\), \(d_{2}(\mathbf{x})=\frac{L}{\sigma_{2}}\), \(\alpha_{1}(\mathbf{x})=0\) and \(\alpha_{2}(\mathbf{x})=\pi\)). So, we conclude that \(\mathcal{M}^{\uparrow}=\Big{\{}\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{2}}\big{)},\ \mathbf{0}\Big{)}\Big{\}}\).
**Part (v):**\(r\in\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)},\ \infty\Big{)}\). Since \(r>\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\), we have \(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}=\emptyset\). Since \(\mathcal{M}^{\uparrow}\subseteq\overline{\mathcal{B}}_{1}\cap\overline{ \mathcal{B}}_{2}\), we conclude that \(\mathcal{M}^{\uparrow}=\emptyset\).
Examples of the boundary \(\partial\mathcal{M}^{\uparrow}\) in \(\mathbb{R}^{2}\) for the first three cases of Theorem 4.10 are shown in Fig. 3. We consider parameters \(\sigma_{1}=1.5\), \(\sigma_{2}=1\) and \(L=10\). For \(r=2\), as we can see from Fig. 3a, we have \(\partial\mathcal{M}^{\uparrow}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\) (i.e., solid blue line + two red dots) consistent with part (i) of Theorem 4.10. For \(r=4\), as we can see from Fig. 3b, we have \(\partial\mathcal{M}^{\uparrow}=\big{[}\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]}\sqcup\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*}\}\) (i.e., solid cyan line + solid blue line + left red dot) consistent with part (ii) of Theorem 4.10. For \(r=6\), as we can see from Fig. 3c, we have \(\partial\mathcal{M}^{\uparrow}=\big{[}\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]}\sqcup\big{[}\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\big{]}\sqcup\mathcal{T}\) (i.e., solid cyan line + solid magenta line + solid blue line) consistent with part (iii) of Theorem 4.10. Note that the solid blue line, solid cyan line and solid magenta line in the figures indicate that the corresponding sets of points are subsets of the outer approximation \(\mathcal{M}^{\uparrow}\), i.e., \(\mathcal{T}\subseteq\mathcal{M}^{\uparrow}\), \(\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\subseteq\mathcal{M}^{\uparrow}\) and \(\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\subseteq\mathcal{M}^{\uparrow}\), respectively.
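The case analysis of Theorem 4.10 can also be summarized programmatically. The short sketch below (the returned strings are informal shorthand for the five cases, not the paper's notation) reports which case applies as \(r\) varies for the parameters of Fig. 3.

```python
# Regime classification for Theorem 4.10 with sigma_1 >= sigma_2.
# Thresholds: L/(2 s1) <= L/(2 s2) <= (L/2)(1/s1 + 1/s2).
import numpy as np

s1, s2, L = 1.5, 1.0, 10.0

def theorem_4_10_case(r):
    t1, t2, t3 = L / (2 * s1), L / (2 * s2), L / 2 * (1 / s1 + 1 / s2)
    if np.isclose(r, t3):
        return "(iv): M^ is the single point ((L/2)(1/s1 - 1/s2), 0)"
    if r > t3:
        return "(v): M^ is empty"
    if r <= t1:
        return "(i): dM^ = T u {x1*, x2*}"
    if r <= t2:
        return "(ii): dM^ = [dB1 n (H1-)^c] u T u {x1*}"
    return "(iii): dM^ = [dB1 n (H1-)^c] u [dB2 n (H2+)^c] u T"

for r in (2.0, 4.0, 6.0, 25.0 / 3.0, 9.0):
    print(r, "->", theorem_4_10_case(r))
```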
## 5 Inner Approximation and Potential Solution Region
In the previous section, we showed that the set \(\mathcal{M}^{\uparrow}\) defined in (13) is an outer approximation for the desired set \(\mathcal{M}\) defined in (7), in that \(\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\). We now turn our attention to the set \(\mathcal{M}_{\downarrow}\) defined in (14). We will show that \(\mathcal{M}_{\downarrow}\subseteq\mathcal{M}\), and consequently, provide a tight characterization of \(\mathcal{M}\).
Since we have \(\bigcup_{\tilde{\sigma}\geq\sigma}\mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\subset \mathcal{S}(\mathbf{x}^{*},\sigma)\) for all \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and \(\sigma\in\mathbb{R}_{>0}\) from (5), we can provide a region contained in the potential solution region \(\mathcal{M}\) by restricting our consideration to only some classes of quadratic functions. In Section 5.1, we analyze a sufficient and necessary condition for constructing a quadratic function with a given minimizer, gradient and curvature. Then, using the result from Section 5.1, in Section 5.2, we prove a relationship between the potential solution region \(\mathcal{M}\) and the inner approximation \(\mathcal{M}_{\downarrow}\), and also provide a characterization of \(\mathcal{M}_{\downarrow}\).
### Quadratic Functions Analysis
In this subsection, we first consider, in Proposition 5.1, an equivalent condition for the existence of a quadratic function in \(\mathbb{R}^{n}\) with a given minimizer, a given gradient at a specific point, and a given lower bound on the smallest eigenvalue associated with the quadratic term. Then, we present Corollary 5.2, in which we provide a similar equivalent condition for the class \(\bigcup_{\tilde{\sigma}\geq\sigma}\mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\) for a given \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and \(\sigma\in\mathbb{R}_{>0}\).
In the following proposition (whose proof is provided in Appendix B.1), we consider an equivalent condition for the existence of a quadratic function with more than one independent variable satisfying certain properties.
**Proposition 5.1**.: _Let \(\mathcal{Q}\) be defined as in (4). For \(n\in\mathbb{N}\setminus\{1\}\), suppose we are given points \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and \(\mathbf{x}_{0}\in\mathbb{R}^{n}\) such that \(\mathbf{x}_{0}\neq\mathbf{x}^{*}\), vector \(\mathbf{g}\in\mathbb{R}^{n}\), and scalar \(\sigma\in\mathbb{R}_{>0}\). Then, there exists a function \(f\in\mathcal{Q}^{(n)}(\mathbf{x}^{*},\sigma)\) with a gradient \(\nabla f(\mathbf{x}_{0})=\mathbf{g}\) if and only if_
1. \(\mathbf{x}_{0}\in\overline{\mathcal{B}}\big{(}\mathbf{x}^{*},\frac{\|\mathbf{g}\|}{\sigma }\big{)}\) _and_
2. \(\angle(\mathbf{g},\mathbf{x}_{0}-\mathbf{x}^{*})\in\{0\}\cup\Big{[}0,\ \arccos\big{(}\frac{\sigma}{\|\mathbf{g}\|}\|\mathbf{x}_{0}-\mathbf{x}^{*}\|\big{)}\Big{)}\)_._
_Note that if \(\sigma\|\mathbf{x}_{0}-\mathbf{x}^{*}\|=\|\mathbf{g}\|\), then \(\Big{[}0,\ \arccos(\frac{\sigma}{\|\mathbf{g}\|}\|\mathbf{x}_{0}-\mathbf{x}^{*}\|)\Big{)}=\emptyset\)._
Recall from (5) that \(\bigcup_{\tilde{\sigma}\geq\sigma}\mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\subset\mathcal{S}(\mathbf{x}^{*},\sigma)\) for all \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and \(\sigma\in\mathbb{R}_{>0}\). One way to characterize the inner approximation \(\mathcal{M}_{\downarrow}\) is to utilize sufficient conditions for the construction of a function \(f\in\bigcup_{\tilde{\sigma}\geq\sigma}\mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\). More generally, the following corollary presents necessary and sufficient conditions for such a construction.
**Corollary 5.2**.: _Let \(\mathcal{Q}\) be defined as in (4). For \(n\in\mathbb{N}\setminus\{1\}\), suppose we are given points \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) and \(\mathbf{x}_{0}\in\mathbb{R}^{n}\) such that \(\mathbf{x}_{0}\neq\mathbf{x}^{*}\), a vector \(\mathbf{g}\in\mathbb{R}^{n}\), a scalar \(L\in\mathbb{R}_{>0}\) such that \(\|\mathbf{g}\|=L\), and a scalar \(\sigma\in\mathbb{R}_{>0}\). Then, there exists a function \(f\in\bigcup_{\tilde{\sigma}\geq\sigma}\mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\) with \(\nabla f(\mathbf{x}_{0})=\mathbf{g}\) if and only if_
1. \(\mathbf{x}_{0}\in\overline{\mathcal{B}}\big{(}\mathbf{x}^{*},\frac{L}{\sigma}\big{)}\) _and_
2. \(\angle(\mathbf{g},\ \mathbf{x}_{0}-\mathbf{x}^{*})\in\{0\}\cup\Big{[}0,\ \arccos\big{(}\frac{\sigma}{L}\|\mathbf{x}_{0}-\mathbf{x}^{*}\|\big{)}\Big{)}\)_._
_Note that if \(\sigma\|\mathbf{x}_{0}-\mathbf{x}^{*}\|=L\), then \(\Big{[}0,\ \arccos(\frac{\sigma}{L}\|\mathbf{x}_{0}-\mathbf{x}^{*}\|)\Big{)}=\emptyset\)._
Proof.: From Proposition 5.1, we can write that there exists a function \(f\in\bigcup_{\tilde{\sigma}\geq\sigma}\mathcal{Q}(\mathbf{x}^{*},\tilde{\sigma})\) with a gradient \(\nabla f(\mathbf{x}_{0})=\mathbf{g}\) and \(\|\nabla f(\mathbf{x}_{0})\|=L\) if and only if

\[\mathbf{x}_{0}\in\bigcup_{\tilde{\sigma}\geq\sigma}\overline{\mathcal{B}}\Big{(}\mathbf{x}^{*},\frac{\|\mathbf{g}\|}{\tilde{\sigma}}\Big{)}=\overline{\mathcal{B}}\Big{(}\mathbf{x}^{*},\frac{L}{\sigma}\Big{)},\]

and

\[\angle(\mathbf{g},\ \mathbf{x}_{0}-\mathbf{x}^{*})\in\{0\}\cup\bigcup_{\tilde{\sigma}\geq\sigma}\bigg{[}0,\ \arccos\Big{(}\frac{\tilde{\sigma}}{\|\mathbf{g}\|}\|\mathbf{x}_{0}-\mathbf{x}^{*}\|\Big{)}\bigg{)}=\{0\}\cup\bigg{[}0,\ \arccos\Big{(}\frac{\sigma}{L}\|\mathbf{x}_{0}-\mathbf{x}^{*}\|\Big{)}\bigg{)}.\]
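The existence statement behind Proposition 5.1 and Corollary 5.2 can be made constructive. The sketch below shows one possible construction (ours; the proof in Appendix B.1 may proceed differently): when condition (ii) holds strictly, which is equivalent to \(\langle\mathbf{g},\mathbf{x}_{0}-\mathbf{x}^{*}\rangle>\sigma\|\mathbf{x}_{0}-\mathbf{x}^{*}\|^{2}\), a rank-one update of \(\sigma\mathbf{I}\) yields a quadratic with the prescribed minimizer and gradient.

```python
# One way to realize Proposition 5.1 constructively (a sketch, not the
# paper's proof): if <g, d> > sigma ||d||^2 with d = x0 - x*, the rank-one
# update below gives a symmetric A with A >= sigma I and A d = g, so that
# f(x) = 0.5 (x - x*)^T A (x - x*) has minimizer x* and gradient g at x0.
import numpy as np

def quadratic_with_gradient(x_star, x0, g, sigma):
    d = x0 - x_star
    c = g @ d - sigma * (d @ d)   # > 0 iff angle(g, d) < arccos(sigma*||d||/||g||)
    if c <= 0:
        raise ValueError("strict form of condition (ii) fails")
    w = g - sigma * d
    return sigma * np.eye(d.size) + np.outer(w, w) / c   # A = sigma I + w w^T / c

x_star, x0 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
g, sigma = np.array([5.0, 4.0]), 1.0
A = quadratic_with_gradient(x_star, x0, g, sigma)
print(A @ (x0 - x_star), g)                              # gradients agree
print(np.linalg.eigvalsh(A).min() >= sigma - 1e-9)       # A - sigma I is PSD
```

One can check that \(\mathbf{A}\mathbf{d}=\sigma\mathbf{d}+\mathbf{w}\frac{\langle\mathbf{w},\mathbf{d}\rangle}{c}=\sigma\mathbf{d}+\mathbf{g}-\sigma\mathbf{d}=\mathbf{g}\), and that \(\mathbf{A}-\sigma\mathbf{I}\) is positive semidefinite since \(c>0\).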
### Inner Approximation Characterization
In this subsection, we use results from Section 5.1 to derive a sufficient condition for a point to be in the potential solution region \(\mathcal{M}\), defined in (7). In fact, the sufficient condition is encapsulated in the description of the inner
approximation \(\mathcal{M}_{\downarrow}\); therefore, \(\mathcal{M}_{\downarrow}\subseteq\mathcal{M}\) which is presented in Proposition 5.3. Then, in Theorem 5.4, we characterize the boundary \(\partial\mathcal{M}_{\downarrow}\) and interior \((\mathcal{M}_{\downarrow})^{\circ}\), and provide a property of \(\mathcal{M}_{\downarrow}\) similar to Theorem 4.10.
Recall the definition of \(\underline{L}_{i}\) for \(i\in\{1,2\}\) from (17). Given \(i\in\{1,2\}\), \(\mathbf{x}_{i}^{*}\in\mathbb{R}^{n}\), \(\sigma_{i}\in\mathbb{R}_{>0}\), and \(L\in\mathbb{R}_{>0}\), from Corollary 5.2, we define the set of gradient angles \(\angle(\nabla f_{i}(\mathbf{x}),\ \mathbf{x}-\mathbf{x}_{i}^{*})\) that we can choose to construct a quadratic function \(f_{i}\in\bigcup_{\hat{\sigma}_{i}\geq\sigma_{i}}\mathcal{Q}(\mathbf{x}_{i}^{*},\hat{\sigma}_{i})\) with \(\|\nabla f_{i}(\mathbf{x})\|\leq L\) as \(\Phi_{i}:\overline{\mathcal{B}}\big(\mathbf{x}_{i}^{*},\frac{L}{\sigma_{i}}\big)\to 2^{[0,\frac{\pi}{2})}\) with
\[\Phi_{i}(\mathbf{x})\triangleq\begin{cases}\Big[0,\ \arccos\Big(\frac{\underline{L}_{i}(\mathbf{x})}{L}\Big)\Big)&\quad\text{if}\quad\underline{L}_{i}(\mathbf{x})<L,\\ \{0\}&\quad\text{if}\quad\underline{L}_{i}(\mathbf{x})=L.\end{cases} \tag{44}\]
Notice that the supremum of the set of angles \(\Phi_{i}(\mathbf{x})\) is equal to \(\tilde{\phi}_{i}(\mathbf{x})\) which is defined in (9). That is, for \(i\in\{1,2\}\), for all \(\mathbf{x}\in\overline{\mathcal{B}}\big{(}\mathbf{x}_{i}^{*},\frac{L}{\sigma_{i}}\big{)}\), we have
\[\sup\Phi_{i}(\mathbf{x})=\arccos\Big{(}\frac{\underline{L}_{i}(\mathbf{x})}{L}\Big{)}= \tilde{\phi}_{i}(\mathbf{x}).\]
In two-dimensional space, for a given \(\mathbf{x}\in\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2}\), for \(i\in\{1,2\}\), the set of admissible angles \(\Phi_{i}(\mathbf{x})\) and the quantity \(\tilde{\phi}_{i}(\mathbf{x})\) are shown in Fig. 3(a). In the next proposition, using Corollary 5.2, we show that the inner approximation \(\mathcal{M}_{\downarrow}\) is contained in the potential solution region \(\mathcal{M}\).
**Proposition 5.3**.: _Suppose the sets \(\mathcal{M}\) and \(\mathcal{M}_{\downarrow}\) are defined as in (7) and (14), respectively. Then, \(\mathcal{M}\supseteq\mathcal{M}_{\downarrow}\)._
Proof.: Suppose \(\mathbf{x}\in(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2})\setminus \{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\). Recall the definition of \(\mathcal{X}\) from (12). We want to show that if \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})>\psi(\mathbf{x})\) or \(\mathbf{x}\in\mathcal{X}\), then \(\mathbf{x}\in\mathcal{M}\). Recall the definition of \(\mathbf{u}_{i}(\mathbf{x})\) for \(i\in\{1,2\}\) from (15). Consider the following two cases.
* Suppose \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})>\psi(\mathbf{x})\). Since \(\psi(\mathbf{x})=\angle(\mathbf{u}_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))\) from (20), there exists a vector \(\mathbf{g}\in\mathbb{R}^{n}\) with \(\|\mathbf{g}\|=L\) such that \(\angle(\mathbf{g},\mathbf{u}_{1}(\mathbf{x}))<\tilde{\phi}_{1}(\mathbf{x})\) and \(\angle(\mathbf{g},-\mathbf{u}_{2}(\mathbf{x}))<\tilde{\phi}_{2}(\mathbf{x})\). By the definition of \(\Phi_{i}\) in (44), this means that \(\angle(\mathbf{g},\mathbf{u}_{1}(\mathbf{x}))\in\Phi_{1}(\mathbf{x})\) and \(\angle(-\mathbf{g},\mathbf{u}_{2}(\mathbf{x}))\in\Phi_{2}(\mathbf{x})\).
* Suppose \(\mathbf{x}\in\mathcal{X}\). Then \(\mathbf{x}=\big(-r+\frac{L}{\sigma_{1}},\mathbf{0}\big)\) with \(-r+\frac{L}{\sigma_{1}}\in(-r,r)\), as discussed below (12). In this case, we choose \(\mathbf{g}=L\mathbf{e}_{1}^{\prime}\) where \(\mathbf{e}_{1}^{\prime}=\frac{\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}}{\|\mathbf{x}_{2}^{*}-\mathbf{x}_{1}^{*}\|}\). This implies that \(\angle(\mathbf{g},\mathbf{u}_{1}(\mathbf{x}))=0\in\Phi_{1}(\mathbf{x})\) and \(\angle(-\mathbf{g},\mathbf{u}_{2}(\mathbf{x}))=0\in\Phi_{2}(\mathbf{x})\) by the definition of \(\Phi_{i}\) in (44).
Using Corollary 5.2, for both cases, we have that for \(i\in\{1,2\}\), there exist functions \(f_{i}\in\bigcup_{\hat{\sigma}\geq\sigma_{i}}\mathcal{Q}(\mathbf{x}_{i}^{*},\hat{\sigma})\) such that \(\mathbf{g}=\nabla f_{1}(\mathbf{x})=-\nabla f_{2}(\mathbf{x})\) and \(\|\nabla f_{1}(\mathbf{x})\|=\|\nabla f_{2}(\mathbf{x})\|=L\). Using (5), we have that there exist \(f_{i}\in\mathcal{S}(\mathbf{x}_{i}^{*},\sigma_{i})\) with \(\|\nabla f_{i}(\mathbf{x})\|\leq L\) for \(i\in\{1,2\}\) and \(\mathbf{x}\) is the minimizer of \(f_{1}+f_{2}\). Therefore, \(\mathbf{x}\in\mathcal{M}\). Since \(\mathcal{M}_{\downarrow}\subseteq(\overline{\mathcal{B}}_{1}\cap\overline{ \mathcal{B}}_{2})\setminus\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), we conclude that \(\mathcal{M}\supseteq\mathcal{M}_{\downarrow}\).
In two-dimensional space, for a given \(\mathbf{x}\in(\overline{\mathcal{B}}_{1}\cap\overline{\mathcal{B}}_{2})\setminus \{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}\), the geometrical interpretation of the inequality \(\tilde{\phi}_{1}(\mathbf{x})+\tilde{\phi}_{2}(\mathbf{x})>\psi(\mathbf{x})\), which is used to describe the inner approximation \(\mathcal{M}_{\downarrow}\), is represented in Fig. 3(b).
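Numerically, the membership test behind \(\mathcal{M}_{\downarrow}\) is straightforward. The sketch below assumes the readings \(\underline{L}_{i}(\mathbf{x})=\sigma_{i}\|\mathbf{x}-\mathbf{x}_{i}^{*}\|\), \(\mathbf{u}_{i}(\mathbf{x})=(\mathbf{x}-\mathbf{x}_{i}^{*})/\|\mathbf{x}-\mathbf{x}_{i}^{*}\|\), and \(\psi(\mathbf{x})=\angle(\mathbf{u}_{1}(\mathbf{x}),-\mathbf{u}_{2}(\mathbf{x}))\); we infer these from (9), (20), and the arccos arguments of Section 5.1, since (15) and (17) are not reproduced in this excerpt.

```python
import numpy as np

def in_inner_approximation(x, x1, x2, sigma1, sigma2, L):
    """Test the sufficient condition phi1~(x) + phi2~(x) > psi(x) of Proposition 5.3."""
    x, x1, x2 = (np.asarray(v, float) for v in (x, x1, x2))
    d1, d2 = x - x1, x - x2
    r1, r2 = np.linalg.norm(d1), np.linalg.norm(d2)
    if r1 == 0 or r2 == 0:                  # x1*, x2* are excluded from M_down
        return False
    if sigma1 * r1 > L or sigma2 * r2 > L:  # x must lie in both closed balls
        return False
    phi1 = np.arccos(np.clip(sigma1 * r1 / L, -1, 1))  # phi1~(x), cf. (9)
    phi2 = np.arccos(np.clip(sigma2 * r2 / L, -1, 1))
    psi = np.arccos(np.clip((d1 / r1) @ (-d2 / r2), -1, 1))  # angle(u1, -u2)
    return phi1 + phi2 > psi

# Midpoint between minimizers at (-2, 0) and (2, 0), sigma1 = 1.5, sigma2 = 1, L = 10:
print(in_inner_approximation([0, 0], [-2, 0], [2, 0], 1.5, 1.0, 10.0))  # True
```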
Before characterizing the set \(\mathcal{M}_{\downarrow}\), recall the definition of \(\lambda_{i}\) and \(\nu_{i}\) in (27) and (28), respectively. For \(i\in\{1,2\}\), we define
\[\mathcal{C}_{i}\triangleq\big{\{}\mathbf{x}\in\mathbb{R}^{n}:x_{1}=\lambda_{i},\ \ \|\tilde{\mathbf{x}}\|=\nu_{i}\big{\}}. \tag{45}\]
Comparing the definition of \(\mathcal{M}^{\uparrow}\) in (13) to that of \(\mathcal{M}_{\downarrow}\) in (14), we see that the description of \(\mathcal{M}_{\downarrow}\) involves a strict inequality while that of \(\mathcal{M}^{\uparrow}\) does not. Since the only difference is the inequality sign, we can expect results similar to those in Theorem 4.10. Specifically, the following theorem provides a characterization of \(\partial\mathcal{M}_{\downarrow}\) and \((\mathcal{M}_{\downarrow})^{\circ}\) explicitly, and also a property of \(\mathcal{M}_{\downarrow}\). Since most parts of the proof are similar to that of Theorem 4.10, we defer the proof to Appendix B.2.
**Theorem 5.4**.: _Assume \(\sigma_{1}\geq\sigma_{2}\). Let the sets \(\mathcal{M}_{\downarrow}\), \(\mathcal{T}\), \(\widetilde{\mathcal{T}}\), and \(\mathcal{B}_{i}\) for \(i\in\{1,2\}\) be defined as in (14), (22), (30), and (8), respectively. Also, let the sets \(\mathcal{H}_{i}^{+}\) and \(\mathcal{H}_{i}^{-}\) for \(i\in\{1,2\}\) be defined as in (29), and the sets \(\mathcal{C}_{i}\) for \(i\in\{1,2\}\) be defined as in (45)._
* _If_ \(r\in\big{(}0,\ \frac{L}{2\sigma_{1}}\big{]}\) _then_ \(\mathcal{M}_{\downarrow}\) _is open and_ \[\partial\mathcal{M}_{\downarrow}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\} \quad\text{and}\quad(\mathcal{M}_{\downarrow})^{\circ}=\widetilde{\mathcal{T}}.\]
* _If_ \(r\in\left(\frac{L}{2\sigma_{1}},\ \frac{L}{2\sigma_{2}}\right)\) _then_ \((\mathcal{M}_{\downarrow}\cup\mathcal{C}_{1})\cap\mathcal{H}_{1}^{+}\) _is closed while_ \(\mathcal{M}_{\downarrow}\cap(\mathcal{H}_{1}^{+})^{c}\) _is open, and_ \[\partial\mathcal{M}_{\downarrow} =\big{[}\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]} \sqcup\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*}\}\quad\text{and}\] \[(\mathcal{M}_{\downarrow})^{\circ} =\big{[}\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]} \sqcup\big{[}\widetilde{\mathcal{T}}\cap\mathcal{H}_{1}^{-}\big{]}.\]
* _If_ \(r\in\left(\frac{L}{2\sigma_{2}},\ \frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{ \sigma_{2}}\big{)}\right)\) _then_ \((\mathcal{M}_{\downarrow}\cup\mathcal{C}_{1})\cap\mathcal{H}_{1}^{+}\) _and_ \((\mathcal{M}_{\downarrow}\cup\mathcal{C}_{2})\cap\mathcal{H}_{2}^{-}\) _are closed while_ \(\mathcal{M}_{\downarrow}\cap(\mathcal{H}_{1}^{+}\cup\mathcal{H}_{2}^{-})^{c}\) _is open, and_ \[\partial\mathcal{M}_{\downarrow} =\big{[}\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c} \big{]}\sqcup\big{[}\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c} \big{]}\sqcup\mathcal{T}\quad\text{and}\] \[(\mathcal{M}_{\downarrow})^{\circ} =\big{[}\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big{]} \sqcup\big{[}\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\big{]}\sqcup\big{[} \widetilde{\mathcal{T}}\cap(\mathcal{H}_{1}^{-}\cap\mathcal{H}_{2}^{+})\big{]}.\]
* _If_ \(r=\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}\) _then_ \(\mathcal{M}_{\downarrow}=\Big{\{}\Big{(}\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}- \frac{1}{\sigma_{2}}\big{)},\ \mathbf{0}\Big{)}\Big{\}}\)_._
* _If_ \(r\in\left(\frac{L}{2}\big{(}\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big{)}, \ \infty\right)\) _then_ \(\mathcal{M}_{\downarrow}=\emptyset\)_._
Examples of the boundary \(\partial\mathcal{M}_{\downarrow}\) in \(\mathbb{R}^{2}\) for the first three cases of Theorem 5.4 are shown in Fig. 5. Again, we consider parameters \(\sigma_{1}=1.5\), \(\sigma_{2}=1\) and \(L=10\). For \(r=2\), as we can see from Fig. 5(a), we have \(\partial\mathcal{M}_{\downarrow}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^ {*}\}\) (i.e., dotted blue line + two red dots) consistent with part (i) of Theorem 5.4. For \(r=4\), as we can see from Fig. 5(b), we have \(\partial\mathcal{M}_{\downarrow}=\left[\partial\mathcal{B}_{1}\cap(\mathcal{H }_{1}^{-})^{c}\right]\sqcup\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*}\}\) (i.e., solid cyan line + dotted blue line + left red dot) consistent with part (ii) of Theorem 5.4. For \(r=6\), as we can see from Fig. 5(c), we have \(\partial\mathcal{M}_{\downarrow}=\left[\partial\mathcal{B}_{1}\cap(\mathcal{H }_{1}^{-})^{c}\right]\sqcup\left[\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+ })^{c}\right]\sqcup\mathcal{T}\) (i.e., solid cyan line + solid magenta line + dotted blue line) consistent with part (iii) of Theorem 5.4. Note that the dotted blue line in the figures indicates that the corresponding set of points is not a subset of the inner approximation \(\mathcal{M}_{\downarrow}\), i.e., \(\mathcal{T}\not\subseteq\mathcal{M}_{\downarrow}\), whereas the solid cyan line and solid magenta line indicate that \(\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\subseteq\mathcal{M}_{\downarrow}\) and \(\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\subseteq\mathcal{M}_{\downarrow}\), respectively.
## 6 Potential Solution Region
In this section, using results from analyzing the outer approximation \(\mathcal{M}^{\uparrow}\) in Section 4 and the inner approximation \(\mathcal{M}_{\downarrow}\) in Section 5.2, we derive relationships among the potential solution region \(\mathcal{M}\) (which is the set that we want to identify), outer approximation \(\mathcal{M}^{\uparrow}\) and inner approximation \(\mathcal{M}_{\downarrow}\).
Before stating the main theorem, we summarize the inclusions among the three sets. Specifically, based on Proposition 4.1 and Proposition 5.3, we get \(\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\subseteq\mathcal{M}\), respectively, and we can state the following proposition.
**Proposition 6.1**.: _Suppose the sets \(\mathcal{M}\), \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\) are defined as in (7), (13) and (14), respectively. Then, \(\mathcal{M}_{\downarrow}\subseteq\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\)._
Since Theorem 4.10 and Theorem 5.4 are similar, we will see that, in fact, the boundaries of the outer approximation, \(\partial\mathcal{M}^{\uparrow}\), and of the inner approximation, \(\partial\mathcal{M}_{\downarrow}\), are both equal to the boundary of the potential solution region \(\partial\mathcal{M}\) for all values of \(r\). This means that we obtain the explicit characterization of \(\partial\mathcal{M}\) from Theorem 4.10 or Theorem 5.4. We present this result in the following theorem.
**Theorem 6.2**.: _Suppose \(\mathcal{M}\), \(\mathcal{M}^{\uparrow}\) and \(\mathcal{M}_{\downarrow}\) are defined as in (7), (13) and (14), respectively. Then, \(\partial\mathcal{M}=\partial\mathcal{M}^{\uparrow}=\partial\mathcal{M}_{\downarrow}\)._
Proof.: Recall from Proposition 6.1 that \(\mathcal{M}_{\downarrow}\subseteq\mathcal{M}\subseteq\mathcal{M}^{\uparrow}\). This entails that \((\mathcal{M}_{\downarrow})^{\circ}\subseteq(\mathcal{M})^{\circ}\subseteq(\mathcal{M}^{\uparrow})^{\circ}\) and \(\overline{\mathcal{M}_{\downarrow}}\subseteq\overline{\mathcal{M}}\subseteq \overline{\mathcal{M}^{\uparrow}}\). On the other hand, we have \((\mathcal{M}_{\downarrow})^{\circ}=(\mathcal{M}^{\uparrow})^{\circ}\) and \(\partial\mathcal{M}_{\downarrow}=\partial\mathcal{M}^{\uparrow}\) from Theorem 4.10 and Theorem 5.4. Combining these facts regarding the interiors, we obtain \((\mathcal{M}_{\downarrow})^{\circ}=(\mathcal{M})^{\circ}=(\mathcal{M}^{ \uparrow})^{\circ}\). Then, we can write
\[\overline{\mathcal{M}_{\downarrow}}=(\mathcal{M}_{\downarrow})^{\circ}\sqcup \partial\mathcal{M}_{\downarrow}=(\mathcal{M}^{\uparrow})^{\circ}\sqcup \partial\mathcal{M}^{\uparrow}=\overline{\mathcal{M}^{\uparrow}}.\]
Combining the above equation with \(\overline{\mathcal{M}_{\downarrow}}\subseteq\overline{\mathcal{M}}\subseteq \overline{\mathcal{M}^{\uparrow}}\), we can write \(\overline{\mathcal{M}_{\downarrow}}=\overline{\mathcal{M}}=\overline{ \mathcal{M}^{\uparrow}}\). Since \(\partial\mathcal{E}=\overline{\mathcal{E}}\setminus\mathcal{E}^{\circ}\) for any subset \(\mathcal{E}\) in a topological space, we conclude that \(\partial\mathcal{M}=\partial\mathcal{M}^{\uparrow}=\partial\mathcal{M}_{\downarrow}\).
Recall the definition of \(\mathcal{M}\), \(\mathcal{T}\), \(\{\partial\mathcal{B}_{i}\text{ for }i\in\{1,2\}\}\) and \(\{\mathcal{H}_{1}^{-},\mathcal{H}_{2}^{+}\}\) from (7), (22), (25), and (29), respectively. Assuming that \(\sigma_{1}\geq\sigma_{2}\), we summarize a characterization of the potential solution region \(\mathcal{M}\) as follows:
\[\begin{cases}\partial\mathcal{M}=\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*},\mathbf{x}_{2}^{*}\}&\text{if}\quad r\in\big(0,\frac{L}{2\sigma_{1}}\big],\\ \partial\mathcal{M}=\big[\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big]\sqcup\mathcal{T}\sqcup\{\mathbf{x}_{1}^{*}\}&\text{if}\quad r\in\big(\frac{L}{2\sigma_{1}},\frac{L}{2\sigma_{2}}\big],\\ \partial\mathcal{M}=\big[\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\big]\sqcup\big[\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\big]\sqcup\mathcal{T}&\text{if}\quad r\in\big(\frac{L}{2\sigma_{2}},\frac{L}{2}\big(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big)\big),\\ \mathcal{M}=\Big\{\Big(\frac{L}{2}\big(\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{2}}\big),\ \mathbf{0}\Big)\Big\}&\text{if}\quad r=\frac{L}{2}\big(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big),\\ \mathcal{M}=\emptyset&\text{if}\quad r\in\Big(\frac{L}{2}\big(\frac{1}{\sigma_{1}}+\frac{1}{\sigma_{2}}\big),\ \infty\Big),\end{cases}\]
where the first three equations are obtained by applying Theorem 4.10 and Theorem 5.4 to Theorem 6.2, and the last two equations are obtained by applying Theorem 4.10 and Theorem 5.4 to Proposition 6.1. Examples of the boundary \(\partial\mathcal{M}\) in \(\mathbb{R}^{2}\) with different values of \(r\) are shown in Fig. 1. In particular, the solid blue, cyan and magenta curves in the figure correspond to the sets \(\mathcal{T}\), \(\partial\mathcal{B}_{1}\cap(\mathcal{H}_{1}^{-})^{c}\) and \(\partial\mathcal{B}_{2}\cap(\mathcal{H}_{2}^{+})^{c}\), respectively.
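The five-way case split above is mechanical to apply; the helper below reproduces it for given parameters. It is purely illustrative, and we pick parameters for which the thresholds are exact in floating point so that the equality case is meaningful.

```python
def boundary_case(r, sigma1, sigma2, L):
    """Classify r against the thresholds in the display above (sigma1 >= sigma2)."""
    assert sigma1 >= sigma2 > 0 and L > 0 and r > 0
    t1, t2 = L / (2 * sigma1), L / (2 * sigma2)
    t3 = 0.5 * L * (1 / sigma1 + 1 / sigma2)
    if r <= t1:
        return "dM = T u {x1*, x2*}"
    if r <= t2:
        return "dM = [dB1 n (H1-)^c] u T u {x1*}"
    if r < t3:
        return "dM = [dB1 n (H1-)^c] u [dB2 n (H2+)^c] u T"
    if r == t3:
        return "M = {(L/2 (1/sigma1 - 1/sigma2), 0)}"
    return "M = empty"

# sigma1 = 2, sigma2 = 1, L = 8 gives exact thresholds t1 = 2, t2 = 4, t3 = 6.
for r in (1, 3, 5, 6, 7):
    print(r, "->", boundary_case(r, 2.0, 1.0, 8.0))
```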
## 7 Discussion and Conclusions
In this work, we studied the possible locations of the minimizer of the sum of two strongly convex functions. Based on the locations of the two minimizers of the individual functions, their strong convexity parameters, and a bound on the gradients at the minimizer of the sum, we established a necessary condition and a sufficient condition for a given point to be a minimizer, and called the sets of points satisfying these conditions the outer approximation \(\mathcal{M}^{\uparrow}\) and the inner approximation \(\mathcal{M}_{\downarrow}\), respectively. We then explicitly characterized the boundary and interior of the outer and inner approximations, and these characterizations turned out to be identical. Subsequently, we showed that the boundary of the potential solution region \(\partial\mathcal{M}\) is also identical to those boundaries. In particular, we showed that it is sufficient to consider quadratic functions to establish (almost) the entire set of potential minimizers. To visualize the boundary of the potential solution region \(\partial\mathcal{M}\), we provided examples with different distances between the two original minimizers in Fig. 1.
Our work in this paper focused on the case of two functions. Future work could include identifying the region in which the minimizer of the sum can lie in the case of more than two strongly convex functions. One could also modify some of the assumptions, for example by considering strongly convex functions whose gradients satisfy a Lipschitz continuity condition.
|
2302.00689 | Universal lower bound on topological entanglement entropy | Entanglement entropies of two-dimensional gapped ground states are expected
to satisfy an area law, with a constant correction term known as the
topological entanglement entropy (TEE). In many models, the TEE takes a
universal value that characterizes the underlying topological phase. However,
the TEE is not truly universal: it can differ even for two states related by
constant-depth circuits, which are necessarily in the same phase. The
difference between the TEE and the value predicted by the anyon theory is often
called the spurious topological entanglement entropy. We show that this
spurious contribution is always nonnegative, thus the value predicted by the
anyon theory provides a universal lower bound. This observation also leads to a
definition of TEE that is invariant under constant-depth quantum circuits. | Isaac H. Kim, Michael Levin, Ting-Chun Lin, Daniel Ranard, Bowen Shi | 2023-02-01T19:00:03Z | http://arxiv.org/abs/2302.00689v2 | # Universal lower bound on topological entanglement entropy
###### Abstract
Entanglement entropies of two-dimensional gapped ground states are expected to satisfy an area law, with a constant correction term known as the topological entanglement entropy (TEE). In many models, the TEE takes a universal value that characterizes the underlying topological phase. However, the TEE is not truly universal: it can differ even for two states related by constant-depth circuits, which are necessarily in the same phase. The difference between the TEE and the value predicted by the anyon theory is often called the _spurious_ topological entanglement entropy. We show that this spurious contribution is always nonnegative, thus the value predicted by the anyon theory provides a universal lower bound. This observation also leads to a definition of TEE which is invariant under constant-depth quantum circuits.
**Definition 1**.: _A state \(\sigma\) is a reference state if (i) the TEE calculated as \(\gamma_{0}=\frac{1}{2}I(A:C|B)_{\sigma}\) is the same for any choice of regions topologically equivalent to Fig. 1(a) and (ii) the mutual information between two subsystems is zero for any two non-adjacent subsystems._
Our main technical result is the following inequality, which holds for any reference state \(\sigma\), and for any circuit \(U\) whose depth is small compared to the radius and thickness of the annulus \(ABC\):
\[I(A:C|B)_{U\sigma U^{\dagger}}\geq I(A:C|B)_{\sigma}\equiv 2\gamma_{0}. \tag{5}\]
In other words, we show that constant-depth circuits can never decrease the TEE, _when acting on a reference state_.
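For readers who want to experiment, the conditional mutual information \(I(A:C|B)=S_{AB}+S_{BC}-S_{B}-S_{ABC}\) that appears throughout can be evaluated by exact diagonalization for small systems. The sketch below is ours and purely illustrative -- a three-qubit GHZ state rather than an annulus in a gapped ground state -- but it shows the bookkeeping.

```python
import numpy as np

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())       # natural logarithm

def partial_trace(rho, keep, dims):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        m = rho.ndim // 2                       # subsystems still present
        rho = np.trace(rho, axis1=q, axis2=q + m)
    d = int(np.prod([dims[q] for q in sorted(keep)]))
    return rho.reshape(d, d)

def cmi(rho, A, B, C, dims):
    """I(A:C|B) = S_AB + S_BC - S_B - S_ABC."""
    S = lambda R: von_neumann_entropy(partial_trace(rho, sorted(R), dims))
    return S(A + B) + S(B + C) - S(B) - S(A + B + C)

# Three-qubit GHZ state with A = {0}, B = {1}, C = {2}: I(A:C|B) = log 2.
ghz = np.zeros(8); ghz[0] = ghz[7] = 2 ** -0.5
print(cmi(np.outer(ghz, ghz), [0], [1], [2], [2, 2, 2]))  # 0.6931...
```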
To understand the implications of this result, note that the set of reference states includes all ground states of string-net [7; 8; 9] and quantum double models [6], and more generally any state satisfying the entanglement bootstrap axioms [16]. Furthermore, for any of these examples, the RHS of Eq. (5) is known to be equal to \(2\log\mathcal{D}\)[1; 2; 16]. Therefore, (5) implies the claimed lower bound (3) for any state obtained by a constant-depth circuit acting on a string-net, quantum double, or entanglement bootstrap state. (In fact, with heuristic arguments, we can go further and derive (3) for general 2D gapped ground states, as we explain in the discussion section). Similarly, we deduce (4) with \(\gamma_{\min}=\log\mathcal{D}\) for any state \(\rho\) given by a finite-depth circuit \(V\) applied to a reference state, where the minimum is achieved by the circuit \(U=V^{-1}\).
As a final comment, while we work in the plane for concreteness, our proof also applies to the TEE defined on any disk-like region embedded in an arbitrary manifold, such as a torus.
_Example: Toric code --_ To explain the key idea behind our proof, it is instructive to first focus on a concrete reference state \(\sigma\), namely the toric code ground state [6] on a plane. For this state, \(I(A:C|B)_{\sigma}=2\log 2\) so that \(\gamma_{0}=\log 2\)[5]. If we now apply a constant-depth quantum circuit \(U\), defining \(\widetilde{\sigma}=U\sigma U^{\dagger}\), in general \(I(A:C|B)_{\widetilde{\sigma}}\neq 2\log 2\). Nevertheless, we will show that for a sufficiently large annulus \(ABC\), we still have the lower bound
\[I(A:C|B)_{\widetilde{\sigma}}\geq 2\log 2. \tag{6}\]
We first prove the bound (6) for a special class of constant-depth circuits \(U\), namely those that are supported within a constant distance of \(BC\) [Fig. 1(b)]. Later we will extend this result to general constant-depth circuits.
Our basic strategy is to construct a state \(\widetilde{\lambda}\) that is "locally indistinguishable" from \(\widetilde{\sigma}\). More precisely, we will construct a state \(\widetilde{\lambda}\) that is indistinguishable from \(\widetilde{\sigma}\) over \(AB\) and \(BC\): that is, \(\widetilde{\lambda}_{AB}=\widetilde{\sigma}_{AB}\) and \(\widetilde{\lambda}_{BC}=\widetilde{\sigma}_{BC}\). We can then express \(I(A:C|B)_{\widetilde{\sigma}}\) in terms of \(I(A:C|B)_{\widetilde{\lambda}}\) using the identity
\[I(A:C|B)_{\widetilde{\sigma}}=I(A:C|B)_{\widetilde{\lambda}}+S(\widetilde{ \lambda}_{ABC})-S(\widetilde{\sigma}_{ABC}). \tag{7}\]
Applying the strong subadditivity of the entropy (SSA) [17], we know that \(I(A:C|B)_{\widetilde{\lambda}}\geq 0\), and therefore
\[I(A:C|B)_{\widetilde{\sigma}}\geq S(\widetilde{\lambda}_{ABC})-S(\widetilde{ \sigma}_{ABC}). \tag{8}\]
We will obtain the desired lower bound (6) from a judicious choice of \(\widetilde{\lambda}\).
The easiest way to construct an appropriate \(\widetilde{\lambda}\) is to first find a state \(\lambda\) that is locally indistinguishable from the toric code ground state \(\sigma\). More precisely, we need a \(\lambda\) that is indistinguishable from \(\sigma\) over the past light cone of \(AB\) and \(BC\) (with respect to \(U\)). Once we find such a \(\lambda\), we can then set \(\widetilde{\lambda}=U\lambda U^{\dagger}\).
How do we construct such a \(\lambda\)? We use a more general approach later; in the present case of the toric code, our approach is to consider a probabilistic mixture of toric code _excited_ states. More specifically, for each anyon type \(a\in\mathcal{C}=\{1,e,m,em\}\), we define a corresponding excited state \(\rho^{(a)}\) by \(\rho^{(a)}=V_{a}\sigma V_{a}^{\dagger}\), where \(V_{a}\) is a unitary (open) string operator that places an anyon excitation \(a\) in the interior of the annulus and its antiparticle in the exterior [Fig. 1(c)]. We then define \(\lambda=\sum_{a}p_{a}\rho^{(a)}\) for some probability distribution \(\{p_{a}:a\in\mathcal{C}\}\). Note that \(\lambda\) has the requisite indistinguishability property as long as the endpoints of the string operators \(V_{a}\) (where the anyons are created) are sufficiently far from the annulus that they are outside the past light cones of \(AB\) and \(BC\).
To proceed further, we need to evaluate the entropy difference \(S(\widetilde{\lambda}_{ABC})-S(\widetilde{\sigma}_{ABC})\). To do this, it is convenient to choose the path of the string operators \(V_{a}\) so that they avoid the region of support of the constant-depth circuit \(U\) [which by assumption is supported near \(BC\)]. Then \(V_{a}\) commutes with \(U\) so \(\widetilde{\lambda}\) can be written as a probabilistic mixture of the form
\[\widetilde{\lambda}=\sum_{a}p_{a}\widetilde{\rho}^{(a)},\qquad\widetilde{\rho }^{(a)}=V_{a}\widetilde{\sigma}V_{a}^{\dagger}. \tag{9}\]
Crucially, the \(\widetilde{\rho}^{(a)}\) states have two simplifying properties: (i) different \(\widetilde{\rho}^{(a)}_{ABC}\) are orthogonal to one another, and (ii) \(S(\widetilde{\rho}^{(a)}_{ABC})=S(\widetilde{\sigma}_{ABC})\). At an intuitive level, property (i) follows from the fact that each \(\widetilde{\rho}^{(a)}_{ABC}\) belongs to a different anyon sector on the annulus. More formally, (i) follows from the existence of a collection of (closed) string operators that are supported within \(ABC\) and that take on different eigenvalues in each state \(\widetilde{\rho}_{ABC}^{(a)}\). (These string operators are simply the closed versions of \(UV_{a}U^{\dagger}\) and they can be drawn within \(ABC\) as long as \(ABC\) is wider than twice the circuit depth of \(U\)). As for property (ii), this follows from the fact that the \(V_{a}\) are products of single-site unitaries; in particular, each \(V_{a}\) can be written as a product of a unitary acting entirely within \(ABC\) and a unitary acting entirely outside \(ABC\), neither of which changes the entanglement entropy of \(ABC\).

Figure 1: (a) The partition used to calculate the TEE. (b) Support of the unitary \(U\) considered. (c) String operator \(V_{a}\) creates an anyon \(a\) in the interior of the annulus and its antiparticle in the exterior.
Given properties (i) and (ii) of \(\widetilde{\rho}^{(a)}\), the entropy difference can be computed as
\[S(\widetilde{\lambda}_{ABC})-S(\widetilde{\sigma}_{ABC})=H(\{p_{a}\}), \tag{10}\]
where \(H(\{p_{a}\})=-\sum_{a}p_{a}\log(p_{a})\) is the Shannon entropy of the probability distribution \(\{p_{a}\}\). Substituting (10) into (8), we obtain
\[I(A:C|B)_{\widetilde{\sigma}}\geq H(\{p_{a}\}). \tag{11}\]
To get the best bound, we choose the probability distribution that maximizes \(H(\{p_{a}\})\), namely \(p_{a}=\frac{1}{4}\) for all \(a\in\mathcal{C}\). For this choice, \(H(\{p_{a}\})=2\log 2\), yielding the desired bound (6).
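The entropy bookkeeping in (9)-(11) -- mixing states with mutually orthogonal supports and equal entropies adds exactly the Shannon entropy of the weights -- is easy to check numerically. The toy block-diagonal example below is ours; it stands in for the four states \(\widetilde{\rho}^{(a)}_{ABC}\) but does not simulate the toric code.

```python
import numpy as np

def S(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

block = np.eye(2) / 2                          # one "sector", entropy log 2
p = np.full(4, 0.25)                           # uniform weights maximize H({p_a})
lam = np.zeros((8, 8))
for a in range(4):                             # block-diagonal (orthogonal) mixture
    lam[2*a:2*a+2, 2*a:2*a+2] = p[a] * block

print(S(lam) - S(block))                       # 1.3863... = 2 log 2
print(float(-(p * np.log(p)).sum()))           # H({p_a}) = 2 log 2, matching (10)
```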
To complete the argument, we now extend the bound (11) to general constant-depth circuits \(U\). First, recall that the entanglement entropy of a subsystem is invariant under unitaries that act exclusively within the subsystem or its complement. Using this fact, we can make the replacement
\[I(A:C|B)_{U\sigma U^{\dagger}}=I(A:C|B)_{U^{\prime}\sigma U^{\prime\dagger}}, \tag{12}\]
where \(U^{\prime}\) is a constant-depth quantum circuit that acts trivially deep in the interior of \(A\) and also trivially far outside \(ABC\) [Fig. 3(a)]. Here, we are using the fact that \(U\) is a constant-depth quantum circuit, and therefore we can "cancel out" its action in a subsystem \(S\) by multiplying by an appropriate unitary supported in the light cone of \(S\) (Fig. 2).
Next, using SSA, we know that \(I(A:C|B)\) is non-increasing when the region \(A\) is reduced in size. Therefore
\[I(A:C|B)_{U^{\prime}\sigma U^{\prime\dagger}}\geq I(A^{\prime}:C|B)_{U^{ \prime}\sigma U^{\prime\dagger}} \tag{13}\]
where \(A^{\prime}\subset A\) is shown in Fig. 3(b). Finally, applying the same reasoning as in (12), we can replace
\[I(A^{\prime}:C|B)_{U^{\prime}\sigma U^{\prime\dagger}}=I(A^{\prime}:C|B)_{U^{\prime\prime}\sigma U^{\prime\prime\dagger}} \tag{14}\]
where \(U^{\prime\prime}\) is a constant-depth circuit acting on the region shown in Fig. 3(c). Combining (12-14), we deduce that
\[I(A:C|B)_{U\sigma U^{\dagger}}\geq I(A^{\prime}:C|B)_{U^{\prime\prime}\sigma U^{\prime\prime\dagger}}. \tag{15}\]
The lower bound (15) is useful because it allows us to leverage our results from the first part of the proof. In particular, \(U^{\prime\prime}\) is precisely the kind of constant-depth quantum circuit that we analyzed above, and therefore we know that \(I(A^{\prime}:C|B)_{U^{\prime\prime}\sigma U^{\prime\prime\dagger}}\geq 2\log 2\) for any sufficiently large annulus \(A^{\prime}BC\). Substituting this inequality into (15), we obtain the desired bound (6).
_General case --_ Our proof for the toric code proceeded in three steps. First, we derived a lower bound (8) for \(I(A:C|B)_{\widetilde{\sigma}}\) in terms of the entropy difference \(S(\widetilde{\lambda}_{ABC})-S(\widetilde{\sigma}_{ABC})\) where \(\widetilde{\lambda}\) is any state that is indistinguishable from \(\widetilde{\sigma}\) over \(AB\) and \(BC\). Second, we constructed an appropriate \(\widetilde{\lambda}\) and computed the desired entropy difference (10) in the special case where \(U\) is a constant-depth circuit supported within a constant distance of \(BC\) [Fig. 1(b)]. Combining these two results, we obtained the desired lower bound, but only for this special class of circuits \(U\). In the third and final step, we extended this bound to arbitrary constant-depth circuits \(U\) using the inequality (15).
Conveniently, the first and third steps of our proof immediately generalize to any reference state \(\sigma\) since they do not use any properties of the toric code. On the other hand, in the second step, we used the specific structure of the toric code string operators1, so we need a different argument for this step in the general case. In particular, instead of defining \(\widetilde{\lambda}\) in terms of a mixture of excited states, we will now define it in terms of the _maximum-entropy state_, or max-entropy state for short. Consider a larger annulus \(Y=ABC\cup\operatorname{Supp}(U)\), with \(U\) again as in [Fig. 1(b)]. Define a density matrix \(\lambda\) to be the maximum-entropy state consistent with the reduced density matrices of \(\sigma\) over the past light cones of \(AB\) and \(BC\). We then define \(\widetilde{\lambda}=U\lambda U^{\dagger}\).

Figure 2: For any constant-depth circuit \(U\), and for any subsystem \(S\), we can obtain a circuit \(U^{\prime}\) of the same depth acting trivially on \(S\), by removing from \(U\) the “light cone” (white gates) of \(S\).

Figure 3: (a) We first remove gates from \(U\) on a “hole” within \(A\); we call the new circuit \(U^{\prime}\). (b) We then deform \(A\) to \(A^{\prime}\subset A\) such that the boundary of the annulus \(A^{\prime}BC\) is only partially covered. (c) We then further remove some of the gates in the vicinity of \(A^{\prime}\), obtaining \(U^{\prime\prime}\).
By construction, \(\widetilde{\lambda}\) is indistinguishable from \(\widetilde{\sigma}\) over \(AB\) and \(BC\) and therefore the lower bound (8) still holds. The only remaining question is the value of the entropy difference on the right-hand side. We claim that
\[S(\widetilde{\lambda}_{ABC})-S(\widetilde{\sigma}_{ABC})=S(\lambda_{Y})-S( \sigma_{Y}) \tag{16}\]
and in turn
\[S(\lambda_{Y})-S(\sigma_{Y})=2\gamma_{0} \tag{17}\]
provided that each subsystem \(A,B\), and \(C\) is sufficiently large compared to the circuit depth. See Eq. (10) in the Supplemental Material (SM) for a self-contained derivation of Eq. (17) starting from Definition 1. We will then be finished once we prove Eq. (16).
We now present the proof of (16). Our main tool is the following lemma, which gives a condition under which the entropy difference of two bipartite states \(\rho\) and \(\rho^{\prime}\) remains invariant under a quantum channel \(\mathcal{R}\).
**Lemma 1**.: _Let \(\rho\) and \(\rho^{\prime}\) be density matrices over \(PQ\) such that \(\rho^{\prime}_{Q}=\rho_{Q}\) and \(\rho^{\prime}_{P}=\rho_{P}\). Let \(\mathcal{R},\mathcal{T}\) be a pair of quantum channels \(\mathcal{R}:Q\to\widehat{Q}\) and \(\mathcal{T}:\widehat{Q}\to Q\). If_
\[\mathcal{T}\circ\mathcal{R}(\rho_{PQ})=\rho_{PQ} \tag{18}\] \[\mathcal{T}\circ\mathcal{R}(\rho^{\prime}_{PQ})=\rho^{\prime}_{PQ}\]
_then_
\[S(\rho_{PQ})-S(\rho^{\prime}_{PQ})=S(\mathcal{R}(\rho_{PQ}))-S(\mathcal{R}( \rho^{\prime}_{PQ})). \tag{19}\]
We present the proof of Lemma 1 in the SM.
To apply Lemma 1 to our setup, we let \(\rho=\overline{\lambda}\) and \(\rho^{\prime}=\overline{\sigma}\) where \(\overline{\lambda}=\overline{U}\lambda\overline{U}^{\dagger}\) and \(\overline{\sigma}=\overline{U}\sigma\overline{U}^{\dagger}\) and where \(\overline{U}\) is a unitary obtained by removing the gates in \(U\) that are deep in the interior of the annulus \(ABC\) [Fig. 4(a-b)]. We then let \(P,Q\) be a partition of the annulus \(ABC\) of the form shown in Fig. 4, such that \(P\subset A\) is sufficiently far away from the support of \(\overline{U}\).2 By construction, \(\overline{\lambda}\) and \(\overline{\sigma}\) are indistinguishable on \(P\). In Appendix E, we show that the two states are indistinguishable on \(Q\) as well [Eq. (11)], thus fulfilling the premise of Lemma 1.
Footnote 2: More precisely, \(P\) must be at least two lattice spacings away from the support of \(\overline{U}\); see Fig. 14 in the Supplemental Material for details.
Below we will construct quantum channels \(\mathcal{R}:Q\to\widehat{Q}\) and \(\mathcal{T}:\widehat{Q}\to Q\), with \(\widehat{Q}\equiv Q\cup\overline{u}\) where \(\overline{u}\) is the support of \(\overline{U}\). These will obey (18) with \(\mathcal{R}(\overline{\lambda}_{PQ})=\lambda_{PQ\cup\overline{u}}\) and \(\mathcal{R}(\overline{\sigma}_{PQ})=\sigma_{PQ\cup\overline{u}}\). Because \(PQ\cup\overline{u}=Y\) and also \(S(\overline{\lambda}_{PQ})=S(\widetilde{\lambda}_{PQ})\) and \(S(\overline{\sigma}_{PQ})=S(\widetilde{\sigma}_{PQ})\), once we construct these channels, we can immediately deduce Eq. (16) from Lemma 1. This will then complete our proof of the bound (5), as explained earlier.
Now let us discuss our construction of \(\mathcal{R}\) and \(\mathcal{T}\). These maps are constructed from compositions of \(\overline{U}\), partial trace, and the Petz map [19].
In order to make the construction of \(\mathcal{R}\) and \(\mathcal{T}\) transparent, we will depict \(\overline{u}\) as two disks of smaller sizes, as shown in Fig. 4(c). Loosely speaking, \(\mathcal{R}\) removes the circuits in the disks and \(\mathcal{T}\) adds them back.
The map \(\mathcal{R}\) is constructed by applying a partial trace followed by a Petz map. The construction is best described by Fig. 5. In the first step, we trace out the region \(Q\cap\overline{u}\). This step effectively removes the circuit \(\overline{U}\), mapping \(\overline{\lambda}_{PQ}\) to \(\lambda_{PQ\setminus\overline{u}}\) and \(\overline{\sigma}_{PQ}\) to \(\sigma_{PQ\setminus\overline{u}}\). In the second step, we apply the Petz map \(\Phi^{\sigma}_{v\to v\overline{u}}\). We show in the Supplemental Material, Eq. (12), that this step extends \(\lambda_{PQ\setminus\overline{u}}\) to \(\lambda_{PQ\cup\overline{u}}\) and \(\sigma_{PQ\setminus\overline{u}}\) to \(\sigma_{PQ\cup\overline{u}}\):
\[\lambda_{PQ\cup\overline{u}}=\Phi^{\sigma}_{v\to v\overline{u}}(\lambda_{PQ \setminus\overline{u}}),\ \sigma_{PQ\cup\overline{u}}=\Phi^{\sigma}_{v\to v\overline{u}}(\sigma_{PQ \setminus\overline{u}}). \tag{20}\]
Combining the two steps we see that \(\mathcal{R}\) maps \(\overline{\lambda}_{PQ}\) to \(\lambda_{PQ\cup\overline{u}}\) and \(\overline{\sigma}_{PQ}\) to \(\sigma_{PQ\cup\overline{u}}\), as required. As for the map \(\mathcal{T}\), this can be constructed by simply applying \(\overline{U}\) and tracing out \(\overline{u}\setminus Q\). Clearly these operations map \(\lambda_{PQ\cup\overline{u}}\) to \(\overline{\lambda}_{PQ}\) and \(\sigma_{PQ\cup\overline{u}}\) to \(\overline{\sigma}_{PQ}\), as required.
Figure 4: Using the procedure in Fig. 2, we can remove the gates in \(U\) that act deep in the interior of the annulus without changing the entropy; starting from a \(U\), whose support is depicted as the blue region in (a), we obtain a unitary \(\overline{U}\), whose support is shown in (b). At this point, the support of \(\overline{U}\) is a union of two disks, which, under regrouping of sites, is topologically equivalent to (c).

_Discussion --_ We remark that our lower bound (5) can be easily generalized to the case where \(\sigma\) contains an anyon in the interior of the annulus. All that was needed in our proof was the invariance of the TEE of \(\sigma\) under topology-preserving deformations of the annulus, a fact that remains true even in the presence of an anyon. In this case, we get a lower bound of \(\gamma_{0}=\log(\mathcal{D}/d_{a})\)[16], assuming anyon \(a\) is placed in the interior.
We emphasize again that we have proven the lower bound (3) for any state obtained by a constant-depth circuit acting on a valid reference state, where in particular, the reference state could be any string-net ground state [7; 8; 9]. Using this result, we can argue heuristically that (3) holds for general 2D bosonic gapped ground states. The key point is that string-net ground states are believed to realize all "doubled" 2D topological phases obtained by stacking a bosonic topological phase onto its time-reversed partner. So one expects that for any 2D bosonic gapped ground state \(\rho\), we should be able to write the doubled state \(\rho\otimes\rho^{*}\) (realized in a bilayer geometry) as \(\rho\otimes\rho^{*}=U\sigma U^{\dagger}\) for some valid reference state \(\sigma\). Hence the lower bound (3) should apply to \(\rho\otimes\rho^{*}\) and therefore also to \(\rho\) itself (since the TEE of \(\rho\otimes\rho^{*}\) is exactly twice that of \(\rho\), as is the total quantum dimension \(\log\mathcal{D}\)). However, a careful argument in this direction would require a treatment of _quasi-local_ circuits \(U\), i.e. circuits with decaying tails, which arise from the adiabatic flow of Hamiltonians [20; 21]. We leave this question to future work.
Our proof relies on Markov state techniques supported by strong subadditivity and the fact that constant-depth circuits can be truncated in certain ways without affecting the entropies of regions. Due to the generality of these techniques, we expect that similar lower bounds can be derived in a variety of other setups, including systems with defects or higher dimensional systems. Note, however, that the TEE can also be defined using the alternative partition in Ref. [1], for which it is unclear if our argument applies. It would be interesting to understand this partition better.
It is worth noting that our Definition 1 and bound (5) apply beyond the scope of area law states. As a concrete example, coupling an area law reference state to a hot surface (modeled by an identity density matrix) for a short period of time cannot decrease the TEE of the joint system. We also speculate that similar arguments may apply to show that the TEE of the 3D toric code at finite temperature [22] cannot decrease under a circuit.
An important question for future work is to understand how generically our bound (3) is _saturated._ That is, how much fine-tuning is required to obtain a spurious contribution to the TEE that does not decay for regions of increasing size? There are some hints in this direction [11; 13; 14; 23], but the general question remains open.
## Acknowledgements
D.R. thanks Aram Harrow and Jonathan Sorce for discussion. D.R. acknowledges support from NTT (Grant AGMT DTD 9/24/20). This work was supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651442, M.L.; 652264, B.S.). T.C.L. was supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under the cooperative research agreement DE-SC0009919.
|
2304.04761 | Connecting Fairness in Machine Learning with Public Health Equity | Machine learning (ML) has become a critical tool in public health, offering
the potential to improve population health, diagnosis, treatment selection, and
health system efficiency. However, biases in data and model design can result
in disparities for certain protected groups and amplify existing inequalities
in healthcare. To address this challenge, this study summarizes seminal
literature on ML fairness and presents a framework for identifying and
mitigating biases in the data and model. The framework provides guidance on
incorporating fairness into different stages of the typical ML pipeline, such
as data processing, model design, deployment, and evaluation. To illustrate the
impact of biases in data on ML models, we present examples that demonstrate how
systematic biases can be amplified through model predictions. These case
studies suggest how the framework can be used to prevent these biases and
highlight the need for fair and equitable ML models in public health. This work
aims to inform and guide the use of ML in public health towards a more ethical
and equitable outcome for all populations. | Shaina Raza | 2023-04-08T10:21:49Z | http://arxiv.org/abs/2304.04761v1 | # Connecting Fairness in Machine Learning with Public Health Equity
###### Abstract
Machine learning (ML) has become a critical tool in public health, offering the potential to improve population health, diagnosis, treatment selection, and health system efficiency. However, biases in data and model design can result in disparities for certain protected groups and amplify existing inequalities in healthcare. To address this challenge, this study summarizes seminal literature on ML fairness and presents a framework for identifying and mitigating biases in the data and model. The framework provides guidance on incorporating fairness into different stages of the typical ML pipeline, such as data processing, model design, deployment, and evaluation. To illustrate the impact of biases in data on ML models, we present examples that demonstrate how systematic biases can be amplified through model predictions. These case studies suggest how the framework can be used to prevent these biases and highlight the need for fair and equitable ML models in public health. This work aims to inform and guide the use of ML in public health towards a more ethical and equitable outcome for all populations.
Fairness; Equity; Public Health; Machine Learning.
## I Introduction
Health equity [1] is a crucial principle in public health, which aims to eliminate disparities in health outcomes and healthcare access among various populations. The World Health Organization (WHO) [2] and the United Nations (UN) [3] both prioritize health equity as a key aspect of their missions to improve global health outcomes. However, despite these efforts, disparities in health outcomes continue to persist, particularly among marginalized and underserved populations [4].
Machine learning (ML) has the potential to transform the way we approach health and healthcare with its advanced analytical and predictive capabilities. ML can aid in comprehending complex health systems, identifying disease patterns and trends, and improving patient outcomes [5]. However, it is essential to exercise caution when employing ML and consider potential biases and inequalities that may be present in the data used to train these models [6, 7]. Such biases can lead to discrimination and unjust outcomes for specific populations, exacerbating existing health disparities [8].
This study aims to promote health equity through ML by reviewing the literature on ML fairness and presenting a novel ML pipeline approach to integrate fairness into various stages of a standard ML pipeline. Although fair ML has been explored in artificial intelligence (AI) literature [9, 10], its implementation in public health has received limited attention. This study endeavors to provide the public health community with accessible methods for ensuring equitable outcomes when using ML. The specific contributions of this research are:
* Summarizing the concepts of fair ML and presenting an ML pipeline approach for public health use to achieve equitable outcomes.
* Providing examples that demonstrate the importance of the pipeline approach and how disparities can be amplified and mitigated through ML.
* Offering straightforward and accessible methods for the public health community to incorporate fairness in their use of ML.
Unlike previous studies [6, 11, 12, 13] that have primarily focused on specific applications of ML in healthcare, this review adopts a different approach by presenting a methodology for incorporating fairness during various stages of a standard ML pipeline. This pipeline idea is proposed based on a thorough review of pertinent literature on fair ML. The study's primary focus is on promoting health equity for marginalized and underserved populations and on providing straightforward and accessible methods for the public health community.
## II Background
Health equity [1, 14] is a fundamental principle in public health that emphasizes the importance of ensuring equal access to healthcare resources, opportunities, and outcomes for all individuals, regardless of their background or social status. It seeks to address the root causes of health disparities and strives to eliminate barriers that prevent certain populations from achieving optimal health.
Fairness in ML [9, 15, 16] refers to the development and application of ML models that minimize biases and disparities in their predictions, ensuring equitable outcomes for different groups in a population. As ML models are increasingly used in various domains, including healthcare, finance, and criminal justice, it is crucial to address potential biases in the data and algorithms used. Fairness in ML involves identifying and mitigating these biases to prevent discriminatory consequences and to promote equitable decision-making [17]. This subsection discusses previous studies that have investigated the use of ML in the field of health, with a focus on fairness and equity.
Rajkomar et al. [18] emphasize the importance of ensuring fairness in ML to advance health equity in healthcare. They highlight the need to implement inclusive and equitable research guidelines and to utilize technical solutions that address biases in the models, preventing these biases from being amplified and thereby contributing to the promotion of health equity.
Fletcher et al. [19] address the use of ML and artificial intelligence (AI) in Low- and Middle-Income Countries (LMICs). They suggest the use of three criteria, Appropriateness, Fairness, and Bias, to help evaluate the use of AI and ML in the global health context. Appropriateness involves deciding the appropriate use of the algorithm in the local context and matching the model to the target population. Bias refers to systematic tendencies in the model that may favor one demographic group over another. Fairness involves examining the impact on various demographic groups and choosing a mathematical definition of group fairness that satisfies cultural, legal, and ethical requirements.
Mhasawade et al. [20] discuss the potential for using ML in the field of public and population health. They highlight the importance of considering the connection between social, cultural, and environmental factors and their effect on health. The authors also emphasize the importance of addressing health equity and disparities through ML methods that focus on algorithmic fairness.
Thomasian et al. [7] discuss the impact of AI in the field of population and public health and the potential consequences of unmitigated bias in AI-driven health care. They argue that a consensus on the regulation of algorithmic bias at the policy level is necessary to ensure the ethical integration of AI into the health system. The authors present three overarching principles for mitigating bias in healthcare AI and call for a framework for federal oversight.
Wesson et al. [21] discuss the potential benefits and drawbacks of using big data in public health research. They caution against perpetuating discriminatory practices and highlight the importance of incorporating an equity lens in order to advance equity in public health. The authors frame the concept of a sixth V, virtuosity, in the big data context of the five Vs (volume, velocity, veracity, variety, and value). The idea is to encompass equity and justice frameworks and provide examples of analytic approaches to improving equity in big data.
Sikstrom et al. [22] conduct a comprehensive environmental scan of literature on fairness in the context of AI and ML. The main aim of their study is to advance efforts in operationalizing fairness in medicine by synthesizing a broad range of literature. The authors searched electronic databases and conducted hand searches to gather data from 213 selected publications, which were then analyzed using rapid framework analysis. The search and analysis were conducted in two rounds to explore both pre-identified and emerging issues related to fairness in ML.
Gervasi et al. [23] address the issue of fairness, equity, and bias in the use of ML algorithms in the health insurance industry. They provide a guide to the data ecosystem used by health insurers and highlight potential sources of bias in the machine learning pipelines.
Gichoya et al. [11] discuss the use of ML in healthcare and the need to establish guidelines and protocols for its development and implementation. They note that while technical considerations are well addressed in current guidelines, there is a lack of engagement with issues of fairness, bias, and unintended disparate impact.
In summary, these works discuss the rapid growth of ML for healthcare and the need to formalize processes and procedures for characterizing and evaluating the performance of these tools.
## III Fair Machine Learning Framework
### _Preliminaries_
The definitions of key terms used in this work have been extracted from relevant literature sources [9, 17, 24] and given below.
* _Bias_: A systematic error in an ML model, data or even research.
* _Fairness_: The process of addressing biases present in data and algorithms to mitigate their adverse impact.
* _Protected Attributes_: Characteristics that divide a population into distinct groups (e.g. race, gender, age).
* _Privileged Values of Protected Attributes_: Characteristics that denote groups that have a systematic advantage (e.g., male gender, white race in many cases).
* _Underprivileged Groups of Protected Attributes_: Characteristics that denote groups that face prejudice.
* _Favorable Outcome:_ A desirable result, such as obtaining a loan or insurance.
* _Demographic Parity_: A measure of fairness that requires that the distribution of positive predictions be equal for different protected attribute groups.
* _Disparate Impact (DI)_: A metric used to evaluate fairness in ML models. DI compares the percentage of favorable outcomes for unprivileged groups to the percentage of favorable outcomes for privileged groups.
* _Equal Opportunity_: A measure of fairness that requires that the true positive rate be equal across different protected attribute groups.
* _Counterfactual Fairness_: A measure of fairness that requires that the outcomes of a model's predictions be fair for different counterfactual scenarios, e.g., scenarios where a person's protected attribute values are different.
### _Pipeline Approach_
Our framework for achieving fairness in ML is based on an extensive literature review. The framework, shown in Figure 1, is structured as a pipeline [25] approach, with specific steps outlined below to create fair and unbiased models.
_Data Pre-processing:_ This step involves preparing the data for the model by cleaning, normalizing, and transforming it, as well as splitting it into training, validation, and test sets. In this stage, fairness is achieved through pre-processing techniques such as editing feature values [26], learning fair representations [27], and re-sampling or re-weighting [28] techniques. The end result of this step is a dataset that is prepared for the next stage in the pipeline.
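As an illustration of the re-weighting idea mentioned above, the sketch below assigns each record a weight so that the protected attribute and the outcome appear statistically independent, in the spirit of the reweighing technique of [28]; the column names are hypothetical placeholders.

```python
import pandas as pd

def reweighing_weights(df, protected="race", label="outcome"):
    """Weight each row by P(A=a) * P(Y=y) / P(A=a, Y=y), so that the
    protected attribute A and the label Y appear independent after
    weighting. The column names are illustrative placeholders."""
    n = len(df)
    p_a = df[protected].value_counts() / n
    p_y = df[label].value_counts() / n
    p_ay = df.groupby([protected, label]).size() / n
    return df.apply(
        lambda row: p_a[row[protected]] * p_y[row[label]]
        / p_ay[(row[protected], row[label])],
        axis=1,
    )
```

The resulting weights can then be passed to most Scikit-learn estimators through their `sample_weight` training argument.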
_Algorithmic Learning and Training_: This step involves selecting a suitable ML algorithm and training the model using the prepared data. In this stage, fairness is achieved through in-processing techniques such as fair classification, clustering, adversarial learning [29], and counterfactual fair learning [30] algorithms.
_Validate Model_: This step involves evaluating the performance of the model using various metrics such as accuracy, precision, recall, F1 score, etc. It also includes tuning the hyperparameters to improve the model performance. Fairness in this step, also known as post-processing fairness, is achieved by employing techniques such as counterfactual analysis, calibration, and interpretability techniques, which work by altering the predictions made by a model [31].
Fig. 1: Fair Machine Learning Pipeline
_Deploy:_ This step involves deploying the final model in a production environment [25], where it can be used to make predictions on new data. It is important to note that monitoring and testing the model performance on an ongoing basis is crucial to ensure continued fairness and bias mitigation.
By following this pipeline approach, the proposed framework ensures that fairness is considered and integrated throughout the entire ML process, from data pre-processing to deployment. This holistic approach to fairness helps to mitigate biases and ensure the development of fair and unbiased models.
## IV Example use cases
In the field of precision medicine and public health, researchers have investigated potential racial disparities in various aspects, such as the administration of a genetic test for a specific type of cancer [32] and the impact of COVID-19 on different demographic groups [33]. The findings from these studies are both alarming and compelling, as they demonstrate that biases present in healthcare models can lead to systemic unfair treatment and adverse outcomes for underprivileged groups. If such biases remain unaddressed within the data, it is highly likely that they will be exacerbated through the predictions generated by ML models. This highlights the urgent need to develop and implement fair and unbiased ML algorithms to ensure equitable access to healthcare and improve outcomes for all individuals, regardless of their demographic background.
The first step in addressing biases in ML models is to identify protected attributes and their corresponding privileged and unprivileged groups. In the genetic test for cancer example [32], the authors analyzed a large electronic health record dataset and discovered that African American patients were significantly less likely to receive the genetic test compared to white and/or other race patients, despite having a higher incidence of the cancer the test was designed to detect. In the COVID-19 use case example [33], the study found that COVID-19 disproportionately affected communities of color, with African American, Hispanic, and Native American populations experiencing higher rates of infection, hospitalization, and death compared to white populations.
To address these issues, the Fair ML pipeline presented in this work can be utilized to ensure that the algorithms used for predictions and decisions are fair and unbiased. Fairness methods can be incorporated at specific stages of the ML pipeline, depending on the nature of the problem and the biases identified in the dataset.
During the data pre-processing stage, the dataset can be scrutinized to detect potential sources of bias and discrimination, and adjustments can be made to guarantee balanced and representative data of the population of interest. Fairness metrics, such as disparate impact [26] or equal opportunity [34], can be applied to measure fairness in the data.
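For concreteness, a minimal sketch of the disparate impact ratio defined earlier is given below; the 0/1 group coding and the favorable-outcome coding are assumptions, and ratios below the commonly cited four-fifths threshold are usually read as evidence of adverse impact.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """DI = P(favorable outcome | unprivileged) / P(favorable | privileged).
    Assumes favorable predictions are coded 1 and the privileged group
    is coded 1; both codings are illustrative assumptions."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Example: a DI of 0.25, well below 0.8, suggests group 0 is disadvantaged.
print(disparate_impact([1, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1]))
```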
In the algorithmic learning and training stage, the ML model can be trained on the transformed data (from the pre-processing stage) to ensure fairness and unbiasedness across different demographic groups. Techniques like regularization [34] or adversarial learning [35] can be employed to encourage the model to make equally accurate predictions for all individuals, and fairness metrics can be used to measure and correct bias during the training process.
In the validation stage, if biases are identified in the model's predictions, post-processing fairness techniques, such as equalized odds [36] or calibrated equalized odds [31], can be applied. These techniques, such as adjusting the decision threshold or predicted probabilities, ensure that the model's predictions are equally accurate for all individuals.
By incorporating fairness methods at relevant stages of the ML pipeline, the proposed framework can help address the biases identified in the precision medicine and COVID-19 outcome examples, ensuring that the resulting models are fair, equitable, and produce accurate, unbiased predictions for all individuals, irrespective of their race or ethnicity. This approach not only mitigates potential sources of bias in healthcare datasets but also contributes to improved health outcomes for marginalized and underrepresented populations, ultimately leading to a more just and equitable healthcare system for all.
## V Discussion
### _Main Findings_
Fairness in ML plays a vital role in promoting public health equity by addressing biases in healthcare models and ensuring that predictive algorithms are both accurate and equitable. By connecting the principles of fair ML [37, 9] with the broader goals of public health equity, scientists and healthcare professionals can work together to develop innovative techniques and methodologies that address existing biases and disparities. This collaborative approach not only mitigates the potential for discriminatory outcomes but also contributes to improved health outcomes for all individuals, regardless of their demographic background.
Furthermore, by integrating interdisciplinary knowledge from fields such as computer science, statistics, and public health, researchers can develop robust fairness-aware ML algorithms that uncover hidden patterns, enabling a deeper understanding of the complex relationship between demographic factors and health outcomes. However, addressing public health equity requires a comprehensive approach that includes analyzing health disparities, targeting interventions, addressing social determinants of health [38, 39], and promoting equal access to healthcare services. By incorporating fair ML models into public health decision-making processes, we can ensure that predictions and decisions are fair and do not perpetuate or exacerbate existing health disparities. Ultimately, the connection between fair ML and public health equity is critical to creating a more just and equitable system for all.
### _Practical Impact_
The practical impact of fairness in ML can be significant in several ways. Fairness helps prevent discrimination and bias in predictive models, ensuring equitable decision-making and reducing the risk of harmful actions based on flawed models [20]. This can lead to greater trust in AI systems. Moreover, fair ML can lead to better outcomes for individuals and society as a whole. In healthcare, fair predictive models can help identify high-risk patients, enabling targeted care and interventions, leading to improved patient outcomes and reduced healthcare costs [19, 40].
### _Limitations_
The use of fairness in ML is subject to several limitations. There is a potential for over-correction or unintended consequences, introducing new biases or errors into the model. Additionally, there is difficulty in defining and measuring fairness, leading to different outcomes and interpretations. The lack of diverse and representative data is another challenge, potentially resulting in unfair or inaccurate models. Furthermore, the complexity of models makes interpretation and understanding challenging.
In the public health domain, additional limitations include small sample sizes and imbalanced datasets, making it difficult to train fair and accurate models. There is also the potential for confounding variables affecting accuracy and fairness. Finally, there are challenges in integrating predictive models into existing healthcare systems and workflows.
### _Recommendations_
To address public health equity, a comprehensive approach is required. One important recommendation is to analyze health disparities among different population groups and target interventions accordingly. Another strategy is to address the social determinants of health, such as poverty, education, and access to healthcare. These underlying issues play a major role in health disparities, and addressing them can help reduce those disparities.
Increasing access to quality healthcare for all, regardless of socioeconomic status, is also essential for promoting health equity. In addition, government policies that address issues such as poverty, education, and access to healthcare can help promote health equity. Engaging and empowering communities is crucial to ensure that interventions are tailored to their specific needs and are more likely to be successful.
Regularly monitoring and evaluating interventions is also essential for understanding their effectiveness and making any necessary adjustments to improve their impact. Finally, using fair ML models that align with the principles of distributive justice, such as equity, representation, and accountability, can help ensure that predictions and decisions made by the models are fair and do not perpetuate or exacerbate existing health disparities.
The connection between fair ML and public health equity is critical, as the use of unbiased ML models can aid in the reduction of existing health disparities and promote equal access to healthcare services. We can effectively analyze and address the underlying factors that contribute to health inequities in various population groups by ensuring that ML algorithms are fair and unbiased. Fair ML models can be used to identify at-risk populations, optimize resource allocation, and tailor public health interventions to specific community needs. Furthermore, incorporating fair ML into public health decision-making processes aids in the elimination of potential biases in predictive models, preventing the perpetuation or exacerbation of existing health disparities.
## VI Conclusion
This paper has explored the importance of fairness in machine learning, providing a comprehensive framework for developing fair and accurate predictive models. Through the examination of various case studies in the public health domain, this paper has demonstrated the practical applications and potential impact of fair ML models on healthcare outcomes. By offering
recommendations and addressing limitations, this paper contributes to the ongoing discourse on the responsible use of ML in public health and other domains, paving the way for future research and advancements in ethical AI.
|
2306.15015 | Scaling and Resizing Symmetry in Feedforward Networks | Weight initialization in deep neural networks has a strong impact on the
speed of convergence of the learning map. Recent studies have shown that in the
case of random initializations, a chaos/order phase transition occurs in the
space of variances of random weights and biases. Experiments have then shown
that large improvements can be made, in terms of the training speed, if a
neural network is initialized on values along the critical line of such phase
transition. In this contribution, we show evidence that the scaling property
exhibited by physical systems at criticality is also present in untrained
feedforward networks with random weight initialization at the critical line.
Additionally, we suggest a data-resizing symmetry, which is
directly inherited from the scaling symmetry at criticality. | Carlos Cardona | 2023-06-26T18:55:54Z | http://arxiv.org/abs/2306.15015v1 | # Scaling and Resizing Symmetry in Feedforward Networks
###### Abstract
Weight initialization in deep neural networks has a strong impact on the speed of convergence of the learning map. Recent studies have shown that in the case of random initializations, a chaos/order phase transition occurs in the space of variances of random weights and biases. Experiments have then shown that large improvements can be made, in terms of training speed, if a neural network is initialized on values along the critical line of such phase transition. In this contribution, we show evidence that the scaling property exhibited by physical systems at criticality is also present in untrained feedforward networks with random weight initialization at the critical line. Additionally, we suggest a data-resizing symmetry, which is directly inherited from the scaling symmetry at criticality.
[email protected]
## 1 Introduction
Deep learning achievements during the last decade are quite impressive in fields such as pattern recognition [1], speech recognition [2], large language models [3, 4], video and board games [5, 6] and neurobiology [7, 8], just to mention a few. This success has come with increasingly larger network sizes, depths and complexity [9, 10], which often lead to undesired effects such as exploding or vanishing gradients [11] that hinder the minimization of the cost function.
Early work in deep learning [11, 12] has shown that exploding or vanishing gradients crucially depend on weight initialization, and hence a properly chosen random distribution of weights at initialization can prevent them.
The dependence of correlations on the depth of untrained, randomly initialized feedforward networks has been studied in [13, 14, 15], where it has been found that random Gaussian initialization develops an order/chaos phase transition in the space of weight-bias variances, which in turn translates into the development of vanishing/exploding gradients. Even more interestingly, the critical line separating the ordered from the chaotic phase acts as a region where information can be propagated over very deep scales and, as such, signals the ability to train extremely deep networks at criticality. This phase transition has been observed so far also in convolutional networks [16], autoencoders [17] and recurrent networks [18].
In this paper, we build on the results of [14, 15] by looking a bit closer into the phase transition properties. After a quick review of the relevant results from [14, 15], we offer an analogous view of the covariance matrix propagation through a feedforward network in terms of a two-dimensional statistical physical system. Based on such analogy, we propose an order parameter associated to the phase transition and show that a scaling symmetry in depth arises at the critical phase, for which we numerically compute some critical exponents. Then we argue, at the level of conjecture, that such scaling symmetry translates into a resizing symmetry for other dimensions of the feedforward network, such as input data size, hidden layer width and stochastic gradient descent batch size. Finally, we perform experiments showing that resizing down by half the input data, hidden layer width or stochastic gradient descent batch size has little detrimental impact on learning performance at the critical phase.
These results suggest that random initialization of feedforward networks at the critical phase might allow training networks with much less data or smaller architectures, consequently speeding up learning.
## 2 Theoretical Background
In this section we present a few well-known definitions and properties of feedforward deep neural networks and set some notation.
### Feedforward Networks
Let us start by considering a fully-connected untrained deep feedforward neural network with \(L-\)layers, each of width \(N_{\ell}\), with \(\ell\) denoting the corresponding layer label. Let \(W^{\ell}\) be the matrix of weights linking the \(\ell-1\) layer to the \(\ell\) layer (hence of dimension \(\dim(W^{\ell})=N_{\ell}\times N_{\ell-1}\)) and \(\vec{b}_{\ell}\) the bias vector at layer \(\ell\). The vector output at each layer (or post-activation) is then defined by the recursion relation:
\[\vec{x}^{\ell}=\phi(\vec{h}^{\ell})\,,\quad\vec{h}^{\ell}=W^{\ell}\vec{x}^{ \ell-1}+\vec{b}^{\ell}\,, \tag{2.1}\]
with the initial condition \(\vec{x}^{0}=\)input-data, of dimension \(N_{0}\). The activation function \(\phi\) is a scalar function acting element-wise on the vector components of its argument. For now we consider \(\phi\) to have a sigmoid shape
\[\phi(\pm\infty)=\pm 1,\quad\phi(-x)=-\phi(x)\,, \tag{2.2}\]
but otherwise to be arbitrary 1.
Footnote 1: However, for the experiments later in this note we will use \(\phi(x)=\tanh(x)\)
The goal of the network is to learn the mapping,
\[\vec{x}^{L}=\Omega(\vec{x}^{0})\,, \tag{2.3}\]
that both best approximates the map from the input data to the output data and best generalizes to new data. This is done by minimizing a cost function measuring how far off the learned mapping is from the actual input-output map. Such a cost function is in most cases taken to be the _sum of square errors_:
\[C=\frac{1}{2}\sum_{i=1}^{N_{0}}|\Omega(\vec{x}^{0}_{i})-\vec{x}^{(\text{output data})}_{i}|^{2}\,, \tag{2.4}\]
An important object characterizing \(\Omega\) is the input-output Jacobian \(J\in\mathbb{R}^{N_{L}\times N_{0}}\) given by:
\[J=\frac{\partial\vec{x}^{L}}{\partial\vec{x}^{0}}=\prod_{\ell=1}^{L}D^{\ell}W^{ \ell}\,, \tag{2.5}\]
with the components of \(D^{\ell}\) given by \(D^{\ell}_{ij}=\phi^{\prime}(h^{\ell}_{i})\,\delta_{ij}\).
We consider initial values for the weights and biases drawn from the following normal distributions: \(W^{\ell}_{i,j}\sim\mathcal{N}(0,\frac{\sigma_{w}^{2}}{N_{\ell-1}})\), \(b^{\ell}_{i}\sim\mathcal{N}(0,\sigma_{b}^{2})\). The rescaling of the weight variance by \(N_{\ell-1}\) keeps the pre-activations finite for large hidden-layer sizes.
### Mean Field Theory of Deep Feedforward Signal Propagation
Our theoretical starting point follows from the results of [14, 15], which we now proceed to quickly review, but with a slight physical spinoff.
### Physical Statistical System
We want to understand how the empirical distribution of pre-activations \(\vec{h}^{\ell}\) propagates through the network. To do that, we can define the following partition function,
\[Z=\sum_{E(\ell)}e^{-\beta\mathcal{H}_{\ell}} \tag{2.6}\]
with a quadratic Hamiltonian \(\mathcal{H}\) given by,
\[\mathcal{H}^{(\ell)}=B\sum_{a,b}q^{\ell}_{a,b}\,, \tag{2.7}\]
with \(B\) an arbitrary parameter and \(q^{\ell}_{a,b}\) being the covariance matrix,
\[q^{\ell}_{a,b}=\frac{1}{N_{\ell}}\sum_{i=1}^{N_{\ell}}h^{\ell}_{i}(\vec{x}^{0,a})h^{\ell}_{i}(\vec{x}^{0,b})\,,\quad a,b=1,\cdots,N_{0}\,. \tag{2.8}\]
We additionally define the _free energy_ \(\mathcal{F}\) as is usual in physical statistical systems,
\[\mathcal{F}=-\frac{1}{\beta}\log Z\,, \tag{2.9}\]
from which we can see the following relation
\[\frac{\partial{\cal F}}{\partial B}=\frac{1}{Z}\sum_{a,b}\sum_{E(\ell)}q^{\ell}_{a,b}\,e^{-\beta{\cal H}_{\ell}}\equiv\sum_{a,b}\langle q^{\ell}_{a,b}\rangle\,. \tag{2.10}\]
As we will see in the next subsection, as \(\langle q^{\ell}_{a,b}\rangle\) propagates through the network, after a few layers it will eventually reach an equilibrium constant value \(q^{*}_{a,b}\), i.e.,
\[\frac{\partial{\cal F}}{\partial B}=\sum_{a,b}q^{*}_{a,b}\,. \tag{2.11}\]
For convenience, let us split the Hamiltonian into diagonal and non-diagonal parts,
\[{\cal H}^{(\ell)}={\cal H}^{(\ell)}_{d}+{\cal H}^{(\ell)}_{nd}\,, \tag{2.12}\]
with
\[{\cal H}^{(\ell)}_{d}=B_{d}\sum_{a}q^{\ell}_{a,a}\,,\quad{\cal H}^{(\ell)}_{ nd}=B_{nd}\sum_{a\neq b}q^{\ell}_{a,b}\,. \tag{2.13}\]
For large \(N_{\ell}\), it has been shown in [14, 15] that \(\vec{h}^{\ell}\) converges to a zero-mean Gaussian distribution over the given layer \(\ell\), and in this limit we can replace the sums by integrals with the corresponding Gaussian probability density (see [14] for details). At equilibrium, we can write the diagonal Hamiltonian as,
\[{\cal H}^{*}_{d}=B_{d}\sum_{a}\left[\sigma^{2}_{w}\int_{-\infty}^{\infty}\frac {dz}{\sqrt{2\pi}}\,e^{-\frac{z^{2}}{2}}\,\phi(\sqrt{q^{*}_{a,a}}z)^{2}+\sigma^ {2}_{b}\right]\,, \tag{2.14}\]
whereas the non-diagonal part reads,
\[{\cal H}^{*}_{nd}=B_{nd}\sum_{a\neq b}\left[\sigma^{2}_{w}\int_{-\infty}^{ \infty}\frac{dz_{a}dz_{b}}{2\pi}\,e^{\frac{-(z^{2}_{a}+z^{2}_{b})}{2}}\,\phi( \sqrt{q^{*}_{a,a}}z_{a})\,\phi(u_{b})+\sigma^{2}_{b}\right]\,, \tag{2.15}\]
with,
\[u_{b}=\sqrt{q^{*}_{b,b}}\left[c^{*}_{a,b}z_{1}+\sqrt{1-(c^{*}_{a,b})^{2}}\,z_ {2}\right]\,,\quad c^{*}_{a,b}=\frac{q^{*}_{a,b}}{\sqrt{q^{*}_{a,a}}\sqrt{q^{ *}_{b,b}}}\,, \tag{2.16}\]
here \(z_{a}\) and \(z_{b}\) are independent standard Gaussian variables, while \(u_{a}\) and \(u_{b}\) are correlated Gaussian variables with covariance \(q^{\ell-1}_{a,b}\), which becomes \(q^{*}_{a,b}\) at equilibrium.
Once the system reaches equilibrium, we approximate the partition function by,
\[Z=e^{-\beta\,L\,{\cal H}^{*}_{d}-\beta\,L\,{\cal H}^{*}_{nd}}\,, \tag{2.17}\]
where \(L\) is the total number of layers. From it, we have
\[\mathcal{F}=L\mathcal{H}_{d}^{*}+L\mathcal{H}_{nd}^{*}\,, \tag{2.18}\]
which leads us to the following consistency equation,
\[\frac{\partial\mathcal{F}}{\partial B_{d}}=q_{a,a}^{*}=\sigma_{w}^{2}\int_{- \infty}^{\infty}\frac{dz}{\sqrt{2\pi}}\,e^{-\frac{z^{2}}{2}}\,\phi(\sqrt{q_{a, a}^{*}}z)^{2}+\sigma_{b}^{2}\,, \tag{2.19}\]
with a similar equation for the non-diagonal,
\[\frac{\partial\mathcal{F}}{\partial B_{nd}}=q_{a,b}^{*}=\,\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac{dz_{a}dz_{b}}{2\pi}\,e^{\frac{-(z_{a}^{2}+z_{b}^{2})}{2}}\,\phi(\sqrt{q_{a,a}^{*}}z_{a})\,\phi(u_{b})+\sigma_{b}^{2}\,. \tag{2.20}\]
The solutions to these consistency conditions, or fixed points, correspond to the equilibrium values of the covariance matrix. They can be found very efficiently numerically, by pinning down the intersection of the unity line with the value of the integrals at the right-hand side of both conditions, as illustrated in the left frames of figures 1 and 2.
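As a minimal numerical sketch (not part of the original derivation), the diagonal condition (2.19) for \(\phi=\tanh\) can be solved by direct iteration, with the Gaussian integral evaluated by Gauss-Hermite quadrature; the quadrature order and the starting value \(q_{0}\) are arbitrary choices:

```python
import numpy as np

def gauss_mean(f, n=80):
    """E[f(z)] for z ~ N(0, 1), via Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

def q_fixed_point(sigma_w, sigma_b, phi=np.tanh, q0=1.0, n_iter=200):
    """Iterate the diagonal map (2.19)/(2.21) until q converges to q*."""
    q = q0
    for _ in range(n_iter):
        q = sigma_w**2 * gauss_mean(lambda z: phi(np.sqrt(q) * z)**2) + sigma_b**2
    return q

print(q_fixed_point(sigma_w=1.39, sigma_b=0.3))
```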
### Dynamical System
Although the storytelling in the subsection above might be useful to build some physical intuition, it does not provide us with a good view of the non-equilibrium dynamics that the system undergoes. For that it will be convenient to resort back to [14, 15], where, instead of focusing solely on the equilibrium state, the random network has been approached from
Figure 1: Left: Intersection of unity line with integral at RHS of (2.19) for three different \(\sigma_{w}\) and \(\sigma_{b}=0.3\). Right: Dynamics of \(q^{\ell}\) across layer propagation, for two different values of the initial condition. To make both plots, we have used \(\phi(z)=\tanh(z)\)
the point of view of dynamical systems, by considering the recursion relations leading to (2.19) and (2.20),
\[q_{a,a}^{\ell}=\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}\,e^{- \frac{z^{2}}{2}}\,\phi(\sqrt{q_{a,a}^{\ell-1}}z)^{2}+\sigma_{b}^{2}\,, \tag{2.21}\]
\[q_{a,b}^{\ell}=\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac{dz_{a}dz_{b}}{2\pi} \,e^{\frac{-(z_{a}^{2}+z_{b}^{2})}{2}}\,\phi(\sqrt{q_{a,a}^{\ell-1}}z_{a})\, \phi(u_{b})+\sigma_{b}^{2}\,, \tag{2.22}\]
with,
\[u_{b}=\sqrt{q_{b,b}^{\ell-1}}\left[c_{a,b}^{\ell-1}z_{1}+\sqrt{1-(c_{a,b}^{\ell-1})^{2}}\,z_{2}\right]\,,\quad c_{a,b}^{\ell-1}=\frac{q_{a,b}^{\ell-1}}{\sqrt{q_{a,a}^{\ell-1}}\sqrt{q_{b,b}^{\ell-1}}}\,, \tag{2.23}\]
for the diagonal and non-diagonal components, respectively. These recursions define an _iterative map_.
By iterating this recursion a few times, we can see the dynamics of the covariance matrix before reaching equilibrium, as shown in the right frame of figure 1. Notice that the fixed point \(q_{a,a}^{*}\) is reached after a few layers.
For the non-diagonal component, it is convenient to use the normalized correlation coefficient \(c_{a,b}\), to write instead,
\[c_{a,b}^{\ell}=\frac{1}{q_{a,a}^{*}}\left[\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac{dz_{a}dz_{b}}{2\pi}\,e^{\frac{-(z_{a}^{2}+z_{b}^{2})}{2}}\,\phi(\sqrt{q_{a,a}^{*}}z_{a})\,\phi(u_{b})+\sigma_{b}^{2}\right]\,, \tag{2.24}\]
Figure 2: Left: Intersection of unity line with integral at RHS of (2.20) for three different \(\sigma_{w}\) and \(\sigma_{b}=0.3\). Right: Dynamics of \(c^{\ell}\) across layer propagation, for two different values of the initial condition. To make both plots, we have used \(\phi(z)=\tanh(z)\)
with
\[u_{b}=\sqrt{q_{a,a}^{*}}\left[c_{a,b}^{\ell-1}z_{1}+\sqrt{1-(c_{a,b}^{\ell-1})^{2}}\,z_{2}\right]\,. \tag{2.25}\]
\(c_{a,b}\) exhibits a more interesting behavior: for low values of \(\sigma_{w}\) it only has one fixed point at \(c_{a,b}^{*}=1\), which is also stable, but for values larger than a given \(\sigma_{w}^{\rm critical}\) (\(\sigma_{w}^{\rm critical}\sim 1.39\) for the cases depicted in figure 2), the recursion develops a second stable fixed point, as shown in the left frame of figure 2, and the fixed point \(c_{a,b}^{*}=1\) becomes unstable, signaling a phase transition in \((\sigma_{w},\sigma_{b})\) space. For a given tuple of values \((\sigma_{w},\sigma_{b})\), we can check the stability of the covariance map by looking into the derivative of the iterative map at the fixed point,
\[\chi_{1}\equiv\left.\frac{\partial c_{a,b}^{\ell}}{\partial c_{a,b}^{\ell-1}}\right|_{c^{*}}=\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}\,e^{-\frac{z^{2}}{2}}\,\left(\phi^{\prime}(\sqrt{q_{a,a}^{*}}z)\right)^{2}\,, \tag{2.26}\]
so the critical values are those that solve the equation \(\chi_{1}=1\).
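A sketch of this stability check for \(\phi=\tanh\) is given below; it evaluates \(\chi_{1}\) at the fixed point and bisects the condition \(\chi_{1}=1\) in \(\sigma_{w}\) at fixed \(\sigma_{b}\) (the bisection bracket is an arbitrary choice):

```python
import numpy as np

def gauss_mean(f, n=80):
    """E[f(z)] for z ~ N(0, 1), via Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

def chi1(sigma_w, sigma_b, n_iter=200):
    """Slope (2.26) of the correlation map at the fixed point, phi = tanh."""
    q = 1.0
    for _ in range(n_iter):  # fixed point q* of the diagonal map (2.21)
        q = sigma_w**2 * gauss_mean(lambda z: np.tanh(np.sqrt(q) * z)**2) + sigma_b**2
    dphi = lambda z: 1.0 / np.cosh(z)**2  # derivative of tanh
    return sigma_w**2 * gauss_mean(lambda z: dphi(np.sqrt(q) * z)**2)

def critical_sigma_w(sigma_b, lo=0.5, hi=3.0, tol=1e-6):
    """Bisect chi1 = 1 at fixed sigma_b: the order/chaos boundary."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if chi1(mid, sigma_b) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(critical_sigma_w(0.3))  # should land near the 1.39 quoted below
```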
From the right frame of figure 2, we can see that it takes longer for the \(c_{a,b}^{*}\) fixed point to be reached than it takes to reach \(q_{a,a}^{*}\). In figure 2 we have used \(\sigma_{b}=0.3\) in all cases, in which case the critical \(\sigma_{w}\) turns out to be \(\sigma_{w}^{\rm critical}=1.39\). For values below \(\sigma_{w}^{\rm critical}\) the network correlates initial data, for values above \(\sigma_{w}^{\rm critical}\) the network uncorrelates initial data, and for values near \(\sigma_{w}^{\rm critical}\) the network tends to preserve the initial data correlations. A property of a dynamical system signaling chaotic behavior is that two infinitesimally close initial conditions evolve to values significantly far apart from each other. Considering the covariance matrix, by initializing a network with random Gaussian weights at \(\sigma_{w}>\sigma_{w}^{\rm critical}\) and tracking two input data points that are initially very correlated (very close in correlation space), they will become uncorrelated as they propagate through the network (separate from each other in correlation space), signaling a chaotic phase. Conversely, uncorrelated initial data points will become correlated for an initialization with \(\sigma_{w}<\sigma_{w}^{\rm critical}\), signaling an ordered phase. These two phases are related to exploding or vanishing gradients during stochastic gradient descent.
In the next section, we take a closer look at the phase transition.
## 3 Phase Transition
For the examples and experiments treated in the rest of this note, we will take the MNIST dataset [19, 20] of handwritten numbers. Let us then consider the normalized correlation coefficient matrix, or covariance matrix, of such input data (depicted in figure 3 for a subset of 100 input data points for visualization clarity) as it propagates through the network. Think of this matrix as a two-dimensional lattice statistical mechanical system
at some initial state (out of equilibrium), where each "pixel" or component of the matrix represents a physical quantity (such as a magnetic moment, for example) whose value is between zero and one. Now consider that the system evolution is the same as the one undergone by the covariance matrix as it propagates through a feedforward network. A given initial \(\sigma_{b}\) can be thought of as an external field (such as a magnetic or electric field), and a given \(\sigma_{w}\) as a temperature.
Now imagine that we set a "temperature" \(\sigma_{w}\) and a "magnetic field" \(\sigma_{b}\), and let the system evolve towards equilibrium. Evolving in this context means that we let the initial data propagate through the network with the given random initialization. Once the data propagate across all layers, the covariance matrix looks like a system at zero temperature (completely correlated) after propagation in the ordered phase, like a system at infinite temperature (completely uncorrelated) after propagation in the chaotic phase, and like a system at the critical temperature after propagation in the critical phase. To illustrate this, we created a feedforward network with 10 hidden layers, each layer composed of 50 units, and propagated the MNIST data once over initializations corresponding to each of these three phases; then we computed the correlation matrix of the resulting output. The results are illustrated in figure 4
Figure 3: Covariance matrix for 100 input data points from MNIST
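A minimal sketch of this propagation experiment is given below; the array `X` holding the flattened MNIST inputs is assumed to be available, and the ordered/chaotic values \(\sigma_{w}=1.0\) and \(3.0\) are illustrative choices around the critical \(1.39\):

```python
import numpy as np

def propagate(X, sigma_w, sigma_b, depth=10, width=50, phi=np.tanh, seed=0):
    """Push X (n_samples x n_features) through a random tanh network with
    W ~ N(0, sigma_w^2 / fan_in) and b ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    h = X
    for _ in range(depth):
        fan_in = h.shape[1]
        W = rng.normal(0.0, sigma_w / np.sqrt(fan_in), size=(fan_in, width))
        b = rng.normal(0.0, sigma_b, size=width)
        h = phi(h @ W + b)
    return h

# Document-document correlation of the outputs in the three phases.
for sigma_w in (1.0, 1.39, 3.0):
    C = np.corrcoef(propagate(X, sigma_w, sigma_b=0.3))
```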
By propagating the input data in the ordered phase, the network does a very good job correlating the data, whereas in the disordered phase the data get almost completely uncorrelated. In the critical phase, however, even though the network tends to correlate the data slightly, it reaches a fixed point where the data propagate to a "fixed" intermediate (non-saturated) correlation. Within the analogy to statistical physics, we define the following _order parameter_
\[\langle c\rangle=\frac{2}{N(N-1)}\sum_{a>b}c_{a,b}^{*}\,, \tag{3.1}\]
which we will call the mean correlation. Here \(N\) is the number of data points in a given case. In the ordered phase, \(\langle c\rangle\) is close to one and we have a large mean correlation; in the disordered phase \(\langle c\rangle\) is small and we have a vanishing mean correlation. Lastly, at the critical phase, the mean correlation lies in between the two other phases. Hence, the change of \(\langle c\rangle\) from non-zero to zero signals a phase transition, and that is why we have
Figure 4: Left: correlation matrix output, ordered phase. Right: correlation matrix output, disordered phase. Center below: correlation matrix output, critical phase
interpreted it as an order parameter.
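A short sketch of the order parameter (3.1), as the mean of the off-diagonal entries of a correlation matrix \(C\):

```python
import numpy as np

def mean_correlation(C):
    """Order parameter (3.1): mean of the upper-triangular, off-diagonal
    entries of the correlation matrix C."""
    n = C.shape[0]
    return 2.0 * C[np.triu_indices(n, k=1)].sum() / (n * (n - 1))
```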
Computing the _mean correlation_ for the particular examples displayed in figures 3 and 4 we get the values in table 1.
These values reflect the values of the corresponding fixed points.
An observation from the table above that will play a role in the next section's discussion is that the value of the order parameter for the input data is not too far from the value at the critical phase output.
## 4 Scaling Symmetry
It is well known that physical systems undergoing a second-order phase transition exhibit scaling symmetry at the critical phase. In this section, we ask whether such symmetry is also exhibited in the randomly initialized feedforward networks that we have discussed in the previous section.
### Theoretical Considerations
One can ask how far the input data need to propagate in order for the correlation matrix to reach its fixed point. This question was answered in detail in [15]. For large enough \(\ell\), the covariance matrix approaches the fixed point exponentially,
\[|p_{a,b}^{\ell}-p^{*}|\sim e^{-\frac{\ell}{\zeta_{p}}}\,, \tag{4.1}\]
with \(p\) indicating that the propagation length \(\zeta_{p}\) is different for the diagonal terms \(p\equiv q\) than for the non-diagonals \(p\equiv c\), respectively,
\[\zeta_{q}^{-1}=-\log\left[\chi_{1}+\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac {dz}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\phi^{\prime\prime}(z\sqrt{q^{*}})\phi(z \sqrt{q^{*}})\right] \tag{4.2}\]
and
\[\zeta_{c}^{-1}=-\log\left[\sigma_{w}^{2}\int_{-\infty}^{\infty}\frac{dz_{a}}{ \sqrt{2\pi}}\frac{dz_{b}}{\sqrt{2\pi}}e^{-\frac{(z_{a}^{2}+z_{b}^{2})}{2}} \phi^{\prime}(z_{a}\sqrt{q^{*}})\phi^{\prime}(u_{b}^{*})\right]\,. \tag{4.3}\]
Table 1: Mean Correlations

|  | Input | Ordered | Disordered | Critical |
| --- | --- | --- | --- | --- |
| \(\langle c\rangle\) | 0.414 | 0.99 | 0.041 | 0.50 |
\(\zeta_{p}\) establishes a depth scale that measures how deep the input information propagates into a randomly initialized network.
We are concerned with the correlation length at the fixed point; therefore, evaluating \(\zeta_{c}\) at \(c^{*}\) reduces it to
\[\zeta_{c}=-\frac{1}{\log\chi_{1}}\,. \tag{4.4}\]
Since the phase transition occurs when \(\chi_{1}=1\), the correlation length diverges there; hence at the critical phase the information will propagate to all depths, as long as the fixed point has already been reached. This is indeed a signal of scaling symmetry.
In experiments on physical systems as well as in solvable theories, it has been observed that at the critical phase, when the correlation length diverges, the behavior of the covariance (or two-point correlation function) given by (4.1) breaks down, and instead \(p_{a,b}\) approaches the fixed point in a power-law fashion,
\[|p_{a,b}^{\ell}-p^{*}|\sim\frac{1}{\ell^{\alpha}}\,, \tag{4.5}\]
with \(\alpha\) some coefficient, which in the physical literature corresponds to the so-called _critical exponents_. For the current case of the feedforward network, the breakdown of (4.1) near the phase transition can be seen by realizing that its derivation in [15] relies on a perturbative expansion around \(p^{*}\) with perturbation parameter given by
\[\epsilon^{\ell}\sim e^{-\frac{\ell}{\zeta_{p}}}\,, \tag{4.6}\]
which becomes of order one around the critical phase, hence breaking the perturbative approximation.
We now want to check experimentally whether such power-law behavior from physical statistical systems is also observed in the random network phase transition discussed here.
### Critical Exponents for Information Propagation
First we want to check that the power law behavior observed at criticality in phase transitions of physical systems can be, at least approximately, observed in the information propagation in random networks at the critical phase. In order to see that, we have plotted \(|c_{a,b}^{\ell}-c^{*}|\) for \(\sigma_{b}=2,3,\cdots,6\) with an initial input \(c_{a,b}\) very close to the critical value and
fitted to each case a power law of the form,
\[|p_{a,b}^{\ell}-p^{*}|=\frac{c}{\ell^{\alpha}}+b\,, \tag{4.7}\]
the resulting plots and fits are shown in figure 5, from where we can see that indeed a power law of the form (4.7) provides a good approximation for the behavior of information propagation very close to criticality. The corresponding fitted critical exponents for each \(\sigma_{b}\) are shown in the table below,
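A sketch of such a fit with SciPy is given below; the array `deviations`, holding the values \(|c_{a,b}^{\ell}-c^{*}|\) recorded while iterating the map, is assumed to be available:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(ell, c, alpha, b):
    """Ansatz (4.7): |c^ell - c*| = c / ell**alpha + b."""
    return c / ell**alpha + b

# deviations: |c^ell - c*| recorded layer by layer (assumed given)
ell = np.arange(1, len(deviations) + 1, dtype=float)
popt, _ = curve_fit(power_law, ell, deviations, p0=(1.0, 1.0, 0.0))
print("fitted critical exponent alpha =", popt[1])
```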
### Exploring Scaling Symmetry in Covariance Space
The divergence of the correlation length in layer space \(\ell\) discussed above provides us with evidence for a scaling symmetry taking place at criticality. This scaling symmetry tells us that, once the critical fixed point is reached, the correlation matrix is preserved through the subsequent layers.
We have observed in table 1 that the input mean correlation for the MNIST data set is not too far from the output mean correlation at criticality, which is a consequence of
Figure 5: Power law behavior of information propagation at criticality. The red curves indicate the fit (4.7) for each \(\sigma_{b}\)
the approximate scaling symmetry in depth. We expect such scaling symmetry to be reflected in the output correlation matrix. In other words: since input mean correlations are approximately preserved under propagation at the critical phase, we should be able to resize the input correlation matrix in a way that the input mean correlation is approximately preserved in the output mean correlation, and _our conjecture is that such mean correlation symmetry is reflected as a symmetry of learning performance at criticality_. More concretely, due to the law of large numbers, mean correlations of sub-correlation matrices approximate the mean correlation of the whole matrix. We can then take a subsample of the input data, preserving the input mean correlation. Since such mean correlation is approximately preserved through the network at criticality, the mean correlation of the output correlation matrix does not change appreciably; as a consequence, the learning map should not change much, and therefore resizing the input data will (approximately) preserve learning performance, as we will explore experimentally in what follows. For that, we compute the normalized correlation matrix and the mean correlation for the input MNIST data set and propagate the data through the network, in order to compute the correlation matrix and mean correlation of the propagated output. Next, we reduce the size of the input data and compute once again the mean correlations for input and output. The graphical representation of the input and output correlation matrices is shown in figure 6 for 1000 MNIST examples. The corresponding mean correlations for the input \(1000\times 1000\) and \(500\times 500\) correlation matrices are shown in table 2, where we can check the conservation of output mean correlations under input resizing.
Figure 6: Left: Input correlation matrix, Right: Output correlation matrix at criticality.
Table 2 shows that propagating half the input data has barely any effect on the mean correlation of the output. In the next section we will explore the impact of this data resizing on training.
## 5 Experiments
### Reducing Input Data
Based on the last section's discussion, we can intuitively think that at critical initialization we can reduce the amount of input data without considerably degrading accuracy in the learned input-output map. In this section we will provide experimental evidence that this is the case.
We train a very simple feedforward network on the MNIST data set. Our network consists of 6 hidden layers 4 and a 10-unit output layer representing the output probabilities for the digits from 0 to 9 5. We use stochastic gradient descent on a cross-entropy cost function. Since we are only worried about the performance of the network as a function of initial data size, we will not use any particular regularization.
Footnote 4: More than two feedforward layers to learn the classification problem of MNIST is a bit overkill, but here we are interested in the impact of many layers.
Footnote 5: In this section we add a 10-unit output layer corresponding to the 10 digits of the MNIST dataset.
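A hedged Keras sketch of this architecture is given below; the hidden width of 128 is an assumed value (the text does not specify it), the \(\tanh\) activation follows the rest of the note, and the initializers implement \(W^{\ell}_{i,j}\sim\mathcal{N}(0,\sigma_{w}^{2}/N_{\ell-1})\) and \(b^{\ell}_{i}\sim\mathcal{N}(0,\sigma_{b}^{2})\):

```python
import tensorflow as tf

def critical_mlp(sigma_w=1.39, sigma_b=0.3, width=128, depth=6):
    """Six-hidden-layer tanh MLP with the critical random initialization.
    The width of 128 is an assumed value, not taken from the text."""
    w_init = tf.keras.initializers.VarianceScaling(
        scale=sigma_w**2, mode="fan_in", distribution="untruncated_normal")
    b_init = tf.keras.initializers.RandomNormal(stddev=sigma_b)
    layers = [tf.keras.layers.Flatten(input_shape=(28, 28))]
    for _ in range(depth):
        layers.append(tf.keras.layers.Dense(
            width, activation="tanh",
            kernel_initializer=w_init, bias_initializer=b_init))
    layers.append(tf.keras.layers.Dense(10, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```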
First we train the network with 50000 MNIST examples and let it run for just a few epochs, getting the accuracies over validation (unseen) data in the three phases as plotted in figure 7, left. Then we re-train the same architecture but with half the examples, i.e. 25000, and from the right frame of figure 7 we can observe that, even though the accuracy and learning speed in the ordered phase deteriorate considerably (the chaotic case is already very deteriorated even with the full dataset), the performance of the network at the critical phase does not diminish at all, with the advantage that, with that many fewer examples, the network takes almost half the time to run the same number of epochs.
Just out of curiosity, we push harder and re-train the network with only 15000 examples; even though the accuracy at the critical initialization suffers some deterioration, it is not as strong as in the chaotic and ordered phases, where the accuracy and learning speed do get very badly behaved. For some practical cases, the small penalty on the accuracy
Table 2: Mean Correlations for Whole Data Subset and Half Data Subset

|  | Input | Half-Input | Critical Output | Half-Input Critical Output |
| --- | --- | --- | --- | --- |
| \(\langle c\rangle\) | 0.40 | 0.39 | 0.72 | 0.72 |
of the learned map at the critical phase may be beneficial in terms of the smaller number of required input data examples and the decreased training time, which in our experiments is three times smaller than when using the full data set.
### Reducing Hidden Width and Batch Size
Similarly to the intuition that resizing the input data is an approximate symmetry of the output correlation matrix at criticality, we can consider the effect of resizing the width of the hidden layers. In this section we take the same architecture as in the previous subsection but resize the hidden layers by half, while preserving the same size of
Figure 8: Accuracy for several epochs at the ordered, disordered and critical initializations for 15000 input MNIST examples.
Figure 7: Left: Accuracy for several epochs at the ordered, disordered and critical initializations for 50000 input MNIST examples, Right: Accuracy for several epochs at the ordered, disordered and critical initializations for 25000 input MNIST examples.
the input data.
As in the previous section, reducing the hidden width has almost no detrimental impact on the performance of the network initialized at the critical phase, while it considerably hurts the performance at initializations in the ordered and disordered phases, as shown in figure 9.
Lastly, we can also examine the impact of resizing the batch for stochastic gradient descent, while keeping the original sizes of the input data and hidden layer width. We reduce the size of this batch by half and plot the resulting accuracies over the validation set in figure 10, once again observing little impact on performance for initialization at the critical phase, unlike the other two phases.
Figure 10: Accuracy for several epochs at the ordered, disordered and critical initializations with half-reduced batch size for stochastic gradient descent.
Figure 9: Accuracy for several epochs at the ordered, disordered and critical initializations with half-reduced hidden width.
## 6 Discussion and Conclusions
In this paper we have studied the phase transition occurring in the propagation of the covariance matrix through a random deep feedforward network. In summary, we have shown that a naive statistical physics description for such a system can be developed, which in turn leads us to consider some well-known properties of physical phase transitions, such as scaling symmetry at criticality. We additionally argued that the given scaling symmetry translates into a data-resizing symmetry which, in principle, would allow scaling down a large network into a smaller one without degrading learning performance.
It is worth mentioning that more precise correspondences between neural networks and statistical physical systems have been developed previously in several other works, such as those recently developed in [21] and [22]. It would be very interesting to study the phase transition treated in this note from the point of view of such interesting physical approaches.
Another recent interesting relation between physical phase transitions and deep neural networks has been studied in the context of the loss landscape in [23, 24, 25]. It would be interesting to ask if the phase transitions in random networks studied in this paper have any relation to those studied there. |
2307.02640 | Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts | The massive collection of user posts across social media platforms is
primarily untapped for artificial intelligence (AI) use cases based on the
sheer volume and velocity of textual data. Natural language processing (NLP) is
a subfield of AI that leverages bodies of documents, known as corpora, to train
computers in human-like language understanding. Using a word ranking method,
term frequency-inverse document frequency (TF-IDF), to create features across
documents, it is possible to perform unsupervised analytics, machine learning
(ML) that can group the documents without a human manually labeling the data.
For large datasets with thousands of features, t-distributed stochastic
neighbor embedding (t-SNE), k-means clustering and Latent Dirichlet allocation
(LDA) are employed to learn top words and generate topics for a Reddit and
Twitter combined corpus. Using extremely simple deep learning models, this
study demonstrates that the applied results of unsupervised analysis allow a
computer to predict either negative, positive, or neutral user sentiment
towards plastic surgery based on a tweet or subreddit post with almost 90%
accuracy. Furthermore, the model is capable of achieving higher accuracy on the
unsupervised sentiment task than on a rudimentary supervised document
classification task. Therefore, unsupervised learning may be considered a
viable option in labeling social media documents for NLP tasks. | Alexandrea K. Ramnarine | 2023-07-05T20:16:20Z | http://arxiv.org/abs/2307.02640v1 | # Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts
###### Abstract
The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases based on the sheer volume and velocity of textual data. Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding. Using a word ranking method, term frequency-inverse document frequency (TF-IDF), to create features across documents, it is possible to perform unsupervised analytics, machine learning (ML) that can group the documents without a human manually labeling the data. For large datasets with thousands of features, t-distributed stochastic neighbor embedding (t-SNE), k-means clustering and Latent Dirichlet allocation (LDA) are employed to learn top words and generate topics for a Reddit and Twitter combined corpus. Using extremely simple deep learning models, this study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery based on a tweet or subreddit post with almost 90% accuracy. Furthermore, the model is capable of achieving higher accuracy on the unsupervised sentiment task than on a rudimentary supervised document classification task. Therefore, unsupervised learning may be considered a viable option in labeling social media documents for NLP tasks.
Keywords: Natural language processing, unsupervised analysis, social media, Twitter, Reddit, plastic surgery
## 1 Introduction
Cosmetic plastic surgery is an expensive yet increasingly popular set of procedures for both men and women, especially considering the ease of accessibility to surgeons, patient testimonials, and procedure-specific information, such as "before-and-after" visual aids, afforded by the Internet. The Internet is virtually a bottomless trove of textual data that are easily created at velocities of millions of posts per second across social media platforms. The exponential adoption of social media, largely facilitated by the wide global distribution of smartphone technology, is a known disseminator of beauty standards that is highly targeted to billions of users daily. Cosmetic surgery becomes a quick, permanent fix in adopting sought-after beauty standards set by celebrities, models and social media "influencers", relative to longer-term alternatives such as dieting and exercise, or temporary alternatives such as adoption of fashion and cosmetic trends.
Social media, while distributing information about plastic surgery procedures, also provides a setting for public social commentary on the election of undergoing these surgeries. Users across many platforms are able to freely communicate their sentiments on a broad scale and even as granular as commenting on another individual's posted surgical outcomes. Different social media platforms exist for different purposes, and thus attract and form distinct user bases that comprise these Internet communities. Each text post is unique to a user, time-stamped, geo- located, and has the capability to possess multiple data formats including but not limited to links and images. Therefore, text posts from social media sites provide specific insight into the public's opinion on cosmetic surgery. It is thus reasonable to assume that the text posts made on one platform can be used to distinguish user-derived text from other platforms.
Curating massive corpora of text post documents from popular social media networks, Twitter and Reddit, is feasible with the implementation of AI web scraping technology. NLP is then leveraged to process and mathematically transform text to computationally understandable representations. ML methods can then identify patterns among the corpora that is an otherwise impossible task for a human to accomplish given the sheer volume of data. Deep learning (DL) methods, relying on powerful and speedy neural network technology, are then poised to use the NLP-curated and ML-processed data in order to accurately predict document class and user sentiment across the corpora. This study demonstrates that very simple, regularized neural network architectures effectively use unsupervised NLP to answer an easy to ask yet difficult to answer question, "how does the Internet feel about plastic surgery?"
## 2 Literature Review
Opinion mining, better known as sentiment analysis, is an application of NLP that is particularly suited to extracting and assessing human opinion from textual data over the Internet and social media networks [1]. While spoken language offers context surrounding feelings and opinions through auditory cues such as tone and pitch, written language often broadly captures polarity in discussions, which can be leveraged by AI. Trained AI are able to detect polarity, whether negative, positive, or neutral, based on word associations captured mathematically by distance metrics. Distance is able to represent and capture context, giving connotative rather than denotative meaning to the words that ultimately decide whether a word is positive or negative [2]. Therefore, ranking word importance to use as term features for AI is critical to achieve high accuracy for sentiment analysis, particularly unsupervised sentiment assignment. This study adopts the information retrieval ranking function of TF-IDF, combining two methods by [3] and [4] to assign weights to important terms across a corpus of documents.
Two popular unsupervised analyses are utilized in this study to support analyst judgment for assigning sentiment to social media posts that lack these labels. [5] and [6] proposed the "k-means" method as an unsupervised algorithm to group observations based on minimizing variance, or the sum of squared deviations of points, to generate clusters of points using centroided means. [7] formulated LDA, which uses Bayesian probabilistic modeling to generate topic representations of NLP corpora based on their documents and associated terms. LDA therefore will ultimately support clustering analysis in generating labels for subsequent sentiment analysis.
More recently, [8] applied LDA to extract features of YouTube comments, proposing that semantic topic extraction can directly aid in sentiment scoring of comments as "negative" or "positive" through an NLP-hybrid framework when applied to fuzzy lattice reasoning. In conjunction with unsupervised analysis, this application is useful in identifying user groups within social media networks. [9] created an unsupervised approach to determine interests of social media users based on tweet semantics. An ML survey of unsupervised learning applications to NLP specifically highlights clustering algorithms such as k-means to address categorical outputs when labeled training data is not available for analytics.
## 3 Methods
### Data Acquisition
A Python 3.8 development version of the snscrape package was utilized to run command line bash scripts that scrape top and relevant social media posts from chosen Reddit subreddits and Twitter hashtags through March 2021, which serve as the document categories. Reddit queries from three different subreddits, "PlasticSurgery", "CosmeticSurgery", and "BotchedSurgeries", had a maximum of 8000 or 4000 result scrapes of the latest posts based on total reported posts on each subreddit's webpage. Twitter queries for each of the following hashtags, "plasticsurgery", "liposuction", "lipinjections", "botox", and "nosejob", had a maximum of 8000 result scrapes of top tweets as determined by Twitter's internal algorithm. Each scrape was saved as a JSON line file and subsequently read into respective Pandas dataframes. Null data were replaced with empty strings using NumPy. All Reddit and Twitter dataframes were concatenated respectively.
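A minimal sketch of how the scraped JSON-line files could be loaded is shown below; the file names and the category column are hypothetical:

```python
import pandas as pd

# Hypothetical file names: one JSON-lines scrape per subreddit.
subreddits = ["PlasticSurgery", "CosmeticSurgery", "BotchedSurgeries"]
frames = []
for name in subreddits:
    df = pd.read_json(f"{name}.jsonl", lines=True)
    df["category"] = name  # track the document category
    frames.append(df)

# Concatenate and replace nulls with empty strings, as described above.
reddit = pd.concat(frames, ignore_index=True).fillna("")
```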
### Data Pre-processing
The separate corpora dataframes were joined based on identification number, category, text, and title columns. Pre-processing steps for the combined corpus utilized a Python implementation of NLTK and custom functions to convert text to lowercase, remove punctuation, remove emojis and other Unicode characters, tokenize each document, remove English stop words, and stem each token. TF-IDF vectorization of the combined corpus employed additional preprocessing to drop duplicates and strip Unicode characters.
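A sketch of such a preprocessing pipeline with NLTK is shown below; since the exact custom functions used in the study are not reproduced here, this is an assumption-laden reconstruction of the steps listed above:

```python
import re
import string

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def preprocess(text):
    """Lowercase, strip punctuation and non-ASCII symbols (emojis etc.),
    tokenize, drop English stop words, and stem the surviving tokens."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    text = re.sub(r"[^\x00-\x7f]", "", text)
    return [STEMMER.stem(tok) for tok in word_tokenize(text)
            if tok not in STOP_WORDS]
```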
### Unsupervised Analysis
The Scikit-learn TF-IDF Vectorizer was set to generate unigrams and subsequently fit to the combined corpus after randomly shuffling the dataframe. Scipy and Scikit-learn implementations of k-means using k of 8, 3, and 2, and a 2-component t-SNE multidimensionality rescale using cosine distance and perplexity of 50.0 were applied to the TF-IDF matrix. Each method underwent at least ten initializations and between 300 and 1000 iterations before convergence. The Scikit-learn implementation of LDA was used for topic modeling on the TF-IDF matrix, generating the top 20 words for 8, 3, and 2 topics. All visualizations were generated using MatPlotLib and Seaborn.
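A condensed Scikit-learn sketch of these steps is shown below; `docs`, a list holding the shuffled combined corpus, is assumed to be available, and only the k = 8 case is shown:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.decomposition import LatentDirichletAllocation

vectorizer = TfidfVectorizer(ngram_range=(1, 1), strip_accents="unicode")
X = vectorizer.fit_transform(docs)  # docs: shuffled combined corpus (assumed)

kmeans = KMeans(n_clusters=8, n_init=10, max_iter=300).fit(X)
embedding = TSNE(n_components=2, metric="cosine",
                 perplexity=50.0).fit_transform(X.toarray())

lda = LatentDirichletAllocation(n_components=8).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top20 = [terms[i] for i in topic.argsort()[::-1][:20]]
    print(f"topic {k}:", " ".join(top20))
```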
### Deep Learning
The high-level Keras API on TensorFlow was utilized to build a Sequential dense neural network (DNN) with one input layer using rectified linear unit (ReLU) activation, one dropout regularization layer, and one output layer using softmax activation for both document category classification and sentiment analysis tasks. For sentiment analysis tasks, 1-D temporal convolutional (1D-CNN) Sequential models were built with an input layer of 32 units and a 3x3 kernel, ReLU activation and He uniform initialization, a 2x2 1-D max pooling layer, followed by a dropout and flatten layer feeding to a dense layer with 128 units before the final dense output layer. Each model was compiled using the Adam optimizer and a categorical cross-entropy loss function. After Scikit-learn 80% train, 20% test splitting of the TF-IDF matrix and labels, the models were fit to shuffled data and trained over 15 epochs with an internal training validation split of 20%.
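A hedged Keras sketch of the two architectures is shown below; the number of units in the DNN input layer and the dropout rates are assumptions (the text does not specify them), and the "3x3 kernel" is read as a length-3 kernel for the 1-D convolution:

```python
from tensorflow import keras

def dense_model(n_features, n_classes, units=64, dropout=0.5):
    """Sequential DNN: one ReLU input layer, one dropout layer, softmax
    output. The 64 units and 0.5 dropout rate are illustrative assumptions."""
    model = keras.Sequential([
        keras.layers.Dense(units, activation="relu",
                           input_shape=(n_features,)),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def conv1d_model(n_features, n_classes, dropout=0.5):
    """1D-CNN over the TF-IDF features, read as a length-n_features
    sequence: Conv1D(32, kernel 3, ReLU, He uniform) -> max pool 2 ->
    dropout -> flatten -> Dense(128) -> softmax output."""
    model = keras.Sequential([
        keras.layers.Conv1D(32, 3, activation="relu",
                            kernel_initializer="he_uniform",
                            input_shape=(n_features, 1)),
        keras.layers.MaxPooling1D(pool_size=2),
        keras.layers.Dropout(dropout),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```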
For classification labels, each of the eight document categories was represented as an integer. For sentiment labels, analyst judgment of both the k-means clusters mapped into the t-SNE space and the LDA topics was used to create three classes corresponding to negative, positive, or neutral sentiment. All labels were converted to categorical vectors using TensorFlow.
Training and validation loss and accuracy were tracked over each epoch before evaluating each model on the test sets. Scikit-learn classification reports of precision and recall, and confusion matrices were generated for each test instance.
## 4 Results
After vectorization of the combined corpus, unsupervised analyses were performed to visualize the distribution of document categories. Figure 1 illustrates the most similar documents towards the middle of the t-SNE space, while outlier documents sparsely separate at the fringe of the main cluster.
Documents from Reddit are more similar to each other than documents from Twitter, primarily falling in the mid to top right quadrant of the space, while the Twitter documents cluster together along the bottom and left of the distribution. Documents related generally to plastic surgery, sourced from the plastic surgery Twitter hashtag or the Plastic Surgery and Cosmetic Surgery subreddits, are the most similar to each other and fall within the middle of the distribution. About half of the Botched Surgery subreddit documents strongly form a smaller cluster away from the rest; however, the other half is well interspersed within the other subreddits toward the center of the distribution.
Of the Twitter-sourced hashtags, the liposuction and lip injection documents are the most dissimilar from each other, while the nose job hashtag is similar to the liposuction hashtag and half of the Botox hashtag. The other half of the Botox hashtag is more similar to the lip injection hashtag source. This half of the Botox hashtag, along with the lip injection hashtag, is more dissimilar from the general plastic surgery hashtag than the nose job and liposuction hashtags are.
The outlier documents are distributed in smaller yet distinguishable groups. There are three nose job Twitter hashtag outlier groups where two are somewhat related to the larger group, but one is more related to the main Botox hashtag group. There are many liposuction tweets that are more closely related to the Reddit documents than to Twitter documents. There is one Botched Surgery subreddit outlier group that is more related to the liposuction tweets. Finally, there are two strongly separated groups of mixed documents, however primarily comprised of Plastic Surgery and Botched Surgery subreddit documents. One of these falls within the main distribution but largely distanced in its entire circumference from the general plastic surgery tweets and subreddit posts, and the second falls completely out of the main distribution towards the most negative t-SNE 1 space, closer to a polarized outlier group of the twitter lip injection hashtag.
Centroid clustering was applied to the t-SNE space using a k-means approach. The eight generated clusters highlight the strongest outlier document groups, the differences among the twitter hashtags, and the similarities among both the Reddit documents and the general plastic surgery documents, as illustrated in Figure 2.
Cluster 3 is a "catch all", predominantly mapping to the Reddit documents and general plastic surgery related documents. Cluster 0 mapped directly to the Botched Surgery outlier group, cluster 1 to the Botox hashtag, cluster 7 to
the nose job hashtag, and cluster 5 to the liposuction hashtag. Cluster 2 seemed to map to documents that fell centrally within the Botox and lip injection document space but were not sourced from either of those hashtags. Clusters 4 and 6 highlight strong outlier groups in the t-SNE space, where the former maps to the outliers of the general plastic surgery documents, and the latter maps to the fringe outliers of the liposuction hashtag tweets.
In order to stratify the documents into three balanced groups corresponding to positive, negative, or neutral sentiment, k-means clustering was performed again on the TF-IDF matrix using k equal to 3 and 2. Appendix A demonstrates that the most significant difference between the documents under centroid clustering is between the main distribution and the large Botched Surgery outlier group. Therefore, LDA was employed to further support analyst judgement in unsupervised sentiment labeling. Figure 3 depicts the results of the top 20 terms for eight topics across the combined corpus, corresponding to the eight document categories.
While the majority of the top terms can be considered neutral, there are a few that can be mapped back to the k-means top term results, shown in Table 1, in order to assign sentiment labels to each k-means cluster. From Topic 4, "delete" and "improve", and from Topic 8, "remove", "addict", and "stubborn", are key terms indicative of negative sentiment if also highlighted by k-means. Reducing the LDA to three or two topics still captures these negative connotation terms. Therefore, k-means clusters 0, 4, and 5 were assigned a negative sentiment label, clusters 1, 6, and 7 were assigned a
\begin{table}
\begin{tabular}{l l l l l l l l} Cluster 0 & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 & Cluster 5 & Cluster 6 & Cluster 7 \\ \hline remove & inject & fabulous & thank & delete & improve & (contains & surgeon \\ botch & skincare & gratitude & gorgeous & ban & double-chin & numerical & surgery \\ excess skin & filler & motivate & happiness & swollen & & terms, like & beauty \\ evil & treatment & fit & amaze & swell & & injection & medic \\ evildick & facial & well & wonder & evil & & doses and & cosmetic \\ evilqueen & antiaging & comfort & please & excess & & phone & rhinplasty \\ exaggerate & botox & lip goal & pretty & exboyfriend & & numbers) & procedure \\ exboyfriend & wrinkle & love & candylylipz & didnt & & & patient \\ \end{tabular}
\end{table}
Table 1: Selected k-means Top Terms
Figure 1: t-SNE Dimensionality Reduction Mapped by Category
neutral sentiment label, and clusters 2 and 3 were assigned a positive sentiment label. To correct for class imbalance, any Botched Surgery subreddit documents not assigned to negative sentiment were reassigned a negative label based on that particular subreddit's culture of mocking and shaming plastic surgery procedure outcomes subjectively deemed poor.
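This labeling rule reduces to a simple remapping of cluster ids, for instance (the category string and row alignment with `kmeans.labels_` are assumptions about how the data was stored):

```python
# Analyst-judged mapping from k-means cluster id to sentiment
# (0 = negative, 1 = neutral, 2 = positive).
CLUSTER_TO_SENTIMENT = {0: 0, 4: 0, 5: 0,   # negative clusters
                        1: 1, 6: 1, 7: 1,   # neutral clusters
                        2: 2, 3: 2}         # positive clusters

# Assumes kmeans was fit on the corpus rows in their stored order.
corpus["sentiment"] = [CLUSTER_TO_SENTIMENT[c] for c in kmeans.labels_]

# Class-imbalance correction: force all Botched Surgery posts to negative.
botched = corpus["category"] == "BotchedSurgeries"
corpus.loc[botched, "sentiment"] = 0
```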
### Predicting on Supervised versus Unsupervised Labels
A very simple one-layer DNN architecture, utilizing 30% dropout regularization, was used to test supervised document category classification versus the unsupervised sentiment analysis. Appendix C illustrates that training accuracy increases with epochs; however, validation accuracy stagnates. Training loss decreases with epochs, but validation loss increases in both classification and sentiment analysis cases. Table 2 compares the performances between classification and sentiment analysis tasks on the combined corpus.
Overall, the model was able to achieve better performance on unsupervised sentiment analysis versus supervised document classification. Appendices D and E compare the harmonic means and confusion matrices of the two learning tasks. For classification, there was a class imbalance for the lip injections Twitter hashtag; therefore, the F1-score was low and the misclassification rate was high relative to the performance on the other labels. The model performed best on correctly classifying the nose job Twitter documents.
For sentiment analysis, although there was class imbalance skewed towards over-representation of the positive-labeled documents, this had no noticeable effect on model performance. The model performed best on predicting neutral
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Training} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Test} \\ \cline{2-7} Task & Accuracy & Loss & Accuracy & Loss & Accuracy & Loss \\ \hline Classification & 93.13 & 0.1767 & 78.16 & 0.7555 & 77.78 & 0.7859 \\ Sentiment & 97.77 & 0.0473 & 87.84 & 0.4652 & 87.12 & 0.4768 \\ \hline \multicolumn{7}{l}{_Note:_ Accuracy shown in percent.} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dense Neural Network Performance
Figure 2: k-means Clustering Mapped to t-SNE Space
sentiment and relatively worse on predicting negative sentiment; however, the precision and recall metrics between the three sentiments are similar. Almost all of the neutral documents were correctly predicted as such, and fewer than 20% were misclassified.
### Sentiment Analysis
Given that a simple DNN could achieve near 90% accuracy on unsupervised sentiment analysis, experiments varying dropout regularization and use of a temporal convolutional neural network were conducted. Table 3 summarizes the training, validation, and test results of these experiments to predict sentiment.
Increasing dropout rate in both model cases increases both validation and test accuracies overall. However, increasing dropout rate for the 1D-CNN does not improve validation or test loss compared to using no dropout regularization, and instead caused the validation loss to behave erratically over training epochs (Appendix G). Using dropout had no substantial effect on validation accuracy over epochs of the 1D-CNN. Appendix F illustrates that using high dropout rates for the DNN shrinks the gap between training and validation metrics at each epoch, notably shrinking validation loss despite the similar upwards trending loss over epochs in both zero and 60% dropout cases.
Appendix H displays the test results of the sentiment analysis comparing dropout regularization between the two models. The 1D-CNN using 60% dropout had the highest true classification rate for positive sentiment, while the 1D-CNN using 30% dropout had the lowest. The DNN using 60% dropout had the best classification rate for negative sentiment, while the DNN using no dropout performed relatively poorly on correctly classifying negative sentiment. The 1D-CNN using 30% dropout correctly predicted neutral sentiment at the highest rate, and the DNN using no dropout correctly predicted
Figure 3: Top 20 Terms by LDA Topic Modeling
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Training} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Test} \\ \cline{3-8} Model & Dropout & Accuracy & Loss & Accuracy & Loss & Accuracy & Loss \\ \hline DNN & 0 & 97.33 & 0.0567 & 86.98 & 0.5057 & 86.27 & 0.5227 \\ DNN & 0.3 & 97.77 & 0.0473 & 87.84 & 0.4652 & 87.12 & 0.4768 \\ DNN & 0.6 & 96.41 & 0.0830 & 88.26 & 0.3627 & 87.53 & 0.3729 \\ 1D-CNN & 0 & 98.64 & 0.028 & 87.59 & 0.4912 & 87.08 & 0.5292 \\ 1D-CNN & 0.3 & 98.61 & 0.0311 & 87.12 & 0.5203 & 86.92 & 0.5604 \\ 1D-CNN & 0.6 & 98.40 & 0.0388 & 88.80 & 0.5048 & 88.21 & 0.5552 \\ \hline \hline \end{tabular} _Note:_ Dropout rate shown, accuracy shown in percent.
\end{table}
Table 3: Model Regularization and Architecture Experiments for Sentiment Prediction
neutral sentiment at the lowest relative rate among the models. All models displayed almost negligible misclassification rates bidirectionally between negative and neutral sentiment.
## 5 Discussion
While it is reasonable to assume that virtual communities formed over social media forums tend to attract like-minded opinions, this over-generalization may conflate the outlier user posts within each group. Herein, it is demonstrated that applying unsupervised dimensionality reduction and clustering algorithms to an extremely large and heterogeneous corpus of Twitter and Reddit text data is a viable option to capture user sentiment based on word rankings. In conjunction with topic modeling, these methods may be employed to label noisy text data in a semi-supervised manner.
The Twitter and Reddit documents mostly separate in the t-SNE space, suggesting that the types of posts, and therefore the user bases, differ between the two social media networks. The relative homogeneity of the Reddit distribution compared to the Twitter distribution supports the idea that Reddit posts, and therefore users, are somewhat more similar to each other than Twitter users. This is likely a function of subreddits being niche internet communities with users sharing multiple posts within the same subreddit, and perhaps even between the three sampled plastic surgery subreddits, since no other major related communities pertaining to plastic or cosmetic surgery were found on Reddit. It is a fair assumption that Twitter has a more heterogeneous representation because hashtags do not act as niche communities the way subreddits are structured to.
Given that the sourced tweets are stratified mainly by procedure-related hashtags, it is unsurprising that non-invasive, injection-based procedures cluster closely together, such as the lip injection and Botox clusters, while the invasive procedures such as nose job and liposuction cluster together. k-means cluster 1 therefore must be representative of non-invasive or injection-based procedures. That nose job and Botox documents are still relatively close in distance in the t-SNE space indicates relatedness due to terms associated with facial procedures. Interestingly, these four hashtags form multiple smaller outlier clusters in the t-SNE space, probably indicative of underlying sentiment distributions given the k-means mapped analysis and the strong predictive power of the neural networks. Despite the biased sourcing used for these tweets, the general plastic surgery Twitter hashtag documents almost uniformly span all of the procedure-specific Twitter documents in the t-SNE space.
Surprisingly, the unsupervised generated labels seemingly allowed the simplest of neural networks to outperform a supervised NLP task, which may suggest that the content of plastic surgery related documents sourced from Twitter and Reddit are better captured by analyst judgment sentiment and not by the empirical document source. This further suggests that the term ranking methodology employed, together with topic modeling, generated strong indicators of social media user opinion, effectively grouping words based on cosine distance.
Using a temporal convolutional network, a model theoretically better suited to capturing high and low dimensionalities of sequential text data, showed negligible improvement over the DNN in terms of accuracy, loss, and sentiment true classification rate. In general, both neural networks overfit the training data, averaging about 10% differences in accuracy between training and test instances. Training and validation instances indicate that both models would benefit from early stopping well before 10 epochs in order to achieve higher validation accuracy and lower validation loss; it is assumed that the test metrics would follow suit.
All models had comparably high precision and recall for both neutral and positive sentiment, although the F1-scores for negative sentiment prediction were not dramatically lower. Given absolute true classifications, the models overall were able to distinguish negative from neutral sentiment very well. For each model, most of the misclassifications occurred between positive and negative sentiment, followed by positive and neutral sentiment. The top-ranking terms therefore must strongly segregate neutral from other sentiment in the case of plastic surgery, indicative of volume of terms used and associated with the medical procedures rather than with user opinions of those procedures or their outcomes. That the models struggled most with misclassifications between positive and negative sentiment could be indicative of vernacular and colloquial usage of terms mixed with denotative usage, confounding learning and thus impairing the decision boundary between these two sentiments. Additionally, it may be useful to use n-grams rather than unigrams to better define terms, such as "beauty" and "change", that could realistically fall into any sentiment category for plastic surgery depending on the context it is used in.
The high predictive capacities of these simple models indicate that favored NLP recurrent neural networks (RNNs), including gated recurrent unit networks and long short-term memory networks, may not perform that much better for sentiment analysis of these social media sourced documents, given the abbreviated length of each document and the frequently associated vanishing gradient problems with RNNs. While it may be interesting to pursue future work with different model architectures, the results from the temporal convolutional network, considered an advancement to simple RNNs given its ability to capture spatio-temporal information, indicate that it may be better to invest efforts in curation, vectorization and thus representation of top terms, using fewer terms but a more polarizing vocabulary to model the data after. Additionally, it may be fruitful to capture a wider breadth of hashtags from Twitter, more posts from the subreddits, and even venture to other social media networks for relevant plastic surgery documents to expand the user base, and thus opinion, representation. |
2305.03327 | FlowText: Synthesizing Realistic Scene Text Video with Optical Flow
Estimation | Current video text spotting methods can achieve preferable performance,
powered with sufficient labeled training data. However, labeling data manually
is time-consuming and labor-intensive. To overcome this, using low-cost
synthetic data is a promising alternative. This paper introduces a novel video
text synthesis technique called FlowText, which utilizes optical flow
estimation to synthesize a large amount of text video data at a low cost for
training robust video text spotters. Unlike existing methods that focus on
image-level synthesis, FlowText concentrates on synthesizing temporal
information of text instances across consecutive frames using optical flow.
This temporal information is crucial for accurately tracking and spotting text
in video sequences, including text movement, distortion, appearance,
disappearance, shelter, and blur. Experiments show that combining general
detectors like TransDETR with the proposed FlowText produces remarkable results
on various datasets, such as ICDAR2015video and ICDAR2013video. Code is
available at https://github.com/callsys/FlowText. | Yuzhong Zhao, Weijia Wu, Zhuang Li, Jiahong Li, Weiqiang Wang | 2023-05-05T07:15:49Z | http://arxiv.org/abs/2305.03327v1 | # FlowText: Synthesizing Realistic Scene Text Video with Optical Flow Estimation
###### Abstract
Current video text spotting methods can achieve preferable performance, powered with sufficient labeled training data. However, labeling data manually is time-consuming and labor-intensive. To overcome this, using low-cost synthetic data is a promising alternative. This paper introduces a novel video text synthesis technique called FlowText, which utilizes optical flow estimation to synthesize a large amount of text video data at a low cost for training robust video text spotters. Unlike existing methods that focus on image-level synthesis, FlowText concentrates on synthesizing temporal information of text instances across consecutive frames using optical flow. This temporal information is crucial for accurately tracking and spotting text in video sequences, including text movement, distortion, appearance, disappearance, shelter, and blur. Experiments show that combining general detectors like TransDETR with the proposed FlowText produces remarkable results on various datasets, such as ICDAR2015video and ICDAR2013video. Code is available at [https://github.com/callsys/FlowText](https://github.com/callsys/FlowText).
Synthetic data, Video text spotting.
## I Introduction
Video text spotting is a task that involves detecting, tracking, and reading text in a video sequence, and has gained popularity due to its various applications in computer vision, such as video understanding [1], video retrieval [2], video text translation, and license plate recognition [3], etc. Current video text spotters [4, 5] achieve preferable performance, powered with sufficient labeled training data. However, manually annotating such data is time-consuming, expensive, and prone to human errors. According to the annotation report of the BOVText [6] dataset, annotating \(2,021\) videos with \(7,292,261\) text instances required the work of \(30\) personnel over three months, _i.e._, \(21,600\) man-hours, which is time-consuming and frustrating. Moreover, it is also difficult to collect enough data to cover various applications from traffic sign reading to video retrieval tasks.
To reduce the cost of video text annotation and collection, an alternative is to utilize synthetic data, which is largely available and whose ground truth can be freely generated at a low cost. Previous image-based synthesis algorithms [7, 8] have proven beneficial and promising for image-level text tasks. SynthText [8] was the first to propose a synthetic data engine, which overlays synthetic text on existing background images, accounting for the local 3D scene geometry. Based on SynthText, VISD [7] tries to synthesize more verisimilar image data by using semantic segmentation and visual saliency to determine the embedding locations. However, the above synthetic engines only focus on image-level synthesis quality. None of them are dedicated to generating effective and efficient video-based content, which is particularly challenging for tasks such as video text spotting. Compared to image-level synthetic algorithms, video tasks present two main challenges. Firstly, video synthesis requires the generation of verisimilar spatiotemporal information, including the movement and deformation of text in a video sequence. This information is vital for spatiotemporal modeling of video text spotting methods, and cannot be provided by image-based synthesis. Secondly, text in video sequences generally presents more complex and challenging cases than static images, due to issues such as motion blur, out-of-focus, artifacts, and occlusion. To address these challenges, we propose a novel video synthesis technique that incorporates optical flow estimation, which we call FlowText.
Our main contributions are summarized as follows:
* We propose a new technique for synthesizing video called FlowText, which focuses on creating realistic scene text video, even in complex situations such as motion blur, occlusion, and being out of focus.
* FlowText covers a wide range of text scenarios in video sequences, _i.e._, motion blur, occlusion, out of focus.
* As the first video text synthesis method, FlowText achieves significant improvement compared with other synthesis methods on two datasets for multi-stage
tasks (_i.e.,_ video text detection, tracking, and spotting). Especially, FlowText achieves \(60.1\%\)\(\mathrm{ID}_{\mathrm{F1}}\) for video text tracking task and \(66.5\%\)\(\mathrm{ID}_{\mathrm{F1}}\) for video text spotting task on ICDAR2015video [9], with \(2.0\%\) and \(3.7\%\) improvements than the previous SOTA methods, respectively.
## II Related work
### _Video Text Spotting_
Video scene text spotting [10, 11, 12] has been studied for years and has attracted increasing interest recently due to its numerous applications in computer vision. ICDAR2013 video [13], the first competition for the task, established the first public dataset to push the field. ICDAR2023 DSText [14] presents a new challenge for dense and small text scenarios of video text spotting. As for algorithms, building on Wang _et al._[15], Nguyen _et al._[16] were the first to extend an end-to-end solution for text recognition from natural images to video, training local character models and exploring methods to capitalize on the temporal redundancy of text. Wang _et al._[17] proposed a multi-frame tracking based video text spotting method, which first detects and recognizes text in each frame of the input video, then associates texts in the current frame and several previous frames to obtain the tracking results by post-processing. Cheng _et al._[18] and Cheng _et al._[19] propose a video text spotting framework that recognizes the text only once, replacing frame-wise recognition, and includes a spatial-temporal detector and a text recommender to detect and recognize text. Wu _et al._[6] adopts the query concept in the transformer to model text tracking, then uses an attention-based recognizer to obtain the recognition results. TransDETR [4] proposes an end-to-end trainable video text spotting framework with transformers, in which each text query is responsible for predicting the entire track trajectory and the content of a text in a video sequence.
The methods mentioned above heavily rely on manually annotated images from real-world datasets such as COCO-Text [20] and ICDAR2015video [9]. However, these datasets are expensive to create and still too small to cover the wide range of text appearances in scenes. Previous works [7, 8] have proven the effectiveness and potential of synthetic data, which offers a good way to solve this problem.
### _Synthetic Data_
Synthetic data [7, 8, 21, 22, 23, 24, 25], which provides detailed ground-truth annotations at a lower cost than manual annotation, has gained increasing attention. In the field of image-level text synthesis, there are some existing synthetic datasets that have become standard practices and are used as pre-trained datasets. The first work for image text synthesis is SynthText [8], which blended text into existing natural image scenes using off-the-shelf segmentation and depth estimation techniques to align text with the geometry of the background image and respect scene boundaries. Following SynthText, VISD [7] proposed three improvements to obtain more verisimilar synthetic images: semantic coherence, better embedding locations with visual saliency, and adaptive text appearance. However, these methods only focus on image-level synthesis, and there are currently no video-based synthetic engines. There are also challenges in spatiotemporal information synthesis that need to be addressed.
### _Optical Flow_
Optical flow estimation is a fundamental task in computer vision. Classical approaches, such as [26], typically model optical flow estimation with brightness constancy and spatial smoothness. However, these methods often struggle to handle large displacements and occlusions. In recent years, some approaches have used the coarse-to-fine approach [27, 28] and iterative refinements [29] to handle large displacements incrementally. RAFT [29] represents the iterative refinement approach and proposes to gradually improve the initial prediction with a large number of iterative refinements, achieving remarkable performance on standard benchmarks. Regarding occlusions, existing approaches [30, 31] conduct a forward-backward consistency check to identify and solve occluded regions with interpolation. GMA [32] is the first work to take an implicit approach to the occlusion challenge. It adopts global motion features to predict flow accurately in occluded regions. In video data synthesis, optical flow plays a vital role in ensuring smooth and verisimilar temporal text synthesis. Compared to large displacements, occlusions pose a more realistic challenge in video synthesis. Therefore, we adopt GMA [32] as the base optical flow estimation model for our FlowText.
## III Method
### _Formulation_
While generating synthetic video automatically demonstrates great potential in the field of video text, producing high-quality synthetic video remains a significant challenge. One possible solution to this challenge is to generate a single frame using an image synthesis method, and then iteratively map the synthetic visual effects of the embedded text to adjacent frames using spatiotemporal relevance information in the video.
We take the pasting process of a single text T as an example (_e.g.,_ "The" in Fig. 1). Given one video sequence \(\{\mathbf{I}_{k}\}_{k\in\mathcal{N}},\mathbf{I}_{k}\in\mathbb{R}^{h\times w}\), \(\mathcal{N}=\{1,2,\dots,n\}\), where \(h\), \(w\), and \(n\) are the height, width, and length of the video. As shown in Fig. 1, we randomly sample the \(t\)-th frame \(\mathbf{I}_{t}\) from the video. Then, the corresponding synthetic frame \(\hat{\mathbf{I}}_{t}\in\mathbb{R}^{h\times w}\) and the embedding text map \(\mathbf{T}_{t}\) (representing the visual text feature of text T, _e.g.,_ font and shape) of the sampled frame can be obtained with an existing image synthesis method (_e.g.,_ SynthText [8]). Next, a spatiotemporal propagation function \(\mathcal{F}_{t\to k}(\cdot)\) is calculated for mapping the embedding text map \(\mathbf{T}_{t}\) to that of other frames (_e.g.,_\(\mathbf{T}_{k}\)). Finally, the whole synthetic text video \(\{\hat{\mathbf{I}}_{k}\}_{k\in\mathcal{N}}\) is obtained by embedding the corresponding embedding text maps \(\{\mathbf{T}_{k}\}_{k\in\mathcal{N}}\) into the video sequence \(\{\mathbf{I}_{k}\}_{k\in\mathcal{N}}\). However, obtaining the propagation function \(\mathcal{F}_{t\to k}(\cdot)\) is challenging. To solve this problem, we propose the Text Flow Propagation (TFP) Algorithm, which uses optical flow to fit the function.
### _Overview of FlowText_
In this section, we will provide a comprehensive introduction of the entire FlowText pipeline, which consists of two main steps: _Rendering Sampled Frame_ and _Text Flow Propagation_.
#### Iii-B1 Rendering Sampled Frame
as shown in Fig. 1 (upper), for each video sequence, we first randomly select the \(t\)-th frame \(\mathbf{I}_{t}\) as the sampled frame. Then, we adopt the approach used in SynthText [8] and VISD [7] to overlay the text onto the image. The location and orientation of the text are determined by the depth map \(D_{t}\) predicted by Monodepth2 [33] and the panoptic segmentation map \(S_{t}\) predicted by Mask2former [34], respectively. Specifically, it can be formulated as:
\[\{\hat{\mathbf{I}}_{t},\mathbf{T}_{t}\}=\underset{\mathrm{Render}}{\mathbb{M}}(\mathbf{I}_{t},\mathbf{T})|_{S_{t},D_{t}}\,, \tag{1}\]
where \(\hat{\mathbf{I}}_{t}\) and \(\mathbf{T}_{t}\) denote the synthetic frame and the embedding text map of text T in the \(t\)-th frame. \(\underset{\mathrm{Render}}{\mathbb{M}}(\cdot)\) refers to SynthText [8] in this paper.
#### Iii-B2 Text Flow Propagation
After obtaining the embedding text map \(\mathbf{T}_{t}\), we can calculate the \(\mathbf{T}_{k}\) with the Text Flow Propagation algorithm \(\mathcal{F}_{t\to k}(\cdot)\) as:
\[\mathbf{T}_{k}=\mathcal{F}_{t\to k}(\mathbf{T}_{t})\,. \tag{2}\]
Finally, we produce the whole synthetic text video \(\{\hat{\mathbf{I}}_{k}\}_{k\in\mathcal{N}}\) by overlaying the embedding text maps \(\{\mathbf{T}_{k}\}_{k\in\mathcal{N}}\) onto the video sequence \(\{\mathbf{I}_{k}\}_{k\in\mathcal{N}}\).
### _Text Flow Propagation Algorithm_
There are two algorithms for Text Flow Propagation (TFP): Forward Text Flow Propagation (FTFP, \(\mathcal{F}_{t\to k},k>t\)) and Backward Text Flow Propagation (BTFP, \(\mathcal{F}_{t\to k},k<t\)), depending on whether the estimated frame is located before or after the sampled frame. Despite the difference in optical flow direction, both algorithms are quite similar. For convenience, we take FTFP as the example. FTFP is based on an existing optical flow estimation model (GMA [32] in this paper), which can be directly used to map points between frames as:
\[p_{k}=F_{t\to k}(p_{t})+p_{t},\quad p_{t}\in\mathbf{T}_{t},\;p_{k}\in\mathbf{T}_{k}, \tag{3}\]
where \(p_{k}\) represents the coordinate of point \(p\) at the \(k\)-th frame. However, two main problems affect the performance of the optical flow estimation: (1) _Unconstrained mapping_: Optical flow estimation does not view the text geometry as a whole, and destroys its invariant properties, _e.g.,_ concurrency, collinearity, order of contact. (2) _Error mapping due to occlusion and noise_: Occlusion and outliers cause inaccurate mappings and inauthentic synthetic data.
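In NumPy terms, the raw flow-based mapping of Equ. 3 is a simple gather-and-add; a minimal sketch, assuming the flow array stores per-pixel \((dx,dy)\) displacements:

```python
import numpy as np

def warp_points(points, flow):
    """Map (x, y) points from frame t to frame k with a dense flow field.

    points: (N, 2) integer pixel coordinates in frame t.
    flow:   (H, W, 2) forward flow F_{t->k}, e.g. as predicted by GMA.
    """
    xs, ys = points[:, 0], points[:, 1]
    return points + flow[ys, xs]  # p_k = F_{t->k}(p_t) + p_t
```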
In this paper, we propose to solve the first problem by _Projective Transformation_ and solve the second problem by _Point Resample_.
#### Iii-C1 Mapping with Projective Transformation
We view the mapping function as a multiple view geometry problem, where text is painted on a planar surface (_e.g.,_ a wall or a sign). The same text in different frames is observed in different view planes, as shown in Fig. 2(b). And they can be transformed into each
Fig. 1: **Pipeline of the proposed FlowText**. Upper: For each video, we first randomly sample and render a frame \(\mathbf{I}_{t}\) with an image synthesis method (_e.g.,_ SynthText, VISD). The Text Flow Propagation (TFP) is used to propagate the synthetic visual text effects \(\mathbf{T}_{t}\) to other frames. Down: The detailed architecture of the TFP.
other with projective transform [35]. Specifically, with a \(3\times 3\) homography matrix, _i.e._, \(\mathrm{H}_{t,k}\in\mathbb{R}^{3\times 3}\), we can formulate the mapping function \(\mathcal{F}_{t\to k}\) as:
\[\left[\begin{array}{c}p_{k}^{T}\\ 1\end{array}\right]=\mathrm{H}_{t,k}^{-1}\left[\begin{array}{c}p_{t}^{T}\\ 1\end{array}\right],\quad p_{t}\in\mathbf{T}_{t},\;p_{k}\in\mathbf{T}_{k}, \tag{4}\]
where the homography matrix \(\mathrm{H}_{t,k}\) relates the coordinates of the embedding text map \(\mathbf{T}_{t}\) to those of \(\mathbf{T}_{k}\). Compared with dense optical flow mapping (Fig. 2(a)), the projective transform [35] (Fig. 2(b)) keeps the invariant properties of text (_i.e._, concurrency, collinearity, order of contact). And its degree of freedom (df) is \(8\) (_e.g.,_ rotation, scaling, shearing, translating), which makes the mapping function stable.
According to the book 'Multiple View Geometry' [35], an exact solution for the matrix \(\mathrm{H}_{t,k}\) exists if the following theorem is satisfied.
**Theorem 1**.: _The 2D projective transformation \(\mathrm{P^{2}}\rightarrow\mathrm{P^{2}}\) can be determined if there exist at least four correspondences to calculate a non-singular \(3\times 3\) matrix \(\mathrm{H}_{t,k}\)._
To estimate \(\mathrm{H}_{t,k}\), we first identify points inside the text region in the \(t\)-th frame, denoted as \(S_{text}^{t}=\{p_{t}\mid\mathbf{T}_{t}(p_{t})>0,p_{t}\in\mathbb{Z}^{2}\}\). Here, \(p_{t}\) is a 2D point coordinate and \(\mathbf{T}_{t}(p_{t})\) is the value of \(\mathbf{T}_{t}\) at point \(p_{t}\). Next, we map each point \(p_{t}\) to its corresponding point \(p_{k}\) in the \(k\)-th frame using Equ. 3. We then collect the point pairs \((p_{t},p_{k})\), where \(p_{t}\in S_{text}^{t}\). Finally, we estimate the projective matrix \(\mathrm{H}_{t,k}\) by using the RANdom SAmple Consensus (RANSAC) algorithm [36] to robustly fit the point pairs.
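With OpenCV, this estimation step can be sketched as follows (the reprojection threshold of 3.0 is an assumption; at least four point pairs are required):

```python
import cv2
import numpy as np

def estimate_homography(pts_t, flow):
    """Fit H_{t,k} from flow-propagated point pairs inside the text region."""
    pts_t = pts_t.astype(np.float32)
    pts_k = pts_t + flow[pts_t[:, 1].astype(int), pts_t[:, 0].astype(int)]
    # RANSAC robustly fits the dominant planar motion, ignoring outlier pairs.
    H, inlier_mask = cv2.findHomography(pts_t, pts_k, cv2.RANSAC, 3.0)
    return H
```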
#### Iii-C2 Point Resample
When occlusion occurs, abnormal sampled coordinate points in \(S_{text}^{t}\) can lead to unreasonable mapping associations and result in unreal text distortion. To address this issue, we can use optical flow and segmentation constraints to remove these abnormal points. In dense flow estimation, some points can be outliers or noise and deviate from other sampled points. We can use statistics, specifically the standard deviation (\(\sigma\)) and mean (\(\mu\)) of the \(L_{2}\) norms of the flow vectors (\(||F_{t\to k}(p_{t})||_{2},p_{t}\in S_{text}^{t}\)), to detect these outliers. We set the lower limit to (\(\mu-\sigma\)) and the upper limit to (\(\mu+\sigma\)). Any sample that falls outside this range is detected as an outlier and removed from the sampled coordinate points \(S_{text}^{t}\). We can define the resampled point set with the optical flow constraint as follows:
\[S_{flow}^{t,k}=\{p_{t}\mid\mu-\sigma\leq||F_{t\to k}(p_{t})||_{2}\leq\mu+\sigma\}. \tag{5}\]
Occlusion often leads to a breakdown in the semantic coherence of text within a video sequence. It is essential that each text is consistently associated with the same semantic entity in all frames of the video (for example, the word "The" should always be superimposed on the "flowerpot" in Fig. 1). To achieve this consistency, it is necessary to exclude sampled points that move outside of the semantic entity. We define the resampled point set that adheres to segmentation constraints as follows:
\[S_{segm}^{t,k}=\{p_{t}\mid S_{i}(p_{i})>0,i=t,t+1,\ldots,k\}, \tag{6}\]
where \(S_{i}\) denotes the segmentation map of the semantic entity in the \(i\)-th frame. Finally, we define sampling point set as:
\[S^{t,k}=S_{text}^{t}\bigcap S_{flow}^{t,k}\bigcap S_{segm}^{t,k}\,. \tag{7}\]
We use the sampling points in \(S^{t,k}\) to estimate the projective matrix \(\mathrm{H_{t,k}}\) between the \(t\)-th frame and the \(k\)-th frame.
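A sketch of this resampling step (simplified in that the segmentation constraint is checked at the frame-\(t\) coordinates, whereas Equ. 6 tracks the propagated position across frames):

```python
import numpy as np

def resample_points(pts_t, flow, segm_maps):
    """Keep text points satisfying the flow (Equ. 5) and segmentation (Equ. 6) constraints."""
    norms = np.linalg.norm(flow[pts_t[:, 1], pts_t[:, 0]], axis=1)
    mu, sigma = norms.mean(), norms.std()
    keep_flow = (norms >= mu - sigma) & (norms <= mu + sigma)

    keep_segm = np.ones(len(pts_t), dtype=bool)
    for segm in segm_maps:  # semantic-entity masks for frames t, ..., k
        keep_segm &= segm[pts_t[:, 1], pts_t[:, 0]] > 0

    return pts_t[keep_flow & keep_segm]  # S^{t,k} of Equ. 7
```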
#### Iii-C3 Estimating Motion Blur with Optical Flow
In order to simulate text motion blur, we add motion blur to the embedding text maps \(\{\mathbf{T}_{k}\}_{k\in\mathcal{N}}\) according to the direction \(\vec{v}\) and scale \(||\vec{v}||\) of the optical flow predicted by GMA [32] in the text region. Motion blur is realized by convolving the embedding text map \(\mathbf{T}_{k}\) with a specific convolution kernel. The convolution kernel applies average pooling over \(\alpha||\vec{v}||\) pixels along the direction of \(\vec{v}\), where \(\alpha\) is a hyperparameter that controls the degree of blur.
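A possible implementation of this directional blur (building the kernel from a rasterized line, and the default \(\alpha\), are our assumptions of how "average pooling along \(\vec{v}\)" is realized):

```python
import cv2
import numpy as np

def motion_blur(text_map, v, alpha=0.5):
    """Average-pool roughly alpha * ||v|| pixels along the flow direction v."""
    length = max(int(alpha * np.linalg.norm(v)), 1)
    kernel = np.zeros((length, length), dtype=np.float32)
    c = (length - 1) / 2.0
    d = v / (np.linalg.norm(v) + 1e-8)
    # A 1-pixel-wide line through the kernel center, oriented along v.
    p0 = (int(round(c - d[0] * c)), int(round(c - d[1] * c)))
    p1 = (int(round(c + d[0] * c)), int(round(c + d[1] * c)))
    cv2.line(kernel, p0, p1, color=1.0, thickness=1)
    kernel /= kernel.sum()
    return cv2.filter2D(text_map, -1, kernel)
```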
#### Iii-C4 Pseudo Code
We describe the whole process of the Forward Text Flow Propagation in Algorithm 1. \(\mathrm{RANSAC}(\cdot)\) denotes the RANdom SAmple Consensus algorithm and \(\mathrm{ProjectiveTransform}(\cdot)\) denotes transforming the image with the given homography matrix. For the case that Backward
Fig. 3: **Illustration of the two main points in our TFP. Upper: Projective transformation presents better performance than dense optical flow mapping. Down: Resampling with the two constraints can refine the error from occlusion.**
Fig. 2: **Comparison of different mapping methods. Mapping with projective transformation presents a better multiple view geometry transformation by viewing the embedding text map \(\mathbf{T}\) as a whole.**
Text Flow Propagation (BTFP) is needed, we directly reverse the optical flow and apply FTFP for calculation.
## IV Experiments
### _Settings_
Following the settings of previous works [7, 8], we verify the effectiveness of the proposed FlowText by training video text spotters on the synthesized data and evaluating them on real datasets. In all experiments, we train the model with 8 Tesla V100 GPUs. The detailed settings of the methods all follow the original papers and official code.
**Benchmark Datasets.** ICDAR2013video [13] was proposed in the ICDAR2013 Robust Reading Competition and contains 13 videos for training and 15 videos for testing. These videos are harvested from indoor and outdoor scenarios, and each text is labeled at the word level as a quadrangle with 4 vertices. ICDAR2015video [9] is the expanded version of ICDAR2013 video, which consists of a training set of 25 videos (13,450 frames) and a test set of 24 videos (14,374 frames). Similar to ICDAR2013 video, text instances in this dataset are labeled at the word level. Quadrilateral bounding boxes and transcriptions are provided.
**Text and Video Sources.** To better simulate the motion of text in video, we use Activitynet [37] as the video source to build FlowText, as it contains plenty of complex movement scenarios. For videos in Activitynet, we first use the Kuaishou VideoOCR api [38] to filter out videos that do not contain text, keeping the rest as candidate videos. Then, we randomly extract candidate texts from the Newsgroup20 dataset [39] and paint them onto candidate videos with FlowText.
**Video Text Methods.** We use TransDETR [4] to evaluate the performance of different synthetic datasets in this paper, which is the state-of-the-art method in both the video text tracking and video text spotting domains.
### _Video Text Tracking_
As shown in Table II, without real data, FlowText outperforms the previous SOTA method by 3.3% in \(\mathrm{ID}_{\mathrm{F1}}\) on ICDAR2015. When real data is introduced, FlowText outperforms the previous SOTA method by 2.0% on ICDAR2015. This proves that the temporal information simulated by FlowText can effectively improve the tracking performance.
### _End to End Video Text Spotting_
As shown in Table I, with real data, FlowText outperforms the previous SOTA method SynthText by 3.7% in \(\mathrm{ID}_{\mathrm{F1}}\) on ICDAR2015. This proves that FlowText can greatly boost the training of video text spotter.
### _Ablation Study_
#### Iv-D1 Main components
As shown in line 1 of Table III, the image renderer used in FlowText (_i.e.,_ SynthText) achieves 36.7%
\begin{table}
\begin{tabular}{l|c|c c c|c c c} \hline \multirow{2}{*}{Data} & \multirow{2}{*}{FT} & \multicolumn{3}{c|}{ICDAR2013 video/\%} & \multicolumn{3}{c}{ICDAR2015 video/\%} \\ & & \(\mathrm{ID}_{\mathrm{F1}}\) & \(\mathrm{MOTA}\) & \(\mathrm{MOTP}\) & \(\mathrm{ID}_{\mathrm{F1}}\) & \(\mathrm{MOTA}\) & \(\mathrm{MOTP}\) \\ \hline SynthText & \(\times\) & 42.9 & 17.3 & 69.8 & 38.2 & 15.9 & 70.4 \\ VISD & \(\times\) & 44.7 & 21.4 & 69.6 & 38.0 & 18.0 & 70.2 \\ FlowText & \(\times\) & 47.9 & 27.4 & 74.1 & 41.5 & 21.1 & 72.8 \\ \hline None & ✓ & 58.2 & 43.6 & 76.3 & 56.2 & 38.7 & 73.0 \\ SynthText & ✓ & 60.9 & 46.2 & 76.4 & 58.1 & 39.2 & 73.1 \\ VISD & ✓ & 59.4 & 44.7 & 76.4 & 57.7 & 39.8 & 73.1 \\ FlowText & ✓ & 64.1 & 48.9 & 76.5 & 60.1 & 42.4 & 73.5 \\ \hline \end{tabular}
\end{table} TABLE II: Text tracking performance on ICDAR2013 video and ICDAR2015 video [16].
TABLE I: End-to-end video text spotting performance on ICDAR2015 video (\(\mathrm{ID_{F1}}\uparrow\), \(\mathrm{MOTA}\uparrow\), \(\mathrm{MOTP}\uparrow\), M-Tracked\(\uparrow\)).
\(\mathrm{ID}_{\mathrm{F1}}\) on ICDAR2013. When we directly use optic flow to propagate temporal information for text, relative improvement of 2.4% in \(\mathrm{ID}_{\mathrm{F1}}\) can be achieved. Then we use projective transform to replace optic flow wrapping, which brings relative improvement of 1.8% in \(\mathrm{ID}_{\mathrm{F1}}\). Finally, we use resample to remove abnormal sampled points in TFP and apply motion blur to the text map, which brings relative improvement of 6.6% and 0.4% in \(\mathrm{ID}_{\mathrm{F1}}\) respectively. The ablation experiment proves that introducing optical flow information into synthetic data can improve the performance of video text spotter, and the proposed TFP algorithm can effectively improve the quality of the synthesized text.
#### Iv-D2 Video sources
As shown in Table IV, we test three datasets as video sources, and Activitynet [37] achieves the best performance. Activitynet is built for action recognition and contains plenty of complex movement scenarios. Synthetic videos generated from Activitynet therefore capture richer spatiotemporal information.
## V Conclusion
In this paper, we propose a novel video synthesis technique called FlowText, which generates fully labeled scene text videos with no annotation cost. As the first video text synthesis method, FlowText achieves significant improvements compared with other synthesis methods on multi-stage tasks (_i.e.,_ video text detection, tracking and spotting).
|
2304.02304 | Irreducible representations of the braid group $B_3$ in dimension 6 | We use $q$-Pascal's triangle to define a family of representations of
dimension 6 of the braid group $B_3$ on three strings. Then we give a necessary
and sufficient condition for these representations to be irreducible. | Taher I. Mayassi, Mohammad N. Abdulrahim | 2023-04-05T08:53:38Z | http://arxiv.org/abs/2304.02304v1 | # Irreducible representations of the braid group \(B_{3}\) in dimension 6
###### Abstract.
We use \(q\)-Pascal's triangle to define a family of representations of dimension 6 of the braid group \(B_{3}\) on three strings. Then we give a necessary and sufficient condition for these representations to be irreducible.
Key words and phrases:Braid group, irreducibility _Mathematics Subject Classification._ Primary: 20F36
## 1. Introduction
Braid groups have an important role in many branches of mathematics like Knot Theory and Cryptography. In this work, we study the irreducibility of representations of the Braid group \(B_{3}\) of dimension 6. In [1], a family of representations of \(B_{3}\) of dimension \(n+1\) is constructed using \(q\)-deformed Pascal's triangle. This family of representations of \(B_{3}\) is a generalization of the representations given by Humphries [4] as well as the representations given by I. Tuba and H. Wenzl [7]. For more details, see [1,Theorem 3] and [2]. Kosyak mentioned in [5] that the irreducibility of the representations constructed by \(q\)-Pascal's triangle is still an open problem for dimensions \(\geq 6\), although some sufficient conditions are given in [1]. In our work, we consider these representations and we determine a necessary and sufficient condition for the irreducibility in the case the dimension is precisely 6.
In section 2, we use some notations that help us define a family of representations of \(B_{3}\) by \(q\)-Pascal's triangle (see Theorem 2.1). In section 3, we specialize the representations defined by Theorem 2.1 to \(n=5\), that is of dimension 6, by taking some specific values of some parameters. We get a subfamily of representations of \(B_{3}\). Proposition 3.1 proves that these representations have no invariant subspaces of dimension 1. However, Proposition 3.2, Proposition 3.3 and Proposition 3.4 state necessary and sufficient conditions for the non-existence of invariant subspaces of dimensions 2, 3 and 4 respectively. Proposition 3.5 gives a sufficient condition for these representations to have no invariant subspaces of dimension 5. Our main result is Theorem 3.6, which determines a necessary and sufficient condition for the irreducibility of this family of representations of \(B_{3}\). In section 4, we consider the cases where the representations are reducible. Then, we reduce one of these reducible representations to a sub-representation of dimension 4 and we prove that this sub-representation is irreducible (Theorem 4.1).
## 2. Notations, Definitions and Basic Theorems
**Definition 2.1**.: [3] The braid group on \(n\) strings, \(B_{n}\), is the abstract group with \(n-1\) generators \(\sigma_{1},\sigma_{2},\cdots,\sigma_{n-1}\) satisfying the following relations
\(\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i}\) for \(|i-j|>1\) and \(\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1}\) for \(i=1,\cdots,n-2\).
In order to use the \(q\)-Pascal's triangle, we need the following notations.
**Notations.**[1] For every \((n\times n)\)-matrix \(M=(m_{ij})\), we set the matrices \(M^{\sharp}=(m_{ij}^{\sharp})\) and \(M^{s}=(m_{ij}^{s})\) where \(m_{ij}^{\sharp}=m_{n-i,n-j}\) and \(m_{ij}^{s}=m_{n-j,n-i}\).
For \(q\in\mathbb{C}\setminus\{0\}\), \(n\in\mathbb{N}\), and for all integers \(j\) and \(r\) such that \(j>0\) and \(r\geqslant 0\) we define the following terms.
\((j)_{q}=1+q+\cdots+q^{j-1}\),
\((j)!_{q}=(1)_{q}(2)_{q}\cdots(j)_{q}\) and \((0)!_{q}=1\),
\(\binom{n}{r}_{q}=\frac{(n)!_{q}}{(r)!_{q}(n-r)!_{q}},\mbox{ for all integers $r$ and $n$ such that $0\leqslant r\leqslant n$}\),
\(q_{r}=q^{\frac{(r-1)r}{2}}\).
\((j)_{q}\) and \(\binom{n}{r}_{q}\) are called \(q\)-natural numbers and \(q\)-binomial coefficients respectively.
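For concreteness, these \(q\)-deformed quantities can be computed numerically as follows (a plain-Python sketch, valid for generic \(q\) where no \(q\)-factorial in the denominator vanishes):

```python
def q_int(j, q):
    """(j)_q = 1 + q + ... + q^(j-1)."""
    return sum(q ** i for i in range(j))

def q_factorial(j, q):
    """(j)!_q = (1)_q (2)_q ... (j)_q, with (0)!_q = 1."""
    out = 1
    for i in range(1, j + 1):
        out *= q_int(i, q)
    return out

def q_binom(n, r, q):
    """Gaussian binomial coefficient (n over r)_q = (n)!_q / ((r)!_q (n-r)!_q)."""
    return q_factorial(n, q) / (q_factorial(r, q) * q_factorial(n - r, q))
```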
**Definition 2.2**.: ([1],[5]) Let \(n\) be a non-negative integer. For all non-zero complex numbers \(q,\,\lambda_{0},\,\lambda_{1},\cdots,\,\lambda_{n}\), consider the matrices
\(D_{n}(q)=\operatorname{diag}(q_{r})_{r=0}^{n},\,\,\,\Lambda_{n}=\operatorname {diag}(\lambda_{0},\lambda_{1},\cdots,\lambda_{n})\) and \(A_{n}(q)=(a_{km})_{0\leqslant k,m\leqslant n}\),
where \(a_{km}=\binom{n-k}{n-m}_{q}=\frac{(n-k)!_{q}}{(n-m)!_{q}(m-k)!_{q}}\), for \(k\leqslant m\) and \(a_{km}=0\) for \(k>m\).
We define the following family of \((n+1)\times(n+1)\)-matrices
\(\sigma_{1}^{\Lambda_{n}}(q,n)=A_{n}(q)D_{n}^{\sharp}(q)\Lambda_{n}\) and \(\sigma_{2}^{\Lambda_{n}}(q,n)=\Lambda_{n}^{\sharp}D_{n}(q)\left(\left(A_{n} \left(q^{-1}\right)\right)^{-1}\right)^{\sharp}.\)
Using the definitions and notations above we state the following theorem.
**Theorem 2.1**.: [1] _The mapping \(B_{3}\to GL(n+1,\mathbb{C})\) defined by_
\(\sigma_{1}\mapsto\sigma_{1}^{\Lambda_{n}}(q,n)\) _and \(\sigma_{2}\mapsto\sigma_{2}^{\Lambda_{n}}(q,n)\)_
_is a representation of dimension \(n+1\) of the braid group \(B_{3}\) provided that \(\lambda_{i}\lambda_{n-i}=c\) for \(0\leqslant i\leqslant n\), where \(c\) is a constant non-zero complex number._
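Theorem 2.1 can be checked numerically; the sketch below (reusing the `q_binom` helper above, with arbitrary admissible values of \(q\) and \(\lambda_{i}\) satisfying \(\lambda_{i}\lambda_{n-i}=1\)) builds \(\sigma_{1}^{\Lambda_{n}}\) and \(\sigma_{2}^{\Lambda_{n}}\) for \(n=5\) and verifies the braid relation \(\sigma_{1}\sigma_{2}\sigma_{1}=\sigma_{2}\sigma_{1}\sigma_{2}\):

```python
import numpy as np

def build_sigmas(n, q, lam):
    """sigma_1 = A(q) D^sharp Lambda and sigma_2 = Lambda^sharp D ((A(q^-1))^-1)^sharp."""
    def A(qq):
        M = np.zeros((n + 1, n + 1), dtype=complex)
        for k in range(n + 1):
            for m in range(k, n + 1):
                M[k, m] = q_binom(n - k, n - m, qq)
        return M

    def sharp(M):
        return M[::-1, ::-1]  # (M^sharp)_{ij} = m_{n-i, n-j}

    D = np.diag([q ** (r * (r - 1) // 2) for r in range(n + 1)]).astype(complex)
    L = np.diag(lam).astype(complex)
    sigma1 = A(q) @ sharp(D) @ L
    sigma2 = sharp(L) @ D @ sharp(np.linalg.inv(A(1 / q)))
    return sigma1, sigma2

q = 0.8 * np.exp(0.3j)                             # generic nonzero q
lam = [1.3, 0.7, 2.0, 1 / 2.0, 1 / 0.7, 1 / 1.3]   # lam_i * lam_{5-i} = 1
s1, s2 = build_sigmas(5, q, lam)
print(np.allclose(s1 @ s2 @ s1, s2 @ s1 @ s2))     # True: braid relation holds
```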
**Definition 2.3**.: A representation is called _subspace irreducible_ or _irreducible_, if there are no non-trivial invariant subspaces for all operators of the representation. A representation is called _operator irreducible_, if there are no non-trivial bounded operators commuting with all operators of the representation.
For the next theorem, we need to introduce the following operators. For \(n,r\in\mathbb{N}\) such that \(0\leqslant r\leqslant n\), and for \(\lambda=(\lambda_{0},\dots,\lambda_{n})\in\mathbb{C}^{n+1}\) and \(q\in\mathbb{C}\), we define
\[F_{r,n}(q,\lambda)=\exp_{(q)}\left(\sum_{k=0}^{n-1}(k+1)_{q}E_{k\;k+1}\right)-q _{n-r}\lambda_{r}\left(D_{n}(q)\Lambda_{n}^{\sharp}\right)^{-1},\]
where \(E_{km}\) is a matrix with \(1\) in the \(k,m\) entry and zeros elsewhere \((k,m\in\mathbb{Z})\), and \(\exp_{(q)}(X)=\sum_{m=0}^{\infty}(X^{m}/(m)!_{q})\). For the \((n+1)\times(n+1)\)-matrix \(C\) over \(\mathbb{C}\) and for \(0\leqslant i_{0}<i_{1}<\dots<i_{r}\leqslant n\), \(0\leqslant j_{0}<j_{1}<\dots<j_{r}\leqslant n\), we denote the minors of \(C\) with \(i_{1},i_{2},\dots,i_{r}\) rows and \(j_{1},j_{2},\dots,j_{r}\) columns by
\[M_{j_{1}j_{2}\dots j_{r}}^{i_{1}i_{2}\dots i_{r}}(C).\]
**Theorem 2.2**.: _[_1_]_ _The representation of the group \(B_{3}\) defined in Theorem 2.1 has the following properties:_
1. _for_ \(q=1\)_,_ \(\Lambda_{n}=I_{n+1}\) _(the identity matrix), it is subspace irreducible in arbitrary dimension_ \(n\in\mathbb{N}\)_;_
2. _for_ \(q=1\)_,_ \(\Lambda_{n}=\operatorname{diag}(\lambda_{0},\lambda_{1},\cdots,\lambda_{n}) \neq I_{n+1}\)_, it is operator irreducible if and only if for any_ \(0\leqslant r\leqslant[\frac{n}{2}]\)_, there exists_ \(0\leqslant i_{0}<i_{1}<\dots<i_{r}\leqslant n\) _such that_ \[M_{r+1}^{i_{0}i_{1}\dots i_{n-r-1}}\left(F_{r,n}^{s}(q,\lambda)\right)\neq 0;\]
3. _for_ \(q\neq 1\)_,_ \(\Lambda_{n}=I_{n+1}\)_, it is subspace irreducible if and only if_ \((n)_{q}\neq 0\)_. The representation has_ \([\frac{n+1}{2}]+1\) _free parameters._
**Theorem 2.3**.: _[_1_]_ _All representations of the braid group \(B_{3}\) of dimension \(\leqslant 5\) are the representations defined in Theorem 2.1._
Note that the irreducibility of the representations defined in Theorem 2.1 that are of dimension \(\leqslant 5\) is discussed in [6] and [7]. Also, Theorem 2.1 gives a family of representations of \(B_{3}\) of dimension \(\geqslant 6\)\((n\geqslant 5)\). The irreducibility of these representations is still under study. In this work, we study the irreducibility of some of these representations of dimension \(6\).
Suppose, in what follows, that \(n=5\). Then, the matrix \(A_{5}(q)\) is given by
\[A_{5}(q)=\begin{pmatrix}1&(5)_{q}&(1+q^{2})(5)_{q}&(1+q^{2})(5)_{q}&(5)_{q}&1 \\ 0&1&(1+q)(1+q^{2})&(1+q^{2})(3)_{q}&(1+q)(1+q^{2})&1\\ 0&0&1&(3)_{q}&(3)_{q}&1\\ 0&0&0&1&1+q&1\\ 0&0&0&0&1&1\\ 0&0&0&0&0&1\\ \end{pmatrix}\]
and the representation of \(B_{3}\) defined in Theorem 2.1 is of dimension \(6\). Moreover, the matrices representing the generators \(\sigma_{1}\) and \(\sigma_{2}\) of \(B_{3}\) are given by
\[\sigma_{1}\mapsto\begin{pmatrix}\lambda_{0}q^{10}&\lambda_{1}q^{6}(5)_{q}& \lambda_{2}q^{3}(1+q^{2})(5)_{q}&\lambda_{3}q(1+q^{2})(5)_{q}&\lambda_{4}(5)_ {q}&\lambda_{5}\\ 0&\lambda_{1}q^{6}&\lambda_{2}q^{3}(1+q)(1+q^{2})&\lambda_{3}q(1+q^{2})(3)_{q }&\lambda_{4}(1+q)(1+q^{2})&\lambda_{5}\\ 0&0&\lambda_{2}q^{3}&\lambda_{3}q(3)_{q}&\lambda_{4}(3)_{q}&\lambda_{5}\\ 0&0&0&\lambda_{3}q&\lambda_{4}(1+q)&\lambda_{5}\\ 0&0&0&0&\lambda_{4}&\lambda_{5}\\ 0&0&0&0&0&\lambda_{5}\\ \end{pmatrix}\]
and
\[\sigma_{2}\mapsto\begin{pmatrix}\lambda_{5}&0&0&0&0&0\\ -\lambda_{4}&\lambda_{4}&0&0&0&0\\ \lambda_{3}&-\lambda_{3}(1+q)&\lambda_{3}q&0&0&0\\ -\lambda_{2}&\lambda_{2}(3)_{q}&-\lambda_{2}q(3)_{q}&\lambda_{2}q^{3}&0&0\\ \lambda_{1}&-\lambda_{1}(4)_{q}&\lambda_{1}q(1+q^{2})(3)_{q}&-\lambda_{1}q^{3 }(4)_{q}&\lambda_{1}q^{6}&0\\ -\lambda_{0}&\lambda_{0}(5)_{q}&-\lambda_{0}q(1+q^{2})(5)_{q}&\lambda_{0}q^{3 }(1+q^{2})(5)_{q}&-\lambda_{0}q^{6}(5)_{q}&\lambda_{0}q^{10}\end{pmatrix}.\]
## 3. Irreducibility of Representations of \(B_{3}\) of dimension \(6\)
In this section, let \(q\) be a primitive third root of unity (\(q^{3}=1\) and \(q\neq 1\)). By taking \(c=1\), \(\lambda_{0}=1\) and \(\lambda_{2}=q^{2}\), we get \(\lambda_{3}=\frac{1}{q^{2}}=q\), \(\lambda_{4}=\lambda_{1}^{-1}\) and \(\lambda_{5}=\frac{c}{\lambda_{0}}=1\). Under these conditions and for \(n=5\), we substitute these values in the matrices above to get the following definition.
**Definition 3.1**.: Let \(\rho:B_{3}\to GL(6,\mathbb{C})\) be the family of representations of \(B_{3}\) of dimension \(6\) that is defined by
\[\sigma_{1}\mapsto\begin{pmatrix}q&-q^{2}\lambda_{1}&q^{2}&q^{2}&-q^{2}\lambda _{1}^{-1}&1\\ 0&\lambda_{1}&q^{2}&0&\lambda_{1}^{-1}&1\\ 0&0&q^{2}&0&0&1\\ 0&0&0&q^{2}&-q^{2}\lambda_{1}^{-1}&1\\ 0&0&0&0&\lambda_{1}^{-1}&1\\ 0&0&0&0&0&1\end{pmatrix}\]
and
\[\sigma_{2}\mapsto\begin{pmatrix}1&0&0&0&0&0\\ -\lambda_{1}^{-1}&\lambda_{1}^{-1}&0&0&0&0\\ q&1&q^{2}&0&0&0\\ -q^{2}&0&0&q^{2}&0&0\\ \lambda_{1}&-\lambda_{1}&0&-\lambda_{1}&\lambda_{1}&0\\ -1&-q^{2}&-q&1&q^{2}&q\end{pmatrix}.\]
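As a sanity check, the two matrices of Definition 3.1 can be entered directly and tested numerically (\(\lambda_{1}=1.7\) is an arbitrary admissible value outside \(\{-1,1,q,q^{2}\}\)):

```python
import numpy as np

q = np.exp(2j * np.pi / 3)   # primitive third root of unity
l1 = 1.7                     # any lambda_1 outside {-1, 1, q, q^2}

s1 = np.array([
    [q, -q**2 * l1, q**2, q**2, -q**2 / l1, 1],
    [0, l1, q**2, 0, 1 / l1, 1],
    [0, 0, q**2, 0, 0, 1],
    [0, 0, 0, q**2, -q**2 / l1, 1],
    [0, 0, 0, 0, 1 / l1, 1],
    [0, 0, 0, 0, 0, 1],
])
s2 = np.array([
    [1, 0, 0, 0, 0, 0],
    [-1 / l1, 1 / l1, 0, 0, 0, 0],
    [q, 1, q**2, 0, 0, 0],
    [-q**2, 0, 0, q**2, 0, 0],
    [l1, -l1, 0, -l1, l1, 0],
    [-1, -q**2, -q, 1, q**2, q],
])

print(np.allclose(s1 @ s2 @ s1, s2 @ s1 @ s2))   # braid relation holds
# Both generators share the spectrum {q, lambda_1, q^2, q^2, 1/lambda_1, 1}.
print(np.sort_complex(np.linalg.eigvals(s1)))
print(np.sort_complex(np.linalg.eigvals(s2)))
```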
Note that \(\rho(\sigma_{1})\) and \(\rho(\sigma_{2})\) have the same eigenvalues, which are \(q,\lambda_{1},q^{2}\) (of multiplicity \(2\)), \(\lambda_{1}^{-1}\) and \(1\). The corresponding eigenvectors of \(\rho(\sigma_{1})\) are
\[u_{1}=\begin{pmatrix}1\\ 0\\ 0\\ 0\\ 0\\ 0\end{pmatrix},\;u_{2}=\begin{pmatrix}-\lambda_{1}(1+q)\\ -\lambda_{1}+q\\ 0\\ 0\\ 0\\ 0\end{pmatrix},\;u_{3}=\begin{pmatrix}\lambda_{1}+q\\ q-1\\ (\lambda_{1}+q^{2})(-q+q^{2})\\ 0\\ 0\\ 0\end{pmatrix},\;u_{4}=\begin{pmatrix}-1\\ 0\\ 0\\ q^{2}-1\\ 0\\ 0\end{pmatrix},\]
\[u_{5}=\begin{pmatrix}q-\lambda_{1}^{3}\\ (1-\lambda_{1}q)(\lambda_{1}q-q^{2})\\ 0\\ -q(1-\lambda_{1}^{2})(-1+\lambda_{1}q)\\ (-1+\lambda_{1}^{2})(-1+\lambda_{1}q)(\lambda_{1}q-q^{2})\\ 0\end{pmatrix}\text{ and }u_{6}=\begin{pmatrix}\lambda_{1}q^{2}-2\lambda_{1}+3+3 \lambda_{1}^{2}+\lambda_{1}q\\ 3q(\lambda_{1}q-1)\\ -3q^{2}(1-\lambda_{1})^{2}\\ -3(-1+\lambda_{1})(1+\lambda_{1}q^{2})\\ 3q(1-q)(-1+\lambda_{1})\\ 3q(1-q)(-1+\lambda_{1})^{2}\end{pmatrix}.\]
Assume that \(\lambda_{1}\not\in\{-1,1,q,q^{2}\}\). Then, the vectors \(u_{i}(i=1,2,\cdots,6)\) are linearly independent and the transition matrix \(P=(u_{1}\ u_{2}\ u_{3}\ u_{4}\ u_{5}\ u_{6})\) is invertible. Conjugating the representation by \(P\), we get an equivalent representation.
\[\rho(\sigma_{1})\mapsto X=\begin{pmatrix}q&0&0&0&0&0\\ 0&\lambda_{1}&0&0&0&0\\ 0&0&q^{2}&0&0&0\\ 0&0&0&q^{2}&0&0\\ 0&0&0&0&\lambda_{1}^{-1}&0\\ 0&0&0&0&0&1\end{pmatrix}\]
and \(\rho(\sigma_{2})\mapsto Y=P^{-1}\rho(\sigma_{2})P=(K_{1}\ K_{2}\ K_{3}\ K_{4}\ K_{5}\ K_{6})\), where
\[K_{1}=\begin{pmatrix}\frac{\lambda_{1}q}{(q^{2}-1)(-\lambda_{1}+q)(-1+\lambda_ {1}q)}\\ \frac{\lambda_{1}^{3}-q^{2}}{\lambda_{1}(\lambda_{1}-1)^{2}(\lambda_{1}+1)( \lambda_{1}-q)(\lambda_{1}-q^{2})}\\ -\frac{1}{3(\lambda_{1}+q^{2})}\\ -\frac{q^{2}(\lambda_{1}+q^{2})}{3(-\lambda_{1}+q)}\\ \frac{\lambda_{1}^{2}}{(\lambda_{1}-1)^{2}(\lambda_{1}+1)(\lambda_{1}-q)( \lambda_{1}q-1)}\\ \frac{\lambda_{1}^{2}}{(\lambda_{1}-1)^{2}(\lambda_{1}+1)(\lambda_{1}-q)( \lambda_{1}q-1)}\\ \frac{\lambda_{1}^{2}}{3(\lambda_{1}-1)^{2}(q^{2}-q)}\end{pmatrix},\ K_{2}= \begin{pmatrix}\frac{q[2+q-(1+2q)\lambda_{1}^{2}]}{3(\lambda_{1}-q)(-1+\lambda_ {1}q)}\\ \frac{1-\lambda_{1}^{3}q^{2}}{\lambda_{1}(\lambda_{1}-1)^{2}(\lambda_{1}+1)( \lambda_{1}-q)(\lambda_{1}-q^{2})}\\ -\frac{\lambda_{1}+q}{3(\lambda_{1}-q)}\\ -\frac{\lambda_{1}(\lambda_{1}^{2}+q)}{(\lambda_{1}-1)^{2}(\lambda_{1}+1)( \lambda_{1}-q)(\lambda_{1}q-1)}\\ \frac{q(\lambda_{1}^{2}+q)}{3(\lambda_{1}-1)^{2}(q^{2}-q)}\end{pmatrix},\]
\[K_{3}=\begin{pmatrix}\frac{(1+\lambda_{1})(-3q^{2}+(1-q^{2})\lambda_{1}+3 \lambda_{1}^{2})}{-3(\lambda_{1}-q)(-1+\lambda_{1}q)}\\ \frac{q^{2}(-1-\lambda_{1}+(q-q^{2})\lambda_{1}^{2}+\lambda_{1}^{3}+\lambda_{ 1}^{4})}{\lambda_{1}(\lambda_{1}-1)^{2}(\lambda_{1}+1)(\lambda_{1}-q)(\lambda_ {1}-q^{2})}\\ -\frac{1}{3q(-\lambda_{1}+q^{2})}\\ \frac{q(2+\lambda_{1}+2\lambda_{1}^{2})}{3(\lambda_{1}-q)}\\ \frac{q\lambda_{1}^{2}(1+q\lambda_{1})}{(\lambda_{1}-1)^{2}(\lambda_{1}+1)( \lambda_{1}-q)(\lambda_{1}q-1)}\\ \frac{q(q+\lambda_{1}^{4})}{3(\lambda_{1}-1)^{2}(q-1)}\end{pmatrix},\ K_{4}= \begin{pmatrix}\frac{-3+2(q-1)\lambda_{1}+3q\lambda_{1}^{2}}{-3(\lambda_{1}- 4)-1+\lambda_{1}q}\\ \frac{q^{2}+(q-q^{2})\lambda_{1}^{4}+(q-q^{2})\lambda_{1}^{2}-q\lambda_{1}^{ 3}}{\lambda_{1}(\lambda_{1}-1)^{2}(\lambda_{1}+1)(\lambda_{1}-q)(\lambda_{1}-q ^{2})}\\ \frac{q^{2}(-\lambda_{1}+q^{2})}{3(-\lambda_{1}+q)}\\ -\frac{q^{2}\lambda_{1}(1+q^{2})}{3(\lambda_{1}-1)^{2}(\lambda_{1}+1)( \lambda_{1}-q)(\lambda_{1}q-1)}\\ \frac{q}{3(\lambda_{1}-1)^{2}(q-1)}\end{pmatrix},\ K_{4}=\begin{pmatrix}\frac{ -3+2(q-1)\lambda_{1}+3q\lambda_{1}^{2}}{-3(\lambda_{1}-4)-1+\lambda_{1}q}\\ \frac{q^{2}+(q-q^{2})\lambda_{1}+(q-q^{2})\lambda_{1}^{2}-q\lambda_{1}^{3}}{ \lambda_{1}(\lambda_{1}-1)^{2}(\lambda_{1}+1)(\lambda_{1}-q)(\lambda_{1}-q^{2}) }\\ \frac{q^{2}+(q-q^{2})\lambda_{1}+(-1+q)\lambda_{1}^{2}}{(1-q)(\lambda_{1}-q)( \lambda_{1}-q)}\\ -\frac{3q^{2}\lambda_{1}^{2}-3q\lambda_{1}^{4}}{(\lambda_{1}-1)^{2}(\lambda_{1 }+1)(\lambda_{1}-q)(\lambda_{1}q-1)}\\ \frac{-2-q+(1-q^{2})\lambda_{1}+(-1+q^{2})\lambda_{1}^{2}}{3(\lambda_{1}-1)^ {2}}\end{pmatrix},\]
**Proposition 3.1**.: _The representation \(\rho\) has no invariant subspaces of dimension 1._
Proof.: The possible subspaces of dimension 1 that are invariant under \(X\) are \(\langle e_{i}\rangle\) for \(i=1,2,3,4,5,6\), and \(\langle\alpha e_{3}+e_{4}\rangle\) for \(\alpha\in\mathbb{C}\). Here \(e_{i}\) are the standard unit vectors in \(\mathbb{C}^{6}\), and are considered as column vectors.
It is clear that \(Y(e_{i})\not\in\langle e_{i}\rangle\) for \(i=1,2,3,4\). Assume that \(Y(e_{5})=K_{5}\in\langle e_{5}\rangle\). Then all components of \(K_{5}\), except the fifth, are zero; in particular, the third and sixth components of \(K_{5}\) vanish. Setting these two components equal to zero and computing directly, we get \(\lambda_{1}=q\) or \(q^{2}\) and \(\lambda_{1}^{3}=q\), which is impossible as \(q\neq 1\). So, \(Y(e_{5})\not\in\langle e_{5}\rangle\). Therefore, \(\langle e_{5}\rangle\) is not invariant under \(Y\).
Suppose that \(Y(e_{6})\in\langle e_{6}\rangle\). Then the \(5^{\rm th}\) and \(2^{\rm nd}\) components of \(K_{6}=Y(e_{6})\) are zeros. Then \(\lambda_{1}^{2}=-q\) and \(q+q^{2}\lambda_{1}^{2}-q^{2}\lambda_{1}^{3}-\lambda_{1}^{5}=0\). By direct computation, we get \(\lambda_{1}=-q\). Therefore, \(q=-1\), contradiction. So, \(Y(e_{6})\not\in\langle e_{6}\rangle\) and \(\langle e_{6}\rangle\) is not invariant under \(Y\).
It remains to show that any subspace of the form \(\langle\alpha e_{3}+e_{4}\rangle\), where \(\alpha\) is a non-zero complex number, is not invariant under \(Y\). Note that \(Y(\alpha e_{3}+e_{4})=\alpha K_{3}+K_{4}\). If \(\alpha K_{3}+K_{4}\in\langle\alpha e_{3}+e_{4}\rangle\) then the fifth and sixth components of \(\alpha K_{3}+K_{4}\) are zeros. So, \(\alpha q\lambda_{1}^{2}(1+q\lambda_{1})+q\lambda_{1}^{2}=0\) and \(\alpha q(q+1)-q=0\). This implies that \(\alpha=-q\) and \(\lambda_{1}=q-q^{2}\). Substitute the obtained values of \(\alpha\) and \(\lambda_{1}\) in the numerator of the \(2^{\rm nd}\) component of \(\alpha K_{3}+K_{4}\) to get
\[-(q-q^{2})^{3}-(q-q^{2})^{4}+q^{2}+(q-q^{2})^{2}-q(q-q^{2})^{3}=2(q-8)\neq 0 \mbox{ when }q^{3}=1,q\neq 1.\]
Therefore, the second component of \(\alpha K_{3}+K_{4}\) is not zero, contradiction.
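The final arithmetic step can be reproduced symbolically; the following SymPy check (not part of the original proof, which relies on direct computation) confirms that the displayed expression reduces to \(2(q-8)\) modulo \(q^{2}+q+1\), the minimal polynomial of a primitive third root of unity.

```python
# SymPy check that the displayed expression equals 2*(q - 8) whenever
# q**2 + q + 1 == 0, i.e. q is a primitive third root of unity.
import sympy as sp

q = sp.symbols('q')
d = q - q**2  # the value of lambda_1 obtained above
expr = -d**3 - d**4 + q**2 + d**2 - q*d**3
reduced = sp.rem(sp.expand(expr), q**2 + q + 1, q)
print(sp.expand(reduced - 2*(q - 8)))  # expected: 0
```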
**Proposition 3.2**.: _The representation \(\rho\) has no invariant subspaces of dimension 2 if and only if \(\lambda_{1}^{3}\neq q\)._
Proof.: The possible subspaces of dimension 2 that are invariant under \(X\) are: \(S_{ij}=\langle e_{i},e_{j}\rangle\), and \(S_{k}^{\alpha}=\langle e_{k},\alpha e_{3}+e_{4}\rangle\) for \(\alpha\in\mathbb{C}\), \(1\leqslant i<j\leqslant 6\) and \(k=1,2,5,6\).
We can easily see that \(Y(e_{1})=K_{1}\not\in S_{1i}\) for all \(i=2,3,4,5,6\). So, the subspaces \(S_{1i}\) (\(i=2,3,4,5,6\)) are not invariant under \(Y\).
Also \(Y(e_{2})=K_{2}\not\in S_{2i}\) for \(i=3,4,5,6\) since the third and sixth components of \(K_{2}\) are not zeros. Thus, the subspaces \(S_{2i}\) (\(i=3,4,5,6\)) are not invariant under \(Y\).
For no value of \(\lambda_{1}\) do the fourth, fifth and sixth components of \(Y(e_{3})=K_{3}\) vanish simultaneously. So, \(Y(e_{3})=K_{3}\not\in S_{3i}\) for \(i=4,5,6\), and hence \(S_{3i}\) is not invariant under \(Y\) for \(i=4,5,6\).
\(Y(e_{4})\not\in S_{4i}\) for \(i=5,6\) because the third component of \(Y(e_{4})=K_{4}\) is not zero. So, \(S_{4i}\) is not invariant under \(Y\) for \(i=5,6\).
If the third and fourth components of \(Y(e_{5})=K_{5}\) are zero then, since \(\lambda_{1}\neq 0\), we have
\[\left\{\begin{array}{c}1+q+(2+q)\lambda_{1}-(1+2q)\lambda_{1}^{2}+q^{2} \lambda_{1}^{3}=0\\ 1+q^{2}-\lambda_{1}-(2+q)\lambda_{1}^{2}+q^{2}\lambda_{1}^{3}+q\lambda_{1}^{4} =0\end{array}\right.\]
By using Mathematica, we show that this system of equations has no common solution \(\lambda_{1}\) for \(q\) a primitive third root of unity. So \(Y(e_{5})\not\in S_{56}\). Therefore, \(S_{56}\) is not invariant under \(Y\).
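The non-existence claim can also be cross-checked with a resultant, a possible alternative to the Mathematica computation cited above: two polynomials in \(\lambda_{1}\) share a root only if their resultant vanishes (here the leading coefficients \(q^{2}\) and \(q\) are nonzero, so the criterion applies).

```python
# SymPy sketch: the two polynomials in lam (= lambda_1) from the system above
# have no common root when q**2 + q + 1 == 0, since their resultant in lam
# is expected to be nonzero modulo q**2 + q + 1, consistent with the claim.
import sympy as sp

q, lam = sp.symbols('q lam')
p1 = 1 + q + (2 + q)*lam - (1 + 2*q)*lam**2 + q**2*lam**3
p2 = 1 + q**2 - lam - (2 + q)*lam**2 + q**2*lam**3 + q*lam**4
res = sp.resultant(p1, p2, lam)  # vanishes iff p1 and p2 share a root
print(sp.rem(sp.expand(res), q**2 + q + 1, q))  # expected: nonzero
```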
It remains to discuss the subspaces \(S_{k}^{\alpha}\) for \(k=1,2,5,6\) and \(\alpha\in\mathbb{C}\). Since the sixth components of \(Y(e_{1})\) and \(Y(e_{2})\) are nonzero, it follows that \(Y(e_{1})\not\in\langle\alpha e_{3}+e_{4},e_{1}\rangle\) and \(Y(e_{2})\not\in\langle\alpha e_{3}+e_{4},e_{2}\rangle\) for all \(\alpha\in\mathbb{C}\). Therefore, \(S_{1}^{\alpha}\) and \(S_{2}^{\alpha}\) are not invariant under \(Y\). Now, if \(Y(e_{6})=K_{6}\in\langle\alpha e_{3}+e_{4},e_{6}\rangle\) then the second and fifth components of \(K_{6}\) are zeros. This yields the following equations:
\[\left\{\begin{array}{c}q+q^{2}\lambda_{1}^{2}-q^{2}\lambda_{1}^{3}-\lambda_{1 }^{5}=0\\ -3q\lambda_{1}^{2}(q+\lambda_{1}^{2})=0\end{array}\right..\]
Using the fact that \(q\) is a primitive third root of unity, we get \(\lambda_{1}=\lambda_{1}^{2}=-q\), which forces \(\lambda_{1}\in\{0,1\}\) and hence \(q\in\{0,-1\}\), a contradiction. Hence, \(Y(e_{6})\not\in S_{6}^{\alpha}\) for all \(\alpha\in\mathbb{C}\). Therefore, \(S_{6}^{\alpha}\) is not invariant under \(Y\).
Finally, if a subspace \(S_{5}^{\alpha}\) is invariant under \(Y\), then \(Y(e_{5})=K_{5}\in S_{5}^{\alpha}\), which forces \(\lambda_{1}^{3}=q\). Conversely, if \(\lambda_{1}^{3}=q\) then, by direct computation and using Mathematica,
we show that \(Y(e_{5})=K_{5}\in S_{5}^{\alpha}\) and \(Y(\alpha e_{3}+e_{4})\in S_{5}^{\alpha}\) for \(\alpha=(-1-\lambda_{1})(1+q\lambda_{1})\). Therefore, \(\langle\alpha e_{3}+e_{4},e_{5}\rangle\) is invariant under \(Y\) if and only if \(\lambda_{1}^{3}=q\). The invariant subspaces corresponding to these values of \(\lambda_{1}\) are of the form
\[\langle\alpha e_{3}+e_{4},e_{5}\rangle\text{ where, }\alpha=(-1-\lambda_{1})(1+q \lambda_{1}).\]
**Proposition 3.3**.: _The representation \(\rho\) has no invariant subspaces of dimension 3 if and only if \(\lambda_{1}^{2}\neq-q\), \(-q^{2}\)._
Proof.: The subspaces of dimension 3 that are invariant under \(X\) are \(\langle e_{i},e_{j},e_{k}\rangle\) and \(\langle e_{s},\alpha e_{3}+e_{4},e_{t}\rangle\) for \(\alpha\in\mathbb{C}\), \(1\leqslant i<j<k\leqslant 6\) and \(s,t\in\{1,2,5,6\}\) with \(s<t\).
Since the third, fifth and sixth components of \(Y(e_{1})\) are not zeros, it follows that all the subspaces of the form \(\langle e_{1},e_{j},e_{k}\rangle\) together with the subspaces of the form \(\langle e_{1},\alpha e_{3}+e_{4},e_{t}\rangle\) are not invariant under \(Y\) for \(1<j<k\leqslant 6\) and \(t=2,5,6\).
The third and sixth components of \(K_{2}=Y(e_{2})\) are not zeros. So, \(Y(e_{2})\not\in\langle e_{2},e_{j},e_{k}\rangle\) and \(Y(e_{2})\not\in\langle e_{2},\alpha e_{3}+e_{4},e_{5}\rangle\) for all \(2<j<k\leqslant 6\) such that \(\{j,k\}\neq\{3,6\}\) and for all \(\alpha\in\mathbb{C}\). Then the subspaces of the form \(\langle e_{2},e_{j},e_{k}\rangle\) and \(\langle e_{2},\alpha e_{3}+e_{4},e_{5}\rangle\) are not invariant under \(Y\) for all \(2<j<k\leqslant 6\) such that \(\{j,k\}\neq\{3,6\}\).
Assume that the subspace \(S=\langle e_{2},e_{3},e_{6}\rangle\) is invariant under \(Y\) then, \(Y(e_{3})\in S\). So, \(\lambda_{1}=-q\) (as the sixth component of \(Y(e_{3})\) is zero). Substitute the value of \(\lambda_{1}\) in the first component of \(Y(e_{3})\) to get \(\frac{(q-1)(-q^{2}+1)}{6q}\neq 0\), contradiction. So, \(S\) is not invariant under \(Y\).
Consider the subspace \(S^{\alpha}=\langle e_{2},\alpha e_{3}+e_{4},e_{6}\rangle\), where \(\alpha\in\mathbb{C}\). Suppose \(S^{\alpha}\) is invariant under \(Y\); then \(Y(e_{2})\in S^{\alpha}\), so the fifth component of \(K_{2}\) is zero. Thus, \(\lambda_{1}^{2}=-q\). Conversely, assume that \(\lambda_{1}^{2}=-q\). Then, by direct computation and using Mathematica, we show that \(Y(e_{r})\in S^{\alpha}\) for \(r=2,6\) and \(Y(\alpha e_{3}+e_{4})\in S^{\alpha}\), in this case with \(\alpha=\frac{1}{2}\pm\frac{1}{2}i\). Therefore, the subspace \(\langle e_{2},\alpha e_{3}+e_{4},e_{6}\rangle\) is invariant under \(Y\) if and only if \(\lambda_{1}^{2}=-q\).
Since the sixth and fifth components of \(Y(e_{4})\) are not zeros, it follows that \(Y(e_{4})\not\in\langle e_{3},e_{4},e_{5}\rangle\) and \(Y(e_{4})\not\in\langle e_{3},e_{4},e_{6}\rangle\). Thus, the subspaces \(\langle e_{3},e_{4},e_{5}\rangle\) and \(\langle e_{3},e_{4},e_{6}\rangle\) are not invariant under \(Y\).
Assume that \(Y(e_{3})\in\langle e_{3},e_{5},e_{6}\rangle\). Then the fourth component of \(K_{3}\) is zero. So, \(\lambda_{1}=\frac{-1\pm i\sqrt{15}}{4}\). But for this value of \(\lambda_{1}\) and by direct calculation, the first component of \(K_{3}\) is \(-\frac{9}{8}\pm\frac{21\sqrt{5}}{16}+i\left(\pm\frac{9\sqrt{5}}{16}\pm\frac{3 \sqrt{15}}{8}\right)\neq 0\). This is a contradiction. Thus the subspace \(\langle e_{3},e_{5},e_{6}\rangle\) is not invariant under \(Y\).
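The quadratic solved in this step is elementary; a quick SymPy confirmation of the roots quoted above:

```python
# Roots of 2*lam**2 + lam + 2 == 0, the condition that the fourth component
# of K_3 vanishes.
import sympy as sp

lam = sp.symbols('lam')
print(sp.solve(2*lam**2 + lam + 2, lam))
# expected: [-1/4 - sqrt(15)*I/4, -1/4 + sqrt(15)*I/4]
```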
Since the third component of \(K_{4}\) is not zero it follows that \(Y(e_{4})\not\in\langle e_{4},e_{5},e_{6}\rangle\). So, the subspace \(\langle e_{4},e_{5},e_{6}\rangle\) is not invariant under \(Y\).
Consider the subspace \(V=\langle e_{5},\alpha e_{3}+e_{4},e_{6}\rangle\). By using Mathematica, we show that \(Y(e_{6})\in V\) if and only if \(\lambda_{1}^{2}=-q^{2}\). Moreover,
\[\alpha=\left\{\begin{array}{rl}\frac{2}{11}(4+i-3\lambda_{1}-\lambda_{1}^{2} )&\text{for }\lambda_{1}=iq\\ \frac{2}{11}(4-i-3\lambda_{1}-\lambda_{1}^{2})&\text{for }\lambda_{1}=-iq \end{array}\right.\]
Also, we show that if \(\lambda_{1}^{2}=-q^{2}\), then \(Y(e_{5})\in V\) and \(Y(\alpha e_{3}+e_{4})\in V\) for the same value of \(\alpha\).
Therefore, \(\langle e_{5},\alpha e_{3}+e_{4},e_{6}\rangle\) is invariant under \(Y\) if and only if \(\lambda_{1}^{2}=-q^{2}\).
**Proposition 3.4**.: _The representation \(\rho\) has no invariant subspaces of dimension 4 if and only if \(\lambda_{1}^{3}\neq q^{2}\)._
Proof.: The subspaces of dimension 4 that are invariant under \(X\) are \(\langle e_{i},e_{j},e_{k},e_{r}\rangle\) and \(\langle\alpha e_{3}+e_{4},e_{s},e_{t},e_{h}\rangle\) for \(\alpha\in\mathbb{C}\), \(1\leqslant i<j<k<r\leqslant 6\) and \(s,t,h\in\{1,2,5,6\}\) with \(s<t<h\).
Since the fifth and sixth components of \(Y(e_{1})\) are not zeros, it follows that the subspaces of the form \(\langle e_{1},e_{2},e_{3},e_{i}\rangle\), \(\langle e_{1},e_{2},e_{4},e_{j}\rangle\), \(\langle e_{1},e_{3},e_{4},e_{j}\rangle\), \(\langle\alpha e_{3}+e_{4},e_{1},e_{2},e_{5}\rangle\) and \(\langle\alpha e_{3}+e_{4},e_{1},e_{2},e_{6}\rangle\) are not invariant under \(Y\) for \(i=4,5,6\), \(j=5,6\) and all \(\alpha\in\mathbb{C}\).
The subspace \(\langle e_{1},e_{4},e_{5},e_{6}\rangle\) is not invariant under \(Y\) because the third component of \(Y(e_{4})\) is not zero.
Since the third component of \(Y(e_{2})\) is not zero, it follows that the subspaces \(\langle e_{1},e_{2},e_{5},e_{6}\rangle\) and \(\langle e_{2},e_{4},e_{5},e_{6}\rangle\) are not invariant under \(Y\).
Assume that the subspace \(\langle e_{1},e_{3},e_{5},e_{6}\rangle\) is invariant under \(Y\); then \(Y(e_{1})\in\langle e_{1},e_{3},e_{5},e_{6}\rangle\), so the second and fourth components of \(Y(e_{1})\) are zeros. Hence, \(\lambda_{1}^{3}=q^{2}\) and \(\lambda_{1}=-q^{2}\). Thus, \(-q^{6}=q^{2}\); since \(q^{3}=1\), this gives \(q^{2}=-1\), contradicting the fact that \(q\) is a primitive third root of unity. Thus, \(\langle e_{1},e_{3},e_{5},e_{6}\rangle\) is not invariant under \(Y\).
Note that the sixth and fifth components of \(Y(e_{4})\) are not zeros. So, the subspaces \(\langle e_{2},e_{3},e_{4},e_{r}\rangle\) are not invariant under \(Y\) for \(r=5,6\).
Since \(\lambda_{1}\neq-1\) it follows that the fourth component of \(K_{2}\) is not zero and \(Y(e_{2})\not\in\langle e_{2},e_{3},e_{5},e_{6}\rangle\). Therefore, the subspace \(\langle e_{2},e_{3},e_{5},e_{6}\rangle\) is not invariant under \(Y\).
Suppose that the subspace \(\langle e_{3},e_{4},e_{5},e_{6}\rangle\) is invariant under \(Y\). Then the first and second components of \(Y(e_{3})\) are zeros. This implies that
\[\left\{\begin{array}{c}2+q-(1+2q)\lambda_{1}^{2}=0\\ 1-\lambda_{1}^{3}q^{2}=0\end{array}\right.\]
Thus, \(\lambda_{1}^{2}=-q\) and \(\lambda_{1}^{3}=q\). Then \(\lambda_{1}^{6}=(\lambda_{1}^{2})^{3}=-q^{3}=-1\), while \(\lambda_{1}^{6}=(\lambda_{1}^{3})^{2}=q^{2}\), so \(q^{2}=-1\); this contradicts the fact that \(q\) is a primitive third root of unity. Therefore, \(\langle e_{3},e_{4},e_{5},e_{6}\rangle\) is not invariant under \(Y\).
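The incompatibility of \(\lambda_{1}^{2}=-q\) and \(\lambda_{1}^{3}=q\) can also be cross-checked with a resultant (a SymPy sketch, not part of the original proof):

```python
# lam**2 == -q and lam**3 == q cannot hold simultaneously when q is a
# primitive third root of unity: the resultant in lam is nonzero.
import sympy as sp

q, lam = sp.symbols('q lam')
res = sp.resultant(lam**2 + q, lam**3 - q, lam)  # vanishes iff common root
print(sp.rem(sp.expand(res), q**2 + q + 1, q))   # expected: -q (nonzero, up to sign)
```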
Consider the subspace \(S=\langle\alpha e_{3}+e_{4},e_{1},e_{5},e_{6}\rangle\), where \(\alpha\in\mathbb{C}\). Suppose that \(S\) is invariant under \(Y\), then \(Y(e_{1})\in S\). Then, the second component of \(K_{1}\) is zero. So \(\lambda_{1}^{3}=q^{2}\). Conversely, if \(\lambda_{1}^{3}=q^{2}\), then the second component of \(K_{1}\) is zero and
\[\frac{\text{The third component of }K_{1}}{\text{The fourth component of }K_{1}}=\frac{-\lambda_{1}+q}{-\lambda_{1}^{2}q^{2}+1}=\frac{(-\lambda_{1}+q)\lambda_{1}}{-\lambda_{1}^{3}q^{2}+\lambda_{1}}=\frac{(-\lambda_{1}+q)\lambda_{1}}{-q+\lambda_{1}}=-\lambda_{1}.\]
Thus, \(Y(e_{1})\in S\) and \(\alpha=-\lambda_{1}\). Also, by direct computation and using Mathematica, we show that the second component of each of \(Y(e_{5})\), \(Y(e_{6})\) and \(Y(-\lambda_{1}e_{3}+e_{4})\) is zero. As well as we show that the ratio of the third component to the fourth one of each of these vectors is \(-\lambda_{1}\). This means that \(Y(e_{5})\), \(Y(e_{6})\) and \(Y(-\lambda_{1}e_{3}+e_{4})\) are in \(S\). Therefore, \(S\) is invariant under \(Y\) if and only if \(\lambda_{1}^{3}=q^{2}\) and in this case \(\alpha=-\lambda_{1}\) and \(S=\langle-\lambda_{1}e_{3}+e_{4},e_{1},e_{5},e_{6}\rangle\).
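The chain of equalities above is straightforward to replay symbolically; a minimal SymPy sketch under the assumptions \(\lambda_{1}^{3}=q^{2}\) and \(q^{3}=1\):

```python
# Verify that (-lam + q)*lam / (-lam**3*q**2 + lam) == -lam under the
# substitutions lam**3 -> q**2 and q**4 -> q (the latter encodes q**3 == 1).
import sympy as sp

q, lam = sp.symbols('q lam')
ratio = (-lam + q)*lam / (-lam**3*q**2 + lam)
reduced = sp.simplify(ratio.subs(lam**3, q**2).subs(q**4, q))
print(reduced)  # expected: -lam
```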
It remains to prove that the subspace \(S^{\prime}=\langle\alpha e_{3}+e_{4},e_{2},e_{5},e_{6}\rangle\) is not invariant under \(Y\) for all \(\alpha\in\mathbb{C}\). Suppose that \(S^{\prime}=\langle\alpha e_{3}+e_{4},e_{2},e_{5},e_{6}\rangle\) is invariant under \(Y\) for some \(\alpha\in\mathbb{C}\), then \(Y(e_{2})\in S^{\prime}\). Then the first component of \(K_{2}\) is zero. Thus, \(\lambda_{1}^{2}=\frac{2+q}{1+2q}=-q\) (as \(q\) is a third root of unity). Substitute the obtained value of \(\lambda_{1}^{2}\) in the numerator of the first component of \(K_{5}\) to get
\[\lambda_{1}(-2q-1+(2+q)(-q)+(2+q^{2})(-q)\lambda_{1}+(2q+1)q^{2}\lambda_{1})=- 3q\lambda_{1}(1+\lambda_{1})\neq 0.\]
Hence, \(Y(e_{5})=K_{5}\not\in S^{\prime}\), a contradiction.
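The substitution step in the last paragraph can be replayed as well; the SymPy check below confirms that the bracketed factor (after \(\lambda_{1}^{2}=-q\) has been substituted) differs from \(-3q(1+\lambda_{1})\) by a multiple of \(q^{2}+q+1\), so the whole numerator is \(-3q\lambda_{1}(1+\lambda_{1})\neq 0\).

```python
# The bracketed expression reduces to -3*q*(1 + lam) modulo q**2 + q + 1;
# multiplying by the outer factor lam gives -3*q*lam*(1 + lam), which is
# nonzero since lam is neither 0 nor -1.
import sympy as sp

q, lam = sp.symbols('q lam')
inner = -2*q - 1 + (2 + q)*(-q) + (2 + q**2)*(-q)*lam + (2*q + 1)*q**2*lam
print(sp.rem(sp.expand(inner + 3*q*(1 + lam)), q**2 + q + 1, q))  # expected: 0
```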
**Proposition 3.5**.: _If \(\lambda_{1}^{3}\neq q^{2}\) then the representation \(\rho\) has no invariant subspaces of dimension 5._
Proof.: The possible subspaces of dimension 5 that are invariant under \(X\) are: \(S_{6}=\langle e_{1},e_{2},e_{3},e_{4},e_{5}\rangle\), \(S_{5}=\langle e_{1},e_{2},e_{3},e_{4},e_{6}\rangle\), \(S_{4}=\langle e_{1},e_{2},e_{3},e_{5},e_{6}\rangle\), \(S_{3}=\langle e_{1},e_{2},e_{4},e_{5},e_{6}\rangle\), \(S_{2}=\langle e_{1},e_{3},e_{4},e_{5},e_{6}\rangle\), \(S_{1}=\langle e_{2},e_{3},e_{4},e_{5},e_{6}\rangle\) and \(S^{\alpha}=\langle e_{1},e_{2},\alpha e_{3}+e_{4},e_{5},e_{6}\rangle\) for \(\alpha\in\mathbb{C}\).
Since the third, fifth and sixth components of \(K_{1}\) are not zeros, it follows that \(Y(e_{1})\not\in S_{i}\) for \(i=3,5,6\). So, the subspaces \(S_{i}\) are not invariant under \(Y\) for \(i=3,5,6\).
Assume that \(S_{4}\) is invariant under \(Y\); then \(Y(e_{1})\in S_{4}\) and \(Y(e_{2})\in S_{4}\), so the fourth components of \(K_{1}\) and \(K_{2}\) are zeros. This gives \(\lambda_{1}=-q^{2}\) and \(\lambda_{1}=-1\) simultaneously, which is impossible because \(q\) is a primitive third root of unity. So, \(S_{4}\) is not invariant under \(Y\).
Since \(\lambda_{1}^{3}\neq q^{2}\), the second component of \(K_{1}\) is not zero. So, \(Y(e_{1})\not\in S_{2}\). Thus, \(S_{2}\) is not invariant under \(Y\).
Assume that \(S_{1}\) is invariant under \(Y\); then \(Y(e_{2})\in S_{1}\), so the first component of \(K_{2}\) is zero. Thus, \(\lambda_{1}^{2}=\frac{2+q}{1+2q}=-q\) (since \(q\) is a primitive third root of unity). Substituting in the numerator of the first component of \(K_{5}\) gives \(\lambda_{1}(-3q-3q\lambda_{1})=-3q\lambda_{1}(1+\lambda_{1})\), which is not zero since \(\lambda_{1}\neq 0,-1\). So \(Y(e_{5})\not\in S_{1}\). Therefore, \(S_{1}\) is not invariant under \(Y\).
It remains to show that \(S^{\alpha}\) is not invariant under \(Y\) for all \(\alpha\in\mathbb{C}\). Assume, for some \(\alpha\in\mathbb{C}\), that \(S^{\alpha}\) is invariant. Then \(Y(e_{1})\) and \(Y(e_{2})\) belong to \(S^{\alpha}\). So,
\[\frac{\text{The third component of }K_{1}}{\text{The fourth component of }K_{1}}=\frac{\text{ The third component of }K_{2}}{\text{The fourth component of }K_{2}}=\alpha.\]
This implies that
\[\frac{-\lambda_{1}+q}{-\lambda_{1}^{2}q^{2}+1}=\frac{\lambda_{1}-q}{(1+ \lambda_{1})(\lambda_{1}-q^{2})}.\]
So,
\[(q^{2}-1)(\lambda_{1}^{2}+\lambda_{1}+1)=0.\]
Hence \(\lambda_{1}^{2}+\lambda_{1}+1=0\) (as \(q^{2}\neq 1\)), so \(\lambda_{1}=q\) or \(q^{2}\), a primitive third root of unity. This contradicts \(\lambda_{1}\neq q\) and \(\lambda_{1}\neq q^{2}\). Therefore, \(S^{\alpha}\) is not invariant under \(Y\) for all \(\alpha\in\mathbb{C}\).
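The cross-multiplication behind the last two displays holds identically in \(q\) and \(\lambda_{1}\); a SymPy factorization makes the cancellation explicit:

```python
# Cross-multiply the two fractions and factor the difference. The factor
# (lam**2 + lam + 1) is forced to vanish since lam != q and q**2 != 1.
import sympy as sp

q, lam = sp.symbols('q lam')
lhs = (-lam + q)*(1 + lam)*(lam - q**2)
rhs = (lam - q)*(-lam**2*q**2 + 1)
print(sp.factor(lhs - rhs))
# expected: (lam - q)*(q - 1)*(q + 1)*(lam**2 + lam + 1)
```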
**Theorem 3.6**.: _For \(\lambda_{1}\in\mathbb{C}\setminus\{-1,1,q,q^{2}\}\), the representation \(\rho\) is irreducible if and only if \(\lambda_{1}^{2}\neq-q\), \(\lambda_{1}^{2}\neq-q^{2}\), \(\lambda_{1}^{3}\neq q\) and \(\lambda_{1}^{3}\neq q^{2}\)._
Proof.: It follows directly from Proposition 3.1, Proposition 3.2, Proposition 3.3, Proposition 3.4 and Proposition 3.5.
## 4. Reducible Representation
By Theorem 3.6, the representation \(\rho\) is reducible if and only if \(\lambda_{1}^{2}=-q\), \(\lambda_{1}^{2}=-q^{2}\), \(\lambda_{1}^{3}=q\) or \(\lambda_{1}^{3}=q^{2}\). For \(\lambda_{1}^{3}=q^{2}\), \(\rho\) is reducible and the subspace \(V\), which is generated by the vectors \(-\lambda_{1}e_{3}+e_{4},\ e_{1},\ e_{5},\ e_{6}\), is invariant under \(\rho\). Let us write the matrices representing \(\rho(\sigma_{1})\) and \(\rho(\sigma_{2})\) relative to the basis \(\{-\lambda_{1}e_{3}+e_{4},e_{1},e_{5},e_{6},e_{4},e_{2}\}\). Let \(A=(-\lambda_{1}e_{3}+e_{4},e_{1},e_{5},e_{6},e_{4},e_{2})\) be the transition matrix.
Then,
\[A=\begin{pmatrix}0&1&0&0&0&0\\ 0&0&0&0&0&1\\ -\lambda_{1}&0&0&0&0&0\\ 1&0&0&0&1&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\end{pmatrix}\]
The matrix representing \(\sigma_{1}\) in this basis is
\[A^{-1}XA=\begin{pmatrix}q^{2}&0&0&0&0&0\\ 0&q&0&0&0&0\\ 0&0&\lambda_{1}^{-1}&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&q^{2}&0\\ 0&0&0&0&0&\lambda_{1}\end{pmatrix}\]
and the matrix representing \(\sigma_{2}\) in the same basis is
\[A^{-1}YA=\left(C_{1}\ C_{2}\ C_{3}\ C_{4}\ C_{5}\ C_{6}\right),\]
where
\[C_{1}=\begin{pmatrix}\frac{(1+2q^{1/3}+2q^{2/3}+2q+2q^{4/3})}{3q(1 +q^{1/3}+q^{2/3}+q)}\\ -\frac{q(2+q^{2/3}+q+2q^{5/3}+3q^{7/3})}{3(-1+q^{1/3})^{3}(1+q^{5/3})}\\ \frac{q+q^{5/3}+q^{7/3}}{(-1+q^{1/3})^{3}(1+q^{1/3})^{2}(1+q^{2/3})(-1+q^{9/3} )}\\ -\frac{q(1-q^{1/3}+q)}{3(-1+q^{1/3})^{3}(1+q^{1/3})^{2}}\\ 0\\ 0\end{pmatrix},\ C_{2}=\begin{pmatrix}\frac{1}{3q^{4/3}(-1+q^{4/3})}\\ \frac{q}{-1+q^{1/3}+q+q^{9/3}-q^{7/3}-q^{2/3}}\\ \frac{q^{1/3}}{(-1+q^{2/3})^{2}(1+q^{2/3})(q^{2/3}-q)(-1+q^{9/3})}\\ \frac{3(-1+q^{2/3})^{2}(-1+q)q}{3(-1+q^{2/3})^{2}(-1+q)q}\\ 0\\ 0\end{pmatrix},\]
\[C_{3}=\begin{pmatrix}\frac{(1+2q)(1-q^{4/3})+(2+q)q^{2/3}}{3(1 +q)(-q^{2/3}+q^{2})}\\ \frac{q^{2}(1+q^{2/3})}{(q^{2/3}-q)(-1+q^{5/3})}\\ 0\\ -\frac{q^{2/3}-q^{2/3}+q}{3(-1+q^{2/3})^{2}}\\ 0\end{pmatrix},\ C_{4}=\begin{pmatrix}-\frac{2+q+(q-1)(q^{4/3}+2q^{5/3})}{(q^ {2}-1)q^{9/3}(-q^{2/3}+q^{2})}\\ \frac{3(q^{4/3}-q^{8/3}+q)}{-1+q^{1/3}+q^{5/3}-q^{7/3}-q^{2/3}+q}\\ \frac{(-1+q)q^{7/3}(q^{3}-2(q+q)-q^{2}(1+2q))}{(-1+q^{2/3})^{2}(1+q^{2/3})(q^{ 2/3}-q)(-1+q^{9/3})}\\ \frac{q^{2/3}-q^{8/3}-(2+q)+q^{4/3}(-1+q^{2})}{3(-1+q^{2/3})^{2}}\\ 0\\ 0\end{pmatrix},\]
\[C_{5}=\begin{pmatrix}\frac{2q^{2/3}}{3(-1+q^{4/3}}\\ -\frac{q(2+2q^{1/3}+5q^{2/3}+5q+2q^{4/3}+2q^{5/3}))}{3(-1+q^{5/3})}\\ \frac{q^{7/3}}{(-1+q^{7/3})^{2}(1+q^{2/3})(q^{2/3}-q)(-1+q^{5/3})}\\ -\frac{q^{7/3}}{3(-1+q^{2/3})^{2}(-1+q)}\\ \frac{q^{2/3}}{1-q^{7/3}}\\ \frac{1+q^{2/3}+q^{4/3}}{1-q^{1/3}+q^{2/3}-q}\end{pmatrix}\text{and }C_{6}=\begin{pmatrix}\frac{1}{3q^{4/3}(-1+q^{4/3})}\\ \frac{q(2+q-q^{4/3}(1+2q))}{3(q^{2/3}-q)(-1+q^{5/3})}\\ \frac{q}{(-1+q^{1/3})^{2}(q^{4/3}-1)(1-q^{5/3})}\\ \frac{1}{3(-1+q^{2/3})^{2}(-1+q)q}\\ -\frac{q+q^{4/3}+2q^{2}-q^{3/3}}{3-3q^{4/3}}\\ -\frac{1+q^{4/3}+q^{8/3}}{(-1+q^{4/3})^{3}(1+q^{2/3})(q+q^{4/3})^{2}}\end{pmatrix}.\]
The restriction \(\rho_{V}\) of \(\rho\) to the subspace \(V\) is given by: \(\sigma_{1}\mapsto X^{\prime}=\begin{pmatrix}q^{2}&0&0&0\\ 0&q&0&0\\ 0&0&\lambda_{1}^{-1}&0\\ 0&0&0&1\end{pmatrix}\) and \(\sigma_{2}\mapsto Y^{\prime}=(E_{1}\ E_{2}\ E_{3}\ E_{4})\), where
\[E_{1}=\begin{pmatrix}\frac{1+2q^{1/3}+2q^{2/3}+2q+2q^{4/3}}{3(1+q^{1/3}+q^{2/3} +q)}\\ -\frac{q(2+q^{2/3}+q+2q^{5/3}+3q^{7/3})}{3(-1+q^{5/3})}\\ \frac{q+q^{5/3}+q^{7/3}}{(-1+q^{1/3})^{3}(1+q^{1/3})^{2}(1+q^{2/3})(-1+q^{2/3} )}\\ -\frac{q(1-q^{1/3}+q)}{3(-1+q^{1/3})^{3}(1+q^{1/3})^{2}}\end{pmatrix},\ E_{2}= \begin{pmatrix}\frac{\overline{3q^{4/3}}(-\frac{1}{1+q^{4/3}})}{q}\\ \frac{q^{1/3}}{(-1+q^{2/3})^{2}(1+q^{2/3})}(q^{2/3}-q)(-1+q^{5/3})\\ \frac{q^{1/3}}{3(-1+q^{2/3})^{2}(1+q^{2/3})}\frac{1}{3(-1+q^{2/3})^{2}(-1+q)q} \end{pmatrix},\]
\[E_{3}=\begin{pmatrix}\frac{(1+2q)(1-q^{4/3})+(2+q)q^{2/3}}{3(1+q)(-q^{2/3}+q^{2 })}\\ \frac{q^{2}(1+q^{2/3})}{(q^{2/3}-q)(-1+q^{5/3})}\\ 0\\ -\frac{q^{2/3}(1+2q^{2})}{3(-1+q^{2/3})^{2}(-1+q)}\end{pmatrix}\text{ and }E_{4}=\begin{pmatrix}-\frac{2+q+(q-1)(q^{4/3}+2q^{ 9/3})}{(q^{2}-1)q^{5/3}(-q^{2/3}+q^{2})}\\ \frac{3(q^{3}-q^{8/3}+q)+q^{3}-q^{7/3}-q^{2/3}+q}{-1+q^{1/3}+q^{9/3}-q^{7/3}-q^ {7/3}+q}\\ \frac{(-1+q)q^{7/3}(q^{4/3}(2+q)-q^{2}(1+2q))}{(-1+q^{2/3})^{2}(1+q^{2/3})(q^ {2/3}-q)(-1+q^{5/3})}\\ \frac{q^{2/3}-q^{8/3}-(2+q)+q^{4/3}(-1+q^{2})}{3(-1+q^{2/3})^{2}}\end{pmatrix}.\]
**Theorem 4.1**.: _The representation \(\rho_{V}\) is an irreducible representation of \(B_{3}\) of dimension 4._
Proof.: The vectors \(f_{1}=(1,0,0,0)^{T}\), \(f_{2}=(0,1,0,0)^{T}\), \(f_{3}=(0,0,1,0)^{T}\) and \(f_{4}=(0,0,0,1)^{T}\) are the eigenvectors of \(X^{\prime}=\rho_{V}(\sigma_{1})\) corresponding to the distinct eigenvalues \(q^{2}\), \(q\), \(\lambda_{1}^{-1}\) and \(1\), respectively; hence the proper subspaces invariant under \(X^{\prime}\) are exactly those spanned by subsets of \(\{f_{1},f_{2},f_{3},f_{4}\}\). Note that every component of \(Y^{\prime}(f_{i})=E_{i}\) is nonzero for \(i=1,2,3,4\), with the sole exception of the third component of \(E_{3}\). This implies that none of these proper subspaces is invariant under \(Y^{\prime}\). Hence, \(\rho_{V}\) has no proper invariant subspaces, and \(\rho_{V}\) is irreducible.
## 5. Conclusion
We consider a family of representations of \(B_{3}\) constructed by \(q\)-Pascal's triangle [1]. We then specialize the parameters used in defining these representations to non-zero complex numbers. Kosyak mentioned in [5] that the irreducibility of these representations is an open problem for dimensions \(\geq 6\). We determine a necessary and sufficient condition for the irreducibility of these representations in dimension 6. In addition, we present an irreducible representation of the braid group \(B_{3}\) of dimension 4, obtained by restricting one of the reducible representations presented in this work to an invariant subspace.